The quest to understand the minimal requirements for intelligent behavior is more relevant than ever, and it is now sparking controversy. The '1,000 neuron challenge' is an experimental effort that pushes scientists to explore what small-scale neural networks can do under strict constraints. Initiated in July by computational neuroscientist Nicolas Rougier, the competition (https://github.com/rougier/braincraft) invites participants to build simplified model brains that can solve basic tasks in a maze using no more than 1,000 neurons. The challenge also demands rapid development: models must be trained in under 100 seconds of real time and are tested in just ten attempts, making efficiency a key factor.
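Those three numbers are the heart of the challenge. As a rough illustration only (the function and names below are hypothetical, not the competition's actual interface), the budget rules could be checked like this:

```python
import time

# The published constraints as described above: at most 1,000 neurons,
# under 100 seconds of real training time, and ten evaluation attempts.
# This harness is illustrative; it is not the official competition code.
MAX_NEURONS = 1_000
MAX_TRAIN_SECONDS = 100
TEST_ATTEMPTS = 10

def within_budget(n_neurons, train_fn):
    """Run a (hypothetical) training function and report whether the
    model respects both the size and the wall-clock training budget."""
    start = time.monotonic()
    train_fn()
    elapsed = time.monotonic() - start
    return n_neurons <= MAX_NEURONS and elapsed < MAX_TRAIN_SECONDS

# A tiny model with a trivial training step easily fits the budget.
ok = within_budget(22, lambda: None)
```

The point is that both axes are capped at once: a huge model trained instantly fails, and so does a tiny model trained for hours.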
Rougier’s initiative diverges sharply from the current trend of massive AI models, which boast trillions of parameters and require vast energy and resource investments, with training costs running into millions of dollars. Instead, his focus is on small, resource-efficient models that are accessible to anyone with a laptop and a sense of curiosity. This approach is inspired by evolutionary principles: in nature, brains are energetically costly. For example, the human brain consumes roughly 20% of our daily caloric intake (https://doi.org/10.1016/j.conb.2022.102668). Evolutionary pressure therefore favors brains that maximize functionality while minimizing energy expenditure, a trade-off that Rougier's competition is designed to probe.
Rougier notes that, for all the sophistication of complex systems such as large language models (LLMs), their biological relevance is limited. He points out that even algorithms with trillions of parameters wouldn’t survive in real-world scenarios without a physical body, unlike the tiny nematode Caenorhabditis elegans, which thrives in its environment with only 302 neurons. This contrast draws attention back to nature and asks: how can we design artificial systems that emulate biological efficiency and adaptability?
Historically, competitions have played a pivotal role in scientific progress. In the 1980 'computer tournament', researchers were challenged to develop strategies for the iterated prisoner's dilemma; the simplest approach, tit-for-tat, emerged as the most successful and inspired Robert Axelrod’s influential book The Evolution of Cooperation, which continues to inform theories of how cooperation evolves. Similarly, the ImageNet challenge (https://image-net.org/challenges/LSVRC/index.php) revolutionized computer vision by fostering breakthroughs in image recognition over the past decade. And Google DeepMind’s AlphaFold, which made headlines in 2020 by effectively solving the protein-structure prediction problem at the CASP competition (https://predictioncenter.org/), exemplifies how targeted competitions can accelerate scientific discovery.
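Tit-for-tat's winning strategy is remarkably simple: cooperate on the first round, then mirror whatever the opponent did on the previous round. A minimal sketch (the function name and move encoding are my own, not from the original tournament code):

```python
def tit_for_tat(opponent_history):
    """Tit-for-tat: cooperate first ('C'), then copy the opponent's
    previous move ('C' = cooperate, 'D' = defect)."""
    if not opponent_history:     # first round: no history yet
        return "C"
    return opponent_history[-1]  # mirror the opponent's last move

# A short exchange against an always-defecting opponent:
opponent_moves = []
my_moves = []
for opponent_move in ["D", "D", "D"]:
    my_moves.append(tit_for_tat(opponent_moves))
    opponent_moves.append(opponent_move)
# my_moves == ["C", "D", "D"]: one opening cooperation, then retaliation
```

That a two-line rule beat far more elaborate strategies is exactly the kind of result Rougier hopes his own constraint-driven format might surface.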
Rougier was motivated to launch his own challenge amid growing dissatisfaction with the narrow focus of existing models in computational neuroscience. Despite an abundance of models depicting various brain regions such as the cortex and hippocampus, none can fully encapsulate the brain’s integrated functions: most are specialized, like models of the primary visual cortex (V1) that cannot actually see. He argues that much of this stems from a modeling approach that isolates parts without considering their interaction within the whole system. His competition seeks to change that by demanding models that combine perception, decision-making, and action, even under severe resource limitations.
This perspective echoes the wisdom of cognitive science pioneer Allen Newell, who, more than fifty years ago, asserted that genuine understanding comes not from examining isolated functions but from developing models capable of handling multiple behaviors—what he called 'doing many things.' Today, neuroscience continues to fragment into highly specialized niches, complicating our holistic understanding of the brain. Rougier hopes that competitions encouraging the development of resource-efficient, multifunctional models can help unify this fragmented picture.
The competition features five tasks; the first is already complete and the second is open for submissions (as of November). The initial task asked participants to design a model brain capable of locating food in a maze. The winning entry used handcrafted weights (fixed, manually tuned connection values) and only 22 neurons, showing that simple hand-designed models can succeed at straightforward tasks. A genetic algorithm placed third by adopting a brute-force strategy: circling the maze repeatedly. These early results show that very different approaches can succeed with minimal complexity on simple challenges. However, as the tasks grow more complex and demand a wider range of decisions, participants will need to innovate while keeping their models compact.
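To make "handcrafted weights" concrete, here is a deliberately toy sketch of the idea: fixed, hand-chosen weights map a few wall-distance sensors directly to a steering command, with no learning at all. The sensor layout and weight values are entirely illustrative and not the actual winning model:

```python
import numpy as np

# Three hypothetical distance sensors: left, front, right.
# Hand-picked weights (never trained): a close left wall pushes the
# turn to the right (positive output), a close right wall to the left.
W = np.array([1.0, 0.0, -1.0])

def steer(distances):
    """Return a turn command in [-1, 1] (negative = left, positive =
    right). Nearer walls yield larger proximity, hence stronger avoidance."""
    proximity = 1.0 / (np.asarray(distances, dtype=float) + 1e-6)
    return float(np.tanh(W @ proximity))

# Wall close on the right -> negative output, i.e. steer left.
turn = steer([5.0, 5.0, 0.5])
```

Even this crude reflex arc illustrates why such models can place well on an easy maze: perception, decision, and action collapse into one fixed mapping that costs almost nothing to "train".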
Crucially, the challenge pushes models to learn to perform entire behaviors within environments, rather than excelling in isolated functions like visual recognition. Limiting training time and model size forces contestants to develop solutions that are resource-aware, reflecting the constraints faced by real brains through evolution. Equally important is the competition’s design to foster fair comparison—different modeling philosophies and theories must be tested under the same conditions, promoting a level playing field.
Many in the field, including neuroscientists such as Anne Churchland (https://bri.ucla.edu/people/anne-churchland/), see great promise in this approach. She expresses enthusiasm about how such competitions foster collaboration and rapid progress by allowing many minds to confront shared challenges simultaneously. However, not everyone is convinced. Mark Humphries (https://humphries-lab.org/), a computational neuroscience expert at the University of Nottingham, raises concerns about the competition’s design and scientific relevance. He notes that successful scientific competitions, such as those in image recognition or protein folding, have clear, meaningful performance metrics directly linked to real-world insights. By contrast, Rougier’s challenge employs artificial, simplified tasks, which may limit how far success translates into understanding of actual brain function.
Furthermore, Humphries points out that the competition’s entry barriers, which require proficiency in Python, GitHub, and systems neuroscience, may restrict participation to a niche community, potentially limiting the diversity of ideas. The key question he poses is whether the tasks truly align with the core scientific goal of uncovering principles of brain efficiency, or whether they risk becoming mere exercises in engineering. Only as the five tasks unfold will we see whether Rougier has struck the right balance between simplicity and relevance.
Ultimately, the potential of this challenge hinges on whether it can reveal broader principles about building resource-efficient brains or simply serve as a testing ground for artificial problem-solving. Regardless, many, myself included, find the endeavor inspiring—pushing us to rethink how minimal neural architectures can emulate the sophisticated functions of biological brains, and what this means for the future of neuroscience and AI.