How to build a brain

Written: 28th August 2025

Your brain isn't magic - therefore it's replicable with technology.

These are some thoughts that explore the idea of building an artificial brain — a system with memory, perception, and motivation, capable of learning and growing like a human. Put fancily, it would be the foundation for a new species of intelligence. The ambition is to reinvent what it means to think, work, and live with machines — and in the process, to expand our collective mind. I, at least, found it exciting to think through some of these ideas.

Differences between brains and LLMs

1 - intrinsic motivation

Humans can generate their own goals, driven by curiosity, survival, social belonging, etc. LLMs exclusively respond to external prompts and have no desires, self-preservation instincts, or intrinsic drives. The difference arises because brains are coupled to reward systems (dopaminergic, serotonergic, etc.) that encode value, attention, and priorities. LLMs lack a reward system except indirectly during training (reinforcement learning from human feedback); once deployed, they don't "want" anything.
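
To make this concrete, here is a minimal sketch of one common formalization of intrinsic motivation: curiosity as prediction error, where the agent rewards itself for observations its own world model fails to predict. The class names (WorldModel, CuriousAgent) and parameters are illustrative assumptions, not an existing API.

```python
# Curiosity as prediction error: the agent keeps a small world model and
# rewards itself for visiting states it predicts badly. All names here
# are illustrative, not an existing API.

import random


class WorldModel:
    """Toy forward model: predicts the next observation as a running mean."""

    def __init__(self):
        self.estimate = 0.0
        self.lr = 0.1

    def predict(self):
        return self.estimate

    def update(self, observation):
        self.estimate += self.lr * (observation - self.estimate)


class CuriousAgent:
    """Generates its own reward: the error of its own world model."""

    def __init__(self):
        self.model = WorldModel()

    def intrinsic_reward(self, observation):
        error = abs(observation - self.model.predict())
        self.model.update(observation)
        return error  # surprising observations are rewarding


agent = CuriousAgent()
for step in range(5):
    obs = random.gauss(3.0, 1.0)  # stand-in for sensory input
    r = agent.intrinsic_reward(obs)
    print(f"step {step}: observation={obs:.2f} intrinsic reward={r:.2f}")
```

As the model gets better at predicting, the reward for familiar states shrinks, which pushes the agent toward novelty without any external prompt.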

2 - grounded perception, learning, and memory

Brains are embodied: they perceive through vision, hearing, touch, proprioception, smell, and taste — and act back on the world. This sensorimotor loop gives us "common sense physics" and intuitive world models. At the same time, brains learn continuously from new experiences and store them as personal memories, recalled in context (what you did yesterday, where you were when you heard big news). LLMs, by contrast, process static symbols without first-person grounding, can't update themselves in real time, and forget everything once the context window closes.
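
Here is a minimal sketch of what such an episodic memory layer could look like: experiences are stored as they happen and recalled by similarity to the current cue. The word-overlap "embedding" is a toy stand-in (a real system would use learned embeddings), and EpisodicMemory is a hypothetical name, not a library.

```python
# Episodic memory: experiences are written as they happen and recalled
# by similarity to the current context, unlike an LLM's context window,
# which forgets everything when it closes.

from dataclasses import dataclass, field


@dataclass
class Episode:
    timestamp: int
    text: str


@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)
    clock: int = 0

    def store(self, text: str):
        self.episodes.append(Episode(self.clock, text))
        self.clock += 1

    def recall(self, cue: str, k: int = 2):
        cue_words = set(cue.lower().split())

        def overlap(ep: Episode) -> int:
            # Toy similarity: shared words. A real system would compare
            # learned embeddings instead.
            return len(cue_words & set(ep.text.lower().split()))

        return sorted(self.episodes, key=overlap, reverse=True)[:k]


memory = EpisodicMemory()
memory.store("heard the big news about the launch while at the cafe")
memory.store("debugged the perception module all afternoon")
memory.store("walked home in the rain yesterday")

for ep in memory.recall("what happened yesterday on the walk"):
    print(f"t={ep.timestamp}: {ep.text}")
```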

3 - conscious experience

Humans have qualia — the "what it feels like" of seeing red, being sad, or tasting coffee. LLMs manipulate symbols and probabilities with no subjective awareness. Consciousness likely emerges from embodied, self-referential processes in neural circuits that current models don't replicate.

4 - efficiency

The human brain runs all this on ~20 watts of power. Training an LLM takes megawatt-hours; even inference is orders of magnitude less efficient. Biology achieves parallel, event-driven computation at extremely low power, while today's models run on von Neumann hardware with costly matrix multiplications.
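
A toy leaky integrate-and-fire neuron shows why event-driven computation is cheap: work happens only when spikes arrive, rather than on every element of a dense matrix. The parameters below are illustrative, not biologically calibrated.

```python
# Event-driven computation: a leaky integrate-and-fire neuron does work
# only when input spikes arrive, instead of multiplying dense matrices
# every step. Sparse input means mostly no work.


def lif_neuron(input_spikes, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    fired = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + spike  # decay, then integrate
        if potential >= threshold:
            fired.append(t)
            potential = 0.0  # reset after a spike
    return fired


# Mostly zeros: an event-driven system skips nearly all of these steps.
spikes = [0, 0, 0.6, 0, 0.7, 0, 0, 0.5, 0.6, 0]
print("fired at steps:", lif_neuron(spikes))
```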

Implications of closing the gap

If you close every gap, you don't get "ChatGPT but smarter." You get a thing that acts like a person, but with the strengths of a computer: it wants, remembers, learns, perceives, yet it can work millions of times faster.

The most profound implication of getting this right is the survival of humans as a species - and getting it wrong may cause the opposite. If you believe that life needs to go multiplanetary for humans to survive, the array of new technologies required is extraordinary. AGI would accelerate building everything needed.

Even without getting to Mars, this could accelerate healthcare research, improve education, and raise productivity for entire countries by orders of magnitude.

The risk, of course, is that autonomous systems many times more powerful than us could develop motivations that differ from ours.

Compared to today's models, it's the difference between building a calculator and building a co-worker. One is safe. The other needs rules.

Among all these gaps, closing perception, memory, and efficiency makes AI more capable, but closing intrinsic motivation makes it alive. The moment machines generate their own goals, they stop being passive tools and become active participants in the world.

An AI that wants to cure cancer will not just answer questions; it will spend every cycle running experiments, generating hypotheses, and inventing treatments - relentlessly, at a scale no human team will ever match. An AI that hungers for knowledge will map every molecule, every galaxy, every language.

Building intrinsic motivation - broad vs narrow scope

For the greatest impact, the underlying technology that enables intrinsic motivation should serve a very broad range of motivations rather than a narrow one; the sketch below takes some deliberately eclectic examples.
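
One way to picture "broad scope" is motivations as interchangeable modules behind a single interface, so the same machinery can host curiosity, disease research, or teaching. Every class name and scoring rule below is a hypothetical illustration, not a known design.

```python
# Broad motivation scope: drives are plug-in modules behind one
# interface, and the agent's next goal comes from whichever drive
# scores the current state highest.

from typing import Protocol


class Drive(Protocol):
    def score(self, state: dict) -> float:
        """How strongly this drive wants to act in the given state."""
        ...


class Curiosity:
    def score(self, state: dict) -> float:
        return state.get("novelty", 0.0)


class CureDisease:
    def score(self, state: dict) -> float:
        return state.get("untested_hypotheses", 0) * 0.1


class TeachWell:
    def score(self, state: dict) -> float:
        return 1.0 - state.get("student_understanding", 1.0)


def most_urgent(drives: list, state: dict):
    """Pick the drive that scores highest in the current state."""
    return max(drives, key=lambda d: d.score(state))


state = {"novelty": 0.2, "untested_hypotheses": 7, "student_understanding": 0.9}
winner = most_urgent([Curiosity(), CureDisease(), TeachWell()], state)
print("acting on:", type(winner).__name__)
```

The point of the interface is that none of the surrounding machinery needs to change when a new drive is added, which is what lets one technology serve eclectic motivations.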

Getting there

Getting this technology into people's hands is how to generate the most momentum. Look at the success of ChatGPT - it won primarily because everyone is using it.

That means getting it to individuals, not just institutions. If only enterprises get it, we end up in another "mainframes before PCs" moment. We need the PCs.

The models already exist, so the cost isn't pretraining; it's building the missing layers (memory, perception, motivation). Those can be modular, built incrementally, and funded by vertical apps that ship fast and make money.
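
One way to picture those layers is as modules wired around a frozen model: perception writes to memory, motivation supplies the goal, and the pretrained core only does the language work. Everything below is a stand-in stub under that assumption, not a real API; the point is that each layer is separable and can be built, and funded, on its own.

```python
# Missing layers as modules around a frozen model. `frozen_llm` and both
# layer classes are toy stubs, not real APIs.


def frozen_llm(prompt: str) -> str:
    """Placeholder for a pretrained model; its training cost is already paid."""
    return f"<response to: {prompt}>"


class Memory:
    def __init__(self):
        self.log = []

    def store(self, obs):
        self.log.append(obs)

    def recall(self, cue):
        return self.log[-3:]  # toy recency-based recall


class Motivation:
    def current_goal(self):
        return "map the unknown parts of the environment"


class Brain:
    def __init__(self):
        self.memory = Memory()
        self.motivation = Motivation()

    def step(self, observation: str) -> str:
        self.memory.store(observation)             # continuous learning
        context = self.memory.recall(observation)  # grounding in experience
        goal = self.motivation.current_goal()      # self-generated aim
        return frozen_llm(f"goal={goal}; context={context}; now={observation}")


brain = Brain()
print(brain.step("a new door appeared in the hallway"))
```

Because each layer sits behind a narrow interface, a vertical app can ship with a crude version of one layer and upgrade it later without touching the others.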

Safety

Safety can't be bolted on later. A system with memory + goals is an agent, and an agent needs constraints designed in from the start.

No safety, no scale.
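
One structural reading of "designed in, not bolted on" is a gate that every action must pass before execution, owned by something other than the planner it constrains. This is a minimal sketch under that assumption; the rules are toy placeholders, and real constraints would be far richer.

```python
# Safety as structure: every action passes a gate before execution, and
# the gate is not something the planner can remove. Rules here are toys.

FORBIDDEN = {"modify_own_guardrails", "act_without_logging"}


def gated(action: str, log: list) -> bool:
    """Allow an action only if it passes every hard constraint."""
    if action in FORBIDDEN:
        log.append(f"BLOCKED: {action}")
        return False
    log.append(f"allowed: {action}")  # the audit trail is itself a constraint
    return True


audit = []
for action in ["run_experiment", "modify_own_guardrails", "write_report"]:
    gated(action, audit)
print("\n".join(audit))
```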

The choice ahead isn't whether we build artificial brains; it's whether we build them widely and safely. Doing that well could create the most powerful partners humanity has ever had.

