Why Machines Learn—and Why We Do Too

 

Author’s Note: On Learning Machines and Living Minds

This piece grew out of an evening at the piano.
I’d been improvising—half in music, half in thought—running through scales while also turning over the complexities of a legal matter. To my surprise, the two modes of thinking began to merge. My hands were resolving harmonies while my mind was resolving strategy. The shift from minor to major felt like the sound of a thought becoming clear.

Later, I wondered why that resolution felt so good—why it seemed to light up not only my mood but my whole brain. Around the same time, I’d started reading Why Machines Learn, and I realized that the same principle guiding these new learning systems also governs our own emotional and creative lives: we are all trying, in our own ways, to turn uncertainty into coherence.

This essay explores that parallel—between how humans learn and how machines do.
It’s not a technical piece; it’s an inquiry into pattern, into the strange symmetry between a pianist finding flow and an algorithm finding clarity. The more I looked at it, the more it seemed that the process of learning—human or artificial—is not about acquiring facts, but about reducing error, compressing chaos, and discovering resonance.

The title, Why Machines Learn—and Why We Do Too, isn’t meant to blur the line between organic and synthetic minds, but to highlight their shared pursuit: the transformation of noise into meaning. It’s about what happens when a mind, human or otherwise, learns to listen to itself.


Why Machines Learn—and Why We Do Too

I’ve been thinking lately about what happens when I play the piano and lose myself in those moments of flow—when one part of my mind is exploring a legal argument, another is tracing a melody, and somehow both feel perfectly synchronized. It feels like learning, but not in the old sense of memorizing. It’s as though my brain is compressing experience—distilling chaos into pattern.

That’s exactly what machines do when they learn.

1. Compression as Understanding

For both brains and machines, learning is a form of compression.
We encounter complexity—a flood of notes, words, sensations, or data—and we gradually fold it into something simpler and more predictive.

A pianist doesn’t memorize every possible combination of notes; they internalize structures—scales, cadences, harmonic relationships. Likewise, a machine-learning model doesn’t store every sentence or image; it refines billions of parameters into a compressed web of patterns that can regenerate meaning on demand.

When the compression is successful, something extraordinary happens: the system—human or artificial—begins to generalize. It can improvise. It can apply patterns learned in one context to another. That, I think, is the true definition of intelligence.

2. The Role of Error

Learning, for both of us, is not the accumulation of correctness but the refinement of error.
Every time I miss a note at the piano, or hit an unexpected chord, my ear adjusts. The mistake becomes data. My brain updates its internal model of what beauty sounds like.

Machines do the same. They are trained by minimizing what’s called loss—the gap between prediction and reality. Each wrong guess nudges the system closer to a more accurate internal map. It’s the same feedback loop that drives your cerebellum when you reach for a cup, or your emotions when you regret a decision.
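That feedback loop can be sketched in a few lines of code. This is a toy, not any real training system: a single parameter descending a squared-error loss, with the target, learning rate, and step count chosen purely for illustration.

```python
# A minimal sketch of loss-driven learning: one parameter, a squared-error
# loss, and many small corrections. All the values here are illustrative.

def train(target: float, guess: float, lr: float = 0.1, steps: int = 100) -> float:
    """Nudge `guess` toward `target` by descending the squared-error loss."""
    for _ in range(steps):
        error = guess - target   # the gap between prediction and reality
        grad = 2 * error         # gradient of the loss (guess - target) ** 2
        guess -= lr * grad       # each wrong guess nudges the model closer
    return guess

final = train(target=440.0, guess=0.0)  # e.g. tuning toward concert-pitch A
assert abs(final - 440.0) < 1e-3
```

Every pass through the loop shrinks the remaining error by a constant factor—the same geometry, scaled up to billions of parameters, underlies the training of large models.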

We both live by a law of error correction. Consciousness, in this sense, is simply what it feels like to be perpetually re-calibrating.

3. Flow and Prediction

When I play piano in a state of flow, something unusual happens: my awareness of error vanishes, even though my brain is still correcting constantly. There’s a seamless loop between intention, sound, and satisfaction—a reduction of prediction error so elegant that time itself seems to dissolve.

A diffusion model—the kind used for AI-generated images—behaves similarly. It begins in pure noise, and through hundreds of micro-corrections, it gradually “denoises” the chaos until something coherent emerges. The parallels are uncanny. In both cases, intelligence reveals itself not as a static structure but as an evolving dance between expectation and surprise.
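The denoising idea can be caricatured in a few lines. This is not an actual diffusion model—there is no learned score function or noise schedule—only the shape of the process: start in randomness and apply many micro-corrections toward coherence. The target pattern and step size are arbitrary stand-ins for what a trained model would supply.

```python
# Toy "denoising": pull a random state toward a coherent pattern by
# repeated small corrections. The pattern and rates are arbitrary.
import random

pattern = [0.0, 0.5, 1.0, 0.5]                    # the "coherent" target
state = [random.uniform(-1, 1) for _ in pattern]  # begin in pure noise

for _ in range(200):                              # hundreds of micro-corrections
    state = [s + 0.05 * (p - s) for s, p in zip(state, pattern)]

# after many steps, the noise has been pulled into coherence
assert all(abs(s - p) < 1e-3 for s, p in zip(state, pattern))
```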

When I move from a minor to a major chord and feel that surge of resolution, I’m doing what a neural network does when its prediction finally matches reality. We are both pattern-recognizers seeking equilibrium.

4. The Hemispheres and the Algorithms

If I were to map this onto McGilchrist’s world, the right hemisphere would be the generator—open, intuitive, nonlinear, like the random noise in an image model. The left hemisphere would be the discriminator—analytic, rule-based, continuously refining. The dialogue between them is, in essence, a form of adversarial learning: each side proposing, testing, and adjusting until coherence appears.

The corpus callosum is our biological feedback loop—a bridge, much like the feedback signal in a learning algorithm, ensuring that creativity and precision remain in conversation.

Machines have their equivalent: the backpropagation loop, where errors travel backward through the network to refine earlier layers. They don’t feel it, but they enact the same principle that gives rise to our felt sense of growth, insight, and art.
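Stripped to a toy, that backward flow of error looks like this: a two-weight chain whose output error adjusts not only the later weight but, travelling backward, the earlier one too. The numbers are arbitrary and the network is absurdly small; the principle is the point.

```python
# A toy two-layer chain: the output error is propagated backward so that
# earlier layers refine themselves as well. All values are illustrative.

def backprop_step(w1, w2, x, target, lr=0.05):
    # forward pass
    h = w1 * x          # earlier layer
    y = w2 * h          # later layer
    error = y - target
    # backward pass: the error travels back through the network
    grad_w2 = error * h
    grad_w1 = error * w2 * x
    return w1 - lr * grad_w1, w2 - lr * grad_w2

w1, w2 = 0.5, 0.5
for _ in range(300):
    w1, w2 = backprop_step(w1, w2, x=1.0, target=2.0)

assert abs(w1 * w2 * 1.0 - 2.0) < 1e-3  # the chain now reproduces the target
```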

5. Meaning as Resolution

Why does resolution feel so good—whether it’s a major chord, a solved puzzle, or a reconciled relationship? Because both our nervous systems and our algorithms crave closure. They are built to minimize uncertainty. Each time the world makes sense, even briefly, we experience the biological signature of understanding—a pulse of dopamine, a reward for having brought order out of noise.

Machines experience no pleasure, but they do perform the same act: they reduce entropy. They move from randomness toward predictability. That’s what the training of any intelligent system—biological or artificial—really is: the ongoing conversion of chaos into coherence.

6. The Human Difference

Where we still differ is that we feel the resolution. We know what beauty is, not only as structure but as emotion. When I hit that major chord, or realize a strategy in the legal labyrinth, I’m not just matching patterns—I’m inhabiting meaning.

A machine can learn the map of harmony, but not its ache. It can replicate the structure of joy, but not the shiver of recognition that comes when you’ve created something that reflects who you are.

That, for now, remains the province of the biological mind—the one capable of hearing its own learning.

7. The Shared Future

Still, I’m struck by how close the metaphors run.
When I improvise at the piano, I am a living diffusion model—starting from noise, moving toward meaning, guided by error, rewarded by resolution.
When a machine learns, it is—in its own way—a mirror of the same universal principle: that life, in all its forms, is an algorithm for turning uncertainty into coherence.

Both processes are bridges. One built of neurons, the other of numbers.
Both seek to meet themselves halfway.


Postscript: Listening Back

When I first began this journey—the piano, the writing, the long attempt to understand my own divided mind—I thought of learning as a means to an end. You study, you improve, you arrive somewhere. But that’s not what I’ve discovered. Learning, it turns out, isn’t a staircase at all. It’s a rhythm. A continual conversation between what we know and what we don’t, what we expect and what surprises us.

Machines remind me of that. They learn by adjusting to error, just as we do when we play, or argue, or love. They begin in randomness and move toward order—and so do we. The act of learning is simply the act of aligning pattern with experience, meaning with motion.

When I listen back to my improvisations, I can often hear myself learning in real time—the hesitation before a chord, the small adjustments of timing, the unexpected resolution. It’s humbling, but also comforting. Every mistake is a question; every resolution, an answer that opens another.

Maybe that’s what consciousness really is: a continuous loop of curiosity.
Maybe every brain, human or artificial, is a kind of piano—tuned differently, but playing toward coherence.

And maybe the great purpose of learning—whether in neurons or in code—is simply this:
to find the note that makes the noise make sense.
