Saturday, November 8, 2025

Some Thoughts on Continual Learning

The way we learn is weird. Continual learning - how humans keep learning new things while somehow not forgetting old things - is strange and intriguing.

There is a lot of debate surrounding 'continual learning' at the moment, because language models are like really smart friends with amnesia. They have vast knowledge but can't learn from new experiences the way humans do. They're stuck with what they learned during training.

What little we do know seems to have something to do with compression and abstraction. The process itself is interesting because the act of recalling a memory changes it. So we're continually learning: storing compressed representations of things, recalling them, and altering them, in an endless cycle. As Beren notes, "When recalled, memories get subtly reencoded with current mental representations."
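This isn't a claim about how the brain actually implements it, but the reencoding idea can be sketched as a toy model, where each recall blends the stored trace with whatever mental context is active at the time (the `recall` function and all the numbers here are purely illustrative):

```python
def recall(memory, context, reencoding_rate=0.1):
    """Toy reconsolidation: each recall nudges the stored trace
    toward the current mental context, so what gets re-stored is
    slightly different from what was retrieved."""
    return [m + reencoding_rate * (c - m) for m, c in zip(memory, context)]

# A stored "memory" and the context active at recall time (made-up vectors).
memory = [1.0, 0.0, 0.5]
context = [0.0, 1.0, 0.5]

for _ in range(5):  # recall it five times
    memory = recall(memory, context)

# After repeated recall, the trace has drifted toward the current context:
# memory[0] has decayed from 1.0 and memory[1] has grown from 0.0.
```

The point of the sketch is just that there is no lossless "read" operation: every retrieval is also a small write.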

Human memory involves different systems - semantic memory for facts, episodic memory for experiences, procedural memory for skills. These systems are distributed across different brain regions but constantly communicate with each other. The hippocampus handles new memories while the cortex stores long-term ones. But it's not a simple handoff - they keep talking to each other throughout the learning process.

AI-Induced Knowledge Rot

What we should be doing is using these tools (AI) to enhance and motivate us through this strange process of grappling with and integrating new ideas. Unfortunately, it seems we're currently far more interested in 'giving' this capability to AI, and then outsourcing our own capacity for it.

I'm not making any claims about our ability to implement this in silicon - reinforcement fine-tuning, for example, is a very interesting line of exploration. But I am making the claim that there's a strong incentive for humans to stop engaging in this process ourselves.

Current AI incentivizes exactly the opposite. Teaching is on tap: we can just say 'teach me this', and it will. But this is not how we learn. We learn by absorbing and connecting ideas ourselves. If we don't exercise this capacity, we'll just get stupid.

Think about it - when you outsource the thinking to the machine, you're not exercising the very cognitive muscles that make continual learning possible. The compression, the abstraction, the connecting of new ideas to old ones. All of that gets handed off. What's left is just consumption without integration.

RA-H is Designed for Emergence, Not Storage

This is why RA-H takes a different approach. Instead of trying to be a better filing cabinet, it's designed to support your natural learning process.

The system continually updates and refines ideas rather than storing static notes. Instead of locking information in place, RA-H encourages you to keep evolving your understanding as you encounter new material.

It uses dynamic tagging that evolves with understanding. The system doesn't force you to categorize everything upfront. Tags and connections emerge naturally as your thinking develops.

Knowledge structures reorganize themselves through self-pruning. They reshape based on how you actually use and think about information, not predetermined folder hierarchies.

Connection-building becomes a first-class activity. The system prioritizes finding relationships between ideas rather than just storing them in isolation.
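To make the self-pruning and connection-building ideas concrete, here is a minimal sketch of what a usage-weighted note graph might look like. This is my own illustration, not RA-H's actual implementation; the `NoteGraph` class and all names in it are hypothetical:

```python
class NoteGraph:
    """Hypothetical sketch: links between notes carry a usage count,
    and connections that go untouched get pruned rather than kept forever."""

    def __init__(self):
        self.links = {}  # (note_a, note_b) -> times the connection was used

    def connect(self, a, b):
        """Register a connection between two notes (initially unused)."""
        self.links.setdefault((a, b), 0)

    def touch(self, a, b):
        """Recalling two notes together strengthens their link."""
        self.links[(a, b)] = self.links.get((a, b), 0) + 1

    def prune(self, min_uses=1):
        """Self-pruning: drop connections you never actually use."""
        self.links = {k: v for k, v in self.links.items() if v >= min_uses}

g = NoteGraph()
g.connect("hippocampus", "memory consolidation")
g.connect("hippocampus", "spatial navigation")
g.touch("hippocampus", "memory consolidation")  # this pair gets used
g.prune()
# Only the connection that was actually used survives pruning.
```

The design choice the sketch illustrates: structure follows use. Nothing is categorized up front, and the shape of the graph is an output of how you think, not an input you have to maintain.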

The system assumes knowledge is fluid, not fixed. It treats integration as the core activity, not capture. This mirrors how your brain actually works - constantly rewriting, connecting, and evolving what you know.