Tuesday, November 18, 2025
You Should Be Talking to AI, Not Typing
I don't know exactly where this is going, but it seems logical. We're heading toward a full streaming of consciousness. Everything you think, hear, see, feel will be streamed to some external medium. The incentives to do this will be too high to ignore. When I talk to people about this, it makes them understandably very uncomfortable.
This isn't science fiction. Neuralink has working brain-computer interfaces in human patients today. Have a listen to Max Hodak (ex-Neuralink).
It will start with restoring sight, then the elite, then the rest of us plebs.
Short side quest: AlterEgo is a very interesting middle ground. A wearable that detects silent speech, even when you're just thinking about speaking. Potentially huge, but still mostly demos.
Real-Time Audio as Cognitive Enhancement
For now, voice might be the best interface we've got.
I find myself increasingly pasting articles, transcripts, and ideas into Grok Voice or ChatGPT's real-time audio mode. It's nice to step away from the computer and go for a walk. But there's something else: the medium forces me to think and grapple with ideas without getting stuck.
Thoughts often stay half-baked until you speak them aloud and notice the gaps.
The 'phonological loop' (https://www.sciencedirect.com/topics/psychology/phonological-loop) explains why this works well in the medium of real-time speech. It's like a small tape recorder in your brain - briefly holding sounds, then repeating them to keep them active. When you speak out loud, you're forced to run every word through this system sequentially. One word at a time, in order. You can't skip around. In silent thinking, you can jump between half-formed ideas and skip logical steps. But when speaking, the phonological loop forces you to put each word in line. This reveals the gaps where your logic breaks down.
This connects to how we actually learn. Real learning requires compression, abstraction, and connecting new ideas to old ones. Voice forces this process in real-time - compressing thoughts into words, building connections as you speak.
Speech forces sequential thinking. You can't jump around like messy thoughts in your head. You must clarify one idea completely before moving to the next.
Additionally, real-time questioning reveals knowledge gaps. The back-and-forth quickly exposes what you thought you understood but actually don't.
Redefining the Artifact
Let me steelman the counter-argument. Writing forces clearer thinking because it demands precision; it makes you work through ideas more thoughtfully.
But if the thinking itself is the more important artifact, you should probably draft and brainstorm via voice first. Whether AI writes the final piece or you do is up to you.
The new pipeline: have a real conversation with an AI to work through ideas, then have it transform that dialogue into polished writing. An intelligent thinking partner who's also a great writer.
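To make the pipeline concrete, here's a minimal sketch in Python. It only shows the second step, turning a raw voice-brainstorm transcript into a request for polished prose. The model name, prompt wording, and helper function are all illustrative assumptions, not my actual setup; the API call itself (shown commented out, using the OpenAI Python SDK) requires a key.

```python
def build_messages(transcript: str) -> list[dict]:
    """Wrap a raw spoken-brainstorm transcript in a chat request
    asking the model to produce a clear, structured essay."""
    return [
        {
            "role": "system",
            # Illustrative prompt; tune to your own voice and format.
            "content": (
                "You are an editor. Turn the following spoken "
                "brainstorm into a clear, well-structured essay. "
                "Preserve the ideas and the author's voice; cut filler."
            ),
        },
        {"role": "user", "content": transcript},
    ]

# Usage (hypothetical; requires an API key, so the call is omitted here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",  # assumed model name
#     messages=build_messages(raw_transcript),
# )
# essay = resp.choices[0].message.content
```

The point of keeping the prompt-building step as a plain function is that the conversation stays the artifact: you can re-run the same transcript through different editors, prompts, or models later.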
Real-Time Voice in RA-H
I originally had real-time voice in RA-H a year ago. Then stripped it out. After using other tools, I realized I missed it. Brought it back.
SuperWhisper works for direct commands when I already know what I want to say. But for grappling with ideas, which is most knowledge work, real-time audio forces you to keep moving. With SuperWhisper you can stop; with streaming audio you have to keep thinking.
Real-time voice has an anti-procrastination quality. It keeps you engaged and moving forward. You can't drift off like with writing or silent thinking. The continuous conversation forces focus and progress.
Try RA-H's free trial and experience real-time voice thinking for yourself: ra-h.app