Brain-computer interfaces sound like science fiction. The idea that we could read thoughts from brain activity or send signals back into the brain feels impossibly futuristic. Yet companies like Neuralink, Synchron, and Merge Labs are making this technology real—with Merge Labs recently raising $252 million from OpenAI to develop ultrasound-based brain interfaces.

But what’s actually happening here? How do you translate the messy, chaotic electrical activity of billions of neurons into something a computer can understand? And how—perhaps more remarkably—can you send signals back in?

Let’s break down how brain-computer interfaces actually work.

The Orchestra Analogy

Before we dive into the technical details, let’s establish a mental model.

Imagine your brain as a massive concert hall with 86 billion musicians (neurons), each playing their own instrument. They’re not following a conductor—they’re all improvising together, creating patterns of sound that somehow produce coherent music.

A brain-computer interface is trying to do two challenging things:

  1. Reading: Listen to this orchestra and figure out what’s happening—which sections are playing, what patterns are emerging, what “song” the brain is performing
  2. Writing: Broadcast sounds into the hall to influence the performance—encouraging certain sections to play louder or softer

The challenge? You’re trying to make sense of (and potentially influence) billions of simultaneous performers, all generating signals that blend together.

What Your Brain Is Actually Doing

To understand BCIs, we need to understand what they’re trying to detect.

Neurons: The Basic Building Blocks

Your brain contains roughly 86 billion neurons—specialized cells that communicate using electrical and chemical signals. When a neuron “fires,” it generates a tiny electrical spike called an action potential.

Think of it like a microscopic spark—each one only lasts about a millisecond and involves a voltage change of about 100 millivolts (0.1 volts).
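To make firing concrete, here is a minimal sketch of a "leaky integrate-and-fire" neuron, a standard textbook simplification: the membrane voltage drifts toward an equilibrium set by its input, and a spike is emitted whenever it crosses a threshold. The parameter values below are illustrative, not physiological measurements.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage integrates
# input current, leaks back toward its resting level, and "fires" when it
# crosses a threshold. All parameter values are illustrative only.

def simulate_neuron(input_current, threshold=-55.0, rest=-70.0,
                    reset=-75.0, leak=0.1, dt=1.0):
    """Return spike times (ms) for a constant input current."""
    v = rest
    spikes = []
    for t in range(200):  # simulate 200 ms in 1 ms steps
        v += dt * (leak * (rest - v) + input_current)
        if v >= threshold:    # a real action potential is a ~100 mV swing
            spikes.append(t)
            v = reset         # the membrane resets after each spike
    return spikes

print(simulate_neuron(2.0))  # strong input: regular spiking
print(simulate_neuron(0.5))  # weak input: voltage never reaches threshold
```

The key behavior to notice: a stronger input produces a faster, regular spike train, while a weak input produces none. Neurons communicate through the timing and rate of these all-or-nothing events, not through analog voltage levels.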

The Network Effect

Individual neurons aren’t that interesting. What matters is how they work together.

When you think about moving your hand, thousands of neurons in your motor cortex fire in specific patterns. When you see a face, different patterns emerge in your visual cortex. These patterns—the coordinated firing of many neurons—are what BCIs try to detect.

Here’s the crucial insight: your thoughts, movements, and perceptions are patterns of electrical activity across networks of neurons.

Reading the Brain: Detection Methods

There are several ways to detect these neural signals, each with different trade-offs between precision and invasiveness.

Method 1: Invasive Electrodes (The Microphone Approach)

This is what companies like Neuralink are doing: surgically implanting tiny electrodes directly into the brain tissue.

How it works:

  • Electrodes are placed right next to neurons (or even inside them)
  • They detect the electrical spikes when nearby neurons fire
  • You get extremely precise signals from individual neurons or small groups

The trade-off:

  • Incredible precision—you can detect individual neurons firing
  • Requires brain surgery with all its risks
  • Electrodes can cause scarring and signal degradation over time
  • Limited to a small area where electrodes are placed

It’s like placing microphones next to specific musicians in our orchestra—crystal clear audio, but you need to drill into the concert hall to install them.
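In software terms, the first processing step for electrode recordings is usually spike detection: finding the moments when the voltage trace crosses a threshold. Here is a toy version, with synthetic data standing in for a real recording (the amplitudes and spike times are made up for illustration):

```python
import random

# Toy spike detection: build a synthetic voltage trace (low-amplitude
# background noise plus three large "spikes"), then find the moments
# where the trace crosses a detection threshold.
random.seed(42)

trace = [random.gauss(0.0, 5.0) for _ in range(1000)]  # background noise
true_spike_times = [150, 400, 820]
for t in true_spike_times:
    trace[t] += 100.0  # a large extracellular spike, far above the noise

def detect_spikes(trace, threshold=50.0):
    """Return indices where the trace crosses the threshold upward."""
    return [i for i in range(1, len(trace))
            if trace[i] >= threshold and trace[i - 1] < threshold]

print(detect_spikes(trace))  # recovers the three injected spike times
```

Real spike sorting is far messier: multiple neurons per electrode, overlapping spikes, and drifting baselines. But thresholding is genuinely where most pipelines start.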

Method 2: Surface Electrodes (The Wall Sensor Approach)

This is EEG (electroencephalography)—placing electrodes on the scalp.

How it works:

  • Electrodes sit on your scalp
  • They detect the combined electrical fields from thousands or millions of neurons firing
  • You get broader patterns but much less precision

The trade-off:

  • Completely non-invasive—no surgery required
  • Very blurry signal—the skull dampens and smears the electrical activity
  • Can’t detect individual neurons, only large-scale patterns
  • Good for detecting broad brain states (awake/asleep, focused/relaxed)

This is like trying to understand the orchestra by pressing your ear against the outside wall—you can tell when it gets loud or quiet and maybe identify major sections, but you can’t hear individual instruments.

Method 3: Ultrasound (The Vibration Approach)

This is what Merge Labs is pioneering—using ultrasound to both detect and influence brain activity.

How it works:

  • Send ultrasound waves into the brain
  • Neural activity creates tiny mechanical vibrations
  • Detect how these vibrations change the ultrasound echoes
  • Can also send focused ultrasound to influence specific brain regions

The trade-off:

  • Non-invasive like EEG but potentially more precise
  • Can target deeper brain structures that EEG can’t reach
  • Still being developed—less proven than electrode approaches
  • Can both read and write (more on this later)

This is like using vibration sensors on the concert hall walls—you detect mechanical movement rather than electrical signals, and you can potentially influence the performance by broadcasting specific vibrations.

Method 4: Functional MRI (The Metabolic Approach)

fMRI doesn’t detect electrical signals at all—it detects blood flow.

How it works:

  • Active neurons need more oxygen
  • Blood flow increases to active brain regions
  • fMRI detects these changes in blood oxygenation
  • You get a 3D map of which brain regions are active

The trade-off:

  • Amazing spatial resolution—can pinpoint specific brain regions
  • Terrible temporal resolution—takes seconds to detect changes
  • Requires a massive, expensive MRI machine
  • Not practical for real-time BCI applications

The Signal Processing Challenge

Detecting neural signals is only half the battle. The real challenge is making sense of them.

The Noise Problem

Your brain is incredibly noisy. For every signal you want to detect, there are hundreds of irrelevant signals happening simultaneously.

Using EEG? You’re picking up:

  • Signals from neurons firing all over the brain
  • Electrical activity from your muscles (especially jaw and face)
  • Electromagnetic interference from nearby electronics
  • Random electrical noise

Even invasive electrodes face noise—neurons fire spontaneously, blood flow creates electrical fields, and nearby neurons you’re not interested in create interference.
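One classic defense against this noise is averaging: repeat the same task or stimulus many times and average the recordings, so the consistent neural response survives while random noise cancels out. This is how EEG event-related potentials are extracted. A sketch with synthetic data (the "signal" here is an arbitrary sine bump, chosen purely for illustration):

```python
import math
import random

random.seed(0)

def noisy_trial(n_samples=100, noise_sd=10.0):
    """One simulated trial: a small, consistent bump buried in large noise."""
    signal = [5.0 * math.sin(2 * math.pi * i / n_samples) for i in range(n_samples)]
    return [s + random.gauss(0.0, noise_sd) for s in signal]

def average_trials(n_trials):
    """Average n_trials independent recordings, sample by sample."""
    trials = [noisy_trial() for _ in range(n_trials)]
    return [sum(samples) / n_trials for samples in zip(*trials)]

def noise_estimate(trace):
    """Rough noise level: RMS deviation from the known clean signal."""
    clean = [5.0 * math.sin(2 * math.pi * i / len(trace)) for i in range(len(trace))]
    return (sum((x - c) ** 2 for x, c in zip(trace, clean)) / len(trace)) ** 0.5

print(noise_estimate(average_trials(1)))    # single trial: noise around 10
print(noise_estimate(average_trials(100)))  # 100 trials: roughly 10/sqrt(100)
```

Averaging N trials shrinks independent noise by about a factor of sqrt(N), which is why EEG experiments often repeat a stimulus hundreds of times. The catch for real-time BCIs: you usually don't have the luxury of hundreds of repetitions before responding.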

The Decoding Problem

Let’s say you’ve successfully recorded clean neural signals. Now what?

You need to figure out what those patterns mean. This is called “decoding” the neural signal.

Example: Decoding Movement Intent

When someone thinks about moving their hand, specific patterns appear in the motor cortex. But these patterns are:

  • Unique to each individual
  • Variable even in the same person
  • Affected by mood, fatigue, and attention

To decode these signals:

  1. Training Phase: The person performs actual movements while you record their brain activity
  2. Pattern Learning: Machine learning algorithms find correlations between brain patterns and movements
  3. Calibration: The system learns that for this specific person, this pattern means “move hand left”
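The three steps above can be sketched as a toy decoder. This uses a nearest-centroid classifier on made-up "neural feature" vectors; real decoders use richer features and models (Kalman filters, neural networks), but the calibrate-then-decode structure is the same:

```python
import random

random.seed(1)

# Hypothetical setup: each trial yields a feature vector summarizing
# motor-cortex activity (e.g., firing rates on a few recorded channels).
# "Left" and "right" intentions produce different average patterns.
PATTERNS = {"left": [1.0, 4.0, 2.0], "right": [4.0, 1.0, 3.0]}

def record_trial(intent, noise=0.8):
    """Simulate one noisy recording of a known movement intent."""
    return [x + random.gauss(0.0, noise) for x in PATTERNS[intent]]

# 1. Training phase: record labeled trials while the person moves.
training = {label: [record_trial(label) for _ in range(50)]
            for label in PATTERNS}

# 2. Pattern learning: compute each class's average pattern (centroid).
centroids = {label: [sum(xs) / len(xs) for xs in zip(*trials)]
             for label, trials in training.items()}

# 3. Decoding: classify a new trial by its nearest centroid.
def decode(features):
    def dist(c):
        return sum((f - x) ** 2 for f, x in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(decode(record_trial("left")))   # should usually decode as "left"
print(decode(record_trial("right")))  # should usually decode as "right"
```

Notice that the centroids are learned per person, from that person's own recordings. This is exactly why BCIs need individual calibration: the `PATTERNS` that distinguish "left" from "right" differ from brain to brain and drift over time.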

It’s like learning a new language, except the language is different for every person and changes slightly every day.

Real-World Performance

How well does this actually work?

For invasive BCIs in research settings:

  • People with paralysis can control computer cursors
  • Some can type at 40-90 characters per minute using thought alone
  • Robotic arms can be controlled for basic movements
  • Performance is impressive but still far from natural movement

For non-invasive BCIs:

  • Much slower and less precise
  • Good for simple commands (select left vs. right)
  • Requires intense focus and mental effort
  • Practical for limited applications

Writing to the Brain: The Harder Challenge

Reading brain signals is challenging. Influencing brain activity is even more complex.

How Brain Stimulation Works

There are several approaches to sending signals into the brain:

Direct Electrical Stimulation (invasive):

  • Send small electrical currents through implanted electrodes
  • Can activate neurons in specific regions
  • Used in deep brain stimulation for Parkinson’s disease

Transcranial Magnetic Stimulation (non-invasive):

  • Generate strong magnetic fields outside the skull
  • Induce electrical currents in the brain
  • Can activate or suppress neural activity in targeted regions

Focused Ultrasound (non-invasive):

  • Send focused ultrasound beams into specific brain regions
  • Can modulate neural activity through mechanical effects
  • Still experimental but promising

Why It’s Not Mind Control

Here’s what’s crucial to understand: brain stimulation doesn’t “write” information the way you write to a computer.

When you stimulate a brain region:

  • You’re not inserting specific thoughts or commands
  • You’re nudging the existing neural networks
  • The brain’s natural patterns and organization determine what happens

It’s like our orchestra analogy: you can broadcast sound waves that make certain sections play louder, but you can’t directly control each musician. The orchestra’s existing structure and patterns determine how it responds.

Current Applications

Brain stimulation is already used therapeutically:

  • Deep brain stimulation helps control Parkinson’s tremors
  • Transcranial magnetic stimulation treats depression
  • Experimental treatments for epilepsy, chronic pain, and other conditions

These work by modulating brain activity patterns—essentially helping dysregulated brain regions return to healthier patterns.

The Current State of BCIs

Where does this technology actually stand?


What’s Working Now

Medical Applications:

  • People with paralysis can control cursors and robotic arms
  • Deep brain stimulation effectively treats movement disorders
  • Cochlear implants restore hearing (a type of BCI for the auditory system)
  • Brain signals can control wheelchairs and smart home devices

Research Demonstrations:

  • Brain-to-brain communication (very simple signals)
  • Restoring sense of touch to prosthetic limbs
  • Using brain signals to control video game characters
  • Helping people with locked-in syndrome communicate

What’s Still Science Fiction

Not happening yet:

  • Reading complex thoughts or memories
  • Downloading skills or information into the brain
  • High-bandwidth communication with computers
  • Consumer-grade “thought control” of devices

The gap between science fiction and reality is still enormous.

The Technical Challenges Ahead

Several fundamental problems need solving:

The Bandwidth Problem

Your brain has roughly 86 billion neurons, each potentially firing hundreds of times per second. That’s an incomprehensible amount of information.

Current invasive BCIs can record from roughly 100 to 1,000 neurons simultaneously. Non-invasive methods capture even less detail.

We’re trying to understand a symphony by listening to a handful of musicians.
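A back-of-envelope calculation shows the scale of the gap. The numbers are order-of-magnitude estimates from the figures above, not measurements:

```python
# Rough bandwidth comparison: the whole brain vs. a current invasive BCI.
# All numbers are order-of-magnitude estimates, not measurements.

neurons_in_brain = 86e9
max_firing_rate = 100          # spikes per second per neuron (rough ceiling)
recorded_neurons = 1000        # optimistic count for today's implants

brain_events_per_sec = neurons_in_brain * max_firing_rate
recorded_fraction = recorded_neurons / neurons_in_brain

print(f"Potential events/sec in the brain: {brain_events_per_sec:.1e}")
print(f"Fraction of neurons recorded:      {recorded_fraction:.1e}")
```

Even on these generous assumptions, today's best implants sample on the order of one neuron in a hundred million. That decoders work at all is a testament to how redundant and structured neural activity is.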

The Stability Problem

Implanted electrodes face a hostile environment:

  • The body recognizes them as foreign objects
  • Scar tissue forms, degrading signal quality
  • Electrodes can shift position over time
  • Signals that worked yesterday might not work tomorrow

The Individuality Problem

Everyone’s brain is organized slightly differently. A BCI calibrated for one person won’t work for another without extensive retraining.

This makes it hard to create “off-the-shelf” BCI systems—each needs individual customization.

The Bidirectional Problem

For truly useful BCIs, you need both input and output—reading brain signals and providing sensory feedback.

Controlling a robotic arm is much harder if you can’t feel what it’s touching. The “write” side of BCIs is still primitive compared to the “read” side.

The Ultrasound Approach: Why It Matters

Merge Labs’ ultrasound approach represents a different strategy.

The Key Innovation

Traditional BCIs are either:

  • Highly precise but invasive (implanted electrodes)
  • Safe but imprecise (scalp electrodes)

Ultrasound promises a middle ground:

  • Non-invasive (no surgery)
  • Better spatial resolution than EEG (can target deep brain structures)
  • Can both read and write (detect activity and modulate it)

How It’s Different

Instead of detecting electrical signals, ultrasound:

  • Sends sound waves through the skull
  • Detects mechanical effects of neural activity
  • Can focus energy on specific brain regions

It’s fundamentally a different type of signal—mechanical rather than electrical.

The Open Questions

This technology is still being developed. We don’t yet know:

  • How precise the spatial resolution can be
  • How well it can decode complex neural patterns
  • What the long-term effects of repeated ultrasound exposure are
  • Whether it can match invasive approaches for clinical applications

The $252 million investment suggests confidence, but the technology still needs to prove itself.

Why This Matters

BCIs aren’t just a fascinating technical challenge—they have profound implications.

Medical Impact

For people with paralysis, locked-in syndrome, or degenerative neurological conditions, BCIs offer hope for restored communication and independence.

That’s not hype—it’s already happening in research settings and early clinical applications.

The Privacy Question

If brain activity can be read and decoded, what does that mean for privacy?

Current BCIs require extensive calibration and cooperation—nobody can secretly read your thoughts with today’s technology. But as BCIs improve, questions about neural privacy become more pressing.

Could employers require BCIs to monitor attention? Could neural data be subpoenaed in court? Who owns your brain data?

The Enhancement Question

Most current BCI work focuses on restoring lost function. But what about enhancement?

If BCIs can help paralyzed people, could they make healthy people “better”? Should they?

These aren’t just philosophical questions—they’re decisions we’ll need to make as the technology matures.

The Key Takeaways

Brain-computer interfaces are real, working technology—not science fiction. But understanding how they work helps set realistic expectations:

What BCIs can do:

  • Detect patterns of neural activity
  • Decode simple intentions (movement, selections)
  • Modulate brain activity in targeted regions
  • Help people with specific medical conditions

What they can’t do (yet):

  • Read complex thoughts or memories
  • Provide high-bandwidth communication
  • Work reliably without individual calibration
  • Replace natural sensory or motor function completely

The fundamental challenge:

  • Your brain is incredibly complex—86 billion neurons in intricate networks
  • BCIs are trying to interact with this system using relatively crude tools
  • Progress is being made, but we’re still in the early stages

Think of today’s BCIs like early computers—impressive and useful for specific tasks, but nowhere near the sophisticated devices we’ll probably have decades from now.

Looking Ahead

The next decade will likely bring:

  • Better non-invasive detection methods (like ultrasound)
  • More biocompatible implants with longer lifespans
  • Improved decoding algorithms using advanced AI
  • Clearer ethical and regulatory frameworks
  • Real clinical applications for more conditions

Brain-computer interfaces represent one of the most ambitious technological frontiers—directly connecting human biology with digital technology. Understanding how they actually work, with both their capabilities and limitations, helps us think clearly about what this technology means for our future.

The brain isn’t a computer, and treating it like one oversimplifies the challenge. But by carefully bridging the gap between neural patterns and digital signals, BCIs are opening new possibilities for human health and capability.

That’s not magic—it’s neuroscience, signal processing, and engineering working together to solve one of the most complex problems in technology.