Brain-Computer Interfaces

Use ↑↓ arrow keys to control the left paddle

Understanding BCIs

If you played the little game above, you just moved the paddle up and down with your keyboard. Your intention to move, a thought originating in your motor cortex, traveled down your spinal cord, activated muscles in your fingers, pressed the keys, and the paddle responded. A simple action, but getting from your brain to your fingers to the keyboard to the screen takes quite a series of steps, doesn't it?

Now imagine doing the same thing without the keyboard. Without your hand. Without any physical movement at all.

This is what a brain-computer interface (BCI) does: it creates a direct communication pathway from your brain to an external device. Instead of pressing keys, electrodes implanted in your motor cortex record the electrical activity of neurons as you think about moving. Algorithms decode these neural signals in real-time, extract your intended direction, and move the paddle accordingly. The keyboard and the hand that would operate it become unnecessary.
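In its simplest form, the decoding step described above can be a linear mapping from firing rates to an intended velocity. Here is a minimal, hypothetical sketch: the weights and baseline are made up for illustration, whereas a real decoder would be fit from calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 8
# Hypothetical per-neuron weights: positive pushes the paddle up,
# negative pushes it down (a crude stand-in for a fitted decoder).
weights = rng.normal(size=n_neurons)

def decode_velocity(firing_rates, weights, gain=0.1):
    """Map a vector of firing rates (spikes/s) to a paddle velocity."""
    baseline = 10.0  # assumed resting rate, subtracted before decoding
    return gain * float(weights @ (firing_rates - baseline))

# One simulated time step: neurons fire above baseline in proportion
# to their weight when the user intends to move up.
rates_up = 10.0 + 5.0 * weights          # pattern correlated with "up"
rates_rest = np.full(n_neurons, 10.0)    # resting activity

v_up = decode_velocity(rates_up, weights)
v_rest = decode_velocity(rates_rest, weights)
```

Real systems use richer models (Kalman filters, neural networks), but the core idea is the same: a learned transform from population activity to intended movement, applied at every time step.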

For someone with paralysis, this technology transforms what's possible. In the near future, a BCI could control a robotic arm to grasp a cup, drive a wheelchair through thought alone, or trigger electrical stimulation of paralyzed muscles to restore natural movement. The effector changes, whether a cursor, a prosthetic, or a stimulator, but the principle remains the same: direct neural control.

How the Brain Computes Movement: A Distributed Story

When you decide to reach for an object, that decision doesn't originate in a single neuron or even a single brain region. Movement emerges from the coordinated activity of millions of neurons distributed across multiple cortical areas, including primary motor cortex, premotor cortex, supplementary motor area, and parietal regions, as well as structures deep within the brain. Each contributes something different: planning the trajectory, coordinating the sequence, monitoring proprioception, adjusting for obstacles.

Traditional BCI research has focused heavily on recording from primary motor cortex (M1), treating it as the sole command center for movement. While M1 certainly plays a critical role, this view misses the distributed nature of motor computation.

Our laboratory takes a different approach. We study how motor intentions are represented and processed across multiple cortical regions simultaneously. We can observe how different brain areas cooperate during movement planning and execution. This reveals something fundamental: motor commands aren't simply generated in one place and executed downstream; they're constructed through dynamic interactions across a network.

This distributed perspective has direct implications for BCI design. If motor computation is inherently multi-regional, then BCIs that record from multiple cortical areas simultaneously might extract richer, more robust control signals than single-region approaches.
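The intuition that pooling regions yields richer control signals can be illustrated with a toy simulation (all data here is synthetic and the linear least-squares decoder is an assumed stand-in for a real one): if intended velocity depends on activity in two regions, a decoder trained on features from both explains more of the signal than one trained on either region alone.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 200                            # simulated time steps
X_m1 = rng.normal(size=(T, 12))    # synthetic M1 features
X_pm = rng.normal(size=(T, 12))    # synthetic premotor features
w_m1 = rng.normal(size=12)
w_pm = rng.normal(size=12)
# Intended velocity depends on BOTH regions, plus noise.
y = X_m1 @ w_m1 + X_pm @ w_pm + 0.1 * rng.normal(size=T)

def fit_error(X, y):
    """Fit a least-squares decoder; return RMS training error."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

err_single = fit_error(X_m1, y)                     # M1 only
err_multi = fit_error(np.hstack([X_m1, X_pm]), y)   # M1 + premotor
```

Because the single-region decoder can't capture the premotor contribution, its residual error stays higher. The toy makes an idealized assumption, of course: that the extra region carries movement-related signal not already present in the first.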

Natural Commands vs. BCI Commands

Here's a curious observation about most BCIs: they work, but not as naturally as we'd like to believe.

Users can learn to modulate their neural activity to drive a cursor or prosthetic, but the neural patterns they produce often look nothing like the patterns associated with actual arm movements. The brain essentially learns a new skill, "BCI control", that's distinct from natural motor control. It's functional, but artificial.

Our research explores this boundary. We study how neural activity patterns during imagined or attempted movements relate to the patterns during actual execution (in non-injured subjects or before injury). We ask: why are BCI commands fundamentally different? Can we build decoders that recognize and respond to natural motor intentions rather than requiring users to discover arbitrary neural strategies?
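One simple way to quantify how a learned "BCI control" pattern relates to natural motor activity is to compare population activity vectors directly. A minimal sketch, with entirely made-up firing-rate patterns and cosine similarity as the (assumed) comparison metric:

```python
import numpy as np

def cosine_similarity(a, b):
    """Angle-based similarity between two population patterns (1 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical population firing-rate patterns (spikes/s per neuron).
actual = np.array([4.0, 1.0, 3.0, 0.5])       # actual arm movement
attempted = np.array([3.5, 1.2, 2.8, 0.7])    # attempted movement, similar structure
bci_learned = np.array([0.2, 4.0, 0.1, 3.5])  # arbitrary learned BCI strategy

sim_attempted = cosine_similarity(actual, attempted)
sim_bci = cosine_similarity(actual, bci_learned)
```

In practice this comparison is done in a lower-dimensional latent space rather than on raw rates, but the question is the same: how far does the pattern a BCI user produces sit from the patterns natural movement would produce?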

From Laboratory to Life: Clinical Translation

Building a BCI that works in the laboratory is one challenge. Building one that works in daily life, reliably and safely over years, is another entirely.

Clinical translation demands solutions to problems that pure neuroscience research can sometimes sidestep: surgical approaches that minimize risk, chronic electrode arrays that remain stable for years, decoder algorithms that adapt to gradual changes in neural recordings, user interfaces simple enough for independent use, and regulatory pathways that can bring these technologies to patients who need them.
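The adaptation problem mentioned above (decoders tracking gradual changes in neural recordings) can be sketched with a classic online update rule. This is a toy illustration, not our clinical method: a least-mean-squares (LMS) step nudges the decoder's weights toward a slowly drifting "true" tuning, using a simulated calibration signal.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 6
w_true = rng.normal(size=n)   # simulated "true" neural tuning
w_hat = np.zeros(n)           # decoder's running estimate
lr = 0.05                     # learning rate (assumed)

for t in range(2000):
    w_true += 0.001 * rng.normal(size=n)   # slow drift in the recordings
    x = rng.normal(size=n)                  # firing-rate features this step
    y = float(w_true @ x)                   # intended velocity (calibration signal)
    err = y - float(w_hat @ x)              # decoder's prediction error
    w_hat += lr * err * x                   # LMS correction step

final_error = float(np.linalg.norm(w_true - w_hat))
```

Because the update runs continuously, the estimate stays close to the moving target instead of going stale, which is the essential property a chronic, years-long implant demands of its decoder.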

Our program maintains constant dialogue between preclinical research and clinical reality. We don't just ask "what's scientifically interesting?" but also "what's clinically viable?"

The goal is to compress the timeline from "interesting finding" to "available therapy." Not every BCI research direction needs to be clinically translatable, but our laboratory deliberately positions itself at the interface, asking both mechanistic questions about neural computation and practical questions about how to get these technologies into the hands of people who could benefit.