Will to power

Sometimes a headline catches your eye:

MONKEYS MASTER MIND CONTROL


I'm happily joining others in welcoming our new Cyborg Monkey Overlords. Did I mention I know a nice local juice bar? With bananas? Oh, that's right. I didn't need to mention it; my thoughts are yours. Here, I'll drive.

Actually, this headline isn't too far from the truth. I've noted earlier studies in which computers mapped monkey brain activity and used it to help the animals guide robot arms with their thoughts. But these Caltech scientists distinguish their breakthrough: "It's the difference between thinking 'I want to move my hand to the right' and 'I want to reach for the water.'"


Andersen's team recorded the neural activity during the monkeys' thinking phase and identified certain electrical signals that related to planned movement. They then used powerful algorithms to recognise these signals and translate them into the movement of a cursor on the screen. Within a day, the monkeys had learned that thinking about their plan yielded a reward when the cursor touched the flash of light, and they stopped touching the computer screen.
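(For the curious: here's a toy sketch of what that decoding step might look like -- emphatically my own illustration, not Andersen's actual code. It assumes you've already pulled spike counts per electrode for each planning-period trial, and it leans on a plain logistic-regression classifier; every name and number in it is made up.)

```python
# Hypothetical sketch of goal decoding from planning-period activity.
# Assumes spike counts per electrode are already extracted per trial;
# this is NOT the Andersen lab's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_trials, n_electrodes, n_goals = 600, 64, 4   # e.g. 4 possible reach targets

# Stand-in training data: firing-rate features recorded while the monkey
# merely *plans* the reach (no arm movement yet), plus the true goal label.
X = rng.poisson(lam=5.0, size=(n_trials, n_electrodes)).astype(float)
y = rng.integers(0, n_goals, size=n_trials)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# At "run time": a new burst of planning activity comes in,
# the decoder guesses the intended goal, and the cursor jumps there.
goal_positions = {0: (-1, 0), 1: (1, 0), 2: (0, 1), 3: (0, -1)}
new_trial = rng.poisson(lam=5.0, size=(1, n_electrodes)).astype(float)
predicted_goal = int(decoder.predict(new_trial)[0])
cursor_xy = goal_positions[predicted_goal]
print(f"decoded goal {predicted_goal} -> move cursor to {cursor_xy}")
```

On real recordings you'd fit this on the hundreds of training trials described below, not on random numbers, but the shape of the problem is the same: planning-period activity in, intended goal out.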

The team then altered the task to include a variety of reward types, sizes and frequencies. The researchers found they were able to predict what each monkey expected to get in return for thinking about the task.

"It's an exciting study," says John Donoghue, chief scientific officer of Cyberkinetics in Foxborough, Massachusetts, who is developing similar technology for human use. "They know what the monkey is going to do before it even does it."


Taking a quick look at Andersen's article in Science, it seems that these higher-level "cognitive-based" movement plans require weeks of training before the computer can recognize them (compared with the earlier, "trajectory-based" systems, which IIRC took days). Monkeys needed 250-1100 training trials before the computer could accurately read their planned reach. Motivated humans with more organized thoughts might do better, of course. And we'll find out soon:

Cyberkinetics recently obtained Food and Drug Administration approval to implant chips in the motor cortex region of five quadriplegic patients to give them mouse control and computer access. Results will be available next year.

Implanting chips in the parietal cortex might yield unexpected side-effects, cautions Donoghue. Suppose you planned to shake your boss's hand, but thought transiently about slapping him in the face. The slap could happen.

Andersen believes that training would soon rule out unwanted responses. And the ideal brain-chip would tap into many different brain regions, coordinating planned actions with instructions for movement.

Obviously I'm enthusiastic about this, almost enough to consider specializing in neurology. But it seems to me it will take another order-of-magnitude leap in either computing power or training time before computers can recognize complex thoughts and movement plans. The authors opine:

Moreover, this research suggests that all kinds of cognitive signals can be decoded from patients. For instance, recording thoughts from speech areas could alleviate the use of more cumbersome letter boards and time-consuming spelling programs, or recordings from emotion centers could provide an online indication of a patient's emotional state.

The cognitive-based prosthetic concept is not restricted for use to a particular brain area, as can be seen by the finding that PRR and PMd activity could both provide goal information. However, some areas will no doubt be better than others depending on the cognitive control signals that are required...

This system, as is, could probably decode "thank you" and "the juice bar is on South Quinsigamond Ave" but not, say, "What a piece of work is man" -- unless retrieving Hamlet is a well-marked path in one's mind. And, to take Andersen's analogy further, could you train the computer to respond not to "reach for the water" but to "quench my thirst"? Maybe, but only if the command relied on pre-programmed subroutines.
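Here's what I mean by pre-programmed subroutines -- again a toy sketch of my own, not anything from the paper. The decoder only has to recognize a handful of high-level intents; the messy motor details live in canned routines somebody wrote ahead of time:

```python
# Hypothetical illustration of the "pre-programmed subroutine" idea:
# the decoder recognizes a high-level intent, and the hard motor
# details live in routines the system already knows.
from typing import Callable

def reach_for_water() -> list[str]:
    # Canned trajectory-level steps, written in advance by engineers.
    return ["extend arm", "grasp cup", "bring cup to mouth", "tilt cup"]

def rest_hand() -> list[str]:
    return ["return arm to lap"]

# Map decoded intents to subroutines. "Quench my thirst" works only
# because someone pre-wired it to the same routine as "reach for the water".
SUBROUTINES: dict[str, Callable[[], list[str]]] = {
    "reach for the water": reach_for_water,
    "quench my thirst": reach_for_water,
    "rest": rest_hand,
}

def execute(decoded_intent: str) -> None:
    routine = SUBROUTINES.get(decoded_intent)
    if routine is None:
        print(f"no subroutine for {decoded_intent!r}; nothing happens")
        return
    for step in routine():
        print("->", step)

execute("quench my thirst")      # runs the canned water routine
execute("what a piece of work")  # unrecognized: the system is stuck
```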

Just as complex motions are built from many trajectory-based thoughts, complex ideas are built from smaller concepts and discrete words. If you're willing to take the time to train the computers, thought-projected movement and speech will soon be possible.