
Boat or panda? The US Army is now decoding human thoughts using image analysis

The US Army has announced a breakthrough in image analysis — using the human brain. The technique has very narrow but powerful applicability, and it represents a whole new way of granting human insight to computers. By looking into the brains of intrepid soldiers, researchers with the Army’s MIND lab were able to use natural human neural responses to quickly and accurately categorize real-world images. It’s not exactly sentry duty — in the new Army, “GP” stands for Guinea Pig, not Ground Pounder.

In the new research, soldiers were hooked up to electrodes that monitored their brain activity as they viewed a series of images. Each soldier was told to secretly pick one of five categories: boats, pandas, strawberries, butterflies, and chandeliers. After the choice was made, researchers showed them a rapid series of images, about one per second, and recorded the neural response to each. By looking for trends in the recognition response, the researchers could reliably determine which category the soldier had picked.
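
To make that inference step concrete, here is a minimal Python sketch, under the (big) assumption that each displayed image has already been reduced to a single scalar "recognition" score pulled from the neural signal; the scoring itself, the scores below, and the function names are illustrative assumptions, not details from the Army's work.

```python
# Minimal sketch (not the Army's code): infer which category a viewer secretly
# picked by comparing the average neural "recognition" score per category.
# Assumes each shown image has already been reduced to one scalar score --
# that extraction step is the hard part and is glossed over here.
from collections import defaultdict

CATEGORIES = ["boat", "panda", "strawberry", "butterfly", "chandelier"]

def infer_chosen_category(responses):
    """responses: list of (category_shown, neural_score) pairs."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for category, score in responses:
        totals[category] += score
        counts[category] += 1
    # The category whose images consistently drew the strongest response
    # is taken to be the one the viewer had in mind.
    return max(CATEGORIES, key=lambda c: totals[c] / counts[c] if counts[c] else 0.0)

# Toy usage with made-up scores: "panda" images evoke a larger response.
trials = [("boat", 0.1), ("panda", 0.9), ("strawberry", 0.2),
          ("butterfly", 0.15), ("panda", 0.8), ("chandelier", 0.1)]
print(infer_chosen_category(trials))  # -> "panda"
```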

Better still, the MIND team was able to refine the process and speed it up by flashing just pieces of a photo, or "chips," at a rate of five or more per second. If the soldier picked strawberry, then even a fragment of a strawberry picture would cause a spike of neural interest; although the soldier is eventually shown the full image over a series of chips, the whole process works faster when the picture is broken up and displayed more rapidly.
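
Here is a rough illustration of that chip-flagging idea in Python; the per-chip scalar "neural score" and the z-score threshold are assumptions made for the sake of the example, not details from the MIND Lab study.

```python
# Rough illustration (not MIND Lab code): stream chips past the viewer at
# roughly five per second and flag the ones whose neural score spikes well
# above the rest of the stream.
import statistics

def flag_interesting_chips(chip_scores, z_threshold=2.5):
    """chip_scores: list of (chip_id, neural_score) pairs. Returns flagged ids."""
    scores = [score for _, score in chip_scores]
    mean = statistics.mean(scores)
    spread = statistics.pstdev(scores) or 1.0  # avoid divide-by-zero
    return [chip_id for chip_id, score in chip_scores
            if (score - mean) / spread > z_threshold]

# Nine unremarkable chips and one fragment of the chosen category.
stream = [(f"chip_{i:02d}", 0.10) for i in range(9)] + [("chip_09", 0.90)]
print(flag_interesting_chips(stream))  # -> ['chip_09']
```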

Companies like Google have been trying for much the same thing, but with one additional layer of abstraction: they're building software simulations of the brain called artificial neural networks, and these networks can work through complex problems like image analysis far more efficiently than conventional programs. The real brain is far harder to manipulate and tailor than a computer program, after all, and it can only accomplish those few things evolution has happened to unthinkingly grant us.

On the other hand, even Google's venerable cat-finding AI has to learn each kind of object separately; a network trained to find cats won't necessarily have the slightest ability to find, say, construction cranes. But an adult human brain has an enormous catalog of objects it can quickly and accurately pick out even amid copious visual noise: missile silos and truck convoys, for instance.


That’s the real aim of the MIND Lab’s initiative: to develop a way to directly read an analyst’s brain waves in order to identify objects in large images more quickly and accurately, one piece at a time. Take a huge aerial shot of a patch of Pakistan, break it into 100 chips, then flash those chips at a group of analysts in what simply has to be a rather Clockwork Orange-y setup. An analyst can do a yes-no check on each of these 100 chips much more quickly and accurately than he could scan the entire large image for the same object.
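
As a toy illustration of that chipping step, here is a short Python sketch that divides a large image into a 10-by-10 grid of tiles; the grid size and image dimensions are arbitrary examples, not anything specified by the Army.

```python
# Illustration only: how a large aerial image might be cut into the "chips"
# described above. This just computes tile boundaries for a 10 x 10 grid;
# flashing the tiles at analysts and reading their brain waves is the part
# the Army is actually working on.
def chip_boxes(width, height, rows=10, cols=10):
    """Return (left, top, right, bottom) boxes covering the image in row-major order."""
    boxes = []
    for r in range(rows):
        for c in range(cols):
            left = c * width // cols
            top = r * height // rows
            right = (c + 1) * width // cols
            bottom = (r + 1) * height // rows
            boxes.append((left, top, right, bottom))
    return boxes

# A 5000 x 5000 pixel aerial shot becomes 100 chips of roughly 500 x 500 pixels.
tiles = chip_boxes(5000, 5000)
print(len(tiles), tiles[0], tiles[-1])  # 100 (0, 0, 500, 500) (4500, 4500, 5000, 5000)
```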

As with any good military project, this is a very hard-nosed, results-based experiment. Rather than tracing the paths information takes through the brain so those paths can be understood and simulated, this study is developing a way to make maximum use of the abilities we already have. The researchers hope to incorporate eye-tracking technology so they can burrow down within each chip and localize the source of the neural reaction.
