The brain-computer interface comes of age?

Two entrepreneurs on the forefront of emerging tech describe their progress on developing a brain-computer interface for commercial use.

Consumers interact with their digital devices through touch and, increasingly, through speech. But what if developers could remove that barrier and directly connect our thoughts to the device itself? It sounds more sci-fi than reality, but two entrepreneurs pursuing the technology believe brain-computer interfaces will soon be on the market.

Ramses Alcaide, CEO and co-founder of Neurable Inc. in Cambridge, Mass., and Mary Lou Jepsen, founder of Openwater in San Francisco, Calif., are each developing a non-invasive brain-computer interface. They're using different technologies to get there, but the sentiment they expressed is the same: Connecting to the brain directly is only a matter of time.

"Every major human interaction and computational technology has needed an evolution in interaction. When it came to the computer, we had graphical user interfaces and the mouse," Alcaide said at the recent EmTech conference in Cambridge, Mass. "When we went to smartphones, we went to capacitive touchscreens. And now that we're entering augmented reality, we need to start thinking about more natural ways of interacting such as with your hands, eyes and even your brain."

Brain-computer interface for gamers

User experience for virtual and augmented reality technologies on the market now is still a work in progress. The devices rely on hand gestures to manipulate screens or gaming sequences, but user intention can be misinterpreted, making for a clunky experience.

Neurable's "mind-controlled VR" system does away with hand gestures by tapping into the activity of the brain. The headset is composed of seven dry electrodes that collect electroencephalogram, or EEG, data, which measures the electrical activity produced by neurons. Specifically, Neurable tracks an EEG signal called the event-related potential, a characteristic fluctuation in brain activity that occurs in response to a stimulus.

Using the event-related potential as the basis for a brain-computer interface is not new. "It's been around for about 40 years," Alcaide said. Indeed, Neurable's advantage isn't secret data or better hardware. Instead, the company is using proprietary analysis techniques and machine learning algorithms to determine user intent faster and more precisely, which has long been a hurdle for brain-computer interfaces like these. Alcaide illustrated the problem with a demo of a user trying to type the letter "i" with his mind using a conventional system; it took him about 20 seconds.
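The core decision such a system makes can be sketched simply. This toy example is not Neurable's proprietary pipeline; it stands in a learned classifier with plain template matching. Each candidate item on screen is flashed, and the item whose post-flash epoch best matches an assumed ERP template is chosen as the user's intended target:

```python
import numpy as np

def pick_target(epochs: np.ndarray, template: np.ndarray) -> int:
    """Return the index of the candidate whose epoch best matches the ERP template."""
    scores = epochs @ template          # one matching score per candidate item
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
# Hypothetical ERP template: a deflection peaking at sample 75 of a 200-sample epoch.
template = np.exp(-((np.arange(200) - 75) ** 2) / 200.0)

# Four candidate items; only item 2 (the attended one) evokes the ERP.
epochs = rng.normal(0.0, 0.5, (4, 200))
epochs[2] += template

print(pick_target(epochs, template))  # selects item 2
```

A production system would replace the fixed template with per-user machine-learned scoring, which is where the speedup from 20 seconds to one second per selection would have to come from.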

"We're able to do it in about one second," he said. "And that's really the big difference in our technology and others that are out there."

In the future, Alcaide wants to reduce the number of electrodes needed to measure EEG data so that the technology can be integrated into everyday experiences such as ordering a car just by wanting it. The hardware for doing so exists: Augmented reality glasses and EEG earpieces are already on the market. The sticking point is the software.

"When you start talking about adding [a brain-computer interface] to glasses or wearables, that's really when you need to further improve our machine learning pipeline to support those kinds of issues," he said.

Telepathic brain-computer interface

Jepsen, whose product is still in the early stages of development, is attempting to replicate the functionality of an MRI machine in a consumer wearable device such as a ski hat. Unlike MRI machines, which use radio waves and magnetic fields, the Openwater platform relies on near-infrared light, custom camera chips and advances in liquid crystal displays to track blood and oxygen flow through the body.

Recent tests found that the technology could produce images that are one billion times higher in resolution than MRI machines. "We can focus light through the skull and inches of the brain to a micron -- to the size of a neuron," said Jepsen, a former executive at Facebook, Google X and Intel and the founder or co-founder of four startups, including One Laptop per Child. "Which means, for the first time, non-invasively we can see the activity of neurons."

The applications of the technology could range from finding tumors to drug development and even to telepathy, which Jepsen described as "the really big moonshot."

Yes, telepathy. Jepsen said she was inspired by the work of Jack Gallant, a professor of cognitive neuroscience at the University of California, Berkeley, who "threw grad students into MRI machines for hundreds of hours and made them watch YouTube videos," she said. Gallant recorded how grad students' brains responded to specific images. The data was used to build a computational model that could then reconstruct new video images students watched based on brain activity alone.

Gallant's results, while remarkable, are still rudimentary. Jepsen is hoping her technology can amplify the findings, generating better resolution and more data with a ski cap laced with liquid crystal displays and camera chips. Beyond its potential impact on physical and mental health research, the Openwater platform used as a brain-computer interface could help people with neurological disorders and stroke survivors communicate, serve as a legal tool to determine guilt, or act as a personal creativity tool for documenting ideas.

The Openwater platform relies on "the tools of our time," Jepsen said, including big data, artificial intelligence software and the manufacturing infrastructure of Asia that's already making liquid crystal displays.

"This is coming -- single-digit number of years, I believe," she said.
