The Human Interface Device

Not the one you may know; a concept piece.

A message from the author:

“Here’s the deal. We don’t have much longer to live. Not you and I, but humanity. We’re at a critical point: the decisions we make in the next hundred years will decide the fate of our species. Do we explore the galaxy and thrive… or will we poison and murder each other in a quest for control?


There are a few things we need to figure out, a few technological advances to solve our immediate shortcomings:

  • An international agreement to guarantee ‘basic rights’ like speech and healthcare
  • The means to provide emergency care, food, water, shelter, and safety to all
  • Managing our expansion in a way that coexists with the Earth’s natural environment

We need to figure out a way for all of us to be happy without infringing on one another’s rights.

Oddly, I believe the quickest way to get there isn’t what you’d think. The greatest and most explosive recent changes in our course were electricity, then the computer. Now we’re ready for the next step.”

The Beginning of the Human Interface Device (HID)

Background

The interface between a person and a computer hasn’t changed much since the 70s.

We have a screen that blasts light into our eyeballs to show us information, a keyboard to tap it in with our fingers, and a mouse to steer around.

These aren’t native human tools – we learn them slowly and awkwardly over time, through classes and practice. They’re bad for us, too: we hunch over to reach and see, and we damage our wrists.

There are entire lessons on proper posture for using these “peripherals”. The coolest “next best thing” is VR, and that can strain your eyes. We’re blasting bright light directly into our eyeballs instead of figuring out how to make it merely seem like light is being blasted…

The Concept

At this moment, we have the ability to inexpensively read brain activity. From there, one can control a prosthetic limb, send a text, or even aim and shoot in a game. We all know that someday the music in our heads will be audible with the right tools – so what’s stopping us? Why aren’t we having conversations with our loved ones without moving our lips? 3D modeling with our minds? The possibilities are endless… what are we waiting for?

A harness (“a cool EEG hat”) gives you output to a computer, but the next obvious step is input. While a first output-only device would revolutionize the computer industry, an input device would allow us to revolutionize the human form. Seeing, smelling – actual sensation could be achieved with a proper “human interface device” that piggybacks on the human nervous system to “trick” the brain into experiencing it. Not only is that the next big step in the march of technology, it’s the first step to saving our lives.

The idea is simple. Truthfully, Microsoft made it up, although not in the way you’re probably thinking. The original concept is “Embrace, Extend, Extinguish”, and while it was born in the software world, it fits nicely with the idea that the human nervous system might be extended. Imagine connecting to and learning to occupy another body outside of your own. At what point are you safe to fully disconnect? While our bodies might age, a separately maintained instance could go on living forever.

That’s a long way off for now, but a mass-produced controller based on a human-centric interface device would change the way we interact with our machines, bringing our imaginations into digital reality.

Getting Started

Recently I purchased the Uncle Milton “Force Trainer” toy to see what you could get for that amount of money. A headset arrived, with three contact points and Bluetooth connectivity for use with a phone app – specifically, to make a ball go up and down. In truth I could only barely tell whether it was working, but the idea is clearly effective to some extent. If you haven’t seen it already, I suggest you watch Michael Reeves’s “mind-controlled car” video. It’s the same toy applied to a gas pedal.
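
To give a sense of how approachable this is, here’s a minimal sketch of pulling a usable signal out of a toy headset like this one. It assumes the headset exposes a NeuroSky-style ThinkGear serial stream (the chipset behind several of these toys); the port name, baud rate, and the “gas pedal” threshold are illustrative assumptions, not specifics of the Force Trainer.

```python
# Minimal sketch: read "attention" values from a NeuroSky-style
# ThinkGear serial stream. Port name, baud rate, and threshold are
# assumptions; adjust for your headset's actual Bluetooth serial port.
import serial  # pip install pyserial

PORT = "/dev/rfcomm0"  # hypothetical Bluetooth serial port

def read_packets(ser):
    """Yield verified ThinkGear payloads: 0xAA 0xAA, length, payload, checksum."""
    while True:
        if ser.read(1) != b"\xaa" or ser.read(1) != b"\xaa":
            continue  # resync on the 0xAA 0xAA header
        plength = ser.read(1)[0]
        if plength > 169:
            continue  # invalid length, resync
        payload = ser.read(plength)
        checksum = ser.read(1)[0]
        if (~sum(payload)) & 0xFF == checksum:
            yield payload

def attention_values(ser):
    """Extract the 0-100 'attention' value (code 0x04) from each payload."""
    for payload in read_packets(ser):
        i = 0
        while i < len(payload):
            code = payload[i]
            if code >= 0x80:               # multi-byte row: code, length, data
                i += 2 + payload[i + 1]
            else:                          # single-byte row: code, value
                if code == 0x04:
                    yield payload[i + 1]
                i += 2

with serial.Serial(PORT, 57600) as ser:
    for attention in attention_values(ser):
        print("attention:", attention)
        if attention > 60:                 # illustrative threshold
            print("-> gas pedal ON")       # i.e. the mind-controlled car trick
```

That’s really all the “mind-controlled car” amounts to: one noisy scalar, a threshold, and an actuator.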

If you take this concept to the next level – introducing multiple inputs (i.e. engaging different parts of the brain and mapping each one to a function) and training a model on those inputs – you can create a software interface. We may discover enough variation between users that the software will need to support individual training, generating a unique input-mapping model for each user of the device. It may also take some time to learn, just as operating any other interface did the first time. It’ll likely be awkward at first, but over time the users and the researchers can perfect the art of brain-signal interpretation.
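
As a rough sketch of what that per-user training could look like, suppose the headset’s multi-channel signal has already been cut into fixed-length windows and labeled with the action the user was attempting during a calibration session. The band-power features and LDA classifier below are common choices in brain-computer-interface work, used here purely as an illustration; the sampling rate and the recording helper are hypothetical.

```python
# Sketch: per-user calibration of a brain-signal -> action classifier.
# Assumes labeled windows of multi-channel EEG; feature choice (band
# power) and classifier (LDA) are illustrative, not a prescription.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256                              # sampling rate in Hz (assumed)
BANDS = [(4, 8), (8, 13), (13, 30)]   # theta, alpha, beta

def band_power_features(windows):
    """windows: (n_windows, n_channels, n_samples) -> (n_windows, n_features)."""
    feats = []
    for w in windows:
        freqs, psd = welch(w, fs=FS, axis=-1)   # per-channel power spectrum
        row = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
               for lo, hi in BANDS]
        feats.append(np.concatenate(row))
    return np.array(feats)

def calibrate(windows, labels):
    """Train this user's personal mapping from brain signals to actions."""
    X = band_power_features(windows)
    clf = LinearDiscriminantAnalysis()
    print("estimated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)

# Usage: record a short calibration session per user, then predict live.
# windows, labels = record_calibration_session()   # hypothetical helper
# model = calibrate(windows, labels)
# action = model.predict(band_power_features(new_window[None]))[0]
```

The point of the cross-validation score is exactly the individual-training problem above: each user keeps recalibrating until their personal model is accurate enough to be worth wearing.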

After an initial prototype is unveiled, we’ll start to understand the real-world applications in a way that will excite people, and hopefully that interest will result in funding for continued research. With the sum of our efforts, who knows what the future may hold.

None of this is possible without the collective efforts of mankind. Want to help out? Check out the following:

If you want to join the core team, drop into the Discord, and as always, thanks for reading.

For more about us (and some affiliate links!) go to the ‘About’ page.