Thursday 12 June 2008

It's nut* butter and jelly time! *As in "brain"

I'm not a neuroscientist. You can guess this already, by the way I spell nyorosiyance.

I'm a game designer who is full of misconceptions about how these fancy mind interfaces work. And yet I have the gall to have opinions on them, assuming my misconceptions turn out to be premonitions. How dare I?

There are a couple of different kinds of "mind control" devices being talked about recently, so I'll differentiate. On the one hand, you've got Sony's patented "beam images directly into your brain" technology, which I pray is an April Fool's joke, in case Michael Crichton smells a whiff of some new society-upheaving technology again, and then willfully misinterprets it again, and makes lots and lots of money. Again.

It's the simpler brain-wave readers which I'd like to talk about. Have a look:

[embedded videos: brain-wave headset demos]

These (I believe) monitor and interpret a user's (alpha/beta/motor?) brain waves in clever ways, converting the sketchy, fluctuating frequencies into commands for the computer. The computer acts on the command, often showing its result to the user. A basic interactive feedback loop is established. That's important for later. Remember that bit what I just sed.
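
If you squint, that loop is just "read, interpret, act, show, repeat". Here's a rough Python sketch of the shape of it - the function names and the band readings are my stand-ins, not any real headset's SDK:

    import random
    import time

    def read_headset_sample():
        # Hypothetical stand-in for whatever the headset actually reports:
        # a handful of fuzzy, fluctuating band "power" readings.
        return {"alpha": random.random(), "beta": random.random(), "motor": random.random()}

    def interpret(sample):
        # Turn the fuzzy numbers into a single game command, or nothing at all.
        if sample["beta"] > 0.8:
            return "concentration verb"
        if sample["alpha"] > 0.8:
            return "relaxation verb"
        return None

    def show_result(command):
        # Feed the outcome back to the player so they can adjust what they're doing.
        print("game reacts to:", command)

    # The basic interactive feedback loop.
    for _ in range(100):              # a couple of seconds' worth of updates
        command = interpret(read_headset_sample())
        if command:
            show_result(command)
        time.sleep(0.02)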

Here's the uneducated bit: concentrate. Working on Goo, I got to brush up against sound analysis a little. Tommy got Fourier analysis going, which basically lets you see how loud each frequency in the sound being played is, in real time. The track is made up of loads of numbers, which represent the position the cone on a speaker wants to be at. Chuck enough alternating numbers at the speakers at high frequency and you get vibration, and then sound. You know this already. I know you do. I'm just recapping to explain where my likely misinterpretation of the mechanics of brain-readers comes from.
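
To be concrete about how "loads of numbers" becomes "how loud, in real time", here's a rough numpy sketch; the buffer is a made-up sine wave rather than anything from a real track:

    import numpy as np

    sample_rate = 44100
    t = np.arange(1024) / sample_rate              # one short buffer's worth of time
    samples = 0.5 * np.sin(2 * np.pi * 440 * t)    # the "loads of numbers": cone positions over time

    # The Fourier transform turns "where the cone is over time" into
    # "how loud each frequency is" for this buffer.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    print("overall level (RMS):", np.sqrt(np.mean(samples ** 2)))
    print("loudest frequency: %.0f Hz" % freqs[np.argmax(spectrum)])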

I imagine that, in the same way that I tried and failed to get real-time beat detection working, the waves put out by the brain are equally filled with noise, and are never going to come out as raw, discernible computer commands. They have to be processed, and while a lot of people smarter than I am have done a much better job of it, stuff like that is still nowhere near perfect (in real time, at least). Try loading up the Xbox 360 dashboard, running the audio visualizer, and see how many of your tracks make pleasingly synchronous imagery. I love Jeff Minter and all, but I'm guessing that it's certainly not 100%.
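
For what it's worth, the naive beat detection I was fumbling with goes roughly like this: compare each chunk's energy against the recent average, and call a big enough spike a beat. A sketch of that idea, not Goo's actual code - and you can imagine how gracefully it handles anything noisy or subtle:

    import numpy as np

    def naive_beat_detect(samples, chunk=1024, history=43, spike=1.4):
        # Flag chunks whose energy jumps well above the recent average.
        # Good enough for thumping dance tracks; hopeless on anything delicate.
        energies = []
        beat_positions = []
        for i in range(0, len(samples) - chunk, chunk):
            energy = float(np.sum(samples[i:i + chunk] ** 2))
            recent = energies[-history:]
            if recent and energy > spike * (sum(recent) / len(recent)):
                beat_positions.append(i)   # sample offset where a beat probably starts
            energies.append(energy)
        return beat_positions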

Getting beats out of a signal, then, is not easy. What is easy is the raw properties of the wave - the volume, and the pitch. We also (almost) tried some experiments in Goo where you would shout as loud as you could into a microphone to get more force into an attack. We saved ourselves some burst blood vessels by calling it what it was: a bad fucking idea. It would have worked, though, because volume is easy to grab from a sound signal. So is pitch, as this wonderful array of pitch-controlled controls shows.
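
Just to show how cheap those two properties are to grab from a buffer of microphone samples - the scaling and names here are invented, since the shouting experiment never actually happened:

    import numpy as np

    def attack_force(mic_samples, max_force=100.0):
        # RMS volume of the microphone buffer, mapped straight onto attack force.
        rms = np.sqrt(np.mean(np.square(mic_samples)))
        return min(max_force, (rms / 0.5) * max_force)   # 0.5 RMS or louder = full force

    def rough_pitch(mic_samples, sample_rate=44100):
        # The crudest possible pitch estimate: the loudest bin in the spectrum.
        spectrum = np.abs(np.fft.rfft(mic_samples))
        return np.fft.rfftfreq(len(mic_samples), d=1.0 / sample_rate)[np.argmax(spectrum)]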

So here is the big deal that I think most people are going to point out at first: are these mind-control devices going to have us using our brains to do something as arbitrary, and as unrelated to the mind-controlled avatar, as screaming in falsetto? When I try to calibrate a verb to a brain wave, is it going to pick up on false positives? When I use my technology-aided telekinesis to raise objects aloft, am I powering it with contemplation over the human condition? When I try to lower the object again, am I imagining the taste of a peanut butter sandwich?

In fact, the mind monitors shown in the movies above aren't watching anything anywhere near as specific as tastes, visions, or even abstract concepts. They're watching whether you're relaxed, concentrating, or activating motor controls in the body (though I don't believe they can make specific movements out through the noise). So in order to "use" these things, you have to manually choose which verb to map your "relaxation", "concentration" and "general movement" to, and then try really hard to trigger those kinds of alpha, beta, and motor(?) waves.
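
My guess is that the "choose a verb, then try really hard" step shakes out as a calibration pass plus some thresholds. Something like the sketch below - which is my invention, not any real headset's API:

    # Hypothetical mapping from fuzzy brain-wave bands to game verbs.
    # "alpha" ~ relaxed, "beta" ~ concentrating, "motor" ~ general movement.
    VERB_MAP = {
        "alpha": "lower the object",
        "beta": "raise the object",
        "motor": "dodge",
    }

    def calibrate(resting_samples):
        # Record the player's resting band powers so we know what "trying hard" looks like.
        bands = resting_samples[0].keys()
        return {b: sum(s[b] for s in resting_samples) / len(resting_samples) for b in bands}

    def pick_verb(sample, baseline, margin=1.5):
        # Fire the verb whose band is furthest above its calibrated resting level.
        best_verb, best_ratio = None, margin
        for band, verb in VERB_MAP.items():
            ratio = sample[band] / max(baseline[band], 1e-6)
            if ratio > best_ratio:
                best_verb, best_ratio = verb, ratio
        return best_verb   # None means "nothing convincing enough - do nothing"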

Designers know that good "mapping" is ideal. Natural mapping is when the action you take on a system logically and intuitively matches what happens to/in the system (e.g. "pull a trigger to fire" makes a lot of sense in a video game: the physical action and the in-game metaphor of a gun with a trigger match up nicely). What the mind interfaces currently give us is a way to monitor "being sedate" and "concentrating". It's a bit hard to think of good ways to map that to gameplay, since meditation and concentration are typically side effects of playing many games, rather than requirements for interfacing with them.


    Consider:
    *WARNING* BIG BOSS TWO THOUSAND APPROACH *WARNING*

    Oh crap it's huge! It has guns in its guns! Bullets are taking up the screen like... Bon Jovi fans around a... CRAP! I don't have TIME for clever metaphors... how do I set off the smart bomb again?

    Oh yeah.

    Meditate.


(Actually that could be sort of cool).

However, the human brain is an amazing thing. It's able to adapt itself to even the most hostile interfaces. We learn to put up with the foibles of crappy interfaces far more often than we complain about them (assuming, as we do, that we are the idiots for using the interface incorrectly, rather than realizing when the designer is being a malevolent dick-head to his users). As we learn a new interface, we build new neural connections to be able to deal with the problem faster next time. We keep doing this until we're well practiced, to the point that even when objective improvements are made to the bed of nails we've come to enjoy, we feel like they're snatching our hard-earned expertise away.

In the same way, I imagine that no matter what thoughts the verbs in mind-controlled games are calibrated to, our brain will be able to recalibrate itself to make that neural connection a much more frequently used mental pathway, which we find easier to access, and thus find control easier. Eventually, it will become almost common sense to our brains that "concentrating" is closely equivalent to "pyrokinesis".

In other words, we, as dynamically adjusting users, will have to do the grunt work of adapting to a new user interface. But then, hasn't that always been the way?

1 comment:

blueso said...

Eventually, it will become almost common sense to our brains that "concentrating" is closely equivalent to "pyrokinesis".

Or that instead of concentrating on work we think about setting things on fire.