Saturday 7 February 2009

Why I haven't posted in several months...

...I dunno. I'm drunk, I guess!

You know how you have, like, a really good idea for a thing, but then you're like, man, I'd need a PhD's time and funding to do this idea justice... I don't want to shoot off some half-cocked guff when there's stuff like the Lost Garden around doing really well at explaining these things. It just does a disservice to blogging, you know? I keep having all these thoughts and little theories, and I keep meaning to spout them before Will Wright and all that lot say it first (bloody... The Wedding... I've been talking about all that "mechanics as message" stuff for bloody ages amongst my friends, but then I never get a chance to do anything about it because I'm too shit and slow at making games), but then I get tangled up with how much effort it is for a literary lightweight like me to get it out in a coherent, well-formed, simple way that I just sorta give up.

Friday 22 August 2008

Lee Sheldon on the Emotional Divide

I don't know if this is old news, but I was linked to a Lee Sheldon PDF at work (Lee's site here). He covers, in an easy-to-understand way, how I've felt about storytelling in games for a long time.

To be sure, there are a lot of unanswered questions: how can mainstream development possibly be converted to this almost philosophically polar approach to storytelling in games? We tend to love sticking to fragile, film-aping storylines, just as television stuck to radio-style plays and staged theater before finding its own legs.

We rarely attempt anything like systemic characterization or mission design, because it's so under-explored. And it's under-explored because there are so few good exemplars of the approach. It's a catch-22 which I feel can only be fixed by the indie scene and academia, or by developers trying experiments in their own time*. Far Cry 2 seems to be promising a hearty stab in this direction, too, so I shouldn't entirely discount the mainstream's ability to push this issue. Ahk, what am I saying? Anyone and everyone can contribute to this direction in games. Heck, even Mercenaries 2 sounds like it's philosophically aligned with this attitude (in that they refused, from the start, to make "fragile" tasks which result in "Fission Mailed"). It's just that there's going to be a lot of trial and error before we even begin to settle on something cool, and I hope that won't dissuade people from the idea - from continuing to try this stuff out.

Personally, I feel like this approach (or something like it) is one of the more natural uses of the medium - certainly more natural than gameplay bookended by cutscenes, or trying to manipulate a player's emotions when you could simply react to the emotions they express.

I can certainly feel the shift in outlook amongst developers, but ultimately, many of them may not be in a position to do this approach justice any time soon. There are lots of growing pains to go through before we get there, and a lumbering, risk-averse mainstream to convert before anything big-budget comes out with this approach (with notable exceptions, obviously).

I think we also have to accept that whatever comes out of this approach probably won't feel like whatever ideas the phrase "Interactive Movies" conjures up. It'll be its own thing, just like TV is a different-feeling thing to books, or dance. I think we have to respect that, and not fault games for not being interactive versions of other mediums - let them be what they naturally want to be.

*Not to blow my own trumpet, but K2 will eventually go in this direction, after decent core gameplay is established - assuming I can expend the effort outside of my day job, which feels less and less likely.

Sunday 27 July 2008

Mind Mapping: Succinct Ideas made Incomprehensible

I haven't worked on K2 for a while, mainly because I really wasn't sure where I was going with it. It's hard to get excited about an idea when you can't see where it's heading. As a result, I decided that today's work would be about defining what I wanted from the game, which in turn might bring me to a better understanding of the actual tasks I'm facing.

My friend Leigh Ashton introduced me to the concept of Mind Mapping. Through this process, you essentially unload your thoughts onto the page. By starting at a root and filling outwards, you eventually get a more and more detailed picture than you ever could if you kept it all in your noggin at once. The human mind can only deal with 5-7 concepts at a time, so the mind map becomes a kind of augmented RAM as you channel the ideas onto the page.

It occurs to me that since, a lot of the time, developers don't like to read high-level design documents*, the act of writing the design document is often far more important than the artifact it leaves behind for others to read.

While ideas remain in your mind, they are unreal - indistinct platonic ideals. The gulfs of reality that they must cross are vast, but your brain always seems agile enough to distort reality and make you feel like the idea is a coherent one. The act of writing forces you to think these ideas through - to birth them into something which isn't using reality distortion as a crutch. In other words, writing documentation is just a tool which will hopefully hold up a mirror to your ideas, and show their flaws early on.

Here are the results of my Mind Mapping for K2. I used a program called "FreeMind", which makes it nice and easy to generate this kind of brain-dump.

I doubt it is anything close to coherent upon first viewing, but the process has certainly helped me understand what I want from this game, and has made me super excited to get started.

It's a completely over-ambitious project, but my hope is that by working outward from the core gameplay, toward the lofty goal of Social Mechanics and self-harmonizing persistent universes, I'll be able to tackle the project bite by bite.

I'm inspired by Mount & Blade, whose developers seem to be doing the same thing. Each new release builds on the core mechanics, giving them more and more context and depth.

*Obviously, a well-written technical design is worth its weight in gold. But high-level, blue-sky wishy-washiness? Not as useful to the dev in the trenches.

Thursday 24 July 2008

Far Crying and Far Wanking

I just wanted to link to an excellent, if pretentiously titled, interview with Patrick Redding about Far Cry 2. Actually, it's pretty much impossible to not sound pretentious when talking about Interactive Storytelling, so I'll forgive all involved.

My buddies and I have been talking about this kind of thing for ages, all the way back to the Ludology vs. Narratology wars of early '04. For us, it's really encouraging to see a decent stab at it in the mainstream. We've seen the recurring patterns of this more granular approach to narrative in all sorts of games: Civ, Total War, X-Com - games where every verb and noun is crafted with the intent of creating a possibility space which could also be described as a story cloud (see? I told you this stuff gets pretentious).

Basically, if you build an arbitrary set of mechanics, you end up with an arbitrary possibility space. That's fine - I'm not knocking that approach.

If you craft your mechanics with the intent of them being atomic elements in a bigger story, all of them helping to resonate some central theme, you end up with a possibility space which more readily generates story-like strings of events. When every element is infused with a theme, then any linked path through those elements is intrinsically infused with that theme - and a linked path of elements is a basic (if emergent) form of a "story". There are lots of arguments about the difference between a "story" and a "string of events", but let's, *ahem*, ignore that for now.
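To make that a bit more concrete, here's a throwaway C# toy - everything in it is made up by me for illustration, not lifted from any actual game - where each verb is authored with the theme it's meant to resonate, so whatever chain of them the player strings together carries that theme along with it:

    using System;
    using System.Collections.Generic;

    // Toy sketch: every verb the player can perform is authored with a theme
    // in mind, so any emergent chain of verbs reads as a themed string of
    // events - a crude take on the "story cloud" idea.
    class Verb
    {
        public string Name;
        public string Theme;
    }

    class StoryCloudToy
    {
        static void Main()
        {
            // Made-up verbs, all orbiting one made-up theme.
            List<Verb> verbs = new List<Verb>
            {
                new Verb { Name = "Hoard fuel",     Theme = "scarcity breeds desperation" },
                new Verb { Name = "Betray an ally", Theme = "scarcity breeds desperation" },
                new Verb { Name = "Share rations",  Theme = "scarcity breeds desperation" },
            };

            // The "story" is just whichever verbs the player happens to chain
            // together - but because each atom carries the theme, so does the
            // emergent string of events.
            Random rng = new Random();
            for (int i = 0; i < 5; i++)
            {
                Verb v = verbs[rng.Next(verbs.Count)];
                Console.WriteLine("Event {0}: {1} (theme: {2})", i + 1, v.Name, v.Theme);
            }
        }
    }

(Obviously a real game would pick verbs through simulation and player choice rather than a dice roll; the dice roll just stands in for "whatever the player ends up doing".)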

The crafting of verbs and nouns is really where the author's control lies in the sort of Interactive Storytelling that I'd personally like to see more of in games: a well-defined sandbox where each verb is crafted, but its use is left completely open to the player, giving them the freedom to explore and understand the "theme" or message behind the game. The story isn't forcing the player to listen to it. Instead, the player, simply through interacting and learning the limits of the world, grows to understand the underlying message.

I keep thinking of The Wire (watch it if you haven't) as a great example of this, even though it's not a game. Everyone's story in it is just one emergent thread from the world they're living in. The threads weave together into a complete tapestry that explains the encompassing system of Baltimore. Each character HAS their own freedom, but their desires and free will are so thoroughly wrapped up in "the game" that their story arcs seem fated to express Baltimore-as-a-system.

In the same way, playing any game over and over creates individual threads which build up this picture of what the designer was trying to get across. Not every game has this philosophy in mind, obviously: read Mario 3 through this lens, and the message you end up with may be something like "Jumping is pretty important in life, and touching things makes you shrink, or sometimes grow". That doesn't stop Mario 3 being a wonderful work of art in terms of its kinaesthetics, though (see, there's more than one kind of game "art", and they can all live happily side by side).

Where Far Cry 2 is concerned, I'm looking forward to it, but I'm still slightly worried that the focus on the higher-level aspects (driving the emergent narrative) will pull the developers' focus away from core gameplay, affecting it negatively. Will we see a stunning lack of usability, or rushed central controls? In other words, in chasing this (sorta) new approach to narrative, will it sacrifice the things that make games like Mario 3 great?

Or is there an argument to be made that for this approach to interactive storytelling to work, it's absolutely essential that the core gameplay (the way in which the player actively expresses him/herself and creates the story) is easy and enjoyable to manipulate? After all, the core mechanics are, in fact, the central way in which the narrative is driven... gameplay is not just filler for long, cutsceney bookends - gameplay is the story. Focus on this narrative approach, and you intrinsically focus on the player's moment-to-moment ability to weave narrative, and therefore you must make those tools enjoyable to use in the first place, or the act of weaving the story just won't be... well... fun!

God, I just love this stuff... the game IS the story! There IS no Narratology vs. Ludology debate! It was all a fallacy to begin with! I fuckin' knew it!

Thursday 12 June 2008

It's nut* butter and jelly time! *As in "brain"

I'm not a neuroscientist. You can guess this already, by the way I spell nyorosiyance.

I'm a game designer who is full of misconceptions about how these fancy mind interfaces work. And yet I have the gall to have opinions on them, assuming my misconceptions turn out to be premonitions. How dare I?

There are a couple of different kinds of "mind control" devices being talked about recently, so I'll differentiate. On the one hand, you've got Sony's patented "beam images directly into your brain" technology, which I pray is an April Fool's Day joke, in case Michael Crichton catches a whiff of some new society-upheaving technology again, and then willfully misinterprets it again, and makes lots and lots of money. Again.

It's the simpler brain-wave readers which I'd like to talk about. Have a look:

[Embedded videos of brain-wave reader demos went here.]

These (I believe) monitor and interpret a user's (alpha/beta/motor?) brain waves in clever ways, converting the sketchy, fluctuating frequencies into commands for the computer. The computer acts on the command, often showing its result to the user. A basic interactive feedback loop is established. That's important for later. Remember that bit what I just sed.

Here's the uneducated bit: concentrate. Working on Goo, I got to brush up against sound analysis a little. Tommy got Fourier analysis going, which basically lets you see how loud each frequency band of the sound being played is, in real time. The track is made up of loads of numbers, which represent what position the cone of a speaker wants to be at. Chuck enough alternating numbers at the speakers at high frequency, and you get vibration, and then sound. You know this already. I know you do. I'm just recapping to explain where my likely misinterpretation of the mechanics of brain-readers comes from.

I imagine that, in the same way that I tried and failed to get real-time beat detection working, the waves put out by the brain are equally filled with noise, and are never going to come out as raw, discernible computer commands. They have to be processed, and while a lot of people smarter than I am have done a much better job at it, stuff like that is still nowhere near perfect (in real time, at least). Try loading up the Xbox 360 dashboard, running the audio visualizer, and seeing how many of your tracks make pleasingly synchronous imagery. I love Jeff Minter and all, but I'm guessing it's nowhere near 100%.

Getting beats out of a signal, then, is not easy. What is easy is getting the raw properties of the wave - the volume and the pitch. We also (almost) tried some experiments in Goo where you would shout as loud as you could into a microphone to put more force into an attack. We saved ourselves some burst blood vessels by calling it what it was: a bad fucking idea. It would have worked, though, because volume is easy to grab from a sound signal. So is pitch, as this wonderful array of pitch-controlled controls shows.
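To show what I mean by volume (and, at a push, pitch) being the easy bit, here's a rough C# sketch - not Goo's actual code, just the gist - working on a plain buffer of samples:

    using System;

    // Hedged sketch: given a buffer of PCM samples in [-1, 1], "volume" is
    // just the RMS of the buffer, and a very crude pitch estimate falls out
    // of counting zero crossings. Beats are the genuinely hard part.
    class SignalSketch
    {
        static float RmsVolume(float[] samples)
        {
            double sumOfSquares = 0.0;
            foreach (float s in samples)
                sumOfSquares += s * s;
            return (float)Math.Sqrt(sumOfSquares / samples.Length);
        }

        static float CrudePitchHz(float[] samples, int sampleRate)
        {
            // Count sign changes; a clean tone crosses zero twice per cycle.
            // Falls apart on anything polyphonic or noisy, which is rather
            // the point I'm making about brain waves.
            int crossings = 0;
            for (int i = 1; i < samples.Length; i++)
                if ((samples[i - 1] < 0f) != (samples[i] < 0f))
                    crossings++;
            return (crossings * 0.5f) * sampleRate / samples.Length;
        }

        static void Main()
        {
            // Fake one second of a quiet-ish 440Hz sine, just to show the idea.
            int sampleRate = 44100;
            float[] buffer = new float[sampleRate];
            for (int i = 0; i < buffer.Length; i++)
                buffer[i] = 0.5f * (float)Math.Sin(2.0 * Math.PI * 440.0 * i / sampleRate);

            Console.WriteLine("Volume (RMS): {0:F3}", RmsVolume(buffer));            // ~0.354
            Console.WriteLine("Crude pitch:  {0:F1} Hz", CrudePitchHz(buffer, sampleRate)); // ~440
        }
    }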

So here is the big deal that I think most people are going to point out at first: are these mind-control devices going to have us using our brains to do something as arbitrary and unrelated to the mind-controlled avatar as screaming in falsetto? When I try to calibrate a verb to a brain wave, is it going to pick up on false positives? When I use my technology-aided telekinesis to raise objects aloft, am I powering it with contemplation of the human condition? When I try to lower the object again, am I imagining the taste of a peanut butter sandwich?

In fact, the mind monitors shown in the videos above aren't watching anything even close to as specific as tastes, visions, or even abstract concepts. They're watching whether you're relaxed, concentrating, or activating motor controls in the body (though I don't believe they can pick specific movements out of the noise). So in order to "use" these things, you have to manually choose which verb to map your "relaxation", "concentration" and "general movement" to, and then try really hard to trigger those kinds of alpha, beta, and motor(?) waves.
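Just to spell out what "mapping a wave to a verb" would probably boil down to in code, here's a completely hypothetical sketch (made-up device, made-up numbers, made-up verb): smooth the noisy reading, then use a threshold with a bit of hysteresis so the verb doesn't flicker on and off.

    using System;

    // Hypothetical sketch of mapping a noisy 0..1 "concentration" reading to
    // a game verb. None of this comes from a real headset SDK; the point is
    // the smoothing + hysteresis you'd probably need to fight the noise.
    class MindMappingSketch
    {
        float smoothed = 0f;
        bool verbActive = false;            // e.g. "pyrokinesis" is switched on

        const float SmoothingRate = 0.3f;   // how quickly we trust new readings
        const float OnThreshold   = 0.7f;   // concentrate hard to trigger...
        const float OffThreshold  = 0.4f;   // ...but relax a fair way to release

        // Call once per frame with the latest raw reading from the headset.
        public bool Update(float rawConcentration)
        {
            // Exponential moving average: a cheap low-pass filter for the noise.
            smoothed += (rawConcentration - smoothed) * SmoothingRate;

            // Hysteresis: separate on/off thresholds stop the verb flickering
            // when the signal hovers around a single cutoff.
            if (!verbActive && smoothed > OnThreshold)
                verbActive = true;
            else if (verbActive && smoothed < OffThreshold)
                verbActive = false;

            return verbActive;
        }

        static void Main()
        {
            MindMappingSketch mapper = new MindMappingSketch();
            Random rng = new Random();
            for (int frame = 0; frame < 10; frame++)
            {
                // Pretend the player starts concentrating halfway through.
                float raw = (frame < 5 ? 0.2f : 0.9f) + (float)(rng.NextDouble() - 0.5) * 0.2f;
                Console.WriteLine("Frame {0}: raw={1:F2} verb={2}", frame, raw, mapper.Update(raw));
            }
        }
    }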

Designers know that good "mapping" matters. Natural mapping is when the action you take on a system logically and intuitively matches what happens to/in the system (e.g. "pull a trigger to fire" makes a lot of sense in a video game: the physical action and the in-game metaphor of a gun with a trigger match up nicely). What the mind interfaces currently give us is a way to monitor "being sedate" and "concentrating". It's a bit hard to think of good ways to map those to gameplay, since meditation and concentration are typically side effects of playing many games, rather than requirements for interfacing with them.


    Consider:
    *WARNING* BIG BOSS TWO THOUSAND APPROACH *WARNING*

    Oh crap, it's huge! It has guns in its guns! Bullets are taking up the screen like... Bon Jovi fans around a... CRAP! I don't have TIME for clever metaphors... how do I set off the smart bomb again?

    Oh yeah.

    Meditate.


(Actually that could be sort of cool).

However, the human brain is an amazing thing. It's able to adapt itself to even the most hostile interfaces. We learn to put up with the foibles of crappy interfaces far more often than we complain about them (assuming, as we do, that we are the idiots for using the interface incorrectly, rather than realizing when the designer is being a malevolent dickhead to his users). As we learn a new interface, we build new neural connections to deal with the problem faster next time. We keep doing this until we're well practiced - to the point that even when objective improvements are made to the bed of nails we've come to enjoy, we feel like they're snatching our hard-earned expertise away.

In the same way, I imagine that no matter what thoughts the verbs in mind-controlled games are calibrated to, our brains will be able to recalibrate themselves to make that neural connection a well-worn mental pathway - one we find easier to access, and thus easier to control with. Eventually, it will become almost common sense to our brains that "concentrating" is closely equivalent to "pyrokinesis".

In other words, we, as dynamically adjusting users, will have to do the grunt work of adapting to a new user interface. But then, hasn't that always been the way?

Wednesday 11 June 2008

Don't Vote For K2 if you don't like it!

There's less chance than a snowball in hell wrapped in petrol flames that I'm winning this TIGSource thing, so I'm not going to beg for votes. However, I do insist that you give a few entries a try, and vote on them.

As far as K2's progress goes... well... I've been having to build my flexi-time back up at work recently, so I've been a bit too tired in the evenings to continue. This is because I had to go to the dentist for a filling on my birthday last week. Plus, I deserve a bit of downtime after pulling a few late-nighters, right?

Last weekend, I got a stub program running OpenGL using the Tao framework, but I still have a lot of work ahead porting over what I already have (I need replacements for XNA's pretty fantastic vector math libraries). It might even be worth starting from scratch and just copying the choicer pieces of code over from the prototype... after all, prototypes are there to be learned from, and thrown away.
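To give an idea of the scale of the vector math problem, this is roughly the kind of bare-bones stand-in I'd have to write - a sketch of the sort of thing needed, not actual K2 code:

    using System;

    // Bare-bones stand-in for XNA's Vector2 - only the operations I actually
    // lean on day to day, to show how unscary the rewrite is.
    struct Vector2
    {
        public float X, Y;

        public Vector2(float x, float y) { X = x; Y = y; }

        public static Vector2 operator +(Vector2 a, Vector2 b) { return new Vector2(a.X + b.X, a.Y + b.Y); }
        public static Vector2 operator -(Vector2 a, Vector2 b) { return new Vector2(a.X - b.X, a.Y - b.Y); }
        public static Vector2 operator *(Vector2 a, float s)   { return new Vector2(a.X * s, a.Y * s); }

        public float Length() { return (float)Math.Sqrt(X * X + Y * Y); }

        public static float Dot(Vector2 a, Vector2 b) { return a.X * b.X + a.Y * b.Y; }

        public Vector2 Normalized()
        {
            float len = Length();
            // Avoid dividing by zero for the degenerate vector.
            return len > 0f ? new Vector2(X / len, Y / len) : new Vector2(0f, 0f);
        }
    }

    class VectorDemo
    {
        static void Main()
        {
            Vector2 velocity = new Vector2(3f, 4f);
            Console.WriteLine(velocity.Length());                          // 5
            Console.WriteLine(Vector2.Dot(velocity, new Vector2(1f, 0f))); // 3
        }
    }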

I've set up a TIGSource thread for it. First post is just a look back in time at the original K, though I'll probably be returning to a similar aesthetic (the prototype is just a multicolour mess at the moment). Yuss. I think that this K could be special.

I've also been trying to help JP with his homebrew project, Purity. We had about an hour-long knock-around online, but the distance between us, and my wifi connection, hardly gave me the best latency to appreciate it properly. It's going in the right direction, and a lot of the core gameplay is already in. The biggest issue to fix for this kind of "naked" abstract game is clear feedback. When there's no naturally limiting real-world metaphor to work within, you're completely free to express mechanics in any way you like, and thus your approach to feedback simply becomes "What's the most informative and intuitive way to get this idea across?" rather than "What's believable?". Gots to watch out for information overload, though.

Oh, and my new company interviewed me for their website. I always get self-conscious about photos, and this one is not exactly catching my MySpace angles.

I saw American McGee putting forward his wishes for a mind interface. I think I have a rant brewing about that. Nothing nasty, just some issues I think we'll probably see with the early attempts.

Saturday 7 June 2008

K2 Prototype Finished

It's over. In all the confusion, I forgot to link anything. Here's the thread to get it from, again.


K2 Prototype For TIGSource Procedural Generation Compo from Aubrey Hesselgren on Vimeo.

So, this is less than 22 days' work in my spare time, most of it without the benefit of the internet. I learned a fair amount (including: try even harder not to reinvent the wheel). The main thing I learned is that most people cannot get XNA to run.

My first move for continuing this project is to move it over to something a little more compatible. I've started using SDL.NET and the Tao Framework. This doesn't give me platform freedom (since it's still based on .NET), but I'll at least be able to let more people play it this way. I mean, even if this isn't a popularity contest, it's good to let more people play so that you have a better chance of getting decent feedback.

Moving over to OpenGL isn't going to be an insurmountable task (I'm familiar with it, if rusty - it was used for the original K), but it's going to be a fair amount of work rebuilding all the crutches that XNA pre-built for me... nice, clean, clear libraries for maths, vectors, and importing fonts... a great sound tool in XACT (which allowed me to get all my sounds in - including random pitch offsets, gameplay-controlled pitch shifting, and 3D-positioned sound - in less than four hours from start to finish).
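(The random pitch offset bit, at least, is mostly just one line of maths once I'm rolling my own sound: an offset in semitones maps to a playback-rate multiplier of 2^(semitones/12). A rough sketch, with made-up ranges, feeding whatever sound API ends up underneath:)

    using System;

    // Hedged sketch of the "random pitch offset" trick I'll have to rebuild
    // without XACT: pick a small random offset in semitones and convert it
    // into a playback-rate multiplier for the sound you're about to play.
    class PitchSketch
    {
        static readonly Random Rng = new Random();

        // +/- maxSemitones of random variation; 2^(n/12) converts semitones
        // into a frequency (playback rate) ratio.
        static float RandomPitchRate(float maxSemitones)
        {
            double semitones = (Rng.NextDouble() * 2.0 - 1.0) * maxSemitones;
            return (float)Math.Pow(2.0, semitones / 12.0);
        }

        static void Main()
        {
            for (int i = 0; i < 5; i++)
                Console.WriteLine("Playback rate: {0:F3}", RandomPitchRate(1.0f)); // ~0.944 to ~1.059
        }
    }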

I'll miss XNA. I'm sorry to leave it behind. It's just that while the installation process for the end-user remains such a clusterfuck, it's not quite worth the pain of realizing that you make up probably 20% of your own audience.