Project Natal was going to be awesome. It was going to be more than awesome…it was going to leave us just one or two hardware innovations from Minority Report. Then the Kinect came out, and it was just another Wii:
We wanted to lightsaber duel in a fully interactive digital environment — perhaps a bar fight, so we could pick up in-game chairs and bash virtual Chewbaccas to death with them — and instead we got to molest virtual tigers and go bowling. Again.
The scenario shouldn’t have been a surprise: This is what nerds do. We get way too excited about the potential of something, then, when faced with the disappointing reality, we howl in impotent rage and set out to destroy it. We put these things up on a pedestal, then immediately stand at the bottom of that pedestal with an ax, just waiting for the moment we get to chop it down. And the Kinect definitely deserved it.
We all know that new technology – game consoles especially, it seems – doesn’t launch at its full potential. It normally takes a couple of years to get software developers creative enough to see everything a piece of hardware can do, and a couple more years for them to actually do it. But the Kinect is moving much faster, and some of the homebrew apps are signaling a change – not just in gaming, but in computing in general:
In the right hands, the Kinect actually does all it promised and more. You just have to head out to the fringes to see it:
For starters, it’s effectively changing the future of graphical user interfaces. The medical field is already using the Kinect to change how technicians interact with radiological scans. Instead of awkwardly manipulating a 3D image with 2D tools like a mouse and keyboard, a Kinect-driven interface uses voice recognition, body position and hand gestures to attain an entirely new level of precise, intuitive control.
And all without any sort of physical controller — hell, even Minority Report had to use gloves to accomplish the same thing.
The current Kinect games mostly recognize only a few predetermined gestures and broad, sweeping movements, but that’s a limitation of the software, not the hardware. For example, one Japanese gamer built a full-body 1:1 motion-recognition mod: every single movement he makes, his avatar makes in kind. Think of the other potential uses for this: with some collision detection, it could easily bring about the aforementioned lightsaber fantasy in a fully interactive digital environment.
Kind of like this: a fully rendered (if glitchy and unintentionally hilarious) environment with two-way interaction. He lifts, moves and repositions digital objects inside the space, and the space, in turn, renders the real objects he places in it — his chair, for example, is present in both reality and the game. And that’s just what the Swedish version of Kip from Napoleon Dynamite here can do; throw some real funding and a professional development team behind it, and you’ve got the closest thing we’ve ever had to true virtual reality.
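For the curious, the core of that 1:1 skeletal mapping is conceptually simple: read tracked joint positions every frame and copy them straight onto the avatar’s joints. Here’s a minimal Python sketch — the joint names, the hand-typed “tracker” readings, and the smoothing factor are all illustrative, not taken from any actual mod:

```python
# Illustrative sketch of 1:1 skeletal motion mapping.
# A real mod would read joint positions from the Kinect each frame;
# here we feed in fake readings by hand. Light exponential smoothing
# damps sensor jitter without adding much lag.

ALPHA = 0.7  # smoothing factor: higher = snappier, lower = smoother


def smooth(prev, new, alpha=ALPHA):
    """Blend the previous joint position toward the new reading."""
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))


class Avatar:
    def __init__(self, joint_names):
        # Start every joint at the origin.
        self.joints = {name: (0.0, 0.0, 0.0) for name in joint_names}

    def update(self, tracked):
        # 1:1 mapping: each tracked joint drives the same avatar joint.
        for name, pos in tracked.items():
            self.joints[name] = smooth(self.joints[name], pos)


avatar = Avatar(["head", "hand_left", "hand_right"])
avatar.update({"head": (0.0, 1.7, 0.0), "hand_left": (-0.4, 1.2, 0.3)})
avatar.update({"head": (0.0, 1.7, 0.1)})
```

Collision detection — the missing piece for the lightsaber scenario — would then just be a per-frame check of those joint positions against the positions of in-game objects.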
Read more: http://www.cracked.com/article_18950_9-major-stories-everyone-got-wrong-this-year_p2.html