I must admit, until today I had never used the Kinect. I don’t really play computer games anymore unless it’s the occasional boredom-killing session on Angry Birds, but I had always looked at the Kinect as a bit of a rip-off of the PlayStation EyeToy. I was wrong.
Using some software and drivers that Mussaab had given us, we were able to do motion capture (MOCAP) and scan objects, people and the whole room! It was incredible to think that this little WALL•E-looking thing could do all that. I was impressed. Using the standard RGB camera in tandem with an infrared depth camera, the Kinect is able to work out where a user or object is in space and create point clouds. There are limitations, though: if I stood side-on and the camera couldn't see my right arm, for example, then as far as it was concerned the arm wasn't there. The resolution of the camera isn't brilliant either, so if you are hoping to track your fingers, feet or neck, forget about it. Fortunately, new iterations of the Kinect should improve the resolution, so we will have to wait until then, I guess! For now, one way to increase the accuracy of MOCAP is to use multiple Kinects, which can make for some very impressive tracking!
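The point-cloud bit boils down to simple geometry: each pixel in the depth image gets back-projected into 3D using the camera's focal length and principal point. Here's a minimal sketch of the idea in Python; the calibration values are illustrative placeholders, not real Kinect calibration (every device differs slightly), and `depth_to_point_cloud` is just a name I've made up for this example.

```python
import numpy as np

# Illustrative camera intrinsics (assumed values, not real Kinect calibration):
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point (roughly the image centre)

def depth_to_point_cloud(depth):
    """Back-project an (H, W) depth image in metres to an (N, 3) point cloud."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Pixels with no depth reading come back as 0 — drop them.
    return points[points[:, 2] > 0]

# A flat wall 2 m in front of the sensor produces a planar cloud:
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
```

This also makes the occlusion limitation obvious: a pixel the infrared camera can't see simply has no depth value, so nothing gets back-projected for it, and that part of you just doesn't exist in the cloud.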
Once we had scanned an area or scene, what was really cool was that we could then export the mappings/scans from MeshLab, open them up in 3ds Max and continue working on the models there. Awesome!
There's a huge amount of potential here. Well done, Microsoft, for making something groundbreaking! Looking forward to doing more with this technology.