Project Outline
Original Stated Goals:
- Create workable prototype using C# code demonstrating use of Kinect device to ‘read’ surroundings and compute location of at least a single person onscreen.
- Adjust prototype to isolate people and other ‘seen’ objects against different backgrounds (“green screen effect”)
- Request feedback through different play versions
When I initially shifted away from my first plan after the Learn Tech & Reflect, I thought it would take a great deal of effort just to match the seemingly small output I had produced while laboring to get the Kinect working under the OpenKinect libraries and usable via a JavaScript bridge. After all, I had spent many, many hours hunting down resources, trying out different projects, and generally being frustrated at the lack of current documentation on getting the Kinect working on non-Microsoft systems. Writing my Project Plan at the time, I honestly thought the same amount of investment would be needed to match all that initial work.
However, after finally getting my hands on the Kinect for Windows SDK 1.8, I quickly realized that, no, in fact, all the time I had allotted to just getting things up and running would not be needed. The first goal of having a prototype was achieved within an hour of receiving the software development kit and playing around with the example code. The same was true the next day, when I achieved the next goal of a working “green screen” effect prototype. The examples that came with the development kit showed most of what I wanted to accomplish and were easy to copy over and run with minimal effort. As long as I was running on a Windows 7 or greater system, using a Microsoft framework (.NET), and was willing to agree to their licenses and terms, development on the Kinect was substantially easier than anything I had experienced up to that point.
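For anyone curious what achieving that first goal amounted to in code, here is a minimal sketch of the skeleton-tracking approach the SDK examples take to locate a person. It assumes the Microsoft.Kinect assembly that ships with SDK 1.8; the PersonLocator class name is my own illustrative choice, not anything from the samples.

```csharp
using System;
using System.Linq;
using Microsoft.Kinect; // managed assembly from the Kinect for Windows SDK 1.8

class PersonLocator
{
    static void Main()
    {
        // Grab the first connected sensor, as the SDK examples do.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                // Report each tracked person's position in sensor space (meters).
                foreach (Skeleton person in skeletons
                    .Where(k => k.TrackingState == SkeletonTrackingState.Tracked))
                {
                    SkeletonPoint p = person.Position;
                    Console.WriteLine("Person at x={0:F2} y={1:F2} z={2:F2}",
                        p.X, p.Y, p.Z);
                }
            }
        };
        sensor.Start();
        Console.ReadLine(); // keep the console app alive while frames arrive
        sensor.Stop();
    }
}
```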
This led me, about a week later (Friday, 14 November), to test using the Media Park laptop I had planned to use for demonstrations. Coming in and starting up the machine during the afternoon, I was able to download the SDK, install a couple of examples, and do some simple play-testing with Summer Glassie and Zack Hill (who were in the room at the time). While the code was not specifically my own, I was able to answer my question about the portability of code between machines and to prove, if only briefly, that the SDK would work with multiple people and still carry out the goals I had initially set: the code could pick out people, and it could handle the “green screen” effect of isolating them from their backgrounds.
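The “green screen” effect itself rests on the player-index bits that the SDK embeds in each depth pixel when skeleton tracking is enabled. The following is a minimal sketch of that approach under the same Microsoft.Kinect assembly; the GreenScreen class, OnAllFramesReady handler, and personMask array are illustrative names of my own, while the stream, frame, and CoordinateMapper calls are the SDK 1.8 API as I used it.

```csharp
using System;
using Microsoft.Kinect;

class GreenScreen
{
    // Builds an opacity mask from the depth frame's player-index bits, the same
    // idea the SDK's Green Screen sample uses to cut people out of the background.
    static void OnAllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        KinectSensor sensor = (KinectSensor)sender;
        using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
        {
            if (depthFrame == null) return;

            short[] depthPixels = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(depthPixels);

            // For each depth pixel, find the matching pixel in the color image.
            ColorImagePoint[] colorPoints = new ColorImagePoint[depthPixels.Length];
            sensor.CoordinateMapper.MapDepthFrameToColorFrame(
                depthFrame.Format, depthPixels,
                ColorImageFormat.RgbResolution640x480Fps30, colorPoints);

            bool[] personMask = new bool[640 * 480];
            for (int i = 0; i < depthPixels.Length; i++)
            {
                // The low bits of each depth value hold the player index;
                // zero means background, 1-6 identify a tracked person.
                int player = depthPixels[i] & DepthImageFrame.PlayerIndexBitmask;
                if (player == 0) continue;

                ColorImagePoint p = colorPoints[i];
                if (p.X >= 0 && p.X < 640 && p.Y >= 0 && p.Y < 480)
                    personMask[p.Y * 640 + p.X] = true;
            }
            // personMask can now gate the color pixels: keep them where true,
            // substitute the replacement background where false.
        }
    }

    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0]; // assumes one is plugged in
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
        sensor.SkeletonStream.Enable(); // player-index bits require skeleton tracking
        sensor.AllFramesReady += OnAllFramesReady;
        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}
```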
Timeline
The first two scheduled weeks, 2-8 November and 9-15 November, became mostly one week in practice. As I wrote above, the time I had allotted to getting a prototype working turned out to be measured not in days or weeks, as I had predicted, but in hours. Using the example code, I was able to achieve the minimum of what I wanted very quickly. However, despite this success, I was still hampered by two main problems: a failing desktop computer and a lack of consistent access to the Media Park laptop.
As for the first, the failing desktop was where I was primarily developing code at home. On that aging, 32-bit system, I was able to modify the examples and get a few variations up and running. However, because of my schedule and a general dislike of working from home on other projects, I was not around often enough to make meaningful progress beyond some simple tweaking of the examples and changes to project settings. The system itself would also often crash Windows Explorer or introduce such significant delays in common tasks that merely running the examples could prove highly annoying.
Deciding, then, that the Media Park laptop would be a valuable resource for making more progress, I came in to work on the project on Friday, 21 November, but was told the laptop had been picked up for a dissertation defense that morning. This was also, I should note, the Friday before Thanksgiving break, during which I became sick on Friday, 28 November, and remained ill through Monday, 1 December, getting little done over those four days, not only on the project itself but on all my other planned activities.
Therefore, the project had stalled by the end of the planned third week, 16-22 November, with no real progress accomplished between Friday, 14 November, and the presentation on it Tuesday, 2 December.
Victories
- The OpenKinect software works. Installing the freenect library via Homebrew makes it usable on a Mac OS X system. However, despite this ease of installation, its usefulness is curtailed: it is primarily a C/C++-only library, and it is limited to what the Kinect itself can do unless supplemented by additional functionality from projects like OpenCV. It provides just the functionality needed to start up and read the visual and auditory inputs from the Kinect, nothing more.
- The Kinect for Windows SDK 1.8 also works, provided you are using a Windows 7 or greater system, have an updated .NET Framework runtime on your computer, and have either the Express or fully licensed version of Visual Studio to import, compile, and run changes to the example code. All of these, of course, come with their own EULAs and terms of service to follow and obey while using the products and software together.
Concerns
Dammit, Microsoft!
If there was one general theme to this entire endeavor, it would be that one refrain, shouted ad nauseam. Having started in a completely different direction, aiming to build an open-source project on open-source software and libraries, I ended up exactly where I did not want to be: using all Microsoft code and products just to get the damn Kinect doing useful things. Sure, after playing around with the freenect library for a long time, I was eventually able to get it working, but its usefulness (as I noted above) was so limited that I had to abandon it rather than risk spending even more time on library interoperability.
While I do think the Kinect points toward a future of non-touch, gesture-based control of devices, the present is very much mired in the minutiae of different companies competing aggressively to find their niche. As my own small survey of projects using the Kinect has shown, it is still a volatile market, with Apple, Microsoft, and even Sony to some degree trying to sneak gesture-based interfaces in through gaming consoles and extra peripheral controls, potentially priming future prosumers to adopt the technology more broadly.