Introduction
This project seeks to expose participants to a central paradox: realizing the body as a primary input device for entertainment software makes it an equally disruptive and connective force between the user and the system. To do so, it foregrounds the pattern and randomness continuum presented by N. Katherine Hayles in How We Became Posthuman. Using a Kinect device, software will be developed to track a user’s movement through space and compare it with computed coordinates within a game-like interface. By using a ludic environment, the goal is to attach a scoring mechanism to a player’s physical action, directly linking bodily movement (materiality) with a representational avatar (game information).
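At its simplest, that scoring mechanism can be read as coordinate collision detection: award a point when the body-derived coordinate overlaps a computed target. A minimal C++ sketch of the idea (the names, such as isHit, and the coordinates are all hypothetical stand-ins, not the project’s actual code):

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Hypothetical check: a "hit" when the player's tracked point falls
// within `radius` of the computed target coordinate on screen.
bool isHit(Point player, Point target, double radius) {
  double dx = player.x - target.x, dy = player.y - target.y;
  return std::sqrt(dx * dx + dy * dy) <= radius;
}

int main() {
  int score = 0;
  Point target  = { 320, 240 };  // computed game coordinate
  Point tracked = { 318, 243 };  // coordinate derived from the Kinect
  if (isHit(tracked, target, 10.0)) score += 1;
  std::printf("score: %d\n", score);
  return 0;
}
```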
The Kinect’s multiple-camera system also allows a greater degree of resolution. Instead of merely viewing the user through one camera, the additional input of infrared ‘vision’ helps in computing motion across a field by comparing “shadows” (overlapping objects) with “solids” (non-overlapping objects closer to the point of view), allowing a greater computational probability of noting both small, quick motions and larger, slower movements.
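A hedged sketch of that comparison, assuming millimeter depth values in a 640x480 frame; the near/far thresholds and function name are illustrative, not taken from the project:

```cpp
#include <cstddef>
#include <cstdint>

const std::size_t WIDTH = 640, HEIGHT = 480;      // Kinect depth resolution
const std::uint16_t NEAR_MM = 800, FAR_MM = 2000; // assumed thresholds

// Frame-to-frame difference: count pixels that moved between the "solid"
// (near) band and the "shadow" (far/occluded) band, a crude proxy for
// motion across the field.
int countTransitions(const std::uint16_t *prev, const std::uint16_t *curr) {
  int changed = 0;
  for (std::size_t i = 0; i < WIDTH * HEIGHT; ++i) {
    bool wasSolid = prev[i] >= NEAR_MM && prev[i] <= FAR_MM;
    bool isSolid  = curr[i] >= NEAR_MM && curr[i] <= FAR_MM;
    if (wasSolid != isSolid) ++changed;
  }
  return changed;
}
```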
Since Kinect devices are already in use within the wider video game entertainment industry, this also ties into the discourse around video game controllers and the move away, as seen through the rise of motion controllers, from pressing buttons to waving arms. This trend, however, will be backgrounded as part of a system that provokes users into thinking about their bodies within the feedback loop of playing a game. By both becoming the input (absence), and thus erasing the body from the controlling cycle, and disrupting that inculcation process (presence), the ludic system will show users themselves and then interrupt the associational relationship between a user, their body, and the data ‘body’ as seen by the camera and rendered by the software.
Technologies
Development centers on compiled C++ code modules designed for use with the Node.js JavaScript framework. These modules will make direct API calls to the open source Freenect (libfreenect) library used to interact with the Microsoft Kinect device, compiled for the x64 architecture on Mac OS X. Because the .NET framework depends on Windows systems (and carries software license dependencies), this project will be aimed solely at the same Unix-based systems that the Freenect library itself supports.
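To make the shape of such a module concrete, here is a minimal sketch, assuming the Node 0.10-era V8 addon API; the module name, the exported countDevices function, and the header path are illustrative, though freenect_init and freenect_num_devices are real libfreenect calls:

```cpp
// kinect_bridge.cc -- a hypothetical, minimal addon sketch, not the
// project's actual module.
#include <node.h>
#include <v8.h>
#include "libfreenect.h"   // header location depends on the local install

using namespace v8;

static freenect_context *f_ctx = NULL;

// Exported to JavaScript as countDevices(): initialize libfreenect once
// and report how many Kinect devices are attached.
Handle<Value> CountDevices(const Arguments& args) {
  HandleScope scope;
  if (f_ctx == NULL && freenect_init(&f_ctx, NULL) < 0) {
    return ThrowException(Exception::Error(String::New("freenect_init failed")));
  }
  // A typed C int crosses the bridge as a loosely typed JS number.
  return scope.Close(Number::New(freenect_num_devices(f_ctx)));
}

void Init(Handle<Object> exports) {
  exports->Set(String::NewSymbol("countDevices"),
               FunctionTemplate::New(CountDevices)->GetFunction());
}

NODE_MODULE(kinect_bridge, Init)
```

From the JavaScript side, the call would then be as loose as require('./build/Release/kinect_bridge').countDevices().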
The use of JavaScript rests on the decision to design around an asynchronous framework and on the ease JavaScript affords me as a programmer. (It is a language I already know, in other words.) It was also picked with the hope of eventual multi-user, server-based environments for future incarnations of the project.
This project, with its inclusion of compiled modules, also presents the challenge of writing code in one language (C++) and linking it into a framework that executes another (JavaScript). Understanding this bridge, and how other Node.js modules operate through a similar mechanism, presents the interesting challenge of writing a library with two different sides: one in the compiled world and the other in the interpreted one.
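The type-translation half of that bridge might look like the following sketch (same assumed Node 0.10-era V8 API as above; Point3, lastPoint, and the sample values are hypothetical): strongly typed C++ data crosses over as a plain, loosely typed JavaScript object.

```cpp
// bridge_demo.cc -- hypothetical sketch of typed-to-loose translation.
#include <node.h>
#include <v8.h>

using namespace v8;

struct Point3 { double x, y, z; };   // hypothetical tracked coordinate

Handle<Value> LastPoint(const Arguments& args) {
  HandleScope scope;
  Point3 p = { 0.5, 1.2, 2.0 };      // stand-in for live tracking data

  // Each strongly typed C++ field becomes a dynamically typed JS property.
  Local<Object> obj = Object::New();
  obj->Set(String::NewSymbol("x"), Number::New(p.x));
  obj->Set(String::NewSymbol("y"), Number::New(p.y));
  obj->Set(String::NewSymbol("z"), Number::New(p.z));
  return scope.Close(obj);
}

void Init(Handle<Object> exports) {
  exports->Set(String::NewSymbol("lastPoint"),
               FunctionTemplate::New(LastPoint)->GetFunction());
}

NODE_MODULE(bridge_demo, Init)
```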
Audience
While part of a classroom project, it will be aimed, in part, at students in the class: that is, at users who (a) have some familiarity with the normalized gaming practice of using a controller and the material contexts therein, and (b) may not have used a Kinect as input in the past. Since this is also created within the larger academic setting, the secondary audience will be those users who are unfamiliar with ‘computer vision’ input devices like the Kinect and with how they work in conjunction with ludic environments. Specific to this group are users who may also be interested in exploring algorithmic interpretation of kinetic memory, as I plan to link how we move and learn through a space together.
Considerations
- Learning the requirements of writing a Node.js module
  - Figuring out the function execution loop framework and return-type translation (from typed values to JavaScript’s ‘loose’ types)
  - Linking into the dynamic Freenect library at runtime
  - Compilation requirements for Mac OS X environments (a sketch of a possible binding.gyp follows this list)
- Time allotment and scheduling issues
  - Reaching basic functionality should be possible rather quickly. However, advanced motion tracking will require more time and effort.
  - Advanced motion tracking (beyond coordinate collision detection) might require additional libraries and perhaps even more modules to be written.
  - Balancing the project with other responsibilities may result in a finished product of limited scope.
- Demonstration problems
  - Because the project targets a specific architecture (64-bit Mac OS X), demonstrations will require a computer running that operating system.
  - Use of the Kinect device requires space (three to six feet) for positioning the camera, which must remain attached to the aforementioned Mac. This limits demonstration to a “live” event at which people are present to use it.
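For the linking and compilation items above, a hypothetical binding.gyp (the node-gyp build file) might look like this, assuming libfreenect is installed under /usr/local (e.g. via Homebrew); the target name, source file, and paths are illustrative:

```json
{
  "targets": [
    {
      "target_name": "kinect_bridge",
      "sources": [ "kinect_bridge.cc" ],
      "include_dirs": [ "/usr/local/include/libfreenect" ],
      "libraries": [ "-L/usr/local/lib", "-lfreenect" ]
    }
  ]
}
```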
Parallel goals (excitement vectors)
- Knowledge of Node.js modules
  - Asynchronous frameworks have become a major trend in web development, and Node.js is the current leader. Learning to write modules for this framework positions me to adapt to current marketplace demands.
  - Opportunities to renew my C++ knowledge are always appreciated. So, too, is learning to bridge the two worlds (compiled and interpreted) by working with the V8 engine (the one behind Node.js, Chrome, and many Google JavaScript products).
- Designing games around the theme of ‘computer vision’
  - While I have dabbled with the idea before, I am intrigued by the prospect of using the Kinect’s infrared vision to track and monitor movement. I have followed the various games already on the market that use this technology (and its bundling as an integral part of the Xbox One) and have wanted to make projects that use it myself.
  - At the same time, I am also interested in the collision, for lack of a better word, of pattern and randomness, and in how humans crave patterns in their visual data. By showing users their own bodies as translated by the computer and then warping that image somehow, the hope is to show that how we perceive ourselves differs from how computers ‘see’ us. Our materiality (and the mental illusions of our bodies in space) molds our experience; computers do not have this issue.
Based on the one line about your purpose/audience, I’m guessing you are producing for gameplayers to make them more aware of what “happens” in these different types of interface environments? If that is the case (and you are doing that through the use of this different environment itself), I would still like to understand a bit more of your rhetorical situation. Who exactly is the audience? What are their wants/needs/experiences? How does what you produce have to do certain things (what rhetorical strategies) to meet the needs of the purpose and the audience? In short, I’m getting a good sense of why and how, but I need a bit more of the what.