Terms:
.Net Framework: Learning the lessons of Java and other VM-dependent programming languages, this is a common language runtime (CLR) and framework class library (FCL) combined in one package. It is Microsoft’s way of uniting all of its software development under one umbrella while also leaving room for future interoperability by abstracting the runtime away from the underlying hardware; if .Net will run on a system, the theory goes, so will anything written for .Net.
C#: One of a growing number of officially supported .Net programming languages. Borrowing from C++ and merging in ideas from Java and other class-based languages, C# is an object-oriented, strongly typed language with support for generics. Because of its ease of use and its higher level of abstraction than a lower-level language like C++, it sees great popularity in hobby and many commercial projects.
(In open source circles, there is an alternative called Mono that is non-Microsoft based but follows the same public specifications.)
Kinect: A motion sensor and audio input device designed and primarily marketed for use with the Xbox 360 and Xbox One gaming consoles. Although open source code exists to control the device, Microsoft has taken many strides to either assimilate or shut down most commercial projects not released under permissive open source licenses.
Project Outline:
- Create workable prototype using C# code demonstrating use of Kinect device to ‘read’ surroundings and compute location of at least a single person onscreen.
Right now, this consists of moving the example code over to a different project and then recompiling it under a changed name. Effectively, I will be basing my first version on the examples supplied in the Kinect Studio Suite and adjusting them as needed to match my ends. The prototype will be, more or less, the example code with very little changed.
While this does raise issues of copyrighted code (as the examples are themselves copyrighted), I will be modeling future versions on this working version to save time figuring out the exact method calls and object instantiation needed for the most basic usage of the device. Future versions will only resemble the copyrighted code in their similar usage of function calls and perhaps object organization.
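To make the shape of this first step concrete, the sketch below follows the minimal startup pattern the SDK samples use: find a connected sensor, enable skeleton tracking, and report the position of the first tracked person. This is an illustrative sketch assuming the Kinect for Windows SDK 1.8 (the Microsoft.Kinect namespace), not the prototype itself, and it requires a sensor attached to run.

```csharp
using System;
using System.Linq;
using Microsoft.Kinect; // Kinect for Windows SDK 1.8

class SkeletonDemo
{
    static void Main()
    {
        // Find the first connected sensor (null if none is attached).
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // Enable skeleton tracking and subscribe to frame events.
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (o, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                // Report the position (in meters, sensor space) of the
                // first actively tracked person in the frame.
                foreach (Skeleton s in skeletons)
                {
                    if (s.TrackingState == SkeletonTrackingState.Tracked)
                    {
                        SkeletonPoint p = s.Position;
                        Console.WriteLine("Person at ({0:F2}, {1:F2}, {2:F2})",
                            p.X, p.Y, p.Z);
                        break;
                    }
                }
            }
        };

        sensor.Start();
        Console.ReadLine(); // run until Enter is pressed
        sensor.Stop();
    }
}
```

The SDK samples wrap this same pattern in a WPF window; stripping it down to a console loop keeps the method calls and object setup visible for when I rebuild it in my own codebase.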
- Adjust prototype to isolate people and other ‘seen’ objects against different backgrounds (“green screen effect”)
Like the first step above, this too will be based on the example code provided. However, it will also be of my own creation: instead of copying the code over wholesale, I hope to build on my own codebase and borrow only the specific techniques (rather than the code itself) for this step.
This will also move me toward a playable state ready for people other than myself to come in and test the code. First versions of this step may take place within my home (depending on whether I can convince other people there to help), but the next step will be much more open. It will be the movement from a closed alpha to an open beta.
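The core of the “green screen” technique is depth-based: each depth pixel carries a player index, and only the color pixels that map back to a player get copied onto a substitute backdrop. The sketch below is my own rough version of that idea, not the SDK sample code; `sensor` and `backdrop` (a 640x480 BGRA buffer holding the replacement background) are assumed fields, and the skeleton stream must be enabled so player indices are populated.

```csharp
using System;
using Microsoft.Kinect; // Kinect for Windows SDK 1.8

// Assumed fields on the containing class:
//   KinectSensor sensor;  // started, with skeleton, depth, and color enabled
//   byte[] backdrop;      // 640*480*4 BGRA bytes of the substitute background
void OnAllFramesReady(object s, AllFramesReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (depthFrame == null || colorFrame == null) return;

        var depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
        depthFrame.CopyDepthImagePixelDataTo(depthPixels);

        var colorBytes = new byte[colorFrame.PixelDataLength];
        colorFrame.CopyPixelDataTo(colorBytes);

        // Align each depth pixel with its corresponding color pixel.
        var colorPoints = new ColorImagePoint[depthPixels.Length];
        sensor.CoordinateMapper.MapDepthFrameToColorFrame(
            depthFrame.Format, depthPixels, colorFrame.Format, colorPoints);

        // Copy only the pixels belonging to a person onto the backdrop.
        for (int i = 0; i < depthPixels.Length; i++)
        {
            if (depthPixels[i].PlayerIndex == 0) continue; // 0 = background
            ColorImagePoint p = colorPoints[i];
            if (p.X < 0 || p.X >= 640 || p.Y < 0 || p.Y >= 480) continue;
            int idx = (p.Y * 640 + p.X) * 4; // BGRA stride
            Array.Copy(colorBytes, idx, backdrop, idx, 4);
        }
        // backdrop now shows the person composited over the new background.
    }
}
```

This is the technique I want to carry over, not the sample’s code: the sample adds smoothing and WPF rendering on top, but the player-index masking is the part that matters for this step.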
- Request feedback through different play versions
This step is all about the play testing. Once I reach a ‘decent’ (doesn’t crash too much) demonstration version, my hope is to bring at least two different versions of the working code for play testing right out of the Media Park itself. The reasoning is that there are plenty of people around who have already volunteered, or could probably be convinced to volunteer, for a quick 5-10 minute session with the different versions of the game project.
This also continues the work of moving the project between different computers. Once I get a decent version going, but before I move to full, open testing, my plan is to shift the project runtime to another, more portable system and test it in a different location. If and only if that works will I move to testing out of the Media Park itself. There’s no reason to shift to the Media Park, after all, if the project doesn’t work on any machine other than the development machine.
- Refine and polish code
This is actually the most recursive step on this list. I will return to it frequently as I edit different parts and add greater complexity to the project. With each new revision and addition of functionality, the previous functionality must be tested to confirm no new bugs have entered the system (a unit testing workflow).
Conceivably, this is also the longest step. While some play testing will happen between versions, most of the time spent on the project will take place during this step, making sure the code is functional beyond the most basic parameters of the project (Agile software model).
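As a concrete example of the unit-testing workflow mentioned above, pure helper functions can be tested without the sensor attached. The helper and its tests below are hypothetical (MSTest syntax, with an invented `ToScreenX` mapping); the point is that geometry and mapping code can be locked down between revisions even when the Kinect itself is unavailable.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TrackingMathTests
{
    // Hypothetical pure helper: map a sensor-space X coordinate in meters
    // (clamped to -2.2..2.2) onto a horizontal pixel column of the screen.
    static int ToScreenX(float sensorX, int screenWidth = 640)
    {
        float clamped = Math.Max(-2.2f, Math.Min(2.2f, sensorX));
        return (int)((clamped + 2.2f) / 4.4f * (screenWidth - 1));
    }

    [TestMethod]
    public void CenterOfSensorMapsToCenterOfScreen()
    {
        Assert.AreEqual(319, ToScreenX(0f)); // midpoint of columns 0..639
    }

    [TestMethod]
    public void OutOfRangeValuesAreClamped()
    {
        Assert.AreEqual(0, ToScreenX(-10f));   // far left, clamped
        Assert.AreEqual(639, ToScreenX(10f));  // far right, clamped
    }
}
```

Running these after each revision catches regressions in the non-hardware parts of the code before a play-test session ever starts.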
Materials:
Development:
- Kinect (v1) device with associated USB and power connections
- Kinect for Windows SDK 1.8 (with Kinect Studio and sample code)
- Windows 7 or greater computer loaded with latest .Net framework
- Visual Studio with a C# (or C++) compiler (Express editions are compatible)
Demonstration:
- Kinect (v1) device and associated USB and power connections
- Windows 7 or greater computer loaded with latest .Net framework
Timeline:
2-8 November:
Completion of the first prototype and a test of the compilation workflow using Visual Studio 2013 C# with the example code provided by the Kinect Studio Software. Testing of code changes to match the initial needs of fullscreen display and identification of at least a single person.
First private play-test with recording to follow soon after.
9-15 November:
Integration of the “green screen” (person isolation and background substitution) code and the beginning of testing how the code moves between systems using the .Net framework and C# compilation workflow. Once this is achieved (if it is), it will allow for migration to the Media Park test environment and the possibility of testing in other places too.
Assuming the “green screen” code works, this week will also be the advancement into the limited private testing with those within close proximity of the code (i.e. in the same house).
Ludic aspects of the project will be added as time allows within this period too.
16-22 November:
Assuming all is progressing according to schedule, this will be the week of the demonstration within the Media Park or another associated area. This will be the public beta.
However, if things are not progressing according to the schedule (as is most likely), this will be an extended Week Two of the project and all previous steps outlined for 9-15 November will spill over into this week.
23-29 November:
This is the buffer week. Whatever is left to do will be done (or not) and the project will be wrapped up within the month of November. All presentational aspects of the overall project will continue within the timeframe of the class, and the project itself, depending on its state, will either be demonstrated for the professor in person or recorded from another setting if it cannot be moved from its development platform.
Concerns:
- Time.
More than any other perceived issue looking into the project from the outside, time is the most pressing concern.
- C#
It’s been a few years since I’ve written anything in C#, and there will be a learning curve to the project. Interfacing with the Kinect using this language is also, to a smaller degree, a concern. Much of the lower-level code will be taken care of by the SDK I’ll be using, so that is less of a worry, but I haven’t used C# to interact with the Kinect much yet, and that could pose a problem.
- Portability
Demonstrations of the project might be rather difficult if the code cannot be moved away from the development platform. Solving this problem will be high on the priority list of things to overcome.