ofxOpenNI examples

The addon includes a number of examples exploring basic depth camera and OpenNI capabilities. The examples are short, self-explanatory, and well commented. They fall into three groups: working with depth images, tracking hands, and tracking users. Let's discuss them in turn.

Tip

Currently, the ofxOpenNI addon ships its examples in an "almost ready" form: you need to assemble the desired example yourself by copying the source files and libraries into one folder. This is not very convenient, but it is quite simple. See the Drivers & getting the examples to work section of the README file inside the addon's folder for details.

When you compile an ofxOpenNI example under Mac OS X for the first time, you may get the following compiler error:

The specified SDK "Current OS X" does not appear to have all of the necessary headers installed...

To fix the error, install the Command Line Tools package in Xcode: go to the Xcode menu, click on Preferences..., open the Downloads pane, select the Command Line Tools item, and click on the Install button.

Working with examples of depth images

There are two examples that demonstrate grabbing and visualizing depth images. They can be used as starting points for projects that require only raw depth images or 3D point clouds, without hand or body tracking:

  • The ImageAndDepth-Simple example draws the depth and color images on the screen. All the work with the depth camera is done using the openNIDevice object of type ofxOpenNI.

    Tip

    If you press the I key, infrared images will be shown instead of the color images. With this capability, you can use the depth camera as an ordinary infrared camera: by covering the laser emitter and adding an infrared light source of the corresponding wavelength, you can build an infrared-based sensing solution.

  • The ImageAndDepthMultDevice-Medium example shows the depth and color images like the previous example does, but here the two images are aligned pixel-to-pixel. Using this, you can, for example, turn the depth data into a mask for the color image to remove the background and obtain a depth-based chroma-keying effect. The alignment is enabled by calling the openNIDevice.setRegister( true ) function; to compare the aligned and non-aligned modes, press the T key. Also, this example works with all the depth cameras connected to your computer and shows all of them on the screen. (A minimal sketch of this grabbing code follows the list.)

    Tip

    Because of the high data rates, you should connect each depth camera to a separate USB hub (host controller). Most computers have just two of them, so to connect a third camera you need to buy an additional PCI-e USB card.
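
To make the structure of these two examples concrete, here is a minimal sketch of the grabbing code, modeled on ImageAndDepth-Simple and ImageAndDepthMultDevice-Medium. The class and method names are those used by the addon's examples, but the exact calls may differ slightly between addon versions, so treat this as an outline rather than verbatim source:

    // testApp.h: a single ofxOpenNI device grabbing depth and color
    #include "ofMain.h"
    #include "ofxOpenNI.h"

    class testApp : public ofBaseApp {
    public:
        void setup();
        void update();
        void draw();
        ofxOpenNI openNIDevice;  // does all the work with the depth camera
    };

    // testApp.cpp
    void testApp::setup(){
        openNIDevice.setup();              // initialize the device
        openNIDevice.addDepthGenerator();  // enable grabbing of depth images
        openNIDevice.addImageGenerator();  // enable grabbing of color images
        openNIDevice.setRegister( true );  // align depth to color pixel-to-pixel
        openNIDevice.start();
    }

    void testApp::update(){
        openNIDevice.update();             // grab the next pair of frames
    }

    void testApp::draw(){
        openNIDevice.drawDepth( 0, 0, 640, 480 );    // depth image on the left
        openNIDevice.drawImage( 640, 0, 640, 480 );  // color image on the right
    }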

Hand-tracking examples

The following two examples show how to detect and track hands. You can use them as a starting point for your own interactive drawing projects:

Note

Note that the hand-tracking examples currently stop tracking new hands after about 30 hands have been tracked. So use hand tracking via ofxOpenNI just for testing and learning; in more serious projects, use the corresponding OpenNI functions directly, without the addon.

  • The HandTracking-Simple example shows how to enable and use hand detection and tracking. It searches for hands in the depth image and marks them with red rectangles. Actually, the OpenNI algorithm does not use human skeleton tracking to find hands; it just searches the depth map for singularities that could be hand wrists, and then tracks their movement. So you can deceive the algorithm by moving some other object in front of the camera.

    In the example, the tracked hand is retrieved as an ofxOpenNIHand object with the following line:

    ofxOpenNIHand &hand = openNIDevice.getTrackedHand(i);

    Then its position on the screen is obtained from the hand object:

    ofPoint &handPosition = hand.getPosition();

    The x and y coordinates of handPosition are the pixel coordinates of the hand in the depth image, and the z coordinate is the distance between the camera and the hand in millimeters. A condensed sketch of the whole tracking loop appears after this list.

    Tip

    If you need the 3D position of the hand in millimeters relative to the camera, use the hand.getWorldPosition() function, which returns an ofPoint object.

  • The HandTracking-Medium example tracks hands too, and additionally crops a portion of the depth image around each tracked hand and draws these crops as separate images at the bottom of the screen.
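
For reference, here is a condensed sketch of the hand-tracking code, based on HandTracking-Simple. The generator and gesture calls are those used in that example; if your addon version differs, check the example's source:

    // in setup(), after addDepthGenerator():
    openNIDevice.addHandsGenerator();        // enable hand detection and tracking
    openNIDevice.addAllHandFocusGestures();  // gestures that trigger tracking
    openNIDevice.setMaxNumHands( 4 );        // track up to four hands at once

    // in draw(): iterate over all currently tracked hands
    ofSetColor( 255, 0, 0 );  // red rectangles, as in the example
    for ( int i = 0; i < openNIDevice.getNumTrackedHands(); i++ ){
        ofxOpenNIHand &hand = openNIDevice.getTrackedHand( i );
        ofPoint &handPosition = hand.getPosition();  // pixels; z is depth in mm
        ofRect( handPosition.x - 25, handPosition.y - 25, 50, 50 );
        // hand.getWorldPosition() would give the 3D position in mm instead
    }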

User-tracking examples

User tracking is the most advanced capability of OpenNI and is used in many interactive wall installations. It is demonstrated in the following examples:

Note

Note that the user-tracking examples currently track only the first detected user and then don't detect new users. So use user tracking via ofxOpenNI just for testing and learning; in more serious projects, use the corresponding OpenNI functions directly, without the addon.

  • The UserAndCloud-Simple example shows how to enable user (body) tracking and draw the user's silhouette and his/her 3D skeleton. OpenNI detects an object as a user's body when it has the size of a human body, moves, and is separated from the other objects in the depth image. After such a detection, a 3D model of the human body, consisting of a number of joined cylinders, is fitted to the found object. As a result, we obtain a 3D skeleton representing the body as a number of points (head, torso, shoulders, elbows, hands, hips, knees, and feet). A brief sketch of the corresponding calls follows the list.
  • The UserAndCloud-Medium example shows a 3D point cloud of the tracked user's body, colored using data from the color camera.
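
Again for reference, here is a brief sketch of the user-tracking calls, following UserAndCloud-Simple. The drawing methods are those used in the example; the mask-texture call is an assumption to verify against your addon version:

    // in setup():
    openNIDevice.addUserGenerator();   // enable user (body) tracking
    openNIDevice.setMaxNumUsers( 1 );  // the addon reliably tracks one user
    openNIDevice.start();
    // needed before drawMask() works (assumed call; check the example)
    openNIDevice.setUseMaskTextureAllUsers( true );

    // in draw(): iterate over the tracked users
    for ( int i = 0; i < openNIDevice.getNumTrackedUsers(); i++ ){
        ofxOpenNIUser &user = openNIDevice.getTrackedUser( i );
        user.drawMask();      // the user's silhouette
        user.drawSkeleton();  // the fitted 3D skeleton
    }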

There is one more example, ONIRecording-Simple. It demonstrates how to record and play back ONI files, which store data streams from depth cameras. Such files simplify the testing of algorithms: you can prepare test recordings and then tune your algorithms on them instead of on a real depth camera.
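
As a sketch of how playback can replace a live camera, the following outline follows ONIRecording-Simple. The setupFromONI() call is the addon's playback entry point; the startRecording()/stopRecording() names here are assumptions to verify against your copy of the example:

    // playing back a previously recorded ONI file instead of a live camera
    ofxOpenNI openNIPlayer;
    openNIPlayer.setupFromONI( ofToDataPath( "test.oni" ) );
    openNIPlayer.start();

    // recording from a live device (method names assumed; see the example)
    openNIDevice.startRecording( ofToDataPath( "test.oni" ) );
    // ... grab frames as usual ...
    openNIDevice.stopRecording();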

Now let's consider an example of using depth data for tracking objects on a flat surface.
