Introduction to Sensors

The use of sensors is a fairly advanced topic in 3D graphics and generally refers to detecting or sensing some signal in the environment. In 3D graphics, it is usually some movement of the user that we want to sense. Common forms of motion sensing involve joysticks, head trackers, motion suits, and hand/arm movements.

Developing a sensor requires some knowledge of how device drivers work. A device driver is software that helps a computer communicate with a hardware device. To fully explore this topic, you should have access to an input device such as a head tracker or a special mouse (for example, the Magellan Space Mouse). However, the workings of such devices can be approximated with a simulated one; the included SimulatedDevice class acts as a prototypical input device for this purpose.

Two classes and one interface are used together to implement a sensor in Java 3D: the InputDevice interface and the Sensor and SensorRead classes.

The InputDevice Interface

The InputDevice interface is used to communicate with a device driver. In order to be recognized by a Java 3D application, an object instantiated from a class implementing InputDevice must first be initialized and then registered through the PhysicalEnvironment.addInputDevice() method.

The most important aspect of device drivers to understand is that they come in three primary types: blocking, non-blocking, and demand-driven (corresponding to the InputDevice constants BLOCKING, NON_BLOCKING, and DEMAND_DRIVEN). Blocking and non-blocking drivers are accessed by a scheduling thread that looks for input at regular time intervals. The difference is that a blocking driver causes the calling thread to block (wait) until the input data has been completely read before returning, whereas a non-blocking driver does not necessarily wait for the read to complete.
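To make the distinction concrete, here is a self-contained sketch of how a scheduler might treat the three processing modes. The names Device, Mode, and pollScheduledDevices() are invented for illustration; they stand in for the real Java 3D machinery, which schedules registered InputDevice objects internally.

```java
import java.util.List;

public class DriverModes {
    enum Mode { BLOCKING, NON_BLOCKING, DEMAND_DRIVEN }

    interface Device {
        Mode mode();
        void pollAndProcessInput();  // reads the hardware and updates its Sensor
    }

    // Called once per scheduling interval: only polled-mode devices are read.
    static int pollScheduledDevices(List<Device> devices) {
        int polled = 0;
        for (Device d : devices) {
            if (d.mode() != Mode.DEMAND_DRIVEN) {
                d.pollAndProcessInput();
                polled++;
            }
        }
        return polled;
    }

    // Called only when the application explicitly asks for a read.
    static void readOnDemand(Device d) {
        if (d.mode() == Mode.DEMAND_DRIVEN) d.pollAndProcessInput();
    }
}
```

The point of the sketch is the asymmetry: blocking and non-blocking devices are read on every scheduling pass whether or not anyone cares, while a demand-driven device is touched only when asked.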

A demand-driven driver is not queried at specific intervals; its data are retrieved only when the application program asks for them. The buttons can be turned on or off multiple times without the sensor recording anything; the only changes that will be recognized are those in place when the application asks for the status.

As already mentioned, InputDevice is an interface, so a class implementing it must provide several methods. One of these is pollAndProcessInput(), whose purpose is to update the values in the Sensor class (discussed next) with the values provided by the device driver.
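The following sketch shows the shape of such an implementation. The method names follow the javax.media.j3d.InputDevice interface as I recall it, but the Sensor class and the interface itself are replaced here with minimal stand-ins so the sketch compiles without the Java 3D library; treat the details as approximate.

```java
public class SkeletonDevice {
    static class Sensor { }  // stand-in for javax.media.j3d.Sensor

    interface InputDevice {  // stand-in for the real interface
        int BLOCKING = 0, NON_BLOCKING = 1, DEMAND_DRIVEN = 2;
        boolean initialize();
        void close();
        int getProcessingMode();
        void setProcessingMode(int mode);
        int getSensorCount();
        Sensor getSensor(int index);
        void pollAndProcessInput();            // copy device state into the Sensors
        void processStreamInput();
        void setNominalPositionAndOrientation();
    }

    static class SimulatedDevice implements InputDevice {
        private final Sensor sensor = new Sensor();
        private int mode = NON_BLOCKING;

        public boolean initialize() { return true; }
        public void close() { }
        public int getProcessingMode() { return mode; }
        public void setProcessingMode(int m) { mode = m; }
        public int getSensorCount() { return 1; }
        public Sensor getSensor(int i) { return sensor; }
        public void pollAndProcessInput() { /* read hardware, update sensor */ }
        public void processStreamInput() { /* unused by polled devices */ }
        public void setNominalPositionAndOrientation() { /* reset to default pose */ }
    }
}
```

In a real application, initialize() would open the device, and pollAndProcessInput() would push each new read into the device's Sensor objects.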

The Sensor Class

There is a tight coupling between InputDevice and the Sensor class. Indeed, the main purpose of any class implementing InputDevice is to update its Sensor objects.

The Sensor class is an abstraction of all input devices, such as six-degree-of-freedom trackers, joysticks, and haptic gloves. A file can even act as an InputDevice and provide input to the Sensor, for example, as a way to play back a recorded user session.

At its most basic level, a Sensor represents a series of timestamps with the corresponding input values (stored in a transform) and the states of buttons and switches at the time of the read.

By way of example, let's consider a simple joystick that can only go forward and backward and has one button that can be pushed. When the pollAndProcessInput() method of the InputDevice is called, it records a timestamp of when the method was called along with the digitized value of the front-back position of the joystick and whether the button was depressed. Note that Java 3D will normalize the input value of the front-back position to [-1,+1].
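The joystick example can be sketched in a few lines. Everything here is invented for illustration (the class name, the assumed raw digitizer range of 0 to 255, and the Read record); the real SimulatedDevice would update a javax.media.j3d.Sensor via setNextSensorRead() instead of keeping its own queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SimulatedJoystick {
    static final int RAW_MIN = 0, RAW_MAX = 255;  // assumed digitizer range

    // One read: timestamp, normalized front-back position, button state.
    static class Read {
        final long time; final double position; final boolean button;
        Read(long t, double p, boolean b) { time = t; position = p; button = b; }
    }

    private final Deque<Read> reads = new ArrayDeque<>();

    // Map the raw digitized value onto [-1, +1].
    static double normalize(int raw) {
        return 2.0 * (raw - RAW_MIN) / (RAW_MAX - RAW_MIN) - 1.0;
    }

    // Analogous to pollAndProcessInput(): record the current device state.
    void poll(long timestamp, int rawPosition, boolean buttonDown) {
        reads.addLast(new Read(timestamp, normalize(rawPosition), buttonDown));
    }

    Read lastRead() { return reads.peekLast(); }
}
```

After a few polls, the deque holds exactly the series of timestamped reads the Sensor class abstracts: time, position, and button state for each read.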

Prediction

It has been noted that the Sensor class encapsulates a series of timestamps. You might have been wondering why this series is kept when a single read is probably sufficient for the task. The answer is that the series of values enables prediction. It turns out that in tracking, prediction is almost always required in order to do a reasonable job of updating the scene. Much as we did in our collision avoidance examples, we have to look into the future and guess where the sensor will be. Otherwise, it might be too late to do anything.

This is where the series of timestamps becomes important. If the series of timestamps and values indicates acceleration in a particular direction, the predicted value can reflect that information; likewise, deceleration can be used to better predict the sensor's future value. Two fields, PREDICT_NONE and PREDICT_NEXT_FRAME_TIME, are passed to the Sensor to disable or enable prediction.
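A minimal illustration of why a series of reads enables prediction: with two timestamped reads we can extrapolate velocity, and with three we can also account for acceleration. This is generic finite-difference extrapolation, not Java 3D's actual predictor.

```java
public class Extrapolate {
    // Constant-velocity prediction from the two most recent reads.
    static double linear(long t0, double x0, long t1, double x1, long tFuture) {
        double v = (x1 - x0) / (t1 - t0);      // velocity from the last two reads
        return x1 + v * (tFuture - t1);
    }

    // Constant-acceleration prediction from three evenly spaced reads
    // (spacing dt), looking 'ahead' time units past the latest read.
    static double quadratic(double x0, double x1, double x2, double dt, double ahead) {
        double v = (x2 - x1) / dt;                  // latest velocity
        double a = (x2 - 2 * x1 + x0) / (dt * dt);  // finite-difference acceleration
        return x2 + v * ahead + 0.5 * a * ahead * ahead;
    }
}
```

With only a single read, neither estimate is possible, which is why the Sensor keeps a history rather than just the latest value.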

Finally, the prediction algorithm can be tuned separately for head versus hand position and orientation. The reason is that these two fundamental types of prediction can be made much more accurate by exploiting certain constraints. For example, head movements are quite characteristic: heads do not typically move upward by more than a couple of inches, whereas hands can often move several feet.

The SensorRead Class

The SensorRead class encapsulates the data from a single read of a sensor: a single timestamp, the transform value of the sensor, and the status of the buttons and switches. This class is used in conjunction with the Sensor class through the setNextSensorRead() method. A full example of the use of SensorRead is shown in Chapter 13.
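The data a SensorRead holds can be pictured with the following stand-in class. The accessor names mirror those of the real SensorRead as I recall them (getTime(), get(), getButtons() and their setters), but a plain 4x4 matrix in a double array replaces Java 3D's Transform3D so the sketch is self-contained.

```java
public class SensorReadSketch {
    private long time;                                  // timestamp of the read
    private final double[] transform = new double[16];  // row-major 4x4 matrix
    private int[] buttons = new int[0];                 // button/switch states

    void setTime(long t) { time = t; }
    long getTime() { return time; }

    void set(double[] matrix) { System.arraycopy(matrix, 0, transform, 0, 16); }
    void get(double[] out) { System.arraycopy(transform, 0, out, 0, 16); }

    void setButtons(int[] values) { buttons = values.clone(); }
    void getButtons(int[] out) { System.arraycopy(buttons, 0, out, 0, buttons.length); }
}
```

A Sensor, then, is essentially a ring buffer of objects like this one, filled in by the driver's pollAndProcessInput().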

Developing a Sensor

Given the InputDevice interface and the Sensor and SensorRead classes previously described, it is still difficult to understand the entire process of writing and implementing a sensor in an application. Once the Sensor is developed and is reading the data properly, an important choice has to be made regarding linking the Sensor to the scene. Does the developer want to use a Behavior class to implement the changes, or should the developer use one of Java 3D's existing mechanisms for this purpose? The most common example of the built-in mechanisms is the setUserHead() method demonstrated in Chapter 13.

In general, if you want to drive the View from the user's head position (head tracking) or from the dominant or non-dominant hand (hand tracking), the built-in mechanisms are preferable. Aside from joysticks, head tracking is the most common application of Sensors; examples of linking a tracker to the View are given in the next chapter. You might consider cheating by telling Java 3D that a Sensor that doesn't really represent the user's head is, in fact, the user's head. In that case, the prediction algorithms specific to head tracking might cause problems; the next chapter shows how prediction can be turned off.

Otherwise, you might consider working through a Behavior. Many non-tracking based sensor applications (that is, button events) will work well through the Behavior mechanism. Some tracking applications can also work well enough without using the built-in mechanisms.

Summary

Java 3D provides a well-conceived behavioral abstraction that can be used effectively for user interaction tasks. Using the Behavior class is preferable in many ways to writing a standard AWT or Swing listener because the Behavior class runs in a single thread and gathers all changes together so that they occur in the same frame. Otherwise, a Behavior is quite similar to a listener.

Important forms of user interaction in 3D environments include 2D and 3D mechanisms. Examples of 2D interaction include clicking on a button or icon, whereas 3D mechanisms include picking, collisions, and navigation.

Finally, the Sensor class is introduced as a way to feed data from the environment to the Java 3D renderer. This information is used to make changes to the rendered scene. The use of Sensors to affect the scene graph rendering is discussed in the next chapter.
