Head Tracking and the Sensor Class

At the end of Chapter 12, we introduced the Sensor class as an abstract sensor for all kinds of inputs including buttons, joysticks, and 6DOF devices. We will now focus specifically on 6DOF devices for head tracking.

Recall that a Sensor is a variable-length circular buffer of SensorRead objects, each holding a timestamp, a Transform3D, and the state of various buttons. In the head tracking scenario, it is the value of this Transform3D that we want to use to control our view. The reason for maintaining the information in a circular buffer is that it provides some flexibility in choosing which timestamps and data elements to use when computing the best approximation of the position of whatever device is being monitored. The buffer of SensorReads can also be used for prediction, an invaluable technique in tracking systems in general.
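For example, once a device's Sensor is available, the most recent reading (or a value computed for the current time) can be pulled out of the buffer as a Transform3D. The following fragment is a minimal sketch rather than one of the chapter's listings; the class and method names are ours:

import javax.media.j3d.InputDevice;
import javax.media.j3d.Sensor;
import javax.media.j3d.Transform3D;

public class SensorReadSketch {
    // Pull the current transform out of a device's first Sensor.
    public static Transform3D currentTransform(InputDevice device) {
        Sensor sensor = device.getSensor(0);
        sensor.setSensorReadCount(30);    // depth of the circular SensorRead buffer
                                          // (normally configured once, at setup time)
        Transform3D t = new Transform3D();
        sensor.getRead(t);                // reading consistent with the current time
        System.out.println("newest SensorRead at " + sensor.lastTime());
        return t;
    }
}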

The PhysicalEnvironment class provides the hooks for slaving a 6DOF Sensor to the rendered view: the sensor is registered with setSensor(), and setHeadIndex() tells Java 3D which sensor represents the user's head.
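As a preview of the listings that follow, the registration boils down to a handful of calls. This is a minimal sketch (the class and method names are ours; the same calls appear in context in Listings 13.4, 13.7, and 13.8):

import javax.media.j3d.InputDevice;
import javax.media.j3d.PhysicalBody;
import javax.media.j3d.PhysicalEnvironment;
import javax.media.j3d.View;

public class HeadSensorSetupSketch {
    // Register a device's first Sensor as the user's head and slave the view to it.
    public static View slaveViewToHead(InputDevice tracker) {
        tracker.initialize();

        PhysicalEnvironment environment = new PhysicalEnvironment();
        environment.addInputDevice(tracker);
        environment.setSensor(0, tracker.getSensor(0));
        environment.setHeadIndex(0);              // sensor 0 now represents the head

        View view = new View();
        view.setPhysicalBody(new PhysicalBody());
        view.setPhysicalEnvironment(environment);
        view.setTrackingEnable(true);             // slave the view to the head sensor
        return view;
    }
}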

Exploring Head Tracking through the Virtual6DOF Class

In order to explore tracking in more detail, as well as to make this section accessible to readers without access to a head tracker, we develop a virtual 6-degrees-of-freedom device, Virtual6DOF (see Listing 13.3), and use it to illustrate some basic aspects of using head tracking in Java 3D.

Listing 13.3 Virtual6DOF.java
import javax.media.j3d.*;
import javax.vecmath.*;
import java.awt.*;
import java.awt.event.*;

public class Virtual6DOF implements InputDevice {
    private Vector3f position = new Vector3f();
    private Transform3D newTransform = new Transform3D();

    Sensor sensors[] = new Sensor[1];
    private int processingMode;
    private SensorRead sensorRead = new SensorRead();

    public Virtual6DOF() {
        processingMode = InputDevice.BLOCKING;
        sensors[0] = new Sensor(this);
    }
    public void close() {
    }

    public int getProcessingMode() {
        return processingMode;
    }

    public int getSensorCount() {
        return sensors.length;
    }

    public Sensor getSensor( int sensorIndex ) {
        return sensors[sensorIndex];
    }

    public boolean initialize() {
        return true;
    }

    public void pollAndProcessInput() {
        sensorRead.setTime( System.currentTimeMillis() );
        sensorRead.set( newTransform );
        sensors[0].setNextSensorRead( sensorRead );
    }

    public void processStreamInput() {
    }

    public void setNominalPositionAndOrientation() {
        sensorRead.setTime( System.currentTimeMillis() );
        // Setting nominal position and orientation to identity
        sensorRead.set( new Transform3D() );
        sensors[0].setNextSensorRead( sensorRead );
    }

    public void setRotationX() {
        sensorRead.get(newTransform);
        Transform3D t = new Transform3D();
        t.rotX(Math.PI/36);
        newTransform.mul(t);
    }

    public void setRotationY() {
        sensorRead.get(newTransform);
        Transform3D t = new Transform3D();
        t.rotY(Math.PI/36);
        newTransform.mul(t);
    }

    public void setRotationZ() {
        sensorRead.get(newTransform);
        Transform3D t = new Transform3D();
        t.rotZ(Math.PI/36);
        newTransform.mul(t);
    }

    public void setTranslationX() {
        sensorRead.get(newTransform);
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3d(0.1, 0.0,0.0));
        newTransform.mul(t);
    }

    public void setTranslationY() {
        sensorRead.get(newTransform);
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3d(0.0, 0.1, 0.0));
        newTransform.mul(t);
    }

    public void setTranslationZ() {
        sensorRead.get(newTransform);
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3d(0.0, 0.0,0.1));
        newTransform.mul(t);
    }

    public void setProcessingMode( int mode ) {
        switch(mode) {
            case InputDevice.DEMAND_DRIVEN:
            case InputDevice.NON_BLOCKING:
            case InputDevice.BLOCKING:
                processingMode = mode;
                break;
            default:
                throw new IllegalArgumentException("Processing mode must " +
                        "be one of DEMAND_DRIVEN, NON_BLOCKING, or BLOCKING");
        }
    }

}

Note that once this device is registered with the PhysicalEnvironment object, as in the excerpt from the SimulatedHeadTracking.java application shown in Listing 13.4, it is polled continuously by Java 3D. Each time through its loop, Java 3D calls the pollAndProcessInput() method of the InputDevice. Within the pollAndProcessInput() method, we have put the appropriate code for setting the transform and updating the timestamp.

Listing 13.4 SimulatedHeadTracking.java
import java.awt.Frame;
import java.awt.Panel;
import java.awt.BorderLayout;
import java.awt.GraphicsEnvironment;
import java.awt.GraphicsDevice;
import java.awt.GraphicsConfiguration;
import java.awt.event.*;
import com.sun.j3d.utils.universe.*;
import com.sun.j3d.utils.geometry.ColorCube;
import javax.media.j3d.*;
import javax.vecmath.*;
import com.sun.j3d.utils.behaviors.mouse.MouseRotate;

import java.applet.*;
//import com.sun.j3d.*;
import com.sun.j3d.utils.applet.*;
import java.awt.*;

import java.io.*;
public class SimulatedHeadTracking extends Applet {
    PhysicalBody body;
    PhysicalEnvironment environment;
    View view;
    Locale locale;
    public BranchGroup createSceneGraph() {
        // Create the root of the subgraph
        BranchGroup objRoot = new BranchGroup();
        // Create the transform group node and initialize it to the identity.
        // Enable the TRANSFORM_WRITE capability so that our behavior code
        // can modify it at runtime.  Add it to the root of the subgraph.
        TransformGroup objTrans = new TransformGroup();
        objTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        objTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
        objRoot.addChild(objTrans);
        Bounds bounds =
             new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);

        // Create a 20cm wide cube spanning -10cm .. +10cm about the virtual
        // world origin.
        objTrans.addChild(new ColorCube(0.10));
        objRoot.compile();
        return objRoot;
   }
    public BranchGroup createViewGraph() {
         BranchGroup objRoot = new BranchGroup();
     Transform3D t = new Transform3D();
     t.setTranslation(new Vector3f(0.0f, 0.0f,0.0f));
     ViewPlatform vp = new ViewPlatform();
     TransformGroup vpTrans = new TransformGroup();
         vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
     vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
     vpTrans.setTransform(t);
     vpTrans.addChild(vp);
     view.attachViewPlatform(vp);
          Bounds bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);
         NavigationBehavior nav = new NavigationBehavior(vpTrans);
         vpTrans.addChild(nav);
         nav.setSchedulingBounds(bounds);

         objRoot.addChild(vpTrans);
         return objRoot;

   }

    public SimulatedHeadTracking() {
        // Create a simple scene and attach it to the virtual universe
        BranchGroup scene = createSceneGraph();
       // SimpleUniverse u = new SimpleUniverse(canvases[0]);
    setLayout(new BorderLayout());
        GraphicsConfigTemplate3D g3d = new GraphicsConfigTemplate3D();
        GraphicsConfiguration gc =
                  GraphicsEnvironment.getLocalGraphicsEnvironment().
                  getDefaultScreenDevice().getBestConfiguration(g3d);
     Canvas3D c = new Canvas3D(gc);
     add("Center", c);
        // This will move the ViewPlatform back a bit so the
        // objects in the scene can be viewed.
        //View view = u.getViewer().getView();
         VirtualUniverse universe = new VirtualUniverse();
         Bounds bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);
         Locale locale = new Locale(universe);
         body = new PhysicalBody();
         environment = new PhysicalEnvironment();

        Virtual6DOF tracker = new Virtual6DOF();
        tracker.initialize();
        environment.addInputDevice(tracker);

         Transform3D ctotb = new Transform3D();
         environment.getCoexistenceToTrackerBase(ctotb);
         //the following command makes environment call pollAndProcessInput()
         environment.setSensor(0,tracker.getSensor(0));
         environment.setHeadIndex(0);

         view = new View();
         view.setPhysicalBody(body);
         view.setPhysicalEnvironment(environment);

         view.setViewPolicy(View.HMD_VIEW);
         view.setTrackingEnable(true);
         // Attach the new canvas to the view
         c.setMonoscopicViewPolicy(View.LEFT_EYE_VIEW);
         view.addCanvas3D(c);
         // With HMD_VIEW, coexistence coordinates in the physical world are
         // mapped exactly to view platform coordinates in the virtual world
         // except for scale.  To verify the image plate calibration, let's
         // set the scale to 1.0 so that objects in the virtual world are the
         // same size as objects in the physical world.
         view.setScreenScalePolicy(View.SCALE_EXPLICIT);
         view.setScreenScale(1.0);

         // Neither HeadToHeadTracker, CoexistenceToTrackerBase, nor the
         // initial head sensor read (from head tracker to tracker base) have
         // been set from their identity defaults.  We've set a unity screen
         // scale so that coexistence coordinates to view platform coordinates
         // is identity as well.
         //
         // This means that the initial view has view platform coordinates
         // equal to head coordinates.  Move the view platform in the virtual
         // world back by the focal plane distance of the HMD + 10cm so that
         // the front face of a 20cm wide cube centered about the virtual
         // world origin lies on the focal plane of the HMD screen image.  If
         // the HMD image plate calibration is correct then the cube image
         // will appear to be 20cm wide.
         BranchGroup vgraph = createViewGraph();
         Transform3D t = new Transform3D();
         t.setTranslation(new Vector3f(0.0f, 0.0f, 0.9144f + 0.10f));

        Set6DOFBehavior set6dof = new Set6DOFBehavior(tracker);
         vgraph.addChild(set6dof);
         set6dof.setSchedulingBounds(bounds);

         view.setBackClipDistance(80.0);
         locale.addBranchGraph(vgraph);
        locale.addBranchGraph(scene);
        //u.addBranchGraph(scene);
    }
   public static void main(String[] args) {
     new MainFrame(new SimulatedHeadTracking(), 256, 256);

    }
}

Finally, in order to invoke changes in our simulated 6DOF device, we include a Behavior, Set6DOFBehavior.java (see Listing 13.5), that listens for key events (pressing X, Y, Z) and changes the SensorRead Transform3D accordingly. This Behavior takes the place of a driver that would be used in a real tracking environment. The use of such a driver is demonstrated in the next section, “Real Head Tracking Example.”

Listing 13.5 Set6DOFBehavior.java
import java.awt.AWTEvent;
import java.awt.event.*;
import javax.media.j3d.*;
import java.util.*;
import javax.vecmath.*;

public class Set6DOFBehavior extends Behavior {
   Virtual6DOF v6dof;
   private WakeupCondition keyCriterion;
   public Set6DOFBehavior(Virtual6DOF v6dof) {
     this.v6dof = v6dof;
    }

    public void initialize() {
       System.out.println("Set6DOFBehavior initialize");
       WakeupCriterion[] keyEvents = new WakeupCriterion[2];
       keyEvents[0] = new WakeupOnAWTEvent( KeyEvent.KEY_PRESSED );
       keyEvents[1] = new WakeupOnAWTEvent( KeyEvent.KEY_RELEASED );
       keyCriterion = new WakeupOr( keyEvents );
       wakeupOn( keyCriterion );
 }
    public void processStimulus( Enumeration criteria ) {
        WakeupCriterion wakeup;
        AWTEvent[] event;
        while( criteria.hasMoreElements() ) {
            wakeup = (WakeupCriterion) criteria.nextElement();
            if( !(wakeup instanceof WakeupOnAWTEvent) )
                continue;

            event = ((WakeupOnAWTEvent)wakeup).getAWTEvent();
            for( int i = 0; i < event.length; i++ ) {
                processKeyEvent((KeyEvent)event[i]);
            }
        }
        wakeupOn( keyCriterion );
    }

    protected void processKeyEvent(KeyEvent event) {
        int keycode = event.getKeyCode();
        if (keycode == KeyEvent.VK_X) {
            v6dof.setRotationX();
        }
        else if (keycode == KeyEvent.VK_Y) {
            v6dof.setRotationY();
        }
        else if (keycode == KeyEvent.VK_Z) {
            v6dof.setRotationZ();
        }
    }
}

Figure 13.2 shows a screen shot from the SimulatedHeadTracking.java example. The program sets up a situation analogous to having the cyborg's head attached to a single camera. Moving the remote user's head (in this case, using keystrokes) is akin to having the robot's camera mounted on a motorized tripod head.

Figure 13.2. Screen shot from SimulatedHeadTracking example.


Real Head Tracking Example

The first challenge to overcome in developing a Sensor for head tracking is reading the data values correctly. Remember that we want our Sensor to contain two things: timestamps and a Transform3D. Our tracker, in the case of this example, is a Polhemus Fastrak that provides six values each time it is polled. The data values for each read correspond to x, y, z, azimuth, elevation, and roll (that is, yaw, pitch, and roll). We need to store these values in a Transform3D within the Sensor. Putting the data into the Transform3D starts with instantiating a new Transform3D object, which is an identity matrix; we can therefore apply the rotations and translation to it directly without inverting anything. We will return to the development of the Sensor class after digressing a little to discuss the general challenge of setting up head tracking.
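Before digressing, here is a rough sketch (ours, not the chapter's driver code) of that packing step, turning one polled record into a Transform3D. The production version, including the handedness fix-ups, is the getPositionTransform()/getOrientationTransform() pair in Listing 13.6:

import javax.media.j3d.Transform3D;
import javax.vecmath.Vector3d;

public class FastrakRecordSketch {
    // x, y, z in tracker units; azimuth, elevation and roll in degrees.
    public static Transform3D toTransform(double x, double y, double z,
                                          double azimuth, double elevation, double roll) {
        Transform3D result = new Transform3D();   // a new Transform3D is the identity
        Transform3D step = new Transform3D();

        step.rotY(-Math.toRadians(azimuth));      // azimuth about Y (signs as in Listing 13.6)
        result.mul(step);
        step.rotX(Math.toRadians(elevation));     // elevation about X
        result.mul(step);
        step.rotZ(-Math.toRadians(roll));         // roll about Z
        result.mul(step);

        result.setTranslation(new Vector3d(x, y, z));
        return result;
    }
}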

Preparing for Head Tracking

Before getting too frustrated trying to set up tracking, it is wise to remember that tracking is difficult work. It is difficult to know which way is up (or left or in front, for that matter).

Some general recommendations regarding tracking are now provided followed by a specific example of reading data on the serial port.

  • Test your tracker completely outside of Java. This simple rule is so often overlooked, yet it can save days or even weeks of debugging work.

  • Understand which of the two transform chains you are working with: the attached (HMD) chain or the non-attached (room-fixed screen) chain.

  • Do not be frustrated if you cannot see anything on the screen or the objects look strange when you first run your application. Make sure that you have a way to move around the scene to try to find your objects.

Reading the Values

The example in Listing 13.6 uses the Java Comm API to read the serial port. It is worth the effort to guarantee to yourself (and your boss) that you are getting reasonable values from the serial port before moving on to slaving the view to the tracker values. Indeed, it is impossible to do tracking any other way. We cannot overemphasize this point: starting at the very beginning and testing at every step along the way is what everyone ends up doing eventually.
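As a starting point for that sanity check, the following is a minimal sketch of opening a serial port with the Java Comm API and dumping raw tracker records to the console. The port name, baud rate, and poll command are assumptions you must match to your own Fastrak configuration; the chapter's FastrakDriver class encapsulates the real protocol.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import javax.comm.CommPortIdentifier;
import javax.comm.SerialPort;

public class SerialSanityCheck {
    public static void main(String[] args) throws Exception {
        // Port name and settings are assumptions; match them to your own setup.
        CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier("COM1");
        SerialPort port = (SerialPort) portId.open("SerialSanityCheck", 2000);
        port.setSerialPortParams(9600, SerialPort.DATABITS_8,
                                 SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);

        BufferedReader in =
            new BufferedReader(new InputStreamReader(port.getInputStream()));
        OutputStream out = port.getOutputStream();

        for (int i = 0; i < 10; i++) {
            out.write('P');                    // request one record (Fastrak polled mode)
            out.flush();
            System.out.println(in.readLine()); // eyeball the raw record before parsing it
        }
        port.close();
    }
}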

Listing 13.6 shows the extension of the InputDevice interface for reading the Polhemus Fastrak.

Listing 13.6 FastrakInputDevice.java
import javax.media.j3d.*;
import javax.vecmath.*;

public class FastrakInputDevice implements InputDevice {
    private FastrakDriver polhemus;
    private Sensor [] polhemusSensor;
    private SensorRead [] polhemusSensorRead;

    private Transform3D [] initPosTransform;
    private Transform3D [] initOriTransform;

    private int polhemusActiveReceivers;
    private Transform3D polhemusTransform = new Transform3D();
    private float [] polhemusPos = new float[3];
    private float [] polhemusOri = new float[3];

    private Transform3D posTransform = new Transform3D();
    private Transform3D oriTransform = new Transform3D();
    private Vector3f posVector = new Vector3f();
    private Transform3D trans = new Transform3D();

    private float sensitivity = 1.0f;
    private float angularRate = 1.0f;
    private float x, y, z;

    int ii=0;
    public FastrakInputDevice(FastrakDriver polhemus)
    {
        this.polhemus = polhemus;
        polhemusActiveReceivers = polhemus.getActiveReceivers();
        polhemusSensor     = new Sensor[polhemusActiveReceivers];
        polhemusSensorRead = new SensorRead[polhemusActiveReceivers];
        initPosTransform = new Transform3D[polhemusActiveReceivers];
        initOriTransform = new Transform3D[polhemusActiveReceivers];
        for (int n=0; n<polhemusActiveReceivers; n++)
        {
            polhemusSensor[n] = new Sensor(this);
            polhemusSensorRead[n] = new SensorRead();
            initPosTransform[n] = new Transform3D();
            initOriTransform[n] = new Transform3D();
            try {
                polhemus.readData();
            } catch( Exception e ) {
                System.err.println( "PID: " + e.toString() );
            }
            getPositionTransform( n+1, initPosTransform[n] );
            getOrientationTransform( n+1, initOriTransform[n] );
        }
        setSensitivity(0.1f);
        setAngularRate(0.01f);
    }

    public boolean initialize()
    {
        for (int i=0; i<3; i++)
        {
            polhemusPos[i] = 0.0f;
            polhemusOri[i] = 0.0f;
        }
        return true;
    }

    public void close()
    {
    }

    public int getProcessingMode()
    {
        return DEMAND_DRIVEN;
    }

    public int getSensorCount()
    {
        return polhemusActiveReceivers;
    }

    public Sensor getSensor( int id )
    {
        return polhemusSensor[id];
    }

    public void setProcessingMode(int mode)
    {
    }

    public void getPositionTransform(int n, Transform3D posTrans)
    {
        polhemusPos = polhemus.getLocation(n);
        posVector.x = polhemusPos[0];
        posVector.y = polhemusPos[1];
        posVector.z = polhemusPos[2];
        posTrans.setIdentity();
        posTrans.setTranslation(posVector);
    }

    public void getOrientationTransform(int n, Transform3D oriTrans)
    {
        polhemusOri = polhemus.getRotation(n);
        oriTrans.setIdentity();
        // Fastrak gives azimuth, elevation and roll, which
        // do not translate to Java3D X, Y and Z directly, so
        // some assembly is required. Glue included.
        trans.setIdentity();
        trans.rotY(-Math.toRadians((double)polhemusOri[0]));
        oriTrans.mul(trans);
        trans.setIdentity();
        trans.rotX(Math.toRadians((double)polhemusOri[1]));
        oriTrans.mul(trans);
        trans.setIdentity();
        trans.rotZ(-Math.toRadians((double)polhemusOri[2]));
        oriTrans.mul(trans);
    }

    public void pollAndProcessInput() {
        ii++;
        // System.out.println("pollAndProcessInput; iteration: " + ii);
        try
        {
            polhemus.readData();
        }
        catch( Exception e )
        {
            System.err.println( "PID: " + e.toString() );
        }
        for (int n=0; n<polhemusActiveReceivers; n++) {
            polhemusSensorRead[n].setTime(System.currentTimeMillis());
            getPositionTransform(n, posTransform);
            getOrientationTransform(n, oriTransform);

            polhemusTransform.setIdentity();
            polhemusTransform.mulInverse(initOriTransform[n]);
            polhemusTransform.mul(oriTransform);
            Vector3d translation = new Vector3d();
            posTransform.get( translation );
            translation.scale( (double)sensitivity );
            polhemusTransform.setTranslation( translation );

            polhemusSensorRead[n].set(polhemusTransform);
            polhemusSensor[n].setNextSensorRead(polhemusSensorRead[n]);
        }
    }

    public void processStreamInput()
    {
    }

    public void setNominalPositionAndOrientation() {
      initialize();
      for (int n=0; n<polhemusActiveReceivers; n++) {
        polhemusSensorRead[n].setTime(System.currentTimeMillis());
        polhemusTransform.setIdentity();
        polhemusSensorRead[n].set(polhemusTransform);
        polhemusSensor[n].setNextSensorRead(polhemusSensorRead[n]);
      }
    }

    public void setSensitivity(float value)
    {
        sensitivity = value;
    }

    public float getSensitivity() {
        return sensitivity;
    }

    public void setAngularRate(float value) {
        angularRate = value;
    }

    public float getAngularRate() {
        return angularRate;
    }
}

The Sensor objects that are written for this class must generate values that transform the local tracker coordinates to the tracker base coordinates. Remember that the tracker base corresponds to a point in the physical world. The tracker base is a receiver/transmitter attached somewhere in the room.

Head Tracking Scenarios

Now that you can read the values and set the tracker to the tracker base transform, it is time to tell the renderer how to deal with this situation. There are two basic setups in which head tracking is used. The first setup is attached, in which the screens are attached to the user's head and therefore the reference frame of the screens is that of the tracker itself. The attached configuration is the correct one to use with an HMD. The second setup is called non-attached and pertains to the situation in which the screen(s) are attached rigidly to the reference frame of the physical environment.

These two situations are handled differently by the Java 3D renderer.

One tip for setting up both head tracking situations is to use a ViewAttachPolicy of NOMINAL_SCREEN. This is because with NOMINAL_SCREEN, the tracker is mapped directly onto the virtual world (again, except for scaling). When in HMD_VIEW mode, the ViewAttachPolicy is automatically NOMINAL_SCREEN. Note that it is also possible to set the ViewAttachPolicy to NOMINAL_SCREEN in SCREEN_VIEW mode, thus gaining the same simplifying benefit of having tracker coordinates be the same as virtual coordinates. That might seem a little counterintuitive to those of you who have used a different view model. Having the tracker mapped to the virtual world makes the coexistence coordinates pretty straightforward. Simply set coexistence relative to the tracker base such that the origin of coexistence is directly in front of the user's nominal front-facing direction (looking straight toward -Z with +Y pointing up toward the ceiling).

Note that when head tracking is enabled AND we are NOT in HMD_VIEW mode, the setCoexistenceCenteringEnable flag should be set to false because by definition coexistence centering is only appropriate when the trackerBaseToImagePlate transform is set to the identity matrix.
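Taken together, these settings amount to just a few calls. The following sketch is ours; the translation used for CoexistenceToTrackerBase is a placeholder that you would replace with the measured offset from your tracker base to wherever you want coexistence centered:

import javax.media.j3d.PhysicalEnvironment;
import javax.media.j3d.Transform3D;
import javax.media.j3d.View;
import javax.media.j3d.ViewPlatform;
import javax.vecmath.Vector3d;

public class CoexistenceSetupSketch {
    public static void configure(View view, PhysicalEnvironment environment,
                                 ViewPlatform vp) {
        vp.setViewAttachPolicy(View.NOMINAL_SCREEN);  // tracker maps directly to the virtual world

        Transform3D coexToTrackerBase = new Transform3D();
        coexToTrackerBase.setTranslation(new Vector3d(0.0, -0.25, 0.0)); // placeholder offset
        environment.setCoexistenceToTrackerBase(coexToTrackerBase);

        // Head tracked but not HMD_VIEW: coexistence centering must be switched off.
        view.setCoexistenceCenteringEnable(false);
    }
}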

Beginning with the case of the HMD and moving on to FishTank VR, we now show in more detail the steps necessary for using tracking with the Java 3D view model.

Transforms and Settings for an HMD

As we said previously, the HMD is an attached device. Because it is attached, we really just need to slave the camera object to the tracker. This is just as we imagined in one of our robot examples. In this case, the coexistence coordinates in the physical world correspond exactly to view platform coordinates in the virtual world (with the exception of scale). In other words, if you rotate the head tracker (by 90° about the x axis, for example), you get the same rotation of the view platform.

The remaining challenge is to scale the screens appropriately. This requires getting out the manual for your HMD and looking up some numbers. The HMD used in our lab is a Virtual Research V8. The relevant numbers for this particular HMD specify a 60° diagonal field of view, a focal plane at 3 ft, 100% overlap between the right and left images, and a 4:3 aspect ratio. In addition, the apparent screen width is 0.8447m and the height is 0.6335m. Again, these are apparent values because they are derived from the optics and are only meaningful when considered relative to the head. This can be confusing because the Screen3D asks for the physical width, height, and position. But really the only parameters that make any sense to apply to a Screen3D are the virtual size and position of the screen images as projected by the HMD optics onto the physical screen.

The approximate transformation from head to image plate is 0.4223 in x, 0.3168 in y, and 0.9144 in z (all in meters). These values are set in main().
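These numbers are not arbitrary; they follow from the optics. The short calculation below (ours, not part of the chapter's code) reproduces the apparent screen size and the head-to-image-plate offsets from the 60° diagonal field of view, the 3 ft (0.9144m) focal distance, and the 4:3 aspect ratio:

public class V8ScreenMath {
    public static void main(String[] args) {
        double focal = 0.9144;                                        // 3ft focal plane, in meters
        double diag  = 2.0 * focal * Math.tan(Math.toRadians(30.0));  // 60 degree diagonal FOV
        double width  = diag * 4.0 / 5.0;                             // 4:3 aspect: width is 4/5 of the diagonal
        double height = diag * 3.0 / 5.0;                             // and height is 3/5 of the diagonal

        // Image plate coordinates put the origin at the plate's lower-left corner,
        // so the head sits at (width/2, height/2, focal) relative to the plate.
        System.out.println("apparent width  = " + width);        // ~0.8447
        System.out.println("apparent height = " + height);       // ~0.6335
        System.out.println("head offset x   = " + width / 2.0);  // ~0.4223
        System.out.println("head offset y   = " + height / 2.0); // ~0.3168
        System.out.println("head offset z   = " + focal);        // 0.9144
    }
}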

Also, note that because there is 100% overlap in the screen images in the case of the V8, the same head tracker to image plate transformation can be used for both the left and right eyes. However, in many cases this is not true. Be prepared to set these values separately in some situations.
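For an HMD with only partial overlap, the same calls are used with two different transforms. The sketch below is ours; the horizontal offsets are placeholders standing in for numbers you would take from the manufacturer's specifications:

import javax.media.j3d.Screen3D;
import javax.media.j3d.Transform3D;
import javax.vecmath.Vector3d;

public class PartialOverlapSketch {
    // dxRight and dxLeft are placeholder horizontal offsets for a partial-overlap HMD.
    public static void setEyeTransforms(Screen3D rightScreen, Screen3D leftScreen,
                                        double dxRight, double dxLeft) {
        Transform3D headToRightPlate = new Transform3D();
        headToRightPlate.set(new Vector3d(0.4223 + dxRight, 0.3168, 0.9144));
        rightScreen.setHeadTrackerToRightImagePlate(headToRightPlate);

        Transform3D headToLeftPlate = new Transform3D();
        headToLeftPlate.set(new Vector3d(0.4223 + dxLeft, 0.3168, 0.9144));
        leftScreen.setHeadTrackerToLeftImagePlate(headToLeftPlate);
    }
}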

Note that the program in Listing 13.7 will not run properly on your machine unless you have at least two graphics pipelines. On a PC, this can be achieved with a dual-head graphics card or a single-head AGP card combined with a single-head PCI card. Multiple PCI cards can work as well, but the bus becomes overcrowded with multiple pipes. Sun and SGI machines typically have multiple pipes. The situation for PCs is likely to change in the near future as more multiple-pipe options enter the market.

We have shown the HMD program using our Virtual6DOF as the tracker. However, if you do have access to a real 6DOF tracker, you can add it as shown in the commented-out section of the listing. Just remove the comments and run the program with your tracker.

Listing 13.7 BasicHMDSetup.java
import java.awt.Frame;
import java.awt.Panel;
import java.awt.BorderLayout;
import java.awt.GraphicsEnvironment;
import java.awt.GraphicsDevice;
import java.awt.GraphicsConfiguration;
import java.awt.event.*;
import com.sun.j3d.utils.universe.*;
import com.sun.j3d.utils.geometry.ColorCube;
import javax.media.j3d.*;
import javax.vecmath.*;
import com.sun.j3d.utils.behaviors.mouse.MouseRotate;

public class BasicHMDSetup {
    PhysicalBody body;
    PhysicalEnvironment environment;
    View view;
    Locale locale;
    public BranchGroup createSceneGraph() {
        // Create the root of the subgraph
        BranchGroup objRoot = new BranchGroup();
        TransformGroup objTrans = new TransformGroup();
        objTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        objTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
        objRoot.addChild(objTrans);
        // Create a 20cm wide cube spanning -10cm .. +10cm about the virtual
        // world origin.
        objTrans.addChild(new ColorCube(0.10));
         Bounds bounds =
             new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);
         MouseRotate mouseBeh = new MouseRotate(objTrans);
        objTrans.addChild(mouseBeh);
         mouseBeh.setSchedulingBounds(bounds);
        objRoot.compile();

        return objRoot;
    }

    public BasicHMDSetup(Canvas3D[] canvases) {
        BranchGroup scene = createSceneGraph();
        BranchGroup vgraph = new BranchGroup();
         VirtualUniverse universe = new VirtualUniverse();
         Bounds bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);
         Locale locale = new Locale(universe);
         body = new PhysicalBody();
         environment = new PhysicalEnvironment();
     /* Uncomment this section to use the actual Fastrak Driver
        FastrakDriver polhemus = new FastrakDriver();
         System.out.println( "Fastrak opened--" );
     try {
       polhemus.initialize();
         }
    catch( Exception e )
    {
        System.err.println( e.toString() + "\nError initializing Fastrak, exiting... " );
        System.exit(0);
    }

        FastrakInputDevice tracker = new FastrakInputDevice( polhemus );
    tracker.initialize();

        tracker.setSensitivity(1.175f );
  */

         Virtual6DOF tracker = new Virtual6DOF();
         tracker.initialize();
         environment.addInputDevice(tracker);
         environment.setSensor(0,tracker.getSensor(0));
         environment.setHeadIndex(0);

         view = new View();
         view.setPhysicalBody(body);
         view.setPhysicalEnvironment(environment);
         view.setViewPolicy(View.HMD_VIEW);
         view.setTrackingEnable(true);

         System.out.println("canvases.length: " + canvases.length);
         for (int i = 0; i < canvases.length; i++) {
             // Attach the new canvas to the view
             view.addCanvas3D(canvases[i]);
         }

        // With HMD_VIEW, coexistence coordinates in the physical world are
         // mapped exactly to view platform coordinates in the virtual world
         // except for scale.  To verify the image plate calibration, let's
         // set the scale to 1.0 so that objects in the virtual world are the
         // same size as objects in the physical world.
         view.setScreenScalePolicy(View.SCALE_EXPLICIT);
         view.setScreenScale(1.0);

         // Neither HeadToHeadTracker, CoexistenceToTrackerBase, nor the
         // initial head sensor read (from head tracker to tracker base) have
         // been set from their identity defaults.  We've set a unity screen
         // scale so that coexistence coordinates to view platform coordinates
         // is identity as well.
         //
         // This means that the initial view has view platform coordinates
         // equal to head coordinates.  Move the view platform in the virtual
         // world back by the focal plane distance of the HMD + 10cm so that
         // the front face of a 20cm wide cube centered about the virtual
         // world origin lies on the focal plane of the HMD screen image.  If
         // the HMD image plate calibration is correct then the cube image
         // will appear to be 20cm wide.
         Transform3D t = new Transform3D();
         t.setTranslation(new Vector3f(0.0f, 0.0f, 0.9144f + 0.10f));
         ViewPlatform vp = new ViewPlatform();
         TransformGroup vpTrans = new TransformGroup();
         vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
         vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
         vpTrans.setTransform(t);
         vpTrans.addChild(vp);
         Set6DOFBehavior set6dof = new Set6DOFBehavior(tracker);
         vpTrans.addChild(set6dof);
         set6dof.setSchedulingBounds(bounds);

         NavigationBehavior nav = new NavigationBehavior(vpTrans);
         nav.setSchedulingBounds(bounds);
         vpTrans.addChild(nav);
         view.attachViewPlatform(vp);

         vgraph.addChild(vpTrans);
         locale.addBranchGraph(vgraph);
        locale.addBranchGraph(scene);
        //u.addBranchGraph(scene);
    }


    public static void main(String[] args) {
        int nScreens = 2;
        if (args.length > 0) {
            try {
                nScreens = Integer.parseInt(args[0]);
            }
            catch (NumberFormatException e) {
                System.out.println("Usage: java MultiScreens [#screens]");
                System.exit(1);
            }
        }

        int i;
        GraphicsDevice[] graphicsDevices = GraphicsEnvironment.
            getLocalGraphicsEnvironment().getScreenDevices();
        System.out.println("Found " + graphicsDevices.length +
                           " screen devices");
        for (i = 0; i < graphicsDevices.length; i++)
            System.out.println(graphicsDevices[i]);
        System.out.println();
        GraphicsConfigTemplate3D template;
        GraphicsConfiguration gConfig;
        Screen3D[] screens = new Screen3D[nScreens];
        Canvas3D[] canvases = new Canvas3D[nScreens];
        Frame[] frames = new Frame[nScreens];
        Panel[] panels = new Panel[nScreens];
        java.awt.Rectangle bounds;
        template = new GraphicsConfigTemplate3D();

        for (i = 0; i < nScreens; i++) {
            gConfig = graphicsDevices[i].getBestConfiguration(template);
            bounds = gConfig.getBounds();
            canvases[i] = new Canvas3D(gConfig);
            // Gotta do this for the new focus model in JDK 1.4, otherwise
            // full screen windows won't get keyboard focus.
            canvases[i].setFocusable(true);
            screens[i] = canvases[i].getScreen3D();
            System.out.println("Screen3D[" + i + "] = " + screens[i]);
            System.out.println("    hashCode = " + screens[i].hashCode());
            panels[i] = new Panel();
            panels[i].setLayout(new BorderLayout());
            panels[i].add("Center", canvases[i]);

            frames[i] = new Frame(gConfig);
            frames[i].setLocation(bounds.x, bounds.y);

            // Set to the full screen size with no borders.
            frames[i].setSize(bounds.width, bounds.height);
            frames[i].setUndecorated(true);
            frames[i].setLayout(new BorderLayout());
            frames[i].setTitle("Canvas " + (i+1));
            frames[i].add("Center", panels[i]);
            frames[i].addWindowListener(new WindowAdapter() {
                public void windowClosing(WindowEvent winEvent) {
                    System.exit(0);
                }
            });
        }

        //
        // HMD image plate calibration.  Assume 2-channel input, 60 degree
        // diagonal field of view, focal plane at 3ft, 100% overlap between
        // right/left images, 4:3 aspect ratio.
        //
        canvases[0].setMonoscopicViewPolicy(View.RIGHT_EYE_VIEW);
        // Apparent screen width and height in meters at focal plane.
        screens[0].setPhysicalScreenWidth(0.8447);
        screens[0].setPhysicalScreenHeight(0.6335);

        // Transform from head coordinates to apparent image plate location.
        Transform3D headToImagePlate = new Transform3D();
        headToImagePlate.set(new Vector3d(0.4223, 0.3168, 0.9144));
        // Use headToImagePlate for now, assuming HeadToHeadTracker is I.
        screens[0].setHeadTrackerToRightImagePlate(headToImagePlate);
        // Same for left eye view.  The apparent screen size is the same and
        // there is 100% overlap with the right eye view, so the same
        // head tracker to image plate transform can be used.
        canvases[1].setMonoscopicViewPolicy(View.LEFT_EYE_VIEW);
        screens[1].setPhysicalScreenWidth(0.8447);
        screens[1].setPhysicalScreenHeight(0.6335);
        screens[1].setHeadTrackerToLeftImagePlate(headToImagePlate);

        new BasicHMDSetup(canvases);
        for (i = 0; i < nScreens; i++) {
            frames[i].setVisible(true);
        }
    }
}

FishTank VR Example

As we said, FishTank VR is a term for a head-tracked VR system that uses stereo but that, unlike the HMD setup, has the screen fixed in the room rather than attached to the user's head.

The next step is to set the position and orientation of each screen relative to the tracker base. In the non-attached mode, this is accomplished through the setTrackerBaseToImagePlate() method of Screen3D (accessed from the Canvas3D). The orientation and position of each screen are thus indirectly mapped to coexistence through the CoexistenceToTrackerBase transform.

Finally, the user must set the location and orientation of the center of coexistence relative to the tracker base using the setCoexistenceToTrackerBase() method of the PhysicalEnvironment object. The center of coexistence locates the center of the nominal screen. For one, three, or five screens, set this to the center of the middle screen. For two screens, this center point would be the shared middle edge.

Yet another setting that must be considered is the setCoexistenceCenteringEnable() method of the View object. This flag is true by default, but that is not appropriate for head tracking. When the flag is set, the center of coexistence is placed at the middle of the screen, and Java 3D assumes that trackerBaseToImagePlate and coexistenceToTrackerBase are both identity, which is also not correct for head tracking. Therefore, in the head tracking situation, setCoexistenceCenteringEnable() must be called with the argument set to false.
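Pulled out of context, the three settings just described amount to only a few calls. The following sketch is ours; the screen and coexistence measurements are placeholders (borrowed from Listing 13.8) that must be re-measured for your own installation:

import javax.media.j3d.Canvas3D;
import javax.media.j3d.PhysicalEnvironment;
import javax.media.j3d.Screen3D;
import javax.media.j3d.Transform3D;
import javax.media.j3d.View;
import javax.vecmath.Vector3d;

public class FishTankSettingsSketch {
    public static void configure(Canvas3D canvas, PhysicalEnvironment environment,
                                 View view) {
        // 1. Where the fixed screen sits relative to the tracker base.
        Screen3D screen = canvas.getScreen3D();
        Transform3D trackerBaseToImagePlate = new Transform3D();
        trackerBaseToImagePlate.setTranslation(new Vector3d(0.175, 0.0845, 0.020));
        screen.setTrackerBaseToImagePlate(trackerBaseToImagePlate);

        // 2. Where the center of coexistence sits relative to the tracker base.
        Transform3D coexToTrackerBase = new Transform3D();
        coexToTrackerBase.setTranslation(new Vector3d(0.0, -0.22, -0.02));
        environment.setCoexistenceToTrackerBase(coexToTrackerBase);

        // 3. Centering is only valid when both transforms above are identity.
        view.setCoexistenceCenteringEnable(false);
    }
}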

Listing 13.8 shows the code for a FishTank VR setup.

Listing 13.8 FishTank.java
import java.awt.Frame;
import java.awt.Panel;
import java.awt.BorderLayout;
import java.awt.GraphicsEnvironment;
import java.awt.GraphicsDevice;
import java.awt.GraphicsConfiguration;
import java.awt.event.*;
import com.sun.j3d.utils.universe.*;
import com.sun.j3d.utils.geometry.ColorCube;
import javax.media.j3d.*;
import javax.vecmath.*;
import java.awt.BorderLayout;
import java.applet.Applet;
import com.sun.j3d.utils.behaviors.mouse.MouseRotate;
//import com.mnstarfire.loaders3d.Loader3DS;

import java.applet.*;
//import com.sun.j3d.*;
import com.sun.j3d.utils.applet.*;
import java.awt.*;

import java.io.*;
public class FishTank extends Applet {
    PhysicalBody body;
    PhysicalEnvironment environment;
    View view;
    Locale locale;
   FastrakInputDevice tracker;
   // static final String filename = "C:\Cyclotron\cyclotron.3ds";
    public BranchGroup createSceneGraph() {
        // Create the root of the subgraph
        BranchGroup objRoot = new BranchGroup();
        // Create the transform group node and initialize it to the identity.
        // Enable the TRANSFORM_WRITE capability so that our behavior code
        // can modify it at runtime.  Add it to the root of the subgraph.
        TransformGroup objTrans = new TransformGroup();
        objTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        objTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
        objRoot.addChild(objTrans);

        // Create a 20cm wide cube spanning -10cm .. +10cm about the virtual
        // world origin.
        objTrans.addChild(new ColorCube(0.10));
        // Create a new Behavior object that will perform the desired
        // operation on the specified transform object and add it into the
        // scene graph.
         Bounds bounds =
             new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);
         MouseRotate mouseBeh = new MouseRotate(objTrans);
        objTrans.addChild(mouseBeh);
         mouseBeh.setSchedulingBounds(bounds);
        // Have Java 3D perform optimizations on this scene graph.
        objRoot.compile();

        return objRoot;
   }


    public FishTank() {
        // Create a simple scene and attach it to the virtual universe
        System.out.println("FISH TANK");
        BranchGroup scene = createSceneGraph();
       // SimpleUniverse u = new SimpleUniverse(canvases[0]);
        // This will move the ViewPlatform back a bit so the
        // objects in the scene can be viewed.
        //u.getViewingPlatform().setNominalViewingTransform();
        //View view = u.getViewer().getView();
        setLayout(new BorderLayout());
        GraphicsConfigTemplate3D g3d = new GraphicsConfigTemplate3D();
        g3d.setStereo(GraphicsConfigTemplate3D.REQUIRED);
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment().
                getDefaultScreenDevice().getBestConfiguration(g3d);
        Canvas3D c = new Canvas3D(gc);
        Screen3D screen = c.getScreen3D();
        screen.setPhysicalScreenWidth(.350);
        screen.setPhysicalScreenHeight(.245);

        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3d(.175, .0845, .020));
        screen.setTrackerBaseToImagePlate(t);
        c.setStereoEnable(true);
    add("Center", c);

         BranchGroup vgraph = new BranchGroup();
         VirtualUniverse universe = new VirtualUniverse();
         Bounds bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0);
         Locale locale = new Locale(universe);
         body = new PhysicalBody();
         t.setIdentity();
        // t.setTranslation(new Vector3d(0.0, 0.02, 0.18));
        // body.setHeadToHeadTracker(t);

         environment = new PhysicalEnvironment();
        // t.setIdentity();
        // t.setTranslation(new Vector3d(0.0, -0.22, -0.02));
        // environment.setCoexistenceToTrackerBase(t);

        FastrakDriver polhemus = new FastrakDriver();
        System.out.println( "Fastrak opened--" );
    try {
       polhemus.initialize();
        }
    catch( Exception e )
    {
        System.err.println( e.toString() + "\nError initializing Fastrak, exiting... " );
        System.exit(0);
    }

    tracker = new FastrakInputDevice( polhemus );
    tracker.initialize();

     //   tracker.setSensitivity(1.175f );


     /*   Virtual6DOF tracker = new Virtual6DOF();
        tracker.initialize();
     */
        environment.addInputDevice(tracker);
         //the following command makes environment call pollAndProcessInput()
         environment.setSensor(0,tracker.getSensor(0));
         environment.setHeadIndex(0);
         view = new View();


         view.setPhysicalBody(body);
         view.setPhysicalEnvironment(environment);
         view.setCoexistenceCenteringEnable(false);
         view.setTrackingEnable(true);
         view.addCanvas3D(c);

        // view.setScreenScalePolicy(View.SCALE_EXPLICIT);
        // view.setScreenScale(1.0);

         t = new Transform3D();

         t.setTranslation(new Vector3f(0.0f, 0.0f, 0.9144f + 0.10f));
         ViewPlatform vp = new ViewPlatform();
         vp.setViewAttachPolicy(View.NOMINAL_SCREEN);
         TransformGroup vpTrans = new TransformGroup();
         vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
         vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
         vpTrans.setTransform(t);
         vpTrans.addChild(vp);
    //     Set6DOFBehavior set6dof = new Set6DOFBehavior(tracker);
     //    vpTrans.addChild(set6dof);
     //    set6dof.setSchedulingBounds(bounds);

         NavigationBehavior nav = new NavigationBehavior(vpTrans);
         nav.setSchedulingBounds(bounds);
         vpTrans.addChild(nav);
         view.attachViewPlatform(vp);

         vgraph.addChild(vpTrans);
         locale.addBranchGraph(vgraph);
        locale.addBranchGraph(scene);
        //u.addBranchGraph(scene);
    }

    public static void main(String[] args) {
         new MainFrame(new FishTank(), 512, 512);
   }
}

Coexistence Revisited

Recall that there exists a common space, the coexistence space, where the physical and the virtual world can both be represented. It is therefore possible to compute a transform of any point in coexistence space to a point in virtual space. Likewise, another transform exists for mapping coexistence space to the physical world. Given these transforms, it is possible to rotate, translate, or scale the virtual world relative to the physical world. This is the primary means through which Java 3D supports so many devices.

In non-head tracked setups, the center of coexistence is the same as the center of the tracker base. In head-tracked setups, there is a transformation between coexistence and the tracker base. This transformation is represented by the CoexistenceToTrackerBase transform.
