The Most Basic Example

Again, the most fundamental relationship we want to know is where the user's eyes are with respect to the screen; this is what allows Java 3D to compute the projection matrix. When the screen is not attached to the user's head, there is a single display, and no tracking is in use, the problem is relatively trivial: we only need to assume a fixed distance from the eyes to the display. Regardless of the true position of the head, Java 3D renders a frame appropriate for an assumed head sitting in front of the screen.

In this simplest case, we can set several of the transforms to identity. Setting a transform to identity effectively cancels it out of the series, simplifying the overall chain.
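To make the idea concrete, here is a minimal sketch (not part of the listing that follows) that forces the relevant transforms to identity. In practice these are already the Java 3D defaults, so this code only makes the assumption explicit:

import javax.media.j3d.*;

public class IdentityTransforms {
    // Sketch: collapse the transform chain by forcing identity transforms.
    // A new Transform3D() is constructed as the identity.
    static void collapseChain(Canvas3D canvas, PhysicalEnvironment env) {
        Transform3D identity = new Transform3D();
        env.setCoexistenceToTrackerBase(identity);                 // coexistence == tracker base
        canvas.getScreen3D().setTrackerBaseToImagePlate(identity); // tracker base == image plate
    }
}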

Let's test the waters with the most common configuration. Listing 13.1 gives code for checking the graphics devices and default viewing parameters on any system.

Listing 13.1 ShowJ3DGraphics.java
import java.awt.GraphicsEnvironment;
import java.awt.GraphicsDevice;
import java.awt.GraphicsConfiguration;
import javax.media.j3d.*;

public class ShowJ3DGraphics {
    PhysicalBody body;
    PhysicalEnvironment environment;
    View view;
    Locale locale;

    public ShowJ3DGraphics(Canvas3D[] canvases) {
        // Create an empty scene and a view branch, and attach both to the universe
        BranchGroup scene = new BranchGroup();
        BranchGroup vgraph = new BranchGroup();
        VirtualUniverse universe = new VirtualUniverse();
        locale = new Locale(universe);

        body = new PhysicalBody();
        System.out.println("PhysicalBody ear and eye positions (Left,Right):\n"
                           + body.toString());
        System.out.println("***PhysicalBody Transforms***");
        Transform3D t = new Transform3D();
        body.getHeadToHeadTracker(t);
        System.out.println("HeadToHeadTracker:\n" + t.toString());

        environment = new PhysicalEnvironment();
        environment.getCoexistenceToTrackerBase(t);
        System.out.println("***PhysicalEnvironment Transforms***");
        System.out.println("CoexistenceToTrackerBase:\n" + t.toString());

        view = new View();
        view.setPhysicalBody(body);
        view.setPhysicalEnvironment(environment);

        System.out.println("canvases.length: " + canvases.length);
        for (int i = 0; i < canvases.length; i++) {
            // Attach each canvas to the view
            view.addCanvas3D(canvases[i]);
        }

        ViewPlatform vp = new ViewPlatform();
        TransformGroup vpTrans = new TransformGroup();
        vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_READ);
        vpTrans.addChild(vp);
        view.attachViewPlatform(vp); // bind the view to its platform
        vgraph.addChild(vpTrans);
        locale.addBranchGraph(vgraph);
        locale.addBranchGraph(scene);
    }

    public static void main(String[] args) {
        GraphicsDevice[] allScreenDevices =
            GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        System.out.println("Found " + allScreenDevices.length + " screen devices");
        for (int i = 0; i < allScreenDevices.length; i++)
            System.out.println(allScreenDevices[i]);
        System.out.println();

        Canvas3D[] canvases = new Canvas3D[allScreenDevices.length];
        for (int i = 0; i < allScreenDevices.length; i++) {
            // Ask each device for a GraphicsConfiguration capable of 3D rendering
            GraphicsConfigTemplate3D template = new GraphicsConfigTemplate3D();
            GraphicsConfiguration gConfig =
                allScreenDevices[i].getBestConfiguration(template);
            canvases[i] = new Canvas3D(gConfig);
            Screen3D screen = canvases[i].getScreen3D();
            System.out.println("***Screen parameters for screen# " + i + "***");
            System.out.println(screen.toString());
            System.out.println("***Screen3D transformations***");
            Transform3D t = new Transform3D();
            screen.getTrackerBaseToImagePlate(t);
            System.out.println("TrackerBaseToImagePlate:\n" + t.toString());
            screen.getHeadTrackerToLeftImagePlate(t);
            System.out.println("HeadTrackerToLeftImagePlate:\n" + t.toString());
            screen.getHeadTrackerToRightImagePlate(t);
            System.out.println("HeadTrackerToRightImagePlate:\n" + t.toString());
            System.out.println("Screen3D[" + i + "] = " + screen);
        }

        new ShowJ3DGraphics(canvases);
    }
}

The output of this program (when run on my computer) is:

Found 1 screen devices
Win32GraphicsDevice[screen=0]
***Screen parameters for screen# 0***
Screen3D: size = (1024 x 768),
physical size = (0.2889955555555556m x 0.2167466666666667m)
***Screen3D transformations***
TrackerBaseToImagePlate:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

HeadTrackerToLeftImagePlate:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

HeadTrackerToRightImagePlate:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

PhysicalBody ear and eye positions (Left,Right):
eyePosition = ((-0.033, 0.0, 0.0), (0.033, 0.0, 0.0))
earPosition = ((-0.08, -0.03, 0.095), (0.08, -0.03, 0.095))
***PhysicalBody Transforms***

HeadToHeadTracker:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.02
0.0, 0.0, 1.0, 0.035
0.0, 0.0, 0.0, 1.0

***PhysicalEnvironment Transforms***
CoexistenceToTrackerBase:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

The particular computer used to run this example has one graphics card and is typical of general-purpose computers on the market. We will examine a case with multiple graphics pipelines shortly. For now, notice that one screen was found and that it occupies index 0 in the array of Screen3Ds.

Quite a bit of information is contained in this simple example. First, let's talk about the reference frames. Recall that we have the eye, head, environment, tracker, tracker base, and screen reference frames. We also have the coexistence reference frame, which will be explained in more detail later.

In this case, our goal is to find the location of the eyes relative to the screen, because this information uniquely defines the viewing frustum. Tracking is not being used, so many of the transformations can be simplified. Indeed, notice that all the transforms out of the tracker and tracker base reference frames (TrackerBaseToImagePlate, HeadTrackerToLeftImagePlate, and HeadTrackerToRightImagePlate) are set to identity. Here we are only concerned with TrackerBaseToImagePlate because we do not have separate left and right image plates. What this means is that the tracker, tracker base, and image plate coordinate systems all share the same reference frame: specifying a point in one of them is the same as specifying it in another.
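You can verify this programmatically by reading a transform back and testing its classification flags. Below is a minimal sketch; the framesCoincide method name is ours:

import javax.media.j3d.*;

public class CheckIdentity {
    // Sketch: true if tracker base and image plate coordinates coincide
    static boolean framesCoincide(Screen3D screen) {
        Transform3D t = new Transform3D();
        screen.getTrackerBaseToImagePlate(t);
        return (t.getType() & Transform3D.IDENTITY) != 0;
    }
}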

Next, observe the last column of the HeadToHeadTracker transform. There is indeed a small translation specified. This transformation gives the Y and Z distances from the center of the head to the tracking sensor mounted on it. As an exercise, you are encouraged to take any of the examples from Chapter 11 or 12 and add the following lines after the PhysicalBody object is created:

// Create a transform to modify the head-to-head-tracker relationship
Transform3D htoht = new Transform3D();
htoht.rotX(Math.PI / 16);
htoht.setTranslation(new Vector3d(0.0, 0.5, 0.0)); // requires javax.vecmath.Vector3d
body.setHeadToHeadTracker(htoht);

A second part of the output from Listing 13.1 to pay attention to is the size of the screens. Screens have both a pixel resolution and a physical size. Note that Java 3D does not measure the physical size; by default it derives it from the pixel resolution by assuming 90 pixels per inch (which is exactly where the 0.2889955m x 0.2167466m figures above come from), and you can override it with measured dimensions.
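Here is a minimal sketch of such an override, assuming a monitor whose image area measures 0.36m by 0.27m (hypothetical values; measure your own display):

import javax.media.j3d.*;

public class CalibrateScreen {
    // Sketch: replace the 90-dpi default with measured physical dimensions
    static void calibrate(Canvas3D canvas) {
        Screen3D screen = canvas.getScreen3D();
        screen.setPhysicalScreenWidth(0.36);  // meters, hypothetical measurement
        screen.setPhysicalScreenHeight(0.27); // meters, hypothetical measurement
    }
}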

What would the output look like with a different graphics pipeline? Rerunning the same code on our PC with the dual-head Wildcat card gives

Found 2 screen devices
Win32GraphicsDevice[screen=0]
Win32GraphicsDevice[screen=1]
TrackerBaseToImagePlate:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

HeadTrackerToLeftImagePlate:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

HeadTrackerToRightImagePlate:
1.0, 0.0, 0.0, 0.0
0.0, 1.0, 0.0, 0.0
0.0, 0.0, 1.0, 0.0
0.0, 0.0, 0.0, 1.0

Screen3D[0] = Screen3D: size = (640 x 480),
physical size = (0.1806222222222222m x 0.1354666666666667m)
hashCode = 914691
Screen3D[1] = Screen3D: size = (640 x 480),
physical size = (0.1806222222222222m x 0.1354666666666667m)
hashCode = 11914793

canvases.length: 2

In the preceding output listing, you can see that two Canvas3Ds are created and their corresponding Screen3D objects are printed. In this case, both screens are set to 640x480.

The multiple-screen situation is naturally more complex than the single-screen situation. We might have a CAVE or Wedge setup (as described in Chapter 10, "3D Graphics, Virtual Reality, and Visualization," and in the later section, "Building a CAVE or Wedge with Java 3D"), or we might have an HMD with a separate screen for each eye. In all these cases, we are faced with different screen sizes and positions relative to the eyes and the virtual world. If we want to display a stereoscopic view on the two screens of an HMD, for example, we need to compute a different view for each eye. In addition, we might want to adjust for different eye separations.
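To give a feel for what this involves, here is a minimal sketch of per-eye HMD configuration, assuming one Screen3D per eye; all offsets are hypothetical and would come from the HMD's optics specification:

import javax.media.j3d.*;
import javax.vecmath.*;

public class HmdSetup {
    // Sketch (hypothetical numbers): give each eye its own image plate
    // transform and adjust the modeled interpupillary distance.
    static void configure(Screen3D leftScreen, Screen3D rightScreen,
                          PhysicalBody body) {
        Transform3D toLeft = new Transform3D();
        toLeft.setTranslation(new Vector3d(-0.033, 0.0, 0.1));
        leftScreen.setHeadTrackerToLeftImagePlate(toLeft);

        Transform3D toRight = new Transform3D();
        toRight.setTranslation(new Vector3d(0.033, 0.0, 0.1));
        rightScreen.setHeadTrackerToRightImagePlate(toRight);

        // A 0.064m interpupillary distance, split evenly between the eyes
        body.setLeftEyePosition(new Point3d(-0.032, 0.0, 0.0));
        body.setRightEyePosition(new Point3d(0.032, 0.0, 0.0));
    }
}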

So far, we have kept the head tracker rigidly attached to a fixed point in the physical environment just to keep things simple. The situation becomes almost hopelessly complicated when we add head tracking. In the HMD case, we want the camera slaved to the tracker so that when we look up we see the environment's ceiling, and when we look down we see its floor. To avoid making our subjects sick and to create the most immersive experience, these changes must occur with minimal lag. Moreover, calibration becomes important once we leave the non-head-tracked environment.
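When you do add tracking, the essential Java 3D steps are to register the tracker's sensors with the PhysicalEnvironment and enable tracking on the View. A minimal sketch, assuming device is an InputDevice implementation for your tracker hardware (Java 3D does not supply one; vendors or you do):

import javax.media.j3d.*;

public class EnableTracking {
    // Sketch: wire a tracker's head sensor into the view model.
    // 'device' is assumed to be a vendor- or user-written InputDevice.
    static void enable(View view, PhysicalEnvironment env, InputDevice device) {
        env.addInputDevice(device);            // register and schedule the device
        env.setSensor(0, device.getSensor(0)); // expose the device's first sensor
        env.setHeadIndex(0);                   // sensor 0 reports the head position
        view.setTrackingEnable(true);          // slave the view to the head sensor
    }
}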

The Java 3D view model saves us from having to write a custom viewing engine for every one of these situations, even though the chain of transforms differs considerably among them. That is not to say that an application merely has to set one switch and head tracking works; it is still a considerable challenge to set up the whole sequence properly, including doing some basic calibration and making sure that several transforms and coordinate systems are set correctly. We now describe the different transformation chains incorporated into the view model.
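One switch that does matter at this point is the view policy, which tells Java 3D whether to locate the eyes relative to the screen or via the head tracker. The following sketch (the helper method is our own) shows the two View constants involved:

import javax.media.j3d.*;

public class ViewPolicies {
    // Sketch: select which transform chain Java 3D uses to locate the eyes.
    static void selectPolicy(View view, boolean headMounted) {
        if (headMounted) {
            view.setViewPolicy(View.HMD_VIEW);    // eyes located via the head tracker
        } else {
            view.setViewPolicy(View.SCREEN_VIEW); // eyes located relative to the screen (default)
        }
    }
}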
