C H A P T E R  12

Turntable Scanner

by Enrique Ramos

If you’re like most people, one of the first things that comes to mind when you see a Kinect point cloud, or any 3D scanner point cloud, is that it’s only one-sided. There are a number of techniques that can be used to perform a 360-degree volumetric scan and then reconstruct the implicit surface into a three-dimensional mesh. One way is to use a turntable.

In this chapter, you will build a DIY turntable, connect it to your computer, and scan objects from every point of view. Then, you will learn how to process the scanned data and patch the different point clouds into one consistent point set. There will be a fair bit of trigonometry and vector math involved in this process, so you might want to keep a reference book handy.

You will then implement your own exporting routine to output files containing your point cloud data in a format readable by other software. Finally, you will learn how to use MeshLab to reconstruct the implicit surface from your point cloud so you can use or modify it in any modeling software, or even bring the modified volume back to the physical world with a 3D printer. Figure 12-1 shows the assembled system.


Figure 12-1. Assembled Kinect and turntable

The Theory

The ultimate goal of this project is to be able to scan an object from all 360 degrees around it to get the full point cloud defining its geometry. There are a number of techniques that you could use to achieve this.

First, you could use several Kinect cameras, all connected to the same computer and placed around the object you want to scan. There are a couple of downsides to this technique. First, you need to buy several Kinect devices, which is expensive. Then, every time you need to scan something you need to set up the Kinects, which takes a large amount of space.

Another option is to use SLAM- and RANSAC-based reconstruction. These are advanced computational techniques that allow you to reconstruct a scene from different scans performed with a 3D scanner without any information about the actual position of the camera. The software analyzes the different point clouds and matches them by finding correspondences between them and discarding the outliers (RANSAC stands for RANdom SAmple Consensus; SLAM stands for Simultaneous Localization and Mapping). Nicolas Burrus implemented a very successful scene reconstruction that is included in his software RGBdemo (http://nicolas.burrus.name), and Aaron Jeromin has adapted the RGBdemo code to create a homemade turntable scanner, the results of which you can see in Figure 12-2 and on his YouTube channel at http://www.youtube.com/user/ajeromin.

But turntable scanners are nothing new, nor do they depend on the capabilities of the Kinect. Commercial 3D scanning companies offer turntables to be used with their 3D scanning cameras, like Mephisto from 4ddynamics. Structured light-based 3D scanners have been around for a while, and you can find a few DIY examples on the Internet, such as SpinScan (http://www.thingiverse.com/thing:9972).


Figure 12-2. SpinScan (left) and an image produced by Aaron Jeromin’s turntable scanner (right)

As clearly announced in the introduction, you’re going to take the turntable approach. But you’re not going to rely on any library or complicated algorithms for this project. You are going to build a simple but accurate turntable and use very basic geometrical rules to reconstruct a scene from several shots of the same geometry taken from different points of view. Then you will use another open source piece of software, MeshLab, to reconstruct a surface from the unstructured point cloud that your Processing sketch generates from the Kinect data.


Figure 12-3. Turntable diagram

When you scan an object with the Kinect, there is always a visible side, the one that the Kinect can “see”, and a hidden side that you can’t capture with a single shot (Figure 12-3). If you mount your object on a rotating base, or turntable, you can rotate the object so that a point P that was hidden moves to the visible side. By rotating the turntable to any angle, you are able to see the object from every point of view.

But you want to reconstruct the whole object from different shots, so you need all of the points defining the object referred to the same coordinate system. In the original shot, point P was defined by the coordinates P(x,y,z); when you rotate the platform to see the point, the coordinates of P change to P(x’,y’,z’). This is only a problem if you don’t know how much the platform has rotated. But if you know the precise angle of the platform relative to the original shot, you can easily retrieve the original coordinates P(x,y,z) from the transformed ones P(x’,y’,z’) using basic trigonometry. Let’s call your angle α; you are rotating the point around the y-axis, so the transformation will be the following:

x = x’ * cos(α) – z’ * sin(α);
y = y’;
z = x’ * sin(α) + z’ * cos(α);

By applying this simple transformation, you can scan from as many points of view as you want and then reconstruct the volume in a consistent coordinate system. You are going to build a turntable that will allow you to rotate the object 360 degrees, knowing at all times the precise angle at which you are rotating the object so you can retrieve the global coordinates of the scanned points (Figure 12-4). You are going to start by building the turntable, and then you will learn how to process the data.
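The transformation can be checked with a few lines of plain Java. The class and method names here are illustrative, not part of the project code, but the arithmetic is exactly the rotation given above:

```java
public class RotateY {
    // Recover the original coordinates (x, y, z) from the rotated
    // coordinates (x', y', z'), given the table angle a in radians.
    static double[] unrotate(double xp, double yp, double zp, double a) {
        double x = xp * Math.cos(a) - zp * Math.sin(a);
        double z = xp * Math.sin(a) + zp * Math.cos(a);
        return new double[] { x, yp, z };
    }

    public static void main(String[] args) {
        // A point scanned at z' = 1 after a quarter turn of the table
        // maps back onto the x-axis of the original coordinate system.
        double[] p = unrotate(0, 0, 1, Math.PI / 2);
        System.out.printf("%.2f, %.2f, %.2f%n", p[0], p[1], p[2]); // prints -1.00, 0.00, 0.00
    }
}
```

The same math reappears later in the chapter as the Processing sketch’s vecRotY() function.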


Figure 12-4. Reconstruction of an object from three different views.

This project can be built entirely from cheap parts you can find in hardware stores and hobbyist shops. Table 12-1 lists the requirements.


Building a Turntable

A turntable is a platform mounted on a swivel that can rotate around a center point. Yes, it’s exactly like the turntable on any record player. We actually considered using an old record player for this project, but in the end it was cheaper to build our own.

There is something that you will need to bear in mind during the whole process of building the turntable: accuracy. The purpose of building such a device is to have a platform that can hold a reasonable weight without collapsing and that can rotate freely using a servo motor. The higher the precision of your physical device, the higher the consistency of the point clouds when you patch them together, so try to be thorough with your dimensions and tolerances!

We started this project by building a prototype of the turntable so we could test basic algorithms and principles. The prototype consisted of a cardboard box with a servo sticking out of its top face, as you can see in Figure 12-5. On the servo, we attached a circle of cardboard. That was it. This prototype was, obviously, not accurate enough: the servo could only rotate 180 degrees so we couldn’t scan our object all around. However, it was very useful for understanding the basic principles and for developing the software that we will discuss in a later section.


Figure 12-5. Turntable prototype  

The 180-degree range of standard servos was a key issue in the design of our final turntable. We could have hacked a standard servo into rotating continuously, but then we would lose the feedback. In other words, we could drive the servo forward and backward, but we wouldn’t know the precise position of the turntable. A 360-degree servo was an option that immediately came to mind, but power was also an issue in the prototype, so we wanted to increase the torque of our motor with gears. The final solution was to use a continuous rotation servo and build our own feedback system with a multi-turn potentiometer. Both were connected with gears to the turntable, increasing the torque of the servo seven times. Let’s proceed step by step through the building of the turntable. Figure 12-6 shows all of the parts of the turntable.


Figure 12-6. Turntable parts

Connecting the Gears

The gears you’re using have a one-to-seven ratio, so the large gear rotates once for every seven turns of the smaller gear. This is perfect for this project because your potentiometer can turn up to ten times, and the three gears fit perfectly onto the base. The gears come from a Vex Robotics gear kit, though, so they don’t fit naturally onto the Parallax servo or the potentiometer. The first thing you need to do is modify the gears so you can fit them tightly onto the other two components (Figure 12-7).
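A quick sanity check shows why the one-to-seven ratio and the ten-turn potentiometer work together. The sketch below is illustrative Java (the class name is hypothetical), and it assumes the potentiometer sweeps the Arduino’s full 0–1023 ADC range over its ten turns, which real parts only approximate:

```java
public class GearCheck {
    // Expected ADC counts per full table revolution, assuming a pot that
    // spreads the 0-1023 ADC range evenly across its maximum turns.
    static double countsPerRevolution(double potTurnsPerTableTurn, double maxPotTurns) {
        return potTurnsPerTableTurn / maxPotTurns * 1023;
    }

    public static void main(String[] args) {
        // 1:7 ratio: one table revolution spins the pot gear 7 of its 10 turns.
        System.out.printf("%.0f%n", countsPerRevolution(7, 10)); // prints 716
    }
}
```

The result is in the same ballpark as the 724-count range measured empirically later in the chapter, so the gearing comfortably fits one full turntable revolution inside the potentiometer’s travel.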

For the potentiometer, drill a hole in the middle of the gear slightly smaller than the diameter of the potentiometer’s shaft. This way you’ll be able to push it in, and it will rotate the shaft without the need for gluing or attaching it permanently. Start with a small drill bit and go bigger step by step. Otherwise, you might drill a non-centered hole.

For the motor, you need to drill a hole only about a fourth of the depth of the gear (see Figure 12-7). The servo shaft is pretty short, and you’re going to be attaching the gear on the other side of the metal base. Don’t worry if the gear is slightly loose; you’re going to secure it to the servo with a screw.


Figure 12-7. Gear adapted to the servo

Now you can attach the potentiometer and the servo to the base. First, check the position of the gears so you have an idea of where the holes need to be drilled for the potentiometer  (see Figures 12-8 and 12-9).


Figure 12-8. Potentiometer assembled to the base


Figure 12-9. Gears assembled to the base


Figure 12-10. Potentiometer and servo assembled to the base

The fixing of the servo and the potentiometer is highly dependent on the base element you have chosen. In our case, spacing the servo bracket and the potentiometer from the base with a couple of nuts was enough. Note that we attached the potentiometer and the servo to the sides of the base (Figure 12-10) instead of the top plate. This avoids any protruding bolts or nuts on the top plate that would get in the way of the swivel.

The three gears should mesh well so they rotate together without effort. The large gear will be attached to the acrylic sheet later. Make sure the gears are parallel to the base so there is no friction when the motor is in action.

The next step is to attach the swivel to the base (Figure 12-11), making sure that its center of rotation is as close to the center of rotation of the larger gear as you can possibly manage. This is very important because a badly centered swivel will not rotate when assembled!


Figure 12-11. Swivel and gears

The large gear won’t be attached to the base. You attach it to the acrylic sheet that you previously cut into a disc 29cm in diameter. You fix the gear to the disc with a central screw, and you add a second, off-center screw to prevent the disc and the gear from rotating relative to each other (Figure 12-12).


Figure 12-12. Disc and gear ready to be assembled

You could declare your turntable ready at this point, but you are going to attach a base for the Kinect to it so you don’t have to calibrate the position of the Kinect every time you use the scanner.

The Kinect base is exactly like the one you used for the turntable, so both will be at the same height, and the Kinect will stand at a pretty good position. First, cut the aluminum angle into two 1.2m pieces and drill three holes on each end at the same spacing as the holes on the bases (Figure 12-13).


Figure 12-13. Bracing aluminum angles after drilling the holes


Figure 12-14. Assembled turntable

Figure 12-14 shows two bases separated by more than 1m, which is enough for the Kinect to be able to scan a medium-sized object. If you are thinking of scanning larger objects, play a little with your Kinect first to determine an ideal distance to the turntable. Once you have calibrated your Kinect and your turntable (check that the center of your turntable is coincident with the center of the scanning volume in your sketch), you can attach the Kinect to the base or simply draw a contour that will tell you where to place it every time. If you fancy a cooler finish, you can countersink the screws and add a vinyl record to your turntable as a finishing surface (see Figure 12-15).


Figure 12-15. Assembled turntable

Building the Circuit

The turntable circuit is very simple, with only a servo motor and a potentiometer. You’re building it on a SparkFun Arduino prototype shield because you want the circuit to be reusable for this and other projects. The circuit includes an analog reader on pin 0 and a servo, as shown in Figures 12-16 and 12-17. The circuit leaves two three-pin breakaway headers ready for the servo and the potentiometer.


Figure 12-16. Arduino prototype shield with breakaway headers


Figure 12-17. Arduino prototype shield (back)

The circuit, as depicted in Figure 12-18, connects the three pins of the breakaway headers to 5V, ground, and pins analog 0 and digital 6.


Figure 12-18. Turntable circuit

Arduino Code

You are going to implement an Arduino program that will drive the servo forward and backward according to signals coming from your main Processing program and send back the rotation of the potentiometer at every point in time. It will also inform the Processing sketch when it reaches the intended position.

The servo is attached to digital pin 6 and the potentiometer to analog pin 0, so create the necessary variables for those pins and the pulse for the servo. You also need a couple of long integers for the servo timer.

int servoPin = 6;
int potPin = 0;
int servoPulse = 1500;
long previousMillis = 20;
long interval = 20;

Figure 12-19. Testing the range of the potentiometer

After some intensive testing sessions, we worked out that 360 degrees on the turntable corresponded to a range of 724 potentiometer counts (Figure 12-19). So use a start position of 100 and an end position of 824. In other words, whenever the potentiometer reads 100, you know your angle is zero; whenever it reads 824, you know you have reached 360 degrees. The potentiometer values change in proportion to the angle, so you can always work out the angle from the values read from the potentiometer.

// Start and end values from potentiometer (0-360 degrees)
int startPos = 100;
int endPos = 824;
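Because the relationship is linear, these two constants fix the conversion between potentiometer counts and degrees. Here is a standalone sketch of that mapping in plain Java (potToDegrees() is an illustrative helper, not part of the Arduino sketch):

```java
public class PotToAngle {
    static final int START_POS = 100, END_POS = 824; // pot counts at 0 and 360 degrees

    // Linearly map a potentiometer reading to a turntable angle in degrees.
    static float potToDegrees(int pot) {
        return (pot - START_POS) * 360.0f / (END_POS - START_POS);
    }

    public static void main(String[] args) {
        System.out.println(potToDegrees(100)); // prints 0.0
        System.out.println(potToDegrees(462)); // prints 180.0 (halfway through the range)
        System.out.println(potToDegrees(824)); // prints 360.0
    }
}
```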

The continuous rotation servo doesn’t work like a normal servo. You still have a range of 500-2500 microseconds for your pulses, but instead of the pulse meaning a specific angle, any pulse under 1500 microseconds makes the servo turn forward, and any pulse over 1500 microseconds makes it turn backward. Create a couple of variables for the forward and backward pulses.

// Forward and backward pulses
int forward = 1000;
int backward = 2000;

Your Arduino sketch works in two states: state = 0 means stand and wait for orders, and state = 1 means move toward the target angle. The potAngle variable stores the potentiometer values.

int state = 1;  // State of the motor
int targetAngle = startPos;  // Target Angle
int potAngle;  // Current angle of the potentiometer

In the setup() function, you only need to start the serial communication and define your servo pin as output.

void setup(){
  Serial.begin(9600);
  pinMode (servoPin, OUTPUT);
}

The main loop() function is wrapped entirely within an if() statement that executes its contents once every 20 milliseconds. This makes sure that you refresh the servo at the rate it expects and that you don’t send so much serial data that you saturate the channel.

void loop() {
  unsigned long currentMillis = millis();
  if(currentMillis - previousMillis > interval) {
    previousMillis = currentMillis;

The first thing you do in the loop is read the value of the potentiometer and print it out to the serial port. After this, you perform a data validation check. If the read value is out of your range (100-824), you drive the servo back toward the range and set the state to zero so it stops seeking a target.

    potAngle = analogRead(0);  // Read the potentiometer
    Serial.println(potAngle);  // Send the pot value to Processing
    if(potAngle < startPos){  // Below the valid range
      state = 0;
      updateServo(servoPin,forward);
      Serial.println("start");
    }
    if(potAngle > endPos){
      state = 0;
      updateServo(servoPin,backward);
      Serial.println("end");
    }

Now you need to check if you have any data in your serial buffer. You will implement a separate function for this, so here you simply have to call it. Finally, if your state is 1, you call the function goTo(targetAngle), which moves the servo towards your goal.

    checkSerial();  // Check for values in the serial buffer
    if(state==1){
      goTo(targetAngle);
    }
  }
}

The goTo() function checks whether the current reading from the potentiometer is higher or lower than the target angle and updates the servo accordingly. The step of the servo is larger than one unit, so you allow a buffer zone of 10 units to avoid oscillation around the target value. If you are within the accepted range from the target, you set your state to 0 to wait for orders, and you send a message through the serial channel saying that you have reached your intended position.

void goTo(int angle){
  if(potAngle-angle < -10) {
    updateServo(servoPin,forward);
  }
  else if(potAngle - angle > 10){
    updateServo(servoPin,backward);
  }
  else {
    Serial.println("arrived");
    state=0;
  }
}
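The buffer-zone logic can be isolated as a pure function, which makes the anti-oscillation behavior easier to see. This is an illustrative Java sketch (the class and the direction() helper are hypothetical names, not part of the Arduino sketch):

```java
public class Deadband {
    // Mirror of goTo()'s decision: -1 means drive forward, +1 means drive
    // backward, 0 means stop (reading is within +/-10 counts of the target).
    static int direction(int potAngle, int target) {
        if (potAngle - target < -10) return -1; // below target: go forward
        if (potAngle - target > 10) return 1;   // above target: go backward
        return 0;                               // inside the buffer zone
    }

    public static void main(String[] args) {
        System.out.println(direction(400, 500)); // prints -1
        System.out.println(direction(505, 500)); // prints 0: close enough, stop
        System.out.println(direction(600, 500)); // prints 1
    }
}
```

Without the ±10 buffer, a servo whose smallest step overshoots a single count would hunt back and forth around the target forever.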

The orders you are waiting for come via serial communication. In the checkSerial() function, you look into the serial buffer, and if there are at least two bytes available, you read the first one. If it happens to be the trigger character ‘S’, you read the next one, which you interpret as the new target angle for the turntable. You map this value from 0-255 to the whole range of the potentiometer, and you set your state to 1, which allows the turntable to move toward this new value.

void checkSerial(){
  if (Serial.available() > 1) { // If data is available to read,
    char trigger = Serial.read();
    if(trigger == 'S'){
      state = 1;
      int newAngle = Serial.read();
      targetAngle = (int)map(newAngle,0,255,startPos,endPos);
    }
  }
}
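The remapping at the end of checkSerial() uses Arduino’s built-in map() function, which works in integer arithmetic. A self-contained Java equivalent (reimplemented here for illustration only) shows how a single byte covers the whole potentiometer range:

```java
public class AngleByte {
    // Java re-implementation of Arduino's map(): linear remap of x
    // from [inLo, inHi] to [outLo, outHi] using integer arithmetic.
    static long map(long x, long inLo, long inHi, long outLo, long outHi) {
        return (x - inLo) * (outHi - outLo) / (inHi - inLo) + outLo;
    }

    public static void main(String[] args) {
        // A byte of 0 lands on startPos and 255 on endPos, as in checkSerial().
        System.out.println(map(0, 0, 255, 100, 824));   // prints 100
        System.out.println(map(255, 0, 255, 100, 824)); // prints 824
        System.out.println(map(128, 0, 255, 100, 824)); // prints 463, roughly halfway
    }
}
```

Note that squeezing the angle into one byte limits the turntable to 256 distinct positions, a resolution of about 1.4 degrees, which is plenty for three or four shots per scan.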

The updateServo() function is the same as the one used throughout the book. You write a high signal to the servo pin for the number of microseconds passed as an argument.

void updateServo (int pin, int pulse){
  digitalWrite(pin, HIGH);
  delayMicroseconds(pulse);
  digitalWrite(pin, LOW);
}

This simple code allows you to control the rotation of the turntable from any serial-enabled software (in your case, Processing). On top of that, the Arduino code sends out messages with the current angle of the turntable and whether the target position has been reached. Let’s see how to use this data and your turntable to scan 3D objects.

Processing Code

You are going to write a slightly complex piece of code that will communicate with the Arduino board and interpret the incoming data to reconstruct a three-dimensional model of the objects standing on the turntable. We covered the geometrical principles behind the patching of the different scans at the beginning of the chapter, but we will expand on this in the pages that follow.

We looked for a complex and colorful shape for this example and, because we are writing this in November, the first thing we found was a little figure of Santa in a dollar store (Figure 12-20). We will be using this figure for the rest of the chapter. If you can find one too, you’ll have a complete, full-color model of your favorite Christmas gift-bringer inside of your computer by the end of the chapter, as per Figure 12-21. Let’s start coding!


Figure 12-20. Our example model


Figure 12-21. Our example model in scanning space

Variable Declaration

The imports are the usual ones: Serial, OpenGL, Simple-OpenNI, and KinectOrbit. You define a Boolean variable called serial that initializes the serial ports only if it is true. This is useful if you want to test stuff with the serial cable disconnected and you don’t want to get those nasty errors. If you want to play with the sketch without connecting the Arduino, change it to false. The String turnTableAngle holds the string coming from Arduino containing the current angle of the turntable.

import processing.serial.*;
import processing.opengl.*;
import SimpleOpenNI.*;
import kinectOrbit.*;
// Initialize Orbit and simple-openni Objects
KinectOrbit myOrbit;
SimpleOpenNI kinect;
// Serial Parameters
Serial myPort; // Initialize the Serial Object
boolean serial = true; // Define if you're using serial communication
String turnTableAngle = "0"; // Variable for string coming from Arduino

You initialize four ArrayLists. The first two, scanPoints and scanColors, contain the points currently being scanned by the Kinect and contained within your scanning volume, together with their colors. The ArrayLists objectPoints and objectColors contain all the points that make up your scanned object, patched together. These are the points that will be exported in .ply format.

// Initialize the ArrayLists for the pointClouds and the colors associated
ArrayList<PVector> scanPoints = new ArrayList<PVector>(); // PointCloud
ArrayList<PVector> scanColors = new ArrayList<PVector>(); // Object Colors
ArrayList<PVector> objectPoints = new ArrayList<PVector>(); // PointCloud
ArrayList<PVector> objectColors = new ArrayList<PVector>(); // Object Colors

The following variables define the “model space,” the volume that contains your objects. The float baseHeight is the height of your base in the Kinect coordinate system; in our case, the base was placed 67mm under the Kinect camera. modelWidth and modelHeight are the dimensions of the model space. Anything that falls within this volume will be scanned.

// Scanning Space Variables
float baseHeight = -67; // Height of the Model's base
float modelWidth = 400;
float modelHeight = 400;
PVector axis = new PVector(0, baseHeight, 1050);

You need to define some additional variables. The scanLines parameter defines how many rows of the Kinect depth image you use at every scan (Figure 12-22). Think of a traditional structured 3D scanner. A single line is scanned at every angle. This parameter can be set to low values if you have a pretty symmetrical geometry around its center. Low scanLines values need to be used with a high number of scans, defined by the size of shotNumber[]. Play with these two parameters until you achieve a smooth result in your point cloud. In our example, we used a high number of lines, 200, and only three shots or patches.


Figure 12-22. The scanLines parameter set to 5 (left), 25 (center), and 65 (right)

The variable scanRes defines the number of pixels you take from your scan. A value of 1 is the highest resolution; if you choose a number higher than 1, you will skip that many pixels in every scan. The Boolean variables and the currentShot integer are used to drive the flow of the sketch depending on the incoming data.

// Scan Parameters
int scanLines = 200;
int scanRes = 1;
boolean scanning;
boolean arrived;
float[] shotNumber = new float[3];
int currentShot = 0;

Setup and Draw Functions

Within the setup() function, you initialize all your KinectOrbit, Simple-OpenNI, and serial objects. You also need to work out the angle corresponding to every shot you want to take. The idea is to divide the 360 degrees of the circle, or 2*PI radians, into the number of shots specified by the size of the shotNumber[] array. Later, you send these angles to the Arduino and you take a 3D shot from every one of them. Lastly, you move the turntable to the start position, or angle zero, so it’s ready to scan.

public void setup() {
  size(800, 600, OPENGL);
  // Orbit
  myOrbit = new KinectOrbit(this, 0, "kinect");
  myOrbit.drawCS(true);
  myOrbit.drawGizmo(true);
  myOrbit.setCSScale(200);
  myOrbit.drawGround(true);
  // Simple-openni
  kinect = new SimpleOpenNI(this);
  kinect.setMirror(false);
  kinect.enableDepth();
  kinect.enableRGB();
  kinect.alternativeViewPointDepthToImage();

  // Serial Communication
  if (serial) {
    String portName = Serial.list()[0]; // Get the first port
    myPort = new Serial(this, portName, 9600);
    // don't generate a serialEvent() unless you get a newline
    // character:
    myPort.bufferUntil('\n');
  }

  for (int i = 0; i < shotNumber.length; i++) {
    shotNumber[i] = i * (2 * PI) / shotNumber.length;
  }
  if(serial) { moveTable(0); }
}

Within the draw() function, you first update the Kinect data and start the Orbit loop. Pack all the different commands into functions so the workflow is easier to understand.

public void draw() {
  kinect.update(); // Update Kinect data
  background(0);

  myOrbit.pushOrbit(this); // Start Orbiting

First, you draw the global point cloud with the drawPointCloud() function, which takes an integer parameter defining the resolution of the point cloud. As you’re not very interested in the global point cloud, you speed things up by drawing only one in every five points.

  drawPointCloud(5);

Then you update the object points, passing the scanLines and scanRes defined previously as parameters. As you will see later, this function is in charge of transforming the points to the global coordinate system.

  updateObject(scanLines, scanRes);

You have two Boolean values defining whether you are in the scanning process and whether you have reached the next scanning position. If both are true, you take a shot with the scan() function.

  if (arrived && scanning) { scan(); }

The last step is to draw the objects, the bounding box, and the camera frustum. And of course, close the Kinect Orbit loop.

  drawObjects();
  drawBoundingBox(); // Draw Box Around Scanned Objects
  kinect.drawCamFrustum(); // Draw the Kinect cam
  myOrbit.popOrbit(this); // Stop Orbiting
}

Additional Functions

The drawPointCloud() function is similar to the one previously used to visualize the raw point cloud from the Kinect. You take the depth map, bring it to real-world coordinates, and display every point on screen, skipping the number of points defined by the steps parameter. The dim background points in Figure 12-23 are the result of this function.

void drawPointCloud(int steps) {
  // draw the 3D point depth map
  int index;
  PVector realWorldPoint;
  stroke(150);

  for (int y = 0; y < kinect.depthHeight(); y += steps) {
    for (int x = 0; x < kinect.depthWidth(); x += steps) {
      index = x + y * kinect.depthWidth();
      realWorldPoint = kinect.depthMapRealWorld()[index];
      point(realWorldPoint.x, realWorldPoint.y, realWorldPoint.z);
    }
  }
}

Figure 12-23. Scanned figure in the bounding box

The drawObjects() function displays on screen the point cloud defining the object being scanned. This function has two parts: first, it displays the points stored in the objectPoints ArrayList, which are the object points patched together. Then it displays the points stored in the scanPoints ArrayList, which are the points being currently scanned. All these points are displayed in their real color and with strokeWeight = 4, so they stand out visually, as you can see in Figure 12-23.

void drawObjects() {
  pushStyle();
  strokeWeight(4);

  for (int i = 0; i < objectPoints.size(); i++) {
    stroke(objectColors.get(i).x, objectColors.get(i).y, objectColors.get(i).z);
    point(objectPoints.get(i).x, objectPoints.get(i).y, objectPoints.get(i).z + axis.z);
  }

  for (int i = 0; i < scanPoints.size(); i++) {
    stroke(scanColors.get(i).x, scanColors.get(i).y, scanColors.get(i).z);
    point(scanPoints.get(i).x, scanPoints.get(i).y, scanPoints.get(i).z + axis.z);
  }

  popStyle();
}

The drawBoundingBox() function displays the limits of the scanning volume. Anything contained within this volume is scanned.

void drawBoundingBox() {
  stroke(255, 0, 0);
  line(axis.x, axis.y, axis.z, axis.x, axis.y + 100, axis.z);
  noFill();
  pushMatrix();
  translate(axis.x, axis.y + modelHeight / 2, axis.z);
  box(modelWidth, modelHeight, modelWidth);
  popMatrix();
}

The scan() function is in charge of storing the current point cloud into the object point cloud. You need to avoid multiplying the number of points unnecessarily, so you implement a loop in which you check the distance from each new point to every point already stored in the objectPoints ArrayList. If the distance from the new point to any of the previously stored points is under a certain threshold (1mm in our example), you won’t add this point to the object point cloud. Otherwise, you add the point and the corresponding color to the object ArrayLists.

void scan() {
  for (PVector v : scanPoints) {
    boolean newPoint = true;
    for (PVector w : objectPoints) {
      if (v.dist(w) < 1)
        newPoint = false;
    }

    if (newPoint) {
      objectPoints.add(v.get());
      int index = scanPoints.indexOf(v);
      objectColors.add(scanColors.get(index).get());
    }
  }

After taking the shot and adding it to the object point cloud, you check whether the shot taken was the last on the list. If it is not, you tell the turntable to move to the next position and you set the arrived Boolean to false. This prevents the sketch from taking another shot before you get an “arrived” signal from Arduino indicating that you have reached the target position. If the shot happens to be the last one, you set the scanning Boolean to false, declaring the end of the scanning process. Figure 12-24 shows the three steps in this scanning process.

  if (currentShot < shotNumber.length-1) {
    currentShot++;
    moveTable(shotNumber[currentShot]);
    println("new angle = " + shotNumber[currentShot]);
    println(currentShot);
    println(shotNumber);
  }
  else {
    scanning = false;
  }
  arrived = false;
}

Figure 12-24. Three-step scan

The updateObject() function is where the computation of the world coordinates of the scanned points takes place. You declare an integer to store the vertex index and a PVector to store the real coordinates of the current point. Then you clear the ArrayLists of your scanned points so you can update them from scratch.

void updateObject(int scanWidth, int step) {
  int index;
  PVector realWorldPoint;
  scanPoints.clear();
  scanColors.clear();

You need to know the current angle of the turntable in order to compute the global coordinates of the points. You work out this angle by mapping the integer value of your turnTableAngle string from its range (100-824) to the 360-degree range in radians (0-2*PI). Remember that turnTableAngle is the string coming from Arduino, so it changes every time you rotate your turntable.

  float angle = map(Integer.valueOf(turnTableAngle), 100, 824, 2 * PI, 0);

The next lines draw a line at the base of the bounding box to indicate the rotation of the turntable.

  pushMatrix();
  translate(axis.x, axis.y, axis.z);
  rotateY(angle);
  line(0, 0, 100, 0);
  popMatrix();

Now, you run a nested loop through your depth map pixels, extracting the real-world coordinates of every point and its color.

  int xMin = (int) (kinect.depthWidth() / 2 - scanWidth / 2);
  int xMax = (int) (kinect.depthWidth() / 2 + scanWidth / 2);
  for (int y = 0; y < kinect.depthHeight(); y += step) {
    for (int x = xMin; x < xMax; x += step) {
      index = x + (y * kinect.depthWidth());
      realWorldPoint = kinect.depthMapRealWorld()[index];
      color pointCol = kinect.rgbImage().pixels[index];

If the current point is contained within the defined scanning volume or bounding box (this is what the scary-looking “if()” statements check), you create the PVector rotatedPoint to store the global coordinates of the point.

      if (realWorldPoint.y < modelHeight + baseHeight && realWorldPoint.y > baseHeight) {
        if (abs(realWorldPoint.x - axis.x) < modelWidth / 2) {  // Check x
          if (realWorldPoint.z < axis.z + modelWidth / 2 && realWorldPoint.z > axis.z - modelWidth / 2) {  // Check z

            PVector rotatedPoint;

The coordinate system transformation happens in two steps. First, you need to express the point relative to the center of the turntable. The axis vector defines the coordinates of the center of the turntable in the Kinect coordinate system, so you only need to subtract the axis vector from the real-world point coordinates to get the transformed vector. Then you need to rotate the point around the y-axis by the current angle of your turntable. You use the function vecRotY() to do this transformation (Figure 12-25).

            realWorldPoint.z -= axis.z;
            realWorldPoint.x -= axis.x;
            rotatedPoint = vecRotY(realWorldPoint, angle);
Images

Figure 12-25. Coordinate system transformations

Now your rotatedPoint should contain the real-world coordinates of the scanned point, so you can add it to the scanPoints ArrayList. You also want to add its color to the scanColors ArrayList so you can retrieve it later. Finally, you close all the curly brackets: three for the if() statements, two for the for() loops, and the last one to close the updateObject() function.

            scanPoints.add(rotatedPoint.get());
            scanColors.add(new PVector(red(pointCol), green(pointCol), blue(pointCol)));
          }
        }
      }
    }
  }
}
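The three nested if() statements amount to an axis-aligned bounding-box test. Pulled out into a standalone Java predicate (the class, method, and parameter names are mine, purely for illustration), the check is easier to read and to verify:

```java
public class ScanBox {
    // True if the point {x, y, z} falls inside the scanning volume:
    // a box of width modelWidth and height modelHeight sitting on the
    // turntable base, centered on the rotation axis at {ax, *, az}
    static boolean inside(double[] p, double ax, double az,
                          double modelWidth, double modelHeight, double baseHeight) {
        return p[1] > baseHeight && p[1] < baseHeight + modelHeight   // Check y
            && Math.abs(p[0] - ax) < modelWidth / 2                   // Check x
            && p[2] > az - modelWidth / 2 && p[2] < az + modelWidth / 2; // Check z
    }
}
```

A point on the axis and above the base passes the test; a point farther than half the model width from the axis, or below the base, is rejected.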

The vecRotY() function returns the PVector resulting from rotating the input PVector around the y-axis by the input angle.

PVector vecRotY(PVector vecIn, float phi) {
  // Rotate the vector around the y-axis
  PVector rotatedVec = new PVector();
  rotatedVec.x = vecIn.x * cos(phi) - vecIn.z * sin(phi);
  rotatedVec.z = vecIn.x * sin(phi) + vecIn.z * cos(phi);
  rotatedVec.y = vecIn.y;
  return rotatedVec;
}
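You can sanity-check this rotation with known values: rotating (1, 0, 0) a quarter turn about the y-axis should land the point on the z-axis, and rotating by an angle and then by its negative should return the original point. Here is the same math as a standalone Java sketch (plain arrays instead of PVector; the naming is mine):

```java
public class RotY {
    // Same math as vecRotY(): rotate {x, y, z} around the y-axis by phi radians
    static double[] vecRotY(double[] v, double phi) {
        return new double[] {
            v[0] * Math.cos(phi) - v[2] * Math.sin(phi),
            v[1],
            v[0] * Math.sin(phi) + v[2] * Math.cos(phi)
        };
    }

    public static void main(String[] args) {
        double[] p = vecRotY(new double[] {1, 0, 0}, Math.PI / 2);
        // A quarter turn moves the x-axis onto the z-axis (within rounding error)
        System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]);
    }
}
```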

You use the function moveTable(float angle) every time you want to send a new position to your turntable. The input angle is in degrees (0-360). This function sends a trigger character ‘S’ through the serial channel to indicate that the communication has started. Then you send the input angle remapped to one byte of information (0-255). It prints out the new angle for information.

void moveTable(float angle) {
  myPort.write('S');
  myPort.write((int) map(angle, 0, 360, 0, 255));
  println("new angle = " + angle);
}
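Because a serial write carries a single byte, the 0-360 degree range has to be squeezed into 0-255, which costs you some resolution (about 1.4 degrees per step). A quick Java sketch of this encoding, together with the matching decode the Arduino side would perform (the names are illustrative, not from the book's code):

```java
public class ServoByte {
    // Encode an angle in degrees (0-360) into one serial byte (0-255)
    static int encode(double degrees) {
        return (int) (degrees * 255.0 / 360.0);
    }

    // Decode on the receiving side, back to degrees
    static double decode(int b) {
        return b * 360.0 / 255.0;
    }

    public static void main(String[] args) {
        System.out.println(encode(0));    // 0
        System.out.println(encode(180));  // 127
        System.out.println(encode(360));  // 255
    }
}
```

Round-tripping an angle through encode() and decode() recovers it to within one step of the 1.4-degree resolution.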

All the preceding functions are called from the draw() loop or from other functions. You also need to include two additional functions that you won’t be calling explicitly but will be called by Processing in due time. The first one is the serialEvent() function. It is called every time you get a new line character in your serial buffer (you specified this in the setup() function). If you receive a string, you trim off any white spaces and you check the string. If you receive a “start” or “end” message, you just display it on the console. If you receive an “arrived” message, you also set your Boolean arrived to true so you know that you have reached the target position.

If the message is not one of the preceding ones, that means you should be receiving the current angle of the turntable, so you update your turnTableAngle string to the incoming string.

public void serialEvent(Serial myPort) {
  // get the ASCII string:
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    // trim off any whitespace:
    inString = trim(inString);
    if (inString.equals("end")) {
      println("end");
    }
    else if (inString.equals("start")) {
      println("start");
    }
    else if (inString.equals("arrived")) {
      arrived = true;
      println("arrived");
    }
    else {
      turnTableAngle = inString;
    }
  }
}
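The dispatch logic inside serialEvent() is easy to isolate and test on its own. The following is a hypothetical Java condensation of it (the class and method are mine; the real sketch sets global variables instead of returning a label):

```java
public class SerialMessage {
    // Classify one raw line coming from the Arduino, mirroring serialEvent():
    // status messages are echoed, "arrived" flags the target position,
    // and anything else is taken to be the current potentiometer reading.
    static String classify(String raw) {
        if (raw == null) return "none";
        String s = raw.trim(); // trim off any whitespace, including the newline
        if (s.equals("start") || s.equals("end")) return "status";
        if (s.equals("arrived")) return "arrived";
        return "angle";
    }

    public static void main(String[] args) {
        System.out.println(classify("arrived\n")); // arrived
        System.out.println(classify("512"));       // angle
    }
}
```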

The keyPressed() function is called whenever you press a key on your keyboard. You assign several actions to specific keys, as described in the following code.

public void keyPressed() {
  switch (key) {
  case 'r': // Send the turntable to start position
    moveTable(0);
    scanning = false;
    break;
  case 's': // Start scanning
    objectPoints.clear();
    objectColors.clear();
    currentShot = 0;
    scanning = true;
    arrived = false;
    moveTable(0);
    break;
  case 'c': // Clear the object points
    objectPoints.clear();
    objectColors.clear();
    break;
  case 'e':      // Export the object points
    exportPly('0');
    break;
  case 'm': // Move the turntable to the x mouse position
    moveTable(map(mouseX, 0, width, 0, 360));
    scanning = false;
    break;
  case '+':  // Increment the number of scanned lines
    scanLines++;
    println(scanLines);
    break;
  case '-':  // Decrease the number of scanned lines
    scanLines--;
    println(scanLines);
    break;
  }
}

Note that the R key sends the turntable back to its start position, and the S key starts a scan from that start position: the turntable is first sent to the zero position and then the scanning begins.

The M key helps you control the turntable. Pressing it sends the turntable to the angle defined by the x-coordinate of your mouse on the Processing frame (remapped from 0-width to 0-360). For example, pressing M while your mouse is on the left edge of the frame sends the turntable to 0 degrees, and pressing it while the mouse is on the right edge sends it to 360 degrees.

The E key calls the function exportPly() to export the point cloud. This leads to the next step. At the moment, you are able to scan a figure all around and have on your screen a full point cloud defining its shape (Figure 12-26). But remember that you wanted to convert this point cloud to a mesh so you can bring it into 3D modeling software or 3D print it. You are going to do this with another open source software package, so you need to be able to export a file recognizable by that software. You will use the polygon file format, or .ply, for this exporting routine, which you will implement yourself. You'll see how in the next section.

Images

Figure 12-26. Figure after 360-degree scan

Exporting the Point Cloud

As stated in the introduction, you are going to mesh the point cloud using an open source software package called Meshlab, which can import unstructured point clouds in two formats, .ply and .obj. There are several Processing libraries that allow you to export these formats, but exporting a .ply file is so simple that you are going to implement your own file exporter so you can control every datum you send out.

The .ply extension corresponds to the polygon file format, or Stanford triangle format. Data in this format can be stored as ASCII or binary files. You will use ASCII so you can actually write your data directly to an external file and then read it from Meshlab.

The .ply format starts with a header formed by several lines. It starts by declaring “I am a ply file” with the word “ply” written on the first line. Then it needs a line indicating the type of .ply file. You indicate that you are using ASCII and the version 1.0. You then insert a comment indicating that you are looking at your Processing-exported .ply file. Note that comments start with the word “comment”— pretty literal, eh?

images Note The next lines are not Processing code, but the lines that will be written on your .ply file!

ply
format ascii 1.0
comment This is your Processing ply file

These first three lines are constant in every file you export. You could add a comment that changes with the name of the file or any other information. If you want to add a comment, just make sure your line starts with the word "comment".

The next lines, still within the header, declare the number of vertices, and the properties you are exporting for each of them. In your case, you are interested in the coordinates and color, so you declare six properties: x, y, z, red, green, and blue. You now close the header.

element vertex 17840
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header

After closing the header, you have to write the values of all the properties stated for each vertex, so you get a long list of numbers like this:

0.0032535514 0.1112856 0.017800406 125 1 1
0.005102699 0.1112856 0.017655682 127 1 81
-0.0022937502 0.10943084 0.018234566 130 1 1
………

Each line contains the six properties declared for each vertex, in the order you declared them and separated by white spaces. Let's have a look at the code you need to implement to effectively produce .ply files containing the object point cloud that you previously generated.

images Note In this case, you are only exporting points, but .ply files also allow you to export edges and faces. If you wanted to export edges and faces, you would need to add some lines to your header defining the number of edges and faces in your model and then a list of faces and a list of edges. You can find information on the .ply format at http://paulbourke.net/dataformats/ply.
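The one thing that most often trips Meshlab up is a header whose declared vertex count disagrees with the number of data lines that follow. A small standalone Java sketch that builds the whole ASCII file from a list of points (the class and helper names are mine, for illustration only) makes that invariant explicit:

```java
import java.util.List;

public class PlyWriter {
    // Build a complete ASCII .ply string; each point is {x, y, z, r, g, b}
    static String build(List<double[]> points) {
        StringBuilder sb = new StringBuilder();
        sb.append("ply\n");
        sb.append("format ascii 1.0\n");
        sb.append("comment This is your Processing ply file\n");
        // The declared count must match the number of vertex lines below
        sb.append("element vertex ").append(points.size()).append('\n');
        sb.append("property float x\nproperty float y\nproperty float z\n");
        sb.append("property uchar red\nproperty uchar green\nproperty uchar blue\n");
        sb.append("end_header\n");
        for (double[] p : points) {
            sb.append(p[0]).append(' ').append(p[1]).append(' ').append(p[2]).append(' ')
              .append((int) p[3]).append(' ').append((int) p[4]).append(' ').append((int) p[5])
              .append('\n');
        }
        return sb.toString();
    }
}
```

Because the count in the "element vertex" line is taken from the same list that drives the loop, the header and body can never disagree.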

The exportPly Function

First, you declare a PrintWriter object. This class prints formatted representations of objects to a text-output stream. Then you declare the name of the output file and you call the method createWriter(), passing the file name in the data path as a parameter.

void exportPly(char key) {
  PrintWriter output;
  String viewPointFileName;
  viewPointFileName = "myPoints" + key + ".ply";
  output = createWriter(dataPath(viewPointFileName));

Now you need to print out all the header lines. The number of vertices is extracted from the size of the objectPoints ArrayList.

  output.println("ply");
  output.println("format ascii 1.0");
  output.println("comment This is your Processing ply file");
  output.println("element vertex " + objectPoints.size());
  output.println("property float x");
  output.println("property float y");
  output.println("property float z");
  output.println("property uchar red");
  output.println("property uchar green");
  output.println("property uchar blue");
  output.println("end_header");

After closing the header, you run through the whole objectPoints ArrayList and print out each point's x, y, and z coordinates and the integers representing its color components. You want the output to be in meters, so you scale down the point coordinates by dividing the values by 1000.

  for (int i = 0; i < objectPoints.size(); i++) {
    output.println((objectPoints.get(i).x / 1000) + " "
      + (objectPoints.get(i).y / 1000) + " "
      + (objectPoints.get(i).z / 1000) + " "
      + (int) objectColors.get(i).x + " "
      + (int) objectColors.get(i).y + " "
      + (int) objectColors.get(i).z);
  }

When you are done with your point cloud, you flush the PrintWriter object and you close it. Now you should have a neat .ply file in the data path of your Processing file!

  output.flush(); // Write the remaining data
  output.close(); // Finish the file
}

You have now gone through the whole process of scanning a three-dimensional object and exporting the resulting point cloud to a .ply file. You will next learn how to use this point cloud to generate a surface using Meshlab.

Surface Reconstruction in Meshlab

Surface reconstruction from unorganized point sets is a complicated process requiring even more complicated algorithms. There is a lot of research out there on how this process can be performed, and luckily for you, there are several open source libraries like CGAL (http://cgal.org) and open source software like Meshlab (http://meshlab.sourceforge.net) that you can use to perform these operations without being an accomplished mathematician.

Meshlab is an open source system for the processing and editing of 3D triangular meshes. It was born in 2005 with the intention of helping with the processing of the unstructured models arising in 3D scanning. That is pretty much the state you are in: you have 3D scanned a figure and need to process the point cloud and reconstruct the implicit surface.

Meshlab is available for Windows, Mac OSX and Linux, so whatever your OS, you should now go to the web site and download the software. You’ll start using it right now!

If you have already installed Meshlab, .ply files should be automatically associated with it, so you can double-click your file and it will open in Meshlab. If this is not the case, open Meshlab and import your file by clicking the File menu and then Import Mesh.

If your file was exported correctly and the number of vertices corresponds to the declared number of vertices, when you open the file you should see on screen a beautifully colored point cloud closely resembling your scanned object, like the one shown in Figure 12-27.

Images

Figure 12-27. Point cloud .ply file opened in Meshlab

If your file contains inconsistencies (like a different number of points declared and printed), Meshlab can throw an “Unexpected end of file” error at you. If you click OK, you won’t see anything on the screen. Don’t panic! The points are still there, but you have to activate the Points view on the top bar, and then you will see the point cloud. It won’t be colored, though. You can then go to the Render menu and choose Colors/Per Vertex. After this, you should see an image similar to the previous one.

Now orbit around your point cloud and identify any unwanted points. Sometimes Kinect detects some points in strange places, and it helps the reconstruction process to get rid of those. Sometimes the Kinect is not perfectly calibrated with the turntable, so you will have slight inconsistencies in the patched edges. To delete the unwanted points, select the Select Vertexes tool from the top menu bar. You can select several points together by holding the Control or Command key down. After selecting, you can click Delete Selected Vertices on the right of the top menu bar (Figure 12-28).

Images

Figure 12-28. Delete unwanted points

You are now ready to start the surface reconstruction process. First, you need to compute the normal vector at each of your points, which is the vector perpendicular to the surface of the object at that point, pointing out of the object. This provides the reconstruction algorithm with information about the topology of your object. Go to the Filters/Point Set menu and choose "Compute normal for point sets." This should bring up a menu. Choose 16 as the number of neighbors and click Apply. The process shouldn't take more than a couple of seconds. It may seem like nothing changed, but if you go to the Render menu and tick Show Vertex Normals, you should now see the point cloud with all the normals in light blue.

Go to the Filters/Point Set menu again and click Surface Reconstruction: Poisson. In the menu, there are several parameters that we won't have the time to discuss in this book. These parameters influence how precise the reconstruction is, how many neighboring points are taken into account, and so on. Choose 16 as Octree Depth and 7 as Solver Divide; depending on your object and the quality of the scan, the optimum parameters can change greatly, so try to find yours.

Images

Figure 12-29. Mesh generated by Poisson reconstruction

Images

Figure 12-30. Vertex colors transferred to the mesh

After a couple of painfully long minutes, you should get a pretty neat surface like the one in Figure 12-29. The last step is transferring the colors of the vertices to your mesh, so you can export the fully colored mesh to any other software. Go to Filters/Sampling and choose Vertex Attributes Transfer. Make sure that the Source Mesh is your point cloud and the Target Mesh is your Poisson mesh. Tick only Transfer Color and click Apply. The mesh should now show a range of colors pretty similar to the original figure (Figure 12-30), and you are done!

If your point cloud was not what you would call "neat," the Poisson reconstruction will sometimes throw out something much like a potato shape, slightly smaller than your point cloud. Don't be discouraged; Meshlab can fix it! In the previous step, instead of transferring only the colors, transfer the geometry as well. You will then get a nicely edged shape like the one in Figure 12-31.

Images

Figure 12-31. Vertex geometry and color transferred to the mesh

Go to “Filters/Smoothing, Fairing, and Deformation” and click HC Laplacian Smooth. Apply the filter as many times as needed to obtain a reasonably smooth surface. With a little luck, the result will be reasonably good, considering you started from a pretty noisy point cloud.

One last tip on this process: sometimes you'll get an error during the import process, the computed normals will be inverted, and the figure will look pretty dark. Again, you can use Meshlab to invert your face normals. Go to "Filters/Normals, Curvature, and Orientation" and click Invert Faces Orientation. This should make your mesh appear brighter. Also check that the light toggle button is on; sometimes the switch is off and you might get the impression that your normals are inverted when you have just forgotten to turn on the light. Do it on the top menu bar!

You have generated a mesh that can be exported to a vast number of file formats compatible with CAD packages, 3D printers and other 3D modeling software. Go to File/Export Mesh As and choose the file format you need. We exported it as a .ply file with colors and imported it into Blender (see Figure 12-32), a great open source modeling software freely available from www.blender.org.

Images

Figure 12-32. The figure mesh imported into Blender

If your goal is to replicate your object with a 3D printer, there are plenty of services online that will take your STL model, print it (even in color), and send it to you by mail. But, being a tinkerer, you might be willing to go for the DIY option and build yourself a RepRap. This is a pretty astonishing concept: a self-replicating 3D printer, meaning that a RepRap is partially made from plastic components that you can print using another RepRap. It is certainly the cheapest way to have your own 3D printer, so have a look at http://reprap.org/ for plenty of information on where to buy the parts and how to build it.

Summary

In this chapter you learned several techniques for working with raw point clouds. You haven’t made use of any of the NITE capabilities, so actually this project could be done using any of the available Kinect libraries, as long as you have access to the Kinect point data.

You used gears, a servo, and a multi-turn potentiometer to build a precise turntable that can be controlled from Processing and that provides real-time feedback of its rotational state. You acquired 3D data with your Kinect and patched the data together using geometrical transforms to form a 360-degree scan of an object. You even built your own file-exporting protocol for your point cloud and learned how to process the point cloud and perform a Poisson surface reconstruction using Meshlab. The outcome of this process is an accurate model of your 3D object that you can export to any 3D modeling software package.

The possibilities of 360-degree object scanning for architects, character artists, and people working in 3D in general are endless. Having this tool means that for less than $200 (including your Kinect!) you have a device that can take any three-dimensional shape and bring its geometry and colors into your computer to be used by any application. It’s your task now to imagine what to do with it!
