CHAPTER 13

Kinect-Controlled Delta Robot

by Enrique Ramos and Ciriaco Castro

This last chapter of the book will guide you through the process of building and programming your own Kinect-controlled delta robot (Figure 13-1). You will learn everything from the inverse kinematics of this specific kind of robot to the programming of the hand tracking routines and the process of translating the data from the virtual simulation into electrical pulses for the operation of the physical robot.

Delta robots, a type of parallel robot, are not only great for industrial use, where they perform a variety of tasks at incredibly high speeds, but they are also beautiful to watch and relatively simple to build. They rely on the use of parallelograms to translate the rotational movement of three independent motors into a precise 3D position of the robot’s effector. You are building a basic gripper in this project, but the effector could actually be any manipulator you can think of. Imagine this effector changed into a magnetic gripper, a laser welding/cutting tip, or a nozzle for 3D printing. After you understand the concepts in this project, you will have a platform extendable to any kind of robotic application.


Figure 13-1. Delta robot simulation and physical robot

You will need quite a few components and materials for the development of this project, but they are all widely available and reasonably priced (see Table 13-1). You will find most components in your local hardware store; the motors are standard servos you can find at SparkFun and similar retailers. The ball joints are perhaps the most specific pieces for this project, but there are a number of online shops where you can find them.


About This Project

This project was born at the Adaptive Architecture and Computation MSc program at the Bartlett, UCL. We created the first version of the Kinect-driven delta robot together with Miriam Dall’Igna under installation artist Ruairi Glynn. After finishing the project, we had the opportunity to develop a new, much larger version of the installation called “Motive Colloquies” at the prestigious Centre Pompidou in Paris, together with Ruairi Glynn and the London School of Speech and Drama. You can view a first version of the project at http://vimeo.com/20594424 and the performance at Pompidou Centre at http://vimeo.com/25792567.

The Delta Robot

What exactly is a delta robot? A delta robot is a type of parallel robot invented by Reymond Clavel in the 1980s. The basic idea behind it is the use of parallelograms in the design of the robot’s legs in order to keep the orientation of an end platform, or effector, that has three degrees of translational freedom but no rotation. The robot consists of only three motors that control the rotation of the three legs, which transform these rotations into a specific pose determined by the connectivity and the nature of the joints of the structure.

There are many industrial applications for this type of robot, mainly due to their low inertia, which allows high accelerations. This makes delta robots some of the fastest robots ever built; they are especially useful for picking and packaging processes. Run a search on the Internet for “delta robot” and you will be surprised by the variety of applications and the amazing speeds these robots can achieve.

Building a Delta Robot

Let’s start putting things together so you can get your fantastic delta robot working. As mentioned, your delta robot has three limbs (even though there are some four-legged members of the family) attached to a fixed base on one end and to the robot’s effector on the other. The fixed base accommodates the three servos and the Arduino board; it is the only static part of the robot. The legs constitute a kinematic chain that translates the servos’ rotational states into specific spatial positions of the effector, which is composed of a rotating base and a gripper, controlled by two small servos.

The Legs

In order to describe the robot’s limbs, we’re going to use an analogy of a human leg. Each leg is composed of a “thigh” (one strong piece that withstands the torque coming from the motor) and a “shin” (two parallel pieces connected to the effector and the thigh), joined through a “knee” consisting of ball joints. Each leg connects to the effector at an “ankle” joint and to the base at the “hip,” which is the joint controlled by the servo motors.

For the thigh part, we selected hollow square sections due to their better resistance to bending and torsion. Before starting to build the first limb, you need to drill small holes in these bars in order to connect them to the servos. These holes should be the same diameter as the bolts that you use to connect each bar to its servo. It’s important to be really accurate because this union needs to be as strong as possible if you want to achieve a good level of accuracy (Figure 13-2).


Figure 13-2. Thigh connected to the servo horn

You need to measure the position of the holes according to the plastic servo horn. Once you have drilled these holes (you need a minimum of two), you can join the square bar to the servo horn using a couple of bolts. On the other end, drill one hole in the perpendicular plane. This one is bigger: the same diameter as the corrugated rod that you are going to pass through it (6 mm). The next job is to prepare the shin’s aluminum rods. You have to thread them using a diestock in order to screw them into the ball joints (Figure 13-3). If you don’t have the right tool, you can use standard steel corrugated rods instead of the aluminum ones. Note that this option may require shorter legs or more powerful servos because of the increase in weight.


Figure 13-3. Threaded aluminum rod and ball joint ready for assembly

Once you have screwed a ball joint onto each end of the aluminum rod, repeat the process with another aluminum rod, making sure the two are exactly the same length.

Next, you join all these elements together and finish your first leg. The use of ball joints is key for this project. The leg’s joints should allow full rotational freedom for the correct movement of the robot. Pass the corrugated rod through the ball joint, the square section bar, and the second ball joint, as shown in Figure 13-4.


Figure 13-4. Ball joint assembly

In order to allow the free rotation of the ball joint, use a series of spacers. First, screw a couple of nuts onto both sides of the square section bar in order to fix it to the rod, and then fit a short section of the aluminum tube (5 mm will do); this allows for ball joint movement without clashing with the corrugated rod. These spacers are fitted before and after the ball joint. Finally, put a nylon nut on each end (Figure 13-5). If you don’t have nylon nuts around, you can use two nuts, locking them together tightly in order to fix the articulation.


Figure 13-5. Leg articulation finished

You now have one leg ready. Just repeat the same process twice more (Figure 13-6), paying attention to the position of the holes in the square section bars (dimensions should be consistent across the three legs!) and the length of the aluminum rods. These two factors play a key role in the final accuracy of the robot’s movements. Once you have your legs ready, move on to building the base.


Figure 13-6. Legs ready to be assembled to the base and effector

The Base

The base holds the servo motors and is the piece connecting the whole robot to the cantilevered timber plank acting as a support. If you are planning to build your robot differently, just pay attention to the clearance for the legs. When the robot moves, the legs rise above the plane of the base, so you need to make room for them to move freely.

You are going to fabricate the base out of Perspex, but any other rigid material will work. We built our base using a laser cutter, so we got a pretty neat and accurate shape and holes. If you want to laser-cut your base, there are some good online services where you can send the vector file (dwg, ai or pdf) and they will ship the finished product. Otherwise, you can use the files from the book’s web site to trace over and cut your base with a good old scalpel.

The base shape is a triangle (Figure 13-7) with each leg’s axis aligned to the middle point of each side. Use a triangle of 440 mm per side. The servos are held via L brackets so the rotation happens on the right plane. You subtract a rectangular shape from each side in order to allow the thigh to lift over the plane of the base. You also chop off the triangle’s vertices to save some Perspex and have a smaller base (Figure 13-8).


Figure 13-7. Base shape


Figure 13-8. Laser-cut base and servo with an L bracket

Once the piece is cut, you just have to screw the servo motors to the brackets, fixing their position; pay special attention to the distance from the border at which they are fixed, which must be the same for all three (Figure 13-9). The next step is to screw the brackets that hold the servos to the base.


Figure 13-9. Assembled base

The Effector

The effector is a triangular piece that moves parallel to the plane of the base (Figure 13-10). You build it out of a triangular piece of Perspex with 135 mm sides. The aluminum rods are joined to this piece.

Use the same type of joint that you saw in the union between thigh and shin. Start by fixing the small square bars to the triangle. Draw a series of holes along the medians of the triangle and a rectangular shape where you insert a new servo motor for the gripper. Don’t forget to also draw the holes for fixing the servos to this piece! Once the piece is cut, drill some small holes into the square bars, making them match up with the ones in the effector. These square bars stick out of the triangular shape in order to have enough room to connect them to the legs. Before fixing them with some screws and nuts, drill a hole at one end (in the perpendicular plane) to build the same type of joint previously described. This hole must be big enough to let an M6 rod pass through it.


Figure 13-10. Effector drawing and assembly

Once all the holes are drilled, fix the square bars to the Perspex piece, forming the effector. Now join it to the legs. Remember to use the same type of node you used previously: square bar in the center, nut fixing it on both sides, spacer, ball joint, spacer, nylon nut or double nut.

Now that the three legs are joined to the effector, you can join them to the base! Just fix each servo horn to its servomotor and screw it in so it doesn’t come off (believe me, they do). And that’s it! Your delta robot is nearly finished. But no robot is complete without an end effector. The robot needs a tool to act in the world; you’re going to build a gripper, a robotic hand that will catch anything within its reach.

The Gripper

The gripper is used by the robot to pick up objects. It’s a really simple mechanism that can open and close using just one servo. You will laser-cut the Perspex pieces and assemble them into the gripper mechanism. The gripper shown in Figure 13-11 is a modified version of a design from http://jjshortcut.wordpress.com, which is itself a modification of a design from http://www.diniro.net (one beautiful example of Internet sharing!). We altered the design to account for the thickness of the material, the specific servos, and the effector size; we also added a circular base.


Figure 13-11. Gripper parts

Note that all the holes have been marked in order to make the assembly easier and also to allow the rectangular shape to host a servo. The rotation of the servo is transmitted to the gears; this closes the gripper. The gripper is assembled to a base that allows rotation in the perpendicular plane using another servomotor.

You start by assembling the first gear with the servo horn, then assemble the first servo to the big piece, and continue with the rest of the pieces, using bolts and nylon nuts or double nuts. Once you have the complete mechanism, test it by moving it with your hand. It should have a pretty smooth movement (Figure 13-12). Secure all bolts but don’t overtighten them. Add some rubber adhesives in order to have a more stable grip (Perspex doesn’t have enough friction, so objects just slip away!).


Figure 13-12. Half gripper


Figure 13-13. Motor assembled on the gripper

Once your gripper is assembled, connect it to the effector. Remember that you left enough space to fix one servo to the effector (Figure 13-13). Then fit a servo horn to the gripper’s circular base, fix it to the servo, and screw it in place. Lastly, assemble the gripper with the circular base. To avoid too much vibration, use two mini L brackets, one on each side. The final piece should look like the one in Figure 13-14.


Figure 13-14. Assembled gripper

As a final touch, we added a couple of red LEDs on both sides of the gripper simply as an effect (Figure 13-15). The quaint face-like effect it conveys is quite theatrical when the robot is in action.


Figure 13-15. Assembled gripper with servo and LEDs on board

Hanging the Robot

You cantilever the robot using an MDF board that you previously cut in the same shape as the robot’s base (Figure 13-16), allowing the legs to go over the base plane. It is very important that you allow this movement. If you attach the robot directly to a flat surface (say you screw the base to your ceiling), the legs won’t be able to go over the base plane and the robot’s movements will be severely limited.


Figure 13-16. Cantilevered MDF board

Mark the holes and use long screws to fix this MDF board to the base. Once everything is assembled, use a timber shelf as a platform and a couple of clamps to fix the robot hanging in space (Figure 13-17). It’s important to measure this height because you need to add a working plane if you want a surface where you can leave objects for the robot to pick up (Figure 13-18).


Figure 13-17. Hanging the robot from the board


Figure 13-18. Robot setup

You can use a normal table as a working plane. Place the Kinect under the working plane, at the right height to detect your movements but also far enough from the robot, because you never know what these creatures are capable of!

Building the Circuit

Now it’s time to deal with the wiring. You have two LEDs and two servos in the gripper plus three big servos for the legs, and you need to connect them all to your main circuit. Run extension lead cables from the gripper for the LEDs and the servos, fixing them along one leg with cable ties. Another option is to use some cable tidy (a plastic spiral that collects all the cables inside), shown in Figure 13-19.


Figure 13-19. Cable tidy

Then you connect shorter extension cables for the three servos on the legs. Note that you need an external 5V power supply for this project; the 5V supply from the Arduino cannot reliably deliver the current required by all the servos working at the same time.

The circuit contains five servos (three for the legs and two for the gripper), two LEDs with their resistors, your Arduino, and an external power supply. Arduino pins 9, 10, and 11 control the three leg servos. Pins 5 and 6 control the gripper’s servos and pin 3 controls the LEDs, as per Figure 13-20.


Figure 13-20. Delta robot circuit


Figure 13-21. Circuit materials

To build the circuit, you are going to use a strip board (Figure 13-21). Start by measuring it and soldering the breakaway headers that connect to your Arduino board (Figure 13-22). It’s important that you have a good look at the back of the strip board and confirm the direction of the copper strips before starting to solder! If you want to give it more stability, you can add more headers than necessary. The important ones are ground and pins 9, 10, 11, 5, 6, and 3.


Figure 13-22. Strip board with soldered headers

After soldering, double-check that the headers are placed correctly by plugging the shield into your Arduino board and making sure that it goes in easily. You may have to bend some of the headers with a nipper to get the shield properly connected.

The next step is to solder strips of three headers that plug into the female connectors from the servos. Put them in the same column, leaving three rows between them. On each strip you get power (red wire), ground (black wire), and signal (yellow wire), as you saw in the previous chapters when we introduced servo motors. All servos are going to share ground and power, which come from the external power supply; the third pin is connected independently to the corresponding Arduino pin (Figure 13-23).


Figure 13-23. Headers for the servos

After soldering five of them (one for each servo), solder a strip of two headers for connecting the LEDs later. In this case, you share just the ground (common to the whole circuit), so this strip is shifted with respect to the previous column; in other words, one pin is in the same column as the ground and the other is completely independent in another strip.

Now turn over the strip board and scratch some lines into the copper to disconnect pins that shouldn’t be connected (the servos’ signal pins). You can use a scalpel to remove the copper that connects the line. Then check that the connection is completely broken by using a multimeter.

The next step is to connect the ground to the Arduino ground pin (ground is common in the circuit). After that, connect each signal pin to the corresponding pin on the Arduino. Now all your servos are connected to the Arduino. For the LED, connect a resistor between the strip that is connected with pin 3 on the Arduino board and the power free pin that you left before for the LED (Figure 13-24).


Figure 13-24. Finished board

The last step is to solder two cables to the strip’s power and ground rails for the servos. These two cables are connected to an external power source. You can use a PC power supply for a smooth 5V power output. The only trick is to hack it so it works without being connected to a computer motherboard.

You will use the so-called paperclip trick. Make sure that the power is off and the power supply is disconnected (Figure 13-25); find the 20/24-pin connector that is used to connect the power supply to the motherboard. In the 20-pin block, find the green wire; there is only one, and beside it are two black ground cables. Connect the green wire to either of the grounds using a small piece of wire; it doesn’t matter which ground you choose, as both will work.


Figure 13-25. The paperclip trick

And that’s it! For safety, wrap some electrical tape around this connection so it’s not exposed. Now you can pick any of the thin 5V cables and connect red (power) and black (ground) to your circuit.

Everything is ready now! Plug your homemade shield into the Arduino and plug all the servos into their corresponding pins (making sure that they are the correct ones, and that ground, power, and signals are correct!). Connect your LED cable (again making sure that you correctly connect ground and power!), connect the power and ground to your external power supply (Figure 13-26), and you’re ready to bring in the code!


Figure 13-26. Finished circuit

Delta Robot Simulation Software

You now have a (theoretically) working delta robot hanging from a support, and you have built a circuit attached to an Arduino board to control the robot. It’s time to give it a brain! This role is fulfilled by your Processing sketch, which will tell the three servos to go to certain positions in order to move the effector to a precise point in space. You will also send the necessary data to determine the rotation and the state of the gripper. You will, of course, send all this information to Arduino through serial communication; your board is in charge of transmitting the right messages to the right devices at the right times.

You are going to proceed step by step. First, you are going to develop the delta robot classes and test them with a simple example. When you are sure the simulation is working properly, you’ll add all the Kinect and Serial methods, and then you will implement the Arduino program that will translate the serial data into pulses that the servos can understand. But first, we need to clarify an important concept.

Inverse Kinematics

When you work with robots, you often need to make use of inverse kinematics. But what on Earth is that? When you have a jointed, flexible object, or kinematic chain (like your delta robot, or any other articulated robot), there are two problems that you usually need to solve when studying its movements.

  • If I move all the motors or actuators in the structure to certain states, what is the resulting pose of the robot?
  • If I want to achieve a specific pose, what are the required states of my actuators in order to achieve it?

The first question can be answered by forward kinematics, the second one by inverse kinematics. In your case, what you want is to be able to drive the robot’s effector to specific points in space so it can follow your hand; this is within the realm of inverse kinematics. You have a pretty straightforward structure, driven by only three servos (the other two are unrelated to the robot’s position in space), but you need to find out how the three independent spatial coordinates of x, y, and z that define the desired position of the effector translate back through the kinematic chain to the three rotations of the servos. This is achieved by analyzing the connection and geometry of the robot and the constraints this geometry imposes on the robot’s kinematics.

The goal of the inverse kinematics of a leg is to find the servo rotation necessary to achieve a specific effector position in space. Let’s assume for a second you are looking at a simpler, two-dimensional problem. If you need to find out the servo angle for the diagram in Figure 13-27, the answer is obtained through planar geometry. The code that gives you the answer is the following:

float c = dist(posTemp.x + effectorSize, posTemp.y, baseSize, 0); // hip-to-ankle distance
float alpha = acos((-a2 + thigh * thigh + c * c) / (2 * thigh * c)); // law of cosines; in 2D, a2 is simply shin * shin
float beta = -atan2(posTemp.y, posTemp.x); // inclination of the hip-to-ankle line
servoAngle = alpha - beta;

Figure 13-27. Delta robot’s inverse kinematics in 2D and 3D
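
If it helps to see the planar geometry spelled out, here is our reading of the four lines above (an interpretation of the diagram, not extra code from the project). The variable c is the distance from the hip joint to the ankle joint, a is the shin (or, in 3D, its projection onto the plane of the leg), and the triangle formed by the thigh, a, and c yields the servo angle through the law of cosines:

cos(α) = (thigh² + c² - a²) / (2 · thigh · c)
β = -atan2(y, x)
servoAngle = α - β

The angle α lies between the thigh and the line c, while β accounts for the inclination of c itself; combining the two expresses the servo rotation relative to the plane of the base.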

But you are not in a two-dimensional world, so you need to resolve the problem for a three-dimensional structure. If you look closely at Figure 13-27, you can see that the problem can be reduced to a two-dimensional one in which the length of the shin is replaced by its projection on the plane of the servo rotation. The square of this projection, called a2 in the code, is obtained from the Pythagorean theorem.

float a2 = shin * shin - posTemp.z * posTemp.z; // squared projection of the shin on the servo plane

Once a2 is known, you only need to substitute it for the squared shin length in the previous equations and you get the answer to the three-dimensional problem.

float c = dist(posTemp.x + effectorSize, posTemp.y, baseSize, 0);
float alpha = acos((-a2 + thigh * thigh + c * c) / (2 * thigh * c));
float beta = -atan2(posTemp.y, posTemp.x);
servoAngle = alpha - beta;

Note  This has been a very brief introduction to the delta robot’s inverse kinematics, but there are plenty of resources on the Internet where you can find more information on this topic. Another advantage of delta robots is the extent to which they have been studied and the availability of good online information about them.

Now you need to structure your delta robot classes.

DeltaRobot Class

The delta robot simulation consists of the DeltaRobot class (Figure 13-28), containing the main parameters and routines, and a DeltaLeg class, where you deal with the inverse kinematics and the drawing functions for each leg. Let’s start with the DeltaRobot class. You need a series of fields defining the position of the effector at each moment in time, and a zeroPos PVector that establishes the position from which you define the movements of your effector.


Figure 13-28. Delta robot classes

You define a variable called numLegs that sets the number of legs of your robot. Yes, you can actually increase the number of legs to 4, 5, or any higher number. The reason there is no 23-legged parallel robot out there is that any higher number of legs would just be a waste of material (three being the minimum number of legs necessary for stability). Having said that, one of the fastest commercial delta robots, the Adept Quattro, is based on a four-leg architecture, so feel free to play with the parameter. The results are certainly curious (Figure 13-29).

class DeltaRobot {
   PVector posVec = new PVector(); // Position of the Effector
   PVector zeroPos;
   int numLegs = 3; // Number of legs

Figure 13-29. Two distant (and pretty bizarre) cousins of your delta robot, with 5 and 150 legs

You then declare an array of DeltaLeg objects, called leg, to contain the three (or any other number of) legs of the robot, and an array of floats for the angles of the joints driven by the servos (the upper joints). You also declare the variables that set the dimensions of your specific delta robot, and three variables defining the maximum span of your robot in the three axes: robotSpanX, robotSpanZ, and maxH. You need to respect these distances if you don’t want to send unreachable locations to your robot, as this would lead to the legs disappearing in the case of the simulation or, worse, a damaged robot if you are driving the physical robot.

  DeltaLeg[] leg = new DeltaLeg[numLegs]; // Create an array of deltaLegs
  float[] servoAngles = new float[numLegs]; // Store the angles of each leg
  float thigh, shin, baseSize, effectorSize; // Delta-Robot dimensions
  float gripRot = 100;
  float gripWidth = 100;
  // Maximum dimensions of the Robot space
  float robotSpanX = 500;
  float robotSpanZ = 500;
  float maxH;

You have come to the main constructor of the DeltaRobot class. It takes as parameters the lengths of the thigh and shin elements, and the sizes of the base and the effector. Note that when you refer to the base and effector sizes, you mean the distance between the center of the base/effector and the axis of the joint attached to the element (Figure 13-30).


Figure 13-30. Base and effector sizes in the sketch

DeltaRobot(float thigh, float shin, float baseSize, float effectorSize) {
    // Set the variables
    this.thigh = thigh;
    this.shin = shin;
    this.baseSize = baseSize;
    this.effectorSize = effectorSize;
    this.maxH = -(thigh + shin) * 0.9f; // Avoid the effector going out of range
    zeroPos = new PVector(0, maxH / 2, 0);

The DeltaLegs need to be initialized to the right dimensions, those that you extract from your robot’s parameters. Each leg is initialized to a certain angle, one that you find by dividing the full circle (2*Pi radians) by the number of legs. In our sketch, we added PI/6 because we wanted to center the legs from the Kinect point of view.

for (int i = 0; i < numLegs; i++) {
      float legAngle = (i * 2 * PI / numLegs) + PI / 6;
      leg[i] = new DeltaLeg(i, legAngle, thigh, shin,  baseSize, effectorSize);
    }
  }

Once all your variables are initialized, you add some methods to your main delta robot class. The public methods you call from the main sketch are moveTo() and draw().

The moveTo() method takes a PVector as a parameter. You use this method to tell the robot to go to a specific position from your main sketch. You consider this PVector to be the relative position of the robot’s effector to the point specified as zero position. The resulting position of your robot’s effector is the result of adding the incoming PVector to the zero position.

public void moveTo(PVector newPos) {
    posVec.set(PVector.add(newPos, zeroPos));

Within this method, and before passing the new position to each leg, you need to make sure that the position is a “legal” position, which means a position physically reachable by the robot. The position is legal if it is contained within a parallelepiped of the dimensions defined by the variables robotSpanX, robotSpanZ, and maxH. If one of your coordinates is out of this virtual volume, trim the value to the maximum value of the movement on that axis.

    float xMax = robotSpanX * 0.5f;
    float xMin = -robotSpanX * 0.5f;
    float zMax = robotSpanZ * 0.5f;
    float zMin = -robotSpanZ * 0.5f;
    float yMax = -200;
    float yMin = 2*maxH+200;

    if (posVec.x > xMax) posVec.x = xMax;
    if (posVec.x < xMin) posVec.x = xMin;
    if (posVec.y > yMax) posVec.y = yMax;
    if (posVec.y < yMin) posVec.y = yMin;
    if (posVec.z > zMax) posVec.z = zMax;
    if (posVec.z < zMin) posVec.z = zMin;
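
Incidentally, Processing’s built-in constrain() function performs exactly this kind of trimming, so the six if statements could be collapsed into three lines if you prefer (same behavior, purely a stylistic alternative):

    posVec.x = constrain(posVec.x, xMin, xMax);
    posVec.y = constrain(posVec.y, yMin, yMax);
    posVec.z = constrain(posVec.z, zMin, zMax);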

Finally, move each leg to its new position and ask it what angle the servo needs in order to achieve it. The servo angles are stored in an array for later.

    for (int i = 0; i < numLegs; i++) {
      leg[i].moveTo(posVec); // Move the legs to the new position
      servoAngles[i] = leg[i].servoAngle; // Get the servo angles
    }
  }

At this point, you have all the data you need to drive the robot (the three servo angles), so you could implement a serial communication protocol and start driving your robot straight away. But let’s develop the class a little further to include a whole visualization interface for your robot. You will ultimately have a virtual model of the delta robot that behaves exactly like the physical model and that you can visualize on screen. We think this is important for the project because it’s a good idea to make sure everything is working before you output the values to the servos, but the whole project would work perfectly without the visual side. We also consider this simulation to be of great help in understanding the behavior of the machine, and it will later prove very useful when you bring the Kinect input along.

The public method draw() is called from your main routine when you want to display your delta robot on screen. This method translates the main matrix so the delta robot is shown at a certain height instead of at the origin of coordinates, because this origin will later be the Kinect device. Then you call each leg’s own draw() function and the drawEffector() function that draws the gripper.

   public void draw() {
    stroke(50);
    pushMatrix();
    translate(0, -maxH, 0);

    for (int i = 0; i < numLegs; i++) {
      leg[i].draw();
    }

    drawEffector();
    popMatrix();
  }

The drawEffector() function displays the effector and the gripper you have attached to it. We won’t go into much detail on this because it is mainly an exercise in changing transformation matrices and drawing elements so you end up with a simplified model of your gripper that can be rotated, opened, and closed at will (Figure 13-31).


Figure 13-31. Effector and gripper simulation

   void drawEffector() {
    // Draw the Effector Structure
    stroke(150);
    fill(150, 50);
      beginShape();
      for (int i = 0; i < numLegs; i++) {
        vertex(leg[i].ankleVec.x, leg[i].ankleVec.y,
        leg[i].ankleVec.z);
      }
      endShape(CLOSE);
    // Draw Gripper
    stroke(200, 200, 255);
    fill(200, 50);

    // Translate your Coordinate System to the effector position
    pushMatrix();
    translate(posVec.x, posVec.y - 5, posVec.z);
    rotateX(-PI/2); // Rotate the CS so you can draw
    ellipse(0, 0, effectorSize / 1.2f, effectorSize / 1.2f);
    rotate(map(gripRot, 35, 180, -PI / 2, PI / 2));

    for (int j = -1; j < 2; j += 2) {
      translate(0, 2 * j, 0);
      beginShape();
      vertex(-30, 0, 0);
      vertex(30, 0, 0);
      vertex(30, 0, -35);
      vertex(15, 0, -50);
      vertex(-15, 0, -50);
      vertex(-30, 0, -35);
      endShape(CLOSE);

      for (int i = -1; i < 2; i += 2) {
        pushMatrix();
        translate(i * 20, 0, -30);
        rotateX(PI / 2);
        ellipse(0, 0, 10, 10);
        rotate(i * map(gripWidth, 50, 150, 0, PI / 2.2f));
        rect(-5, -60, 10, 60);
        translate(0, -50, 0);
        rotate(-i * map(gripWidth, 50, 150, 0, PI / 2.2f));
        rect(-5, -60, 10, 60);
        popMatrix();
      }
    }
    popMatrix();
  }

There is one more function to add, but you only use it in the next example. The updateGrip() method can be called from the main sketch whenever you want to change the rotation and width of the gripper.

  public void updateGrip(float gripRot, float gripWidth) {
    this.gripRot = gripRot;
    this.gripWidth = gripWidth;
  }
}

DeltaLeg Class

Your main DeltaRobot class is complete now, and you have built the necessary interfaces to drive the robot from your main program, but the most important part of the code is yet to be implemented: working out the inverse kinematics of the robot and getting the servo angles necessary to achieve the desired pose.

In the DeltaRobot class, when moving the robot to a new position, you use a “black box” that gives you the servo angles. This black box is the DeltaLeg class. This class has a series of variables defining the id of the leg, the position of the effector, the angle of the servo, the angle of the leg, and the real-world coordinates of the leg’s joints, which are used for visualization.

class DeltaLeg {
  int id; // id of the leg
  PVector posVec = new PVector(); // Effector Position
  float servoAngle; // Angle between the servo and the XZ plane
  float legAngle; // Y rotation angle of the leg

  // Universal position of the joints
  PVector hipVec, kneeVec, ankleVec;

  float thigh, shin, baseSize, effectorSize; // Sizes of the robot's elements
   DeltaLeg(int id, float legAngle, float thigh, float shin, float base, float effector) {
    this.id = id;
    this.legAngle = legAngle;
    this.baseSize = base;
    this.effectorSize = effector;
    this.thigh = thigh;
    this.shin = shin;
  }

The following function, moveTo(), is actually where the magic happens. This short function converts the input position vector into a servo rotation, which is the equivalent of saying that this is the function dealing with the robot’s inverse kinematics. (We ran through the delta robot’s inverse kinematics previously, so we just repeat the code here.)

  void moveTo(PVector thisPos) {
    posVec.set(thisPos);
    PVector posTemp = vecRotY(thisPos, -legAngle);

    // find projection of a on the z=0 plane, squared
    float a2 = shin * shin - posTemp.z * posTemp.z;

    // calculate c with respect to base offset
    float c = dist(posTemp.x + effectorSize, posTemp.y, baseSize, 0);
    float alpha = (float) Math.acos((-a2 + thigh * thigh + c * c) / (2 * thigh * c));
    float beta = -(float) Math.atan2(posTemp.y, posTemp.x);
    servoAngle = alpha - beta;
    getWorldCoordinates();
  }

The getWorldCoordinates() function updates the PVectors defining the joints of the leg in world coordinates so they can be drawn on screen. Use the helper functions vecRotY() and vecRotZ() to perform the vector rotations.

   void getWorldCoordinates () {
    // Unrotated Vectors of articulations
    hipVec = vecRotY(new PVector(baseSize, 0, 0), legAngle);
    kneeVec = vecRotZ(new PVector(thigh, 0, 0), servoAngle);
    kneeVec = vecRotY(kneeVec, legAngle);
    ankleVec = new PVector(posVec.x + (effectorSize * (float) Math.cos(legAngle)), posVec.y,
    posVec.z - 5 + (effectorSize * (float) Math.sin(legAngle)));
  }

   PVector vecRotY(PVector vecIn, float phi) {
    // Rotates a vector around the universal y-axis
    PVector rotatedVec = new PVector();
    rotatedVec.x = vecIn.x * cos(phi) - vecIn.z * sin(phi);
    rotatedVec.z = vecIn.x * sin(phi) + vecIn.z * cos(phi);
    rotatedVec.y = vecIn.y;
    return rotatedVec;
  }

   PVector vecRotZ(PVector vecIn, float phi) {
    // Rotates a vector around the universal z-axis
    PVector rotatedVec = new PVector();
    rotatedVec.x = vecIn.x * cos(phi) - vecIn.y * sin(phi);
    rotatedVec.y = vecIn.x * sin(phi) + vecIn.y * cos(phi);
    rotatedVec.z = vecIn.z;
    return rotatedVec;
  }

The leg’s public draw() function is called from the DeltaRobot’s draw() function and takes care of displaying the geometry of the leg on screen. Once again, we won’t go into much detail here, but you can follow the comments in the code.

  public void draw() {
    // Draw three lines to indicate the plane of each leg
    pushMatrix();
    translate(0, 0, 0);
    rotateY(-legAngle);
    translate(baseSize, 0, 0);
    if (id == 0) stroke(255, 0, 0);
    if (id == 1) stroke(0, 255, 0);
    if (id == 2) stroke(0, 0, 255);
    line(-baseSize / 2, 0, 0, 1.5f * baseSize, 0, 0); // note: 3 / 2 would truncate to 1 in integer math
    popMatrix();

    // Draw the Ankle Element
    stroke(150);
    strokeWeight(2);
    line(kneeVec.x, kneeVec.y, kneeVec.z, ankleVec.x, ankleVec.y,
    ankleVec.z);
    stroke(150, 140, 140);
    fill(50);
    beginShape();
    vertex(hipVec.x, hipVec.y + 5, hipVec.z);
    vertex(hipVec.x, hipVec.y - 5, hipVec.z);
    vertex(kneeVec.x, kneeVec.y - 5, kneeVec.z);
    vertex(kneeVec.x, kneeVec.y + 5, kneeVec.z);
    endShape(PConstants.CLOSE);
    strokeWeight(1);

    // Draw the Hip Element
    stroke(0);
    fill(255);

    // Align the z axis to the direction of the bar
    PVector dirVec = PVector.sub(kneeVec, hipVec);
    PVector centVec = PVector.add(hipVec, PVector.mult(dirVec, 0.5f));
    PVector new_dir = dirVec.get();
    PVector new_up = new PVector(0.0f, 0.0f, 1.0f);
    new_up.normalize();
    PVector crss = dirVec.cross(new_up);
    float theAngle = PVector.angleBetween(new_dir, new_up);
    crss.normalize();

    pushMatrix();
    translate(centVec.x, centVec.y, centVec.z);
    rotate(-theAngle, crss.x, crss.y, crss.z);
    // rotate(servoAngle);
    box(dirVec.mag() / 50, dirVec.mag() / 50, dirVec.mag());
    popMatrix();
    }
  }

Driving the Delta Robot Simulation with the Mouse

You have wrapped all functions you need for driving and visualizing a delta robot in two classes that you can add to any Processing sketch, so let’s run a first test.

You are going to implement the simplest of applications and drive a virtual model of a delta robot with your mouse position. Import OpenGL and your Kinect Orbit library; then initialize a DeltaRobot object to play with.

import processing.opengl.*;
import kinectOrbit.KinectOrbit;

import SimpleOpenNI.*;

// Initialize Orbit and simple-openni Objects
KinectOrbit myOrbit;
SimpleOpenNI kinect;

// Delta Robot
DeltaRobot dRobot;
PVector motionVec;

public void setup() {
  size(1200, 900, OPENGL);
  smooth();

  // Orbit
  myOrbit = new KinectOrbit(this, 0, "kinect");
  myOrbit.setCSScale(100);

  // Initialize the Delta Robot to the real dimensions
  dRobot = new DeltaRobot(250, 430, 90, 80);
}

Now the only thing you need to do is, within the orbit loop (so you can rotate around), create a motion PVector with your mouse coordinates, move the delta robot to that point in space, and draw it on screen to see the result. Done!

public void draw() {
  background(0);
  myOrbit.pushOrbit(this); // Start Orbiting
  motionVec = new PVector(width/2-mouseX, 0, height/2-mouseY);// Set the motion vector
  dRobot.moveTo(motionVec); // Move the robot to the relative motion vector
  dRobot.draw();  // Draw the delta robot in the current view.
  myOrbit.popOrbit(this); // Stop Orbiting
}

Run the sketch. You should get a neat image of a delta robot on screen (Figure 13-32). If you move your mouse over the sketch, the robot should move around accordingly.


Figure 13-32. Mouse-driven delta robot simulation

Spend some time with your new virtual toy and be delighted by its movements. When we first saw a delta robot in action, we were amazed by the richness of the movements that emerged from such a simple structure. When you have made friends with this quaint creature, you can move on to the next step: driving it with your bare hands.

Kinect-Controlled Delta Robot

You are going to reuse the two classes you already created to develop a more complex application. The goal is to introduce Kinect and NITE’s hand tracking capabilities to drive the delta robot simulation, and then implement your own routine to drive the gripper by tilting, opening, and closing your hand (Figure 13-33). Then you will add a serial communication routine to send the servo state to Arduino so you can plug the software straight into your physical robot and drive it with your hand.


Figure 13-33. Delta robot driven by hand gestures

Import the full set of libraries you have been working with (this means Simple-OpenNI, Serial, OpenGL, and KinectOrbit) and then declare the corresponding variables. You are using hand tracking, so you need NITE’s Session Manager and Point Control. The Boolean variable serial can be set to false if you want to run the sketch without an Arduino board plugged into the computer.

import processing.opengl.*;
import processing.serial.*;
import SimpleOpenNI.*;
import kinectOrbit.KinectOrbit;

// Initialize Orbit and simple-openni Objects
KinectOrbit myOrbit;
SimpleOpenNI kinect;
Serial myPort;
boolean serial = true;

// NITE
XnVSessionManager sessionManager;
XnVPointControl pointControl;
// Font for text on screen
PFont font;

// Variables for Hand Detection
boolean handsTrackFlag;
PVector handOrigin = new PVector();
PVector handVec = new PVector();
ArrayList<PVector> handVecList = new ArrayList<PVector>();
int handVecListSize = 30;
PVector[] realWorldPoint;

You want to be able to control the position of the robot and the state of the gripper, so you need to declare the PVector motionVec and the two floats gripRot and gripWidth.

// Delta Robot
DeltaRobot dRobot;
PVector motionVec;
float gripRot;
float gripWidth;

private float[] serialMsg =  new float[5]; // Serial Values sent to Arduino

The setup() function initializes all the objects you have previously declared and enables all the NITE capabilities you are going to be using. Note that you are only adding “wave” gesture recognition to the Session Manager because you want better control of hand creation at runtime. If you included RaiseHand, you would trigger a hand creation event as soon as your hand entered the Kinect’s field of view; by adding only waving, you can control when and where you start tracking your hand, which will be useful at a later stage.

Of course, you also initialize the delta robot object to the dimensions (in mm) of the robot you have built and set the serial communication to the first port in your computer where your Arduino is connected.

public void setup() {
  size(800, 600, OPENGL);
  smooth();

  // Orbit
  myOrbit = new KinectOrbit(this, 0, "kinect");
  myOrbit.drawCS(true);
  myOrbit.drawGizmo(true);
  myOrbit.setCSScale(100);

  // Simple-openni object
  kinect = new SimpleOpenNI(this);
  kinect.setMirror(false);
  // enable depthMap generation, hands + gestures
  kinect.enableDepth();
  kinect.enableGesture();
  kinect.enableHands();

  // setup NITE
  sessionManager = kinect.createSessionManager("Wave", "Wave");
  // Setup NITE.s Hand Point Control
  pointControl = new XnVPointControl();
  pointControl.RegisterPointCreate(this);
  pointControl.RegisterPointDestroy(this);
  pointControl.RegisterPointUpdate(this);

  sessionManager.AddListener(pointControl);
  // Array to store the scanned points
  realWorldPoint = new PVector[kinect.depthHeight() * kinect.depthWidth()];
  for (int i = 0; i < realWorldPoint.length; i++) {
    realWorldPoint[i] = new PVector();
  }
  // Initialize Font
  font = loadFont("SansSerif-12.vlw");

  // Initialize the Delta Robot to the real dimensions
  dRobot = new DeltaRobot(250, 430, 90, 80);

  if (serial) {
    // Initialize Serial Communication
    String portName = Serial.list()[0]; // This gets the first port on your computer.
    myPort = new Serial(this, portName, 9600);
  }
}

Now add the XnV Point Control callback functions, which are pretty similar to the ones you used in previous examples for hand tracking. Set your hand tracking flag on or off and update your hand PVector and your hand position history ArrayList.

Add one more line to the onPointCreate function. Because you want to track your hand movements after you have waved to the Kinect, you need to set the handOrigin PVector to the point in space where the waving took place (Figure 13-34). This allows you to set your motion vector as the relative displacement vector of your hand from the hand-creation point.


Figure 13-34. Hand origin and current hand position

public void onPointCreate(XnVHandPointContext pContext) {
  println("onPointCreate:");
  handsTrackFlag = true;
  handVec.set(pContext.getPtPosition().getX(), pContext.getPtPosition()
    .getY(), pContext.getPtPosition().getZ());
  handVecList.clear();
  handVecList.add(handVec.get());
  handOrigin = handVec.get();
}

public void onPointDestroy(int nID) {
  println("PointDestroy: " + nID);
  handsTrackFlag = false;
}

public void onPointUpdate(XnVHandPointContext pContext) {
  handVec.set(pContext.getPtPosition().getX(), pContext.getPtPosition()
    .getY(), pContext.getPtPosition().getZ());
  handVecList.add(0, handVec.get());
  if (handVecList.size() >= handVecListSize) { // remove the last point
    handVecList.remove(handVecList.size() - 1);
  }
}

The main draw() loop updates the Kinect object and sets a Kinect Orbit loop. Within this loop, you update the hand position and draw it, using two specific functions that you implement next.

public void draw() {
  background(0);
  // Update Kinect data
  kinect.update();
  // update NITE
  kinect.update(sessionManager);
  myOrbit.pushOrbit(this); // Start Orbiting

  if (handsTrackFlag) {
    updateHand();
    drawHand();
  }

Just to have an idea of the relative motion between the current position of the hand and the origin point (where the hand was created), draw a green line between the two.

  // Draw the origin point, and the line to the current position
  pushStyle();
  stroke(0, 0, 255);
  strokeWeight(5);
  point(handOrigin.x, handOrigin.y, handOrigin.z);
  popStyle();
  stroke(0, 255, 0);
  line(handOrigin.x, handOrigin.y, handOrigin.z, handVec.x, handVec.y,
  handVec.z);

And now you store that relative motion vector into the motionVec PVector and use it as a parameter to move the delta robot to its new position. You then proceed to draw the delta robot and send the serial data. Add a displayText function that prints on screen the data you are sending to the serial port for double-checking.

  motionVec = PVector.sub(handVec, handOrigin);// Set the relative motion vector
  dRobot.moveTo(motionVec); // Move the robot to the relative motion vector
  dRobot.draw();  // Draw the delta robot in the current view.
  kinect.drawCamFrustum(); // Draw the Kinect cam
  myOrbit.popOrbit(this); // Stop Orbiting
  if (serial) {
    sendSerialData();
  }
  displayText();  // Print the data on screen
}

Gripper Control

You want to implement a routine to control the delta robot’s gripper with your hand movements. Here is the theory.

Using NITE, you have already worked out the position of your hand, that is, the very center of your palm, and you are using it to set the robot’s position in space. Wouldn’t it be cool to open and close the gripper by opening and closing your hand? Well, there is a straightforward way of implementing this feature. You know your hand is defined by the set of points around the handVec position, so you can parse your point cloud and select all the points within a certain distance of your hand vector. After some testing, we settled on 100 mm as a reasonable distance. If you display the points that fall within this rule, you get a “white glove” (Figure 13-35), which you use to extract the state of your hand.


Figure 13-35. Hand point cloud

Once you know which points constitute your hand, you can think about what changes when you open and close your hand. There are no two ways about it: when you open your hand, the total width of your point cloud increases, which is to say that the horizontal distance between the rightmost and leftmost points (the tips of the thumb and little finger) increases (Figure 13-36).


Figure 13-36. Hand width tracking

In the same way, you can observe that when you tilt your hand forwards and backwards, the total height of the point cloud changes, which you can later use to define the rotation of your gripper (Figure 13-37).


Figure 13-37. Hand tilt tracking

You need to find these points and calculate their distances. Here’s the routine.

void updateHand() {
  // draw the 3D point depth map
  int steps = 3; // You can speed up the calculation by using fewer points
  int index;
  stroke(255);
  // Initialize all the PVectors to the barycenter of the hand
  PVector handLeft = handVec.get();
  PVector handRight = handVec.get();
  PVector handTop = handVec.get();
  PVector handBottom = handVec.get();

  for (int y = 0; y < kinect.depthHeight(); y += steps) {
    for (int x = 0; x < kinect.depthWidth(); x += steps) {
      index = x + y * kinect.depthWidth();
      realWorldPoint[index] = kinect.depthMapRealWorld()[index].get();
      if (realWorldPoint[index].dist(handVec) < 100) {
        // Draw the point cloud defining the hand
        point(realWorldPoint[index].x, realWorldPoint[index].y, realWorldPoint[index].z);
        if (realWorldPoint[index].x > handRight.x) handRight = realWorldPoint[index].get();
        if (realWorldPoint[index].x < handLeft.x) handLeft = realWorldPoint[index].get();
        if (realWorldPoint[index].y > handTop.y) handTop = realWorldPoint[index].get();
        if (realWorldPoint[index].y < handBottom.y) handBottom = realWorldPoint[index].get();
      }
    }
  }

After running this loop, you have four PVectors storing the four points you need for your purposes. You are going to draw a control gizmo using these points. This gizmo is a cube, the size of which will change according to your hand’s width. Likewise, the tilt will try to match the tilt of your hand, based on its height.

  // Draw Control Cube
  fill(100, 100, 200);
  pushMatrix();
  translate(handVec.x, handVec.y, handVec.z);
  rotateX(radians(handTop.y - handBottom.y));
  box((handRight.x - handLeft.x) / 2, (handRight.x - handLeft.x) / 2,
  10);
  popMatrix();

After running this code ad nauseam, we came up with some numbers that make a good range for the hand width and height values. A range of 65-200 works pretty nicely for both parameters. If you map this range to 0-255, you will have a pretty smooth value to be passed on to the Arduino board. You store the mapped values as gripWidth and gripRot.

  // Set the robot parameters
  gripWidth = lerp(gripWidth, map(handRight.x - handLeft.x, 65, 200, 0, 255), 0.2f);
  gripRot = lerp(gripRot, map(handTop.y - handBottom.y, 65, 200, 0, 255), 0.2f);
  dRobot.updateGrip(gripRot, gripWidth);
}

Include the drawHand() function from previous sketches to keep track of previous hand positions.

void drawHand() {
  stroke(255, 0, 0);
  pushStyle();
  strokeWeight(6);
  point(handVec.x, handVec.y, handVec.z);
  popStyle();

  noFill();
  Iterator<PVector> itr = handVecList.iterator();
  beginShape();
  while (itr.hasNext()) {
    PVector p = itr.next();
    vertex(p.x, p.y, p.z);
  }
  endShape();
}

Sending the Data to Arduino

By now you have all the data you need to control your robot, so you can proceed to sending these values to the Arduino board. You will develop the Arduino code in the next section. Remember that you were calling a sendSerialData() function from your main draw() loop, so it’s time to implement this function.

First, send the trigger character ‘X’ to indicate the beginning of the serial message. Then you have five values to send: one rotation for each of the three arm servos, the gripper’s rotation, and the gripper’s width.

Your servos accept a range from 500 to 2500, which gives you 2000 different possible positions. But Arduino reads the incoming serial data one byte at a time and, as one byte can only hold 256 values, that is the maximum resolution for your servo rotation that you can transmit in one single message. This is much lower than the resolution that your servos can use.

You want to match the resolution of your messages to the 2000 possible servo positions to get as smooth a movement of the robot as you can. The way you solve this problem is by splitting your integer into two single-byte numbers, sending the two bytes, and then recomposing the integer on the other end of the line (i.e. in the Arduino). What you are doing here is developing a protocol, as you learned in Chapter 4. To perform this splitting, you are going to make use of bitwise operations.

Note  Bitwise operations handle binary numerals at the level of their individual bits. They are fast, primitive actions, directly supported by the processor. A detailed explanation of bitwise operations is beyond the scope of this book, but you can find more information at www.cplusplus.com/doc/tutorial/operators.

First, you write the trigger character ‘X’ to the serial buffer to indicate the start of the communication. For each leg, you work out your angle as an integer, mapped to the range 0-2000 in order to match its range to the range of the servos. You then store it in the serialMessage[] array to be displayed on screen later.

Now you know the integer serialAngle that you want to send via serial. You can proceed to decompose it into its most significant byte (MSB) and its least significant byte (LSB), which you send as two consecutive serial messages.

The bitwise operation & 0xFF masks all but the lowest eight bits, so you can use this to extract the LSB. The operation >> 8 shifts all bits eight places to the right, discarding the lowest eight bits. You can combine this operation with & 0xFF to get the MSB.

You cast the two values to byte, because both operations return integers. After this, you are ready to write the two values to your serial port.

void sendSerialData() {
  myPort.write('X');
  for (int i=0;i<dRobot.numLegs;i++) {
    int serialAngle = (int)map(dRobot.servoAngles[i], radians(-90), radians(90), 0, 2000);
    serialMsg[i] = serialAngle;
    byte MSB = (byte)((serialAngle >> 8) & 0xFF);
    byte LSB = (byte)(serialAngle & 0xFF);

    myPort.write(MSB);
    myPort.write(LSB);
  }

For the grip rotation and width, 256 values are enough. If you are a perfectionist, you can take up the challenge of sending all the values as pairs of bytes.

  myPort.write((int)(gripRot));
  serialMsg[3] = (int)(gripRot);
  myPort.write((int)(gripWidth));
  serialMsg[4] = (int)(gripWidth);
}
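
As a quick sanity check on the byte gymnastics, here is a hypothetical round trip for one sample value; this is not part of the sketch, just the arithmetic written out:

int serialAngle = 1500;                           // sample value in the 0-2000 range
byte MSB = (byte) ((serialAngle >> 8) & 0xFF);    // 1500 >> 8 = 5
byte LSB = (byte) (serialAngle & 0xFF);           // 1500 & 0xFF = 220
int rebuilt = ((MSB & 0xFF) << 8) | (LSB & 0xFF); // 5 * 256 + 220 = 1500

The & 0xFF in the last line matters on the Java side because byte is signed; it restores the 0-255 value before the two halves are recombined.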

Finally, you display on screen the values that you are passing to the Arduino. This is good practice for debugging purposes.

void displayText() {

  fill(255);
  textFont(font, 12);
  text("Position X: " + dRobot.posVec.x + " Position Y: " + dRobot.posVec.y
        + " Position Z: " + dRobot.posVec.z, 10, 20);
  text("Servo1: " + serialMsg[0] + " Servo2: " + serialMsg[1]
        + " Servo3: " + serialMsg[2] + " GripRot: " + serialMsg[3]
        + " GripWidth: " + serialMsg[4], 10, 80);
}

And that’s it! The code you just wrote is ready to communicate with Arduino through serial. Now you need to write the Arduino code that will translate the information coming from Processing into physical movements of the physical robot.

Arduino Code

The code running inside your Arduino board has a simple role: it receives serial data that it remaps to the range of the servos and sends the appropriate message to each one of them.

You need variables to store the temporary values of the incoming data and then integers for the pin numbers and different pulses for the five servos. The longs previousMillis and interval deal with the spacing between messages sent to the servos.

unsigned int tempHandRot, tempGrip;
unsigned int servo1Pos, servo2Pos, servo3Pos;

int ledPin = 3;
int servo1Pin = 9;
int pulse1 = 1500;
int servo2Pin = 10;
int pulse2 = 1500;
int servo3Pin = 11;
int pulse3 = 1500;
int handRotPin = 5;
int handRotPulse = 1500;
int gripPin = 6;
int gripPulse = 1500;

long previousMillis = 0;
long interval = 20;

int speedServo1 = 0;
int speedServo2 = 0;
int speedServo3 = 0;

int handRotSpeed = 20;
int gripSpeed = 20;

Set all your pins as outputs and initialize the serial port.

void setup() {
  pinMode (ledPin, OUTPUT);
  pinMode (servo1Pin, OUTPUT);
  pinMode (servo2Pin, OUTPUT);
  pinMode (servo3Pin, OUTPUT);
  pinMode (handRotPin, OUTPUT);
  pinMode (gripPin, OUTPUT);

  Serial.begin(9600); // Start serial communication at 9600 bps

}

In the main loop, set the LED pin HIGH because you keep the LEDs on at all times, and then check the state of your serial buffer. If you have received more than eight bytes, which is what you are expecting (the trigger character plus eight data bytes), and the first of them is the trigger character ‘X’, you start reading the values as bytes. After every two values, you recompose the integer using the Arduino function word(), which returns a word data type (a 16-bit unsigned number, the same as an unsigned int on the Arduino) from two bytes.

void loop() {
  digitalWrite(ledPin, HIGH);
  if (Serial.available()>8) { // If a full message is available to read,
    char led=Serial.read();
    if (led=='X'){
      byte MSB1 = Serial.read();
      byte LSB1 = Serial.read();
      servo1Pos = word(MSB1, LSB1);

      byte MSB2 = Serial.read();
      byte LSB2 = Serial.read();
      servo2Pos = word(MSB2, LSB2);

      byte MSB3 = Serial.read();
      byte LSB3 = Serial.read();
      servo3Pos = word(MSB3, LSB3);

      tempHandRot = Serial.read();
      tempGrip = Serial.read();  
  }
}
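
If the recombination looks opaque, word() simply glues the two bytes back into a 16-bit value. Here is a hypothetical example with the bytes the Processing sketch would send for the value 1500 (not part of the sketch):

byte msb = 0x05;                   // high byte of 1500
byte lsb = 0xDC;                   // low byte of 1500 (220)
unsigned int pos = word(msb, lsb); // 0x05DC = 5 * 256 + 220 = 1500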

And now you remap the pulses from their expected ranges to the servo range 500-2500.

  pulse1 = (int)map(servo1Pos,0,2000,500,2500);
  pulse2 =  (int)map(servo2Pos,0,2000,500,2500);
  pulse3 =  (int)map(servo3Pos,0,2000,500,2500);

  handRotPulse =  (int)map(tempHandRot,0,200,2500,500);
  gripPulse =  (int)map(tempGrip,0,220,500,2500);

And finally, if 20 milliseconds have elapsed, you send the pulses to your servo motors to update their angles. Remember that different servos have slightly different ranges, so you should test your servos and find the correct range that drives them from 0 to 180 degrees before remapping the values (a minimal test sketch follows the updateServo() function below)!

  unsigned long currentMillis = millis();
  if(currentMillis - previousMillis > interval) {
    previousMillis = currentMillis;  
    updateServo(servo1Pin, pulse1);
    updateServo(servo2Pin, pulse2);
    updateServo(servo3Pin, pulse3);
    updateServo(handRotPin, handRotPulse);
    updateServo(gripPin, gripPulse);

  }
}

void updateServo (int pin, int pulse){
  digitalWrite(pin, HIGH);
  delayMicroseconds(pulse);
  digitalWrite(pin, LOW);
}
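
Because servo ranges vary between brands and even between individual units, you may want to verify the real travel of each servo before trusting the 500-2500 mapping. The following minimal test sketch is one way to do it (our own suggestion, not part of the project code; it assumes the servo signal is wired to pin 9). It holds a single pulse width so you can note where the horn settles; try 500, 1500, and 2500.

int testPin = 9;   // assumption: servo signal on pin 9
int pulse = 1500;  // change this value and observe the horn position

void setup() {
  pinMode(testPin, OUTPUT);
}

void loop() {
  digitalWrite(testPin, HIGH);  // same manual pulse technique as updateServo()
  delayMicroseconds(pulse);
  digitalWrite(testPin, LOW);
  delay(20);                    // ~50 Hz refresh, as in the main sketch
}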

If all the steps have been implemented appropriately, you should have a working delta robot following your hand as you move it in space (Figure 13-38).

Images

Figure 13-38. The delta robot in action

Summary

This has been quite a long project and its development has required you to master 3D transformations, inverse kinematics, hand recognition, point cloud analysis, serial communication, and a long list of programming techniques associated with the implementation of a physically accurate simulation of a robot. On the physical side, you worked with ball joints, laser cutting, servos, LEDs, and a series of other parts required by the making of a precise robotic platform.

The possibilities that natural interaction introduces into the world of robotics are yet to be explored. Not only are you freed from interfacing devices such as a mouse or keyboard, but you are also on the verge of a richer and more intuitive communication between humans and machines. Tools like Arduino and Kinect have allowed us to bridge the gap between our body kinetics and those of a machine within a pretty constrained budget and time scale. And this is only a start. You now have the tools to start tinkering with more complex applications for your robot, attaching different effectors and trying new forms of interaction. Your only limit is your imagination.
