10. An Autonomous Robot Needs STORIES

Robot Sensitivity Training Lesson #10: A robot doesn’t know what it doesn’t know.

At the hardware level a robot is simply a combination of chips, wires, pins, actuators, sensors, and end-effectors. How do we get from those components to a robot that can light the candles on a birthday cake or clean up after a party? At ROLL level 1, programming a robot is all about setting a voltage high/low, trapping signals, and reading pins.

At level 2, we graduate to writing software drivers for the robot's servos and actuators. We can control gear speeds using level 1 and level 2 robot programming. But at some point we want a robot that can do useful things in a meaningful environment. Not only that, we want the robot to do useful things without supervision or hand-holding. We want the robot to walk our dog, bring us a cool beverage, turn off the lights, and close the door all on its own. All of that is a long way from setting pin voltages and stepping motors. How do we get from sending signals to a collection of hardware components to useful, autonomous action by a robot?

Recall the seven requirements for our definition of a robot from Chapter 1, “What Is a Robot, Anyway?”:

1. It must be capable of sensing its external and internal environments in one or more ways through the use of its programming.

2. Its reprogrammable behavior, actions, and control are the result of executing a programmed set of instructions.

3. It must be capable of affecting, interacting with, or operating on its external environment in one or more ways through its programming.

4. It must have its own power source.

5. It must have a language suitable for the representation of discrete instructions and data as well as support for programming.

6. Once initiated, it must be capable of executing its programming without the need for external intervention.

7. It must be a nonliving machine. (Therefore, it’s not animal or human.)


Note

Requirement #5 specifies that a robot must have a language that can support instructions and data. A programming language has a method of specifying actions and a method of representing data or objects. An autonomous robot must carry out actions on objects in some designated scenario and environment. The language associated with the robot provides the fundamental key. Using a programming language, we can specify a list of actions the robot must execute. But the programming language can also be used to describe the environment and objects the robot must interact with. Requirements 1, 3, and 6 then can be used to implement robot autonomy.


It’s Not Just the Actions!

The actions that the robot performs are usually center stage when thinking about robot programming. But the environment in which the robot performs and the objects that the robot interacts with are at least as important as the actions the robot performs. We want the robot to perform tasks. These tasks involve the robot interacting with objects within a certain context, situation, and scenario.

The process of programming the robot requires that we use the programming language not only to represent the robot's actions, but also to represent the robot's environment, its situation, and the objects the robot must interact with. These situation and object representation requirements lead us to choose object-oriented languages like C++ and Java for programming a robot to be autonomous. These languages support object-oriented and agent-oriented programming techniques, and those techniques are part of the foundation of autonomous robots. Let's revisit our birthday party robot for a moment.

Birthday Robot Take 2

Recall that in our birthday party BR-1 scenario, the BR-1 was charged with the responsibility of lighting the candles on the birthday cake and then clearing the table of paper plates and cups after the party was over. How do we specify to the BR-1 what a birthday cake is? How do we describe the candles? What does it mean to light the candles? How will the BR-1 determine when the party is over? How do we program the BR-1 to recognize paper plates and cups?

In Chapter 6, "Programming the Robot's Sensors," we explained how basic data types are used with sensors. For example, we described how floats and ints might be used to represent sensor measurements of temperature, distance, and color, such as:

float  Temperature =  96.8;
float  Distance  =  10.2;
int     Color    =   16;

However, simple data types are not enough to describe a birthday cake or a birthday party scenario to a robot. What simple data type could we use to represent a candle or the birthday cake? How would we describe the idea of a party in our birthday robot scenario? A task is made up of actions and things. Robot programming must have a way to express the actions that must be performed as well as the things involved in those actions. There must be some way to describe the robot's environment to it, and some way to pass on the sequence of events that make up the scenario we want the robot to participate in. Our approach to programming autonomous robots requires that the robot be fully equipped with a description of a scenario, situation, episode, or script. So what exactly do we mean by scenario, situation, or script? Table 10.1 shows the common definitions that we use.


Table 10.1 Common Definitions for Scenario, Situation, Episode, and Script

The basic idea is to give the robot what to expect, a role to play, and one or more tasks to execute, and then send it on its way. Everything in this approach is expectation driven. We remove, or at the very least reduce, any surprises for the robot during the tasks it executes. This expectation-driven approach is meant to be totally self-contained: scenarios, situations, episodes, and scripts contain all the relevant sequences of actions, events, and objects involved in the robot's interactions. For example, if we think through our birthday party scenario, all the information we need is part of the scenario. The idea of a birthday party is a common one.

There is:

- A celebration of some sort

- A guest of honor (it's somebody's birthday)

- Birthday guests

- Birthday cake, or possibly birthday pie

- Ice cream, and so on

Sure, some of the details of any particular birthday party may vary, but the basic idea is the same, and once we decide what those details are (for example, cake or pie, chocolate or vanilla, number of guests, trick candles, and so on), we have a pretty complete picture of how the birthday party should unfold. So if we can just package the robot's job as a scenario, situation, script, or episode, the hard part is over. To program our birthday party robot to execute its birthday party responsibilities, we need a way to describe the birthday scenario using object-oriented languages like Java or C++ and then upload that scenario to the robot to be executed. At Ctest Laboratories, we developed a programming technique and data storage mechanism called STORIES that does just that.
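
Before we look at STORIES itself, here is a minimal, hypothetical sketch of how a few of the birthday party "things" might be captured as Java classes. The class names and details (candle, birthday_cake, the flavor, the guest count) are illustrative assumptions for this sketch only; they are not part of the BR-1 code.

import java.util.ArrayList;

class candle{
    boolean Lit = false;                  // has the robot lit this candle yet?
    public void light(){ Lit = true; }    // called when the robot applies the lighter
    public boolean isLit(){ return(Lit); }
}

class birthday_cake{
    String Flavor;                        // for example, chocolate or vanilla
    ArrayList<candle> Candles;            // one candle per year
    public birthday_cake(String Flavor, int NumCandles)
    {
        this.Flavor = Flavor;
        Candles = new ArrayList<candle>();
        for(int N = 0; N < NumCandles; N++){
            Candles.add(new candle());
        }
    }
    public ArrayList<candle> candles(){ return(Candles); }
}

class birthday_party{
    birthday_cake Cake;
    int NumGuests;                        // birthday guests plus the guest of honor
    public birthday_party(birthday_cake Cake, int NumGuests)
    {
        this.Cake = Cake;
        this.NumGuests = NumGuests;
    }
    public boolean candlesLit()           // a simple fact the robot can check
    {
        for(candle C : Cake.candles()){
            if(!C.isLit()){ return(false); }
        }
        return(true);
    }
}

Packaged this way, the party becomes data the robot can inspect and act on (for example, checking candlesLit() before moving on to its next task) rather than knowledge that exists only in the programmer's head.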

Robot STORIES

STORIES is an acronym for Scenarios Translated into Ontologies, Reasoning, Intentions, and Epistemological Situations. STORIES is the end result of converting a scenario into components that can be represented by object-oriented languages and then uploaded into a robot. Table 10.2 is a simplified overview of the five-step process.


Table 10.2 Simplified Overview of the Five-Step Process to Create Robot STORIES

STORIES is a technique and storage mechanism that allows us to program a robot to autonomously execute one or more tasks within a particular scenario. This is possible because the scenario itself becomes part of the instructions uploaded to the robot.

In this book, the actions that we want the robot to perform are written as C++ or Java code, and the scenario that we want the robot to perform in is also written as C++ or Java code. STORIES completes our anatomy of an autonomous robot: once you've integrated the STORIES component into your robot programming, you have the basic components required to program a robot to execute tasks autonomously. Figure 10.1 is the first of several blueprints presented in this chapter that give us an anatomy of an autonomous robot. Each blueprint adds more detail until we have a complete and detailed blueprint.


Figure 10.1 Blueprint of an anatomy of an autonomous robot

The two primary components in Figure 10.1 are the robot skeleton (hardware) and the robot’s softbot frame. At this level of detail in the blueprint, the three major components of the robot’s software are its basic capabilities, its SPACES (introduced in Chapter 9, “Robot SPACES”), and its STORIES. To see how all this works for our Arduino and Mindstorms EV3-based robots, let’s revisit our extended robot scenario program from Chapter 9.

The Extended Robot Scenario

The robot (Unit1) is located in a small room containing a single object. Unit1 is playing the role of an investigator and is assigned the task of locating the object, determining its color, and reporting that color. In the extended scenario, Unit1 has the additional task of retrieving the object and returning it to the robot’s initial position.

Converting Unit1’s Scenario into STORIES

We use the five-step approach shown previously in Table 10.2 to convert or translate our extended robot scenario into STORIES that can be uploaded for Unit1’s execution. The five steps represent each of the major parts of the STORIES software components. Figure 10.2 shows a more detailed blueprint of our autonomous robot anatomy.


Figure 10.2 Blueprint 2 of the autonomous robot anatomy: more detail

First, let’s break our scenario down into a list of things, actions, and events. There are many ways we can do this, and we can do it in varying levels of detail. But here we opt for a simple breakdown. Table 10.3 shows three things, three events, and three actions that make up our extended robot scenario.


Table 10.3 A Simple T-Chart Breakdown for Our Extended Robot Scenario

A Closer Look at the Scenario’s Ontology

The things shown in Table 10.3 constitute a basic ontology of the extended robot scenario. For our purposes, an ontology is the set of things that make up a scenario, situation, or episode. It’s important that we identify the ontology of the scenario for an autonomous robot. The ontology information for most teleoperated or remote-control robots is in the head and eyes of whoever is controlling the robot, which means that much of the detail of a situation can be left out of the robot’s programming because the person with the controls is relying on her knowledge about the scenario.

For example, in some cases the robot doesn't have to worry about where it's going because the person with the controls is directing it. Details such as size, shape, or weight of an object can be left out because the person operating the robot can see how tall or how wide the object is and can direct the robot's end-effectors accordingly. In other cases, the robot's programming doesn't have to include what time to perform or not perform an action because the person doing the teleoperating pushes the start and stop button at the appropriate times. The more aspects of the robot that are under remote control, the less ontology specification the robot needs. The more autonomy a robot has, the more ontology specification it will need.

In some approaches to robot autonomy, the programmer designs the robot's programming so that the robot can discover on its own the things in its scenario. This would be a level 4 or level 5 autonomous robot, as described in Table 8.1 in Chapter 8, "Getting Started with Autonomy: Building Your Robot's Softbot Counterpart." For now, though, we focus on level 1 and level 2 autonomy and provide a simple, generic ontology of the scenario. As you will see, where robots are concerned, the more detail you can provide about the things in the ontology, the better.

Providing Details for the “Things”

What size is the object in our scenario? Although we don’t know the color (that’s for the robot to determine), perhaps we know the object’s size and weight, maybe even the object’s shape. Where is the object located in the area? Exactly where is the robot’s starting position in the area? Not only is it important to know which things make up a scenario, it is also important to know which of the details about those things will or should impact the robot’s programming. Further, the details come in handy when we are trying to determine whether the robot can meet the REQUIRE specifications for the task.

It's useful to describe the things, and the details about the things, using terminology indicative of the scenario. If we know the object in Table 10.3 is a basketball, why not call it a basketball in the code? If the area is a gymnasium, we should use the term gymnasium instead of area. Breaking down the scenario into its list of things and naming them accordingly helps us come up with the robot's ROLL model, which is used later for naming variables, procedures, functions, and methods. How much detail to provide is always a judgment call, and different programmers see the scenario from different points of view. But whichever approach is taken, keep in mind that it is important to be consistent when describing details such as units of measurement, variables, and terminology. Don't switch back and forth. Figure 10.3 shows some of the details of the things in the extended robot's scenario.


Figure 10.3 Details of the things in the extended robot scenario
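
To make the naming advice concrete, the kind of detail sheet shown in Figure 10.3 might be written down in the scenario's own vocabulary along the following lines. This is a hypothetical sketch: the basketball and gymnasium names and all the measurements are illustrative assumptions, not part of Unit1's program. The next section shows how this is done systematically with object-oriented classes.

class basketball{
    int Diameter = 24;        // centimeters
    int Weight   = 624;       // grams
    int XLocation;            // position within the gymnasium, in centimeters
    int YLocation;
    public void setLocation(int X, int Y)
    {
        XLocation = X;
        YLocation = Y;
    }
}

class gymnasium{
    int Length = 2800;        // centimeters; stick to one unit of measurement throughout
    int Width  = 1500;
    basketball Ball;
    public gymnasium()
    {
        Ball = new basketball();
        Ball.setLocation(350, 420);   // where the ball sits when the scenario starts
    }
}

Whatever names are chosen, the point is that the details live in one place, in the scenario's own vocabulary, and in consistent units.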

Using Object-Oriented Programming to Represent Things in the Scenario

In Chapter 8, we introduced the idea of the softbot frame and demonstrated how classes and objects are used to represent the software component of a robot. All the things and actions in a robot scenario can also be represented with object-oriented programming techniques using classes and methods. For example, our extended robot scenario has the five major classes listed in Table 10.4.


Table 10.4 Major Classes for Extended Robot Scenario

The details of the scenario are part of each class: details of the situation are part of the situation object, details of the softbot are part of the softbot object, and so on. The details can sometimes be represented by built-in data types such as strings, integers, or floating-point numbers; they can also be represented by classes. These five classes represent the major components of our robot STORIES component. Figure 10.4 is a more detailed blueprint of our robot anatomy, with the ontology component broken down in a little more detail.


Figure 10.4 More detailed blueprint of the robot anatomy with ontology component broken down

To illustrate how this works, let's take a look at the Java code for our EV3 microcontroller and the C++ code for our Arduino microcontroller that together make up the code for our Unit1 robot in the extended robot scenario. BURT Translation Listing 10.1 shows the definition of the situation class.

BURT Translation Listing 10.1 Definition of the situation Class

BURT Translations Output: Java Implementation


 //ACTIONS: SECTION 2
  179    class situation{
  180
  181       public room Area;
  182       int ActionNum = 0;
  183       public ArrayList<action>  Actions;
  184       action RobotAction;
  185       public situation(softbot  Bot)
  186       {
  187           RobotAction = new action();
  188           Actions = new ArrayList<action>();
  189           scenario_action1 Task1 = new scenario_action1(Bot);
  190           scenario_action2 Task2 = new scenario_action2(Bot);
  191           Actions.add(Task1);
  192           Actions.add(Task2);
  193           Area = new room();
  194
  195       }
  196       public void nextAction() throws Exception
  197       {
  198
  199           if(ActionNum < Actions.size()){
  200              RobotAction = Actions.get(ActionNum);
  201           }
  202           RobotAction.task();
  203           ActionNum++;
  204
  205
  206       }
  207       public int numTasks()
  208       {
  209           return(Actions.size());
  210
  211       }
  212
  213    }
  214


The definition of the situation class starts on line 179. It contains a room object on line 181 and a list of action objects on line 183. It only has three methods:

- The constructor, situation()

- nextAction()

- numTasks()

The room object is used to represent the location where the scenario situations take place. The actions list is used to store a list of action objects. There is one action object for each major task the robot has to perform. The constructor is defined on lines 185 to 195 and does most of the work for setting up the situation. The constructor creates two action objects (Task1 and Task2) and creates a room object. Notice that the constructor adds the action objects to the list of actions. From this example we can see that so far there are two actions that the robot must execute in this situation. The nextAction() method causes the robot to execute the next action in the list.

if(ActionNum < Actions.size()){
    RobotAction = Actions.get(ActionNum);
    RobotAction.task();
    ActionNum++;
}

Notice the code on lines 199 to 203 that retrieves the next action in the Actions list and then uses

RobotAction.task();

as the method call to execute the task. But how are Task1 and Task2 defined? Lines 189 and 190 refer to two scenario classes:

scenario_action1 Task1;
scenario_action2 Task2;

These classes inherit from the action class that we created for this example. BURT Translation Listing 10.2 shows the definitions of the action class and the scenario_action1 and scenario_action2 classes.

BURT Translation Listing 10.2 Definitions of the action, scenario_action1, and scenario_action2 Classes

BURT Translations Output: Java Implementation


//ACTIONS: SECTION 2
   31    class action{
   32       protected softbot Robot;
   33       public action()
   34       {   Robot = null;
   35       }
   36       public action(softbot Bot)
   37       {
   38           Robot = Bot;
   39
   40       }
   41       public void task() throws Exception
   42       {
   43       }
   44    }
   45
   46    class scenario_action1  extends action
   47    {
   48
   49       public scenario_action1(softbot Bot)
   50       {
   51           super(Bot);
   52       }
   53       public void task() throws Exception
   54       {
   55           Robot.moveToObject();
   56
   57       }
   58
   59    }
   60
   61
   62    class scenario_action2 extends action
   63    {
   64
   65       public  scenario_action2(softbot Bot)
   66       {
   67
   68           super(Bot);
   69       }
   70
   71       public  void task() throws Exception
   72       {
   73           Robot.scanObject();
   74
   75       }
   76
   77
   78    }
   79


These classes implement the notion of scenario actions. The action class is really just a base class; we could have used a Java interface here or, for the Arduino implementation, a C++ abstract class, but for now we want to keep things simple (a sketch of the interface alternative follows the snippet below). So we use action as a base class, and scenario_action1 and scenario_action2 use action through inheritance. The most important method in the action classes is the task() method. This is the method that contains the code the robot has to execute. Notice that lines 53 to 57 and lines 71 to 75 make the calls to the code that represents the tasks:

   53  public void task() throws Exception
   54  {
   55      Robot.moveToObject();
   56
   57  }
   ...
   71  public  void task() throws Exception
   72  {
   73      Robot.scanObject();
   74
   75  }
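
As mentioned previously, the same design could be expressed with a Java interface instead of a base class. The following is only a sketch of that alternative, not the code used for Unit1; the names action_interface and move_action are illustrative, and softbot refers to the softbot frame class defined later in this chapter.

interface action_interface{
    void task() throws Exception;      // every scenario action supplies its own task body
}

class move_action implements action_interface{
    private softbot Robot;             // the softbot frame defined later in this chapter
    public move_action(softbot Bot)
    {
        Robot = Bot;
    }
    public void task() throws Exception
    {
        Robot.moveToObject();          // same task body as the inheritance version
    }
}

With an interface, the situation class would hold action_interface references rather than constructing a bare action object, but either way each concrete scenario action supplies its own task() body, which is what nextAction() ultimately invokes.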

These tasks cause the robot to move to the location of the object and determine the object’s distance and color. Notice in Listing 10.1 and Listing 10.2 that there are no references to wires, pins, voltages, actuators, or effectors. This is an example of level 3 and above programming. At this level we try to represent the scenario naturally. Now of course we have to get to the actual motor and sensor code somewhere. What does Robot.moveToObject() actually do? How is Robot.scanObject() implemented? Both moveToObject() and scanObject() are methods that belong to the softbot frame that we named softbot. Let’s look at moveToObject() first. BURT Translation Listing 10.3 shows the implementation code for moveToObject().

BURT Translation Listing 10.3 Implementation Code for the moveToObject() Method

BURT Translations Output: Java Implementation


//TASKS: SECTION 3
  441        public void moveToObject() throws Exception
  442       {
  443            RobotLocation.X = (Situation1.Area.SomeObject.getXLocation() -
                                    RobotLocation.X);
  444            travel(RobotLocation.X);
  445            waitUntilStop(RobotLocation.X);
  446            rotate(90);
  447            waitForRotate(90);
  448            RobotLocation.Y = (Situation1.Area.SomeObject.getYLocation() -
                                    RobotLocation.Y);
  449            travel(RobotLocation.Y);
  450            waitUntilStop(RobotLocation.Y);
  451            Messages.add("moveToObject");
  452
  453       }


This method gets the (X,Y) location coordinates that the robot needs to go to from the Situation1 object on lines 443 and 448. This code illustrates that the robot’s future X and Y locations are derived from the situation. According to lines 443 and 448 the situation has an area. The area has an object. And we get the object location by calling the methods:

SomeObject.getYLocation()
SomeObject.getXLocation()

Once we get those coordinates, we give the robot the command to travel() east or west the distance specified by X and north or south the distance specified by Y. For example, with the robot starting at (0,0) and the object placed at (20,50) by the room constructor, the robot travels 20 cm, rotates 90 degrees, and then travels 50 cm. Notice on line 181 of Listing 10.1 that the situation class contains a room object. BURT Translation Listing 10.4 shows the definition of the room class.

BURT Translation Listing 10.4 Definition of the room Class

BURT Translations Output: Java Implementation


//Scenarios/Situations: SECTION 4
  135   class room{
  136       protected int Length;
  137       protected int Width;
  138       protected int Area;
  139       public something SomeObject;
  140
  141       public  room()
  142       {
  143           Length = 300;
  144           Width = 200;
  145           SomeObject =  new something();
  146           SomeObject.setLocation(20,50);
  147       }
  148       public int  area()
  149       {
  150           Area = Length * Width;
  151           return(Area);
  152       }
  153
  154       public  int length()
  155       {
  156
  157           return(Length);
  158       }
  159
  160       public int width()
  161       {
  162
  163           return(Width);
  164       }
  165    }


We can see in Listing 10.4 that the room class contains a something object. The room constructor sets the length of the room to 300 cm and the width of the room to 200 cm. It creates a new something object and sets its location to (20,50) within the room. So the something class is used to represent the object and, ultimately, the object's location. Let's inspect the something class in BURT Translation Listing 10.5.

BURT Translation Listing 10.5 Implementation of the something Class

BURT Translations Output: Java Implementation


//Scenarios/Situations: SECTION 4
   94    class something{
   95       x_location Location;
   96       int Color;
   97       public something()
   98       {
   99           Location = new x_location();
  100           Location.X = 0;
  101           Location.Y = 0;
  102           Color = 0;
  103       }
  104       public void setLocation(int X,int Y)
  105       {
  106
  107           Location.X = X;
  108           Location.Y = Y;
  109
  110       }
  111       public int getXLocation()
  112       {
  113           return(Location.X);
  114       }
  115
  116       public int getYLocation()
  117       {
  118           return(Location.Y);
  119
  120       }
  121
  122       public void setColor(int X)
  123       {
  124
  125           Color = X;
  126       }
  127
  128       public int getColor()
  129       {
  130           return(Color);
  131       }
  132
  133    }


We see that something has a location and a color, declared on lines 95 and 96. The pattern should be clear from Listing 10.1 through Listing 10.5: we break the robot's situation down into a list of things and actions; we represent, or "model," those things and actions using classes in some object-oriented language; the classes are then put together to form the robot's situation; and the situation is declared as a data member, property, or attribute of the robot's controller.
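
Reduced to its essentials, and stripped of sensors, motors, and error handling, that pattern looks something like the following self-contained sketch, which can be compiled and run on an ordinary JVM. The sketch_ names are illustrative only; the full EV3 version appears in Listing 10.8.

import java.util.ArrayList;

abstract class sketch_action{
    public abstract void task() throws Exception;
}

class sketch_situation{
    ArrayList<sketch_action> Actions = new ArrayList<sketch_action>();
    int ActionNum = 0;
    public void add(sketch_action A){ Actions.add(A); }
    public int numTasks(){ return(Actions.size()); }
    public void nextAction() throws Exception
    {
        if(ActionNum < Actions.size()){
            Actions.get(ActionNum).task();
            ActionNum++;
        }
    }
}

class sketch_controller{
    sketch_situation Situation1;             // the scenario is a data member of the controller

    public sketch_controller()
    {
        Situation1 = new sketch_situation();
        Situation1.add(new sketch_action(){  // stands in for scenario_action1
            public void task(){ System.out.println("move to object"); }
        });
        Situation1.add(new sketch_action(){  // stands in for scenario_action2
            public void task(){ System.out.println("scan object"); }
        });
    }
    public int numTasks(){ return(Situation1.numTasks()); }
    public void doNextTask() throws Exception { Situation1.nextAction(); }

    public static void main(String[] args) throws Exception
    {
        sketch_controller Unit = new sketch_controller();
        for(int N = 0; N < Unit.numTasks(); N++){
            Unit.doNextTask();
        }
    }
}

The actions are added by the controller here purely to keep the sketch short; in Listing 10.8 the situation constructor builds its own action list. The loop over numTasks() and doNextTask() is the same loop main() uses in Listing 10.7; only the task bodies differ.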

Decisions a Robot Can Make, Rules a Robot Can Follow

After you have identified the things and actions in the scenario (steps 1 and 2), it's time to identify the decisions and choices the robot will have concerning those things and actions. You must determine what course of action the robot will take and when, based on the details of the scenario (step 3). If some of the robot's actions are optional, or if some of the robot's actions depend on what is found in the scenario, now is the time to identify the decisions the robot will have to make. If there are certain courses of action that will always be taken when certain conditions are met, now is the time to identify the rules that should apply when those conditions are met.

Once the details of the scenario are identified, it is appropriate to identify the pre/postconditions (SPACES) of the scenario. These decisions and rules make up the reasoning component of our robot STORIES structure. The reasoning component is an important part of robot autonomy. If the robot has no capacity to make decisions about things, or events, or actions that take place in the scenario, the robot’s autonomy is severely limited. Robot decisions are implemented using the basic if-then-else, while-do, do-while, and case control structures. Recall from Listing 10.2 that the robot has a Task1 and a Task2. We have already seen the code moveToObject() that implements Task1. Task2 implements scanObject(), which is shown in BURT Translation Listing 10.6.

BURT Translation Listing 10.6 Implementation Code for scanObject

BURT Translation Output: Java Implementations


//TASKS: SECTION 3
  455        public void scanObject()throws Exception
  456        {
  457
  458             float Distance = 0;
  459             resetArm();
  460             moveSensorArray(110);
  461             Thread.sleep(2000);
  462             Distance = readUltrasonicSensor();
  463             Thread.sleep(4000);
  464             if(Distance <= 10.0){
  465                getColor();
  466                Thread.sleep(3000);
  467             }
  468             moveSensorArray(50);
  469             Thread.sleep(2000);
  470
  471         }


There is a single simple decision for the robot to make on line 464. If the distance from the object is 10.0 centimeters or less, the robot is instructed to determine the object's color. If the robot is more than 10 centimeters from the object, it does not attempt to determine the object's color. On line 462 the robot uses an ultrasonic sensor to measure the distance to the object. Once the robot has measured the distance, it has a decision to make. Figure 10.5 is the robot anatomy blueprint with the reasoning component broken down into a bit more detail.


Figure 10.5 The robot anatomy blueprint that contains the reasoning component broken down.

The idea is to set up the decisions so that each one either leads the robot to correctly execute the task or prevents the robot from taking unnecessary or impossible steps. Each decision should lead the robot closer to the completion of its primary task. Keep in mind that as the level of autonomy increases for the robot, the number and sometimes the complexity of the decision paths for the robot also increase.
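
For example, one way to tighten the decision in scanObject() is to handle a missing or out-of-range reading explicitly, so that every branch either advances the task or blocks an impossible step. The following is a hypothetical extra softbot method, not Unit1's actual code; it reuses readUltrasonicSensor(), identifyColor(), and the Messages list already defined in this chapter, and it keeps the chapter's 10-centimeter threshold.

public void scanObjectWithRules() throws Exception
{
    float Distance = readUltrasonicSensor();
    if(Distance < 0){
        // Rule 1: no valid sample means no color attempt.
        Messages.add("No ultrasonic reading; skipping color scan");
        return;
    }
    if(Distance <= 10.0f){
        // Rule 2: within sensor range, so determining the color is worthwhile.
        identifyColor();
        Messages.add("Color determined at distance " + Distance);
    }
    else{
        // Rule 3: too far away; record the fact so a later decision can
        // reposition the robot instead of guessing.
        Messages.add("Object out of range at distance " + Distance);
    }
}

Each message gives the report() method something concrete to send back, so a blocked step shows up in the robot's report rather than failing silently.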

Paying Attention to the Robot’s Intention

Every situation has one or more actions. Together, these actions make up the robot's tasks. Each task represents one of the robot's intentions (sometimes referred to as goals). For instance, if the robot has four tasks, it has four intentions. Each decision that the robot makes should move it closer to one or more of its intentions. The program should give the robot explicit instructions for what to do if it cannot meet one or more of its intentions, because a robot that cannot carry out its intentions cannot fulfill its role in the scenario. Together, the set of intentions represents the robot's role in the scenario. In C++ and Java, the main() function is where the robot's primary intentions, or primary tasks, are placed. BURT Translation Listing 10.7 shows a snippet of the main() function for our extended robot scenario example.

BURT Translation Listing 10.7 The main() Function for Our Extended Robot Example

BURT Translation Output: Java Implementations


  812        public static void main(String [] args)  throws Exception
  813        {
  814
  815
  816            softbot Unit1;
  817            float Distance = 0;
  818            int TaskNum = 0;
  819
  820            try{
  821                   Unit1 = new softbot();
  822                   TaskNum = Unit1.numTasks();
  823                   for(int N = 0; N < TaskNum; N++)
  824                   {
  825                       Unit1.doNextTask();
  826
  827                   }
  828                   Unit1.report();
  829                   Unit1.closeLog();
  830
  831            }
  ...

  847
  848        }


The main() function shows that our robot has a simple set of intentions. Line 822 shows the robot getting the total number of tasks it is supposed to execute; it then executes all the tasks in the situation. Lines 823 to 827 show that a simple loop structure controls how the robot approaches its intentions. After it is done executing the tasks, it reports and then closes the log. In this oversimplified situation, the robot simply attempts to execute the actions stored in the action list sequentially. Notice, too, that adding a third action to the situation (a retrieval action, for instance) would require no change to main(); numTasks() would simply report three tasks, and the loop would execute them all. For more sophisticated tasks, however, the way the next action is selected is typically based on robot decisions, the environment as detected by the sensors, the distance the robot has to travel, power considerations, time considerations, the priority of tasks, SPACES that have or have not been met, and so on. On line 825 Unit1 invokes the method doNextTask(). This method ultimately depends on the list of intentions (actions) that are part of the robot's situation. Figure 10.6 shows the detailed blueprint of our anatomy for autonomous robots and how the intentions component is just a breakdown of the robot's tasks.


Figure 10.6 The robot anatomy blueprint that contains the intentions component broken down.

In Figure 10.6 the STORIES software component is clarified to show how the major components are represented or implemented. Once the robot’s STORIES components are tied to its SPACES requirements and implemented, we have the basis for a robot that can carry out its tasks autonomously without the intervention of remote control. So far in our extended robot scenario we have used Java and the leJOS library to implement the STORIES components. BURT Translation Listing 10.8 contains most of the extended robot scenario program.

BURT Translation Listing 10.8 Implementation of the Extended Robot Scenario Program

BURT Translation Output: Java Implementations


    3
    4    import java.io.DataInputStream;
    5    import java.io.DataOutputStream;
    6    import lejos.hardware.sensor.NXTUltrasonicSensor;
    7    import lejos.hardware.*;
    8    import lejos.hardware.ev3.LocalEV3;
    9    import lejos.hardware.port.SensorPort;
   10    import lejos.hardware.sensor.SensorModes;
   11    import lejos.hardware.port.Port;
   12    import lejos.hardware.lcd.LCD;
   13    import java.net.ServerSocket;
   14    import java.net.Socket;
   15    import lejos.hardware.sensor.HiTechnicColorSensor;
   16    import lejos.hardware.sensor.EV3UltrasonicSensor;
   17    import lejos.robotics.navigation.*;
   18    import lejos.robotics.navigation.DifferentialPilot;
   19    import lejos.robotics.localization.OdometryPoseProvider;
   20    import lejos.robotics.SampleProvider;
   21    import lejos.hardware.device.tetrix.*;
   22    import lejos.hardware.device.tetrix.TetrixRegulatedMotor;
   23    import lejos.robotics.navigation.Pose;
   24    import lejos.robotics.navigation.Navigator;
   25    import lejos.robotics.pathfinding.Path;
   26    import java.lang.Math.*;
   27    import java.io.PrintWriter;
   28    import java.io.File;
   29    import java.util.ArrayList;
   30
//ACTIONS: SECTION 2
   31    class action{
   32       protected softbot Robot;
   33       public action()
   34       {
   35       }
   36       public action(softbot Bot)
   37       {
   38           Robot = Bot;
   39
   40       }
   41       public void task() throws Exception
   42       {
   43       }
   44    }
   45
   46    class scenario_action1  extends action
   47    {
   48
   49       public scenario_action1(softbot Bot)
   50       {
   51           super(Bot);
   52       }
   53       public void task() throws Exception
   54       {
   55           Robot.moveToObject();
   56
   57       }
   58
   59    }
   60
   61
   62    class scenario_action2 extends action
   63    {
   64
   65       public  scenario_action2(softbot Bot)
   66       {
   67
   68           super(Bot);
   69       }
   70
   71       public  void task() throws Exception
   72       {
   73           Robot.scanObject();
   74
   75       }
   76
   77
   78    }
   79
//Scenario/Situation
   80
   81    class x_location{
   82       public int X;
   83       public int Y;
   84       public x_location()
   85       {
   86
   87           X = 0;
   88           Y = 0;
   89       }
   90
   91    }
   92
   93
   94    class something{
   95       x_location Location;
   96       int Color;
   97       public something()
   98       {
   99           Location = new x_location();
  100           Location.X = 0;
  101           Location.Y = 0;
  102           Color = 0;
  103       }
  104       public void setLocation(int X,int Y)
  105       {
  106
  107           Location.X = X;
  108           Location.Y = Y;
  109
  110       }
  111       public int getXLocation()
  112       {
  113           return(Location.X);
  114       }
  115
  116       public int getYLocation()
  117       {
  118           return(Location.Y);
  119
  120       }
  121
  122       public void setColor(int X)
  123       {
  124
  125           Color = X;
  126       }
  127
  128       public int getColor()
  129       {
  130           return(Color);
  131       }
  132
  133    }
  134
  135    class room{
  136       protected int Length = 300;
  137       protected int Width = 200;
  138       protected int Area;
  139       public something SomeObject;
  140
  141       public  room()
  142       {
  143           SomeObject =  new something();
  144           SomeObject.setLocation(20,50);
  145
  146
  147       }
  148       public int  area()
  149       {
  150           Area = Length * Width;
  151           return(Area);
  152       }
  153
  154       public  int length()
  155       {
  156
  157           return(Length);
  158       }
  159
  160       public int width()
  161       {
  162
  163           return(Width);
  164       }
  165    }
  166
  167    class situation{
  168
  169       public room Area;
  170       public situation()
  171       {
  172           Area = new room();
  173
  174       }
  175
  176    }
  177
  178
  179    class situation{
  180
  181       public room Area;
  182       int ActionNum = 0;
  183       public ArrayList<action>  Actions;
  184       action RobotAction;
  185       public situation(softbot  Bot)
  186       {
  187           RobotAction = new action();
  188           Actions = new ArrayList<action>();
  189           scenario_action1 Task1 = new scenario_action1(Bot);
  190           scenario_action2 Task2 = new scenario_action2(Bot);
  191           Actions.add(Task1);
  192           Actions.add(Task2);
  193           Area = new room();
  194
  195       }
  196       public void nextAction() throws Exception
  197       {
  198
  199           if(ActionNum < Actions.size()){
  200              RobotAction = Actions.get(ActionNum);
  201           }
  202           RobotAction.task();
  203           ActionNum++;
  204
  205
  206       }
  207       public int numTasks()
  208       {
  209           return(Actions.size());
  210
  211       }
  212
  213    }
  214
  215    public class softbot
  216    {
  //PARTS: SECTION 1
  //Sensor Section
  217        public EV3UltrasonicSensor Vision;
  218        public HiTechnicColorSensor ColorVision;

  219        int CurrentColor;
  220        double  WheelDiameter;
  221        double TrackWidth;
  222        float  RobotLength;
  223        DifferentialPilot  D1R1Pilot;
  224        ArcMoveController  D1R1ArcPilot;
  //Actuators
  225        TetrixControllerFactory  CF;
  226        TetrixMotorController MC;
  227        TetrixServoController SC;
  228        TetrixRegulatedMotor LeftMotor;
  229        TetrixRegulatedMotor RightMotor;
  230        TetrixServo  Arm;
  231        TetrixServo  Gripper;
  232        TetrixServo  SensorArray;
//Support
  233        OdometryPoseProvider Odometer;
  234        Navigator D1R1Navigator;
  235        boolean PathReady = false;
  236        Pose CurrPos;
  237        int OneSecond = 1000;
  238        Sound  AudibleStatus;
  239        DataInputStream dis;
  240        DataOutputStream Dout;
  241        location  CurrentLocation;
  242        SampleProvider UltrasonicSample;
  243        SensorModes USensor;
  244        PrintWriter Log;

  //Situations/Scenarios: SECTION 4
  245        situation Situation1;
  246        x_location RobotLocation;

  247        ArrayList<String> Messages;
  248        Exception SoftbotError;
  249
  250        public softbot() throws InterruptedException,Exception
  251        {
  252
  253            Messages = new ArrayList<String>();
  254            Vision = new EV3UltrasonicSensor(SensorPort.S3);
  255            if(Vision == null){
  256               Messages.add("Could Not Initialize Ultrasonic Sensor");
  257               SoftbotError = new Exception("101");
  258               throw SoftbotError;
  259            }
  260            Vision.enable();
  261            Situation1 = new situation(this);
  262            RobotLocation = new x_location();
  263            RobotLocation.X = 0;
  264            RobotLocation.Y = 0;
  265
  266            ColorVision = new HiTechnicColorSensor(SensorPort.S2);
  267            if(ColorVision == null){
  268                Messages.add("Could Not Initialize Color Sensor");
  269                SoftbotError = new Exception("100");
  270                throw SoftbotError;
  271            }
  272            Log = new PrintWriter("softbot.log");
  273            Log.println("Sensors  constructed");
  274            Thread.sleep(1000);
  275            WheelDiameter = 7.50f;
  276            TrackWidth = 32.5f;
  277
  278            Port APort = LocalEV3.get().getPort("S1");
  279            CF = new TetrixControllerFactory(SensorPort.S1);
  280            if(CF == null){
  281               Messages.add("Could Not Setup Servo Port");
  282               SoftbotError = new Exception("102");
  283               throw SoftbotError;
  284            }
  285            Log.println("Tetrix Controller Factor Constructed");
  286
  287            MC = CF.newMotorController();
  288            SC = CF.newServoController();
  289
  290            LeftMotor = MC.getRegulatedMotor(TetrixMotorController.MOTOR_1);
  291            RightMotor = MC.getRegulatedMotor(TetrixMotorController.MOTOR_2);
  292            if(LeftMotor == null || RightMotor == null){
  293               Messages.add("Could Not Initalize Motors");
  294               SoftbotError = new Exception("103");
  295               throw SoftbotError;
  296            }
  297            LeftMotor.setReverse(true);
  298            RightMotor.setReverse(false);
  299            LeftMotor.resetTachoCount();
  300            RightMotor.resetTachoCount();
  301            Log.println("motors Constructed");
  302            Thread.sleep(2000);
  303
  304
  317            SensorArray = SC.getServo(TetrixServoController.SERVO_3);
  318            if(SensorArray == null){
  319               Messages.add("Could Not Initialize SensorArray");
  320               SoftbotError = new Exception("107");
  321               throw SoftbotError;
  322            }
  323            Messages.add("Servos Constructed");
  324            Log.println("Servos Constructed");
  325            Thread.sleep(1000);
  326
  327            SC.setStepTime(7);
  328            Arm.setRange(750,2250,180);
  329            Arm.setAngle(100);
  330            Thread.sleep(1000);
  331
  335
  336
  337            SensorArray.setRange(750,2250,180);
  338            SensorArray.setAngle(20);
  339            Thread.sleep(1000);
  340            D1R1Pilot = new DifferentialPilot
                              (WheelDiameter,TrackWidth,LeftMotor,RightMotor);
  341            D1R1Pilot.reset();
  342            D1R1Pilot.setTravelSpeed(10);
  343            D1R1Pilot.setRotateSpeed(20);
  344            D1R1Pilot.setMinRadius(0);
  345
  346            Log.println("Pilot Constructed");
  347            Thread.sleep(1000);
  348            CurrPos = new Pose();
  349            CurrPos.setLocation(0,0);
  350            Odometer = new OdometryPoseProvider(D1R1Pilot);
  351            Odometer.setPose(CurrPos);
  352
  353
  354            D1R1Navigator = new Navigator(D1R1Pilot);
  355            D1R1Navigator.singleStep(true);
  356            Log.println("Odometer Constructed");
  357
  358            Log.println("Room  Width: " + Situation1.Area.width());
  359            room SomeRoom = Situation1.Area;
  360            Log.println("Room Location: " +
                             SomeRoom.SomeObject.getXLocation() + "," +
                             SomeRoom.SomeObject.getYLocation());
  361            Messages.add("Softbot Constructed");
  362            Thread.sleep(1000);
  363
  364
  365
  366        }
  367
  434
//TASKS: SECTION 3
  435        public void doNextTask() throws Exception
  436        {
  437            Situation1.nextAction();
  438
  439        }
  440
  441        public void moveToObject() throws Exception
  442        {
  443            RobotLocation.X = (Situation1.Area.SomeObject.getXLocation()
                                     - RobotLocation.X);
  444            travel(RobotLocation.X);
  445            waitUntilStop(RobotLocation.X);
  446            rotate(90);
  447            waitForRotate(90);
  448            RobotLocation.Y = (Situation1.Area.SomeObject.getYLocation()
                                     - RobotLocation.Y);
  449            travel(RobotLocation.Y);
  450            waitUntilStop(RobotLocation.Y);
  451            Messages.add("moveToObject");
  452
  453        }
  454
  455        public void scanObject()throws Exception
  456        {
  457
  458            float Distance = 0;
  459            resetArm();
  460            moveSensorArray(110);
  461            Thread.sleep(2000);
  462            Distance = readUltrasonicSensor();
  463            Thread.sleep(4000);
  464            if(Distance <= 10.0){
  465               getColor();
  466               Thread.sleep(3000);
  467            }
  468            moveSensorArray(50);
  469            Thread.sleep(2000);
  470
  471        }
  472
  473        public int numTasks()
  474        {
  475            return(Situation1.numTasks());
  476        }
  477
  512
  513        public float readUltrasonicSensor()
  514        {
  515            UltrasonicSample =  Vision.getDistanceMode();
  516            float X[] = new float[UltrasonicSample.sampleSize()];
  517            Log.print("sample size ");
  518            Log.println(UltrasonicSample.sampleSize());
  519
  520            UltrasonicSample.fetchSample(X,0);
  521            int Line = 3;
  522            for(int N = 0; N < UltrasonicSample.sampleSize();N++)
  523            {
  524
  525                Float Temp = new Float(X[N]);
  526                Log.println(Temp.intValue());
  527                Messages.add(Temp.toString());
  528                Line++;
  529
  530            }
  531            if(UltrasonicSample.sampleSize() >= 1){
  532               return(X[0]);
  533            }
  534            else{
  535                    return(-1.0f);
  536            }
  537
  538        }
  539        public int getColor()
  540        {
  541
  542            return(ColorVision.getColorID());
  543        }
  544
  545
  546        public void identifyColor() throws Exception
  547        {
  548            LCD.clear();
  549            LCD.drawString("color identified",0,3);
  550            LCD.drawInt(getColor(),0,4);
  551            Log.println("Color Identified");
  552            Log.println("color = " + getColor());
  553        }
  554        public void rotate(int Degrees)
  555        {
  556            D1R1Pilot.rotate(Degrees);
  557        }
  558        public void forward()
  559        {
  560
  561            D1R1Pilot.forward();
  562        }
  563        public void backward()
  564        {
  565            D1R1Pilot.backward();
  566        }
  567
  568        public void travel(int Centimeters)
  569        {
  570            D1R1Pilot.travel(Centimeters);
  571
  572        }
  573
  601        public void moveSensorArray(float X) throws Exception
  602        {
  603
  604            SensorArray.setAngle(X);
  605            while(SC.isMoving())
  606            {
  607                  Thread.sleep(1500);
  608            }
  609
  610
  611        }
  612
  641
  642        public boolean waitForStop()
  643        {
  644            return(D1R1Navigator.waitForStop());
  645
  646        }
  647
  648
  703
  704
  705
  706        public void waitUntilStop(int Distance) throws Exception
  707        {
  708
  709            Distance = Math.abs(Distance);
  710            Double  TravelUnit = new Double
                                       (Distance/D1R1Pilot.getTravelSpeed());
  711            Thread.sleep(Math.round(TravelUnit.doubleValue()) * OneSecond);
  712            D1R1Pilot.stop();
  713            Log.println("Travel Speed " + D1R1Pilot.getTravelSpeed());
  714            Log.println("Distance:  " + Distance);
  715            Log.println("Travel Unit: " + TravelUnit);
  716            Log.println("Wait for: " + Math.round
                             (TravelUnit.doubleValue()) * OneSecond);
  717
  718            }
  719            public void waitUntilStop()
  720            {
  721                do{
  722
  723                }while(D1R1Pilot.isMoving());
  724                D1R1Pilot.stop();
  725
  726            }
  727
  728            public void waitForRotate(double Degrees) throws Exception
  729            {
  730
  731                Degrees = Math.abs(Degrees);
  732                Double DegreeUnit = new Double
                                         (Degrees/D1R1Pilot.getRotateSpeed());
  733                Thread.sleep(Math.round(DegreeUnit.doubleValue()) * OneSecond);
  734                D1R1Pilot.stop();
  735                Log.println("Rotate Unit: " + DegreeUnit);
  736                Log.println("Wait for: " + Math.round
                                 (DegreeUnit.doubleValue()) * OneSecond);
  737
  738
  739
  740            }
  741
  742
  743
  744            public  void closeLog()
  745            {
  746                Log.close();
  747            }
  748
  749
  750            public void report() throws Exception
  751            {
  752                ServerSocket Client = new ServerSocket(1111);
  753                Socket SomeSocket = Client.accept();
  754                DataOutputStream Dout = new DataOutputStream
                                          (SomeSocket.getOutputStream());
  755                Dout.writeInt(Messages.size());
  756                for(int N = 0;N < Messages.size();N++)
  757                {
  758                    Dout.writeUTF(Messages.get(N));
  759                    Dout.flush();
  760                    Thread.sleep(1000);
  761
  762                }
  763                Thread.sleep(1000);
  764                Dout.close();
  765                Client.close();
  766            }
  767
  768
  769            public void report(int X) throws Exception
  770            {
  771
  772                ServerSocket Client = new ServerSocket(1111);
  773                Socket SomeSocket = Client.accept();
  774                DataOutputStream Dout = new DataOutputStream
                                             (SomeSocket.getOutputStream());
  775                Dout.writeInt(X);
  776                Thread.sleep(5000);
  777                Dout.close();
  778                Client.close();
  779
  780
  781            }
  782            public void addMessage(String X)
  783            {
  784                Messages.add(X);
  785            }
  786
  812            public static void main(String [] args)  throws Exception
  813            {
  814
  815
  816                softbot Unit1;
  817                float Distance = 0;
  818                int TaskNum = 0;
  819
  820                try{
  821                        Unit1 = new softbot();
  822                        TaskNum = Unit1.numTasks();
  823                        for(int N = 0; N < TaskNum; N++)
  824                        {
  825                            Unit1.doNextTask();
  826
  827                        }
  828                        Unit1.report();
  829                        Unit1.closeLog();
  830
  831                }
  832                catch(Exception E)
  833                {
  834                      Integer Error;
  835                      System.out.println("Error is : " + E);
  836                      Error = new Integer(0);
  837                      int RetCode = Error.intValue();
  838                      if(RetCode == 0){
  839                         RetCode = 999;
  840                      }
  841                      ServerSocket Client = new ServerSocket(1111);
  842                      Socket SomeSocket = Client.accept();
  843                      DataOutputStream Dout = new DataOutputStream
                                               (SomeSocket.getOutputStream());
  844                      Dout.writeInt(RetCode);
  845                      Dout.close();
  846                      Client.close();
  847
  848
  849                }
  850
  851
  852
  853        }
  854
  855
  856
  857
  858    }


But recall that in our extended scenario, one of the tasks the robot has to execute is retrieving the object. Our robot build uses the PhantomX Pincher Arm from Trossen Robotics, which is based on the Arbotix controller, an Arduino-compatible platform. Our robot therefore has more than one microcontroller, and we use serial and Bluetooth connections to communicate between the microcontrollers. We show some of the communication details in Chapter 11, "Putting It All Together: How Midamba Programmed His First Autonomous Robot." Figure 10.7 shows a photo of an Arduino/EV3-based robot.

The robot has two arms, each with different degrees of freedom and gripper types, as discussed in Chapter 7. The PhantomX Pincher and the Tetrix-based robot arms are highlighted in Figure 10.7. The BURT Translation input shown in Listing 10.9 lists some of the basic activities the Arduino-based Arbotix controller has to perform.


Figure 10.7 A photo of an Arduino/EV3-based robot

BURT Translation Listing 10.9 Some Basic Activities the Arduino-Based Arbotix Controller Performs

BURT Translation Input


Softbot  Frame
Name:  Unit1
Parts:
Actuator Section:
Servo and its gripper (for movement)

Actions:
Step 1: Check the voltage going to the servo
Step 2: Check each servo to see if it's operating and has the correct starting position
Step 3: Set the servos to a center position
Step 4: Set the position of a servo
Step 5: Open the gripper
Step 6: Close the gripper

Tasks:
Position the servo of the gripper in order to open and close the gripper.

End Frame


These are basic activities that any robot arm controller component would perform. One of the robots we use for our examples has two arms with different controllers and different software, but both arms perform basically the same activities. For the EV3-controlled arm, we build our robot arm code on top of the leJOS Java class libraries; for the PhantomX, we build on top of the Bioloid class libraries and AX Dynamixel code. Recall that our STORIES structure includes the things and actions that are part of the scenario, and we use object-oriented classes to represent those things and actions. Keep in mind that the robot is itself one of the things in the scenario, so we also represent the robot and its components using object-oriented classes. The complete BURT Translation for the robot arm capability is shown in Listing 10.10.

BURT Translation Listing 10.10 Some Robot Arm Capabilities

BURT Translations Output: C++ Implementations


    1    #include <ax12.h>
    2    #include <BioloidController.h>
    3    #include "poses.h"
    4
    5    BioloidController bioloid = BioloidController(1000000);
    6
    7    class robot_arm{
    8       private:
    9          int ServoCount;
   10       protected:
   11          int id;
   12          int pos;
   13          boolean IDCheck;
   14          boolean StartupComplete;
   15       public:
   16          robot_arm(void);
   ...
//ACTIONS: SECTION 2
   18          void scanServo(void);
   19          void moveCenter(void);
   20          void checkVoltage(void);
   21          void moveHome(void);
   22          void relaxServos(void);
   23          void retrieveObject(void);
   24    };
   25
   ...
//TASKS: SECTION 3
   82    void robot_arm::scanServo(void)
   83    {
   84        id = 1;
   85        Serial.println("Scanning Servo.....");
   86        while (id <= ServoCount)
   87        {
   88            pos =  ax12GetRegister(id, 36, 2);
   89            Serial.print("Servo ID: ");
   90            Serial.println(id);
   91            Serial.print("Servo Position: ");
   92            Serial.println(pos);
   93            if (pos <= 0){
   94                Serial.println("=============================");
   95                Serial.print("ERROR! Servo ID: ");
   96                Serial.print(id);
   97                Serial.println(" not found. Please check connection and
                                       verify correct ID is set.");
   98                Serial.println("=============================");
   99                IDCheck = false;
  100            }
  101
  102            id++;
  103            delay(1000);
  104        }
  105        if (!IDCheck){
  106            Serial.println("================================");
  107            Serial.println("ERROR! Servo ID(s) are missing from Scan.");
  108            Serial.println("================================");
  109        }
  110        else{
  111                 Serial.println("Servo Check Passed");
  112        }
  ...

  223    void robot_arm::checkVoltage(void)
  224    {
  225        float voltage = (ax12GetRegister (1, AX_PRESENT_VOLTAGE, 1)) / 10.0;
  226        Serial.print ("System Voltage: ");
  227        Serial.print (voltage);
  228        Serial.println (" volts.");
  229        if (voltage < 10.0){
  230            Serial.println("Voltage levels below 10v, please charge battery.");
  231            while(1);
  232        }
  233        if (voltage > 10.0){
  234            Serial.println("Voltage levels nominal.");
  235        }
  236        if (StartupComplete){
  237               ...
  238        }
  239
  240    }
  241
  242    void robot_arm::moveCenter()
  243    {
  244        delay(100);
  245        bioloid.loadPose(Center);
  246        bioloid.readPose();
  247        Serial.println("Moving servos to centered position");
  248        delay(1000);
  249        bioloid.interpolateSetup(1000);
  250        while(bioloid.interpolating > 0){
  251              bioloid.interpolateStep();
  252              delay(3);
  253        }
  254        if (StartupComplete){
  255            ...
  256        }
  257    }
  258
  259
  260    void robot_arm::moveHome(void)
  261    {
  262        delay(100);
  263        bioloid.loadPose(Home);
  264        bioloid.readPose();
  265        Serial.println("Moving servos to Home position");
  266        delay(1000);
  267        bioloid.interpolateSetup(1000);
  268        while(bioloid.interpolating > 0){
  269              bioloid.interpolateStep();
  270              delay(3);
  271        }
  272        if (StartupComplete){
  273             ...
  274        }
  275    }
  ...
  277    void robot_arm::retrieveObject()
  278    {
  279
  280        Serial.println("=======================");
  281        Serial.println("Retrieve Object");
  282        Serial.println("=======================");
  283        delay(500);
  284        id  = 1;
  285        pos = 512;
  286
  287
  288        Serial.print(" Adjusting Servo : ");
  289        Serial.println(id);
  290        while(pos >= 312)
  291        {
  292              SetPosition(id,pos);
   293              pos--;
  294              delay(10);
  295        }
  296        while(pos <= 512){
  297              SetPosition(id, pos);
   298              pos++;
  299              delay(10);
  300        }
  301
  302
  303        id = 3;
  304        Serial.print("Adjusting Servo ");
  305        Serial.println(id);
  306        while(pos >= 200)
  307        {
  308              SetPosition(id,pos);
   309              pos--;
  310              delay(15);
  311        }
  312        while(pos <= 512){
  313              SetPosition(id, pos);
   314              pos++;
  315              delay(15);
  316        }
  317
  318        id = 3;
  319        while(pos >= 175)
  320        {
  321              SetPosition(id,pos);
   322              pos--;
  323              delay(20);
  324        }
  325
  326
  327        id = 5;
  328        Serial.print(" Adjusting Gripper : ");
  329        Serial.println(id);
  330        pos = 512;
  331        while(pos >= 170)
  332        {
  333              SetPosition(id,pos);
   334              pos--;
  335              delay(30);
  336        }
  337        while(pos <= 512){
  338              SetPosition(id, pos);
   339              pos++;
  340              delay(30);
  341        }
  342        // id 5 is the gripper
  343        id = 5;
  344        while(pos >= 175)
  345        {
  346              SetPosition(id,pos);
   347              pos--;
  348              delay(20);
  349        }
  350        while(pos <= 512){
  351              SetPosition(id, pos);
   352              pos++;
  353              delay(20);
  354        }
  355        id = 4;
  356        Serial.print(" Adjusting Servo : ");
  357        Serial.println(id);
  358
  359        while(pos >= 200)
  360        {
  361              SetPosition(id,pos);
   362              pos--;
  363              delay(10);
  364        }
  365        while(pos <= 512){
  366              SetPosition(id, pos);
   367              pos++;
  368              delay(20);
  369        }
  370
  371        if(StartupComplete == 1){
  372            ...
  373        }
  374
  375    }
  376
  377


Listing 10.10 contains a partial C++ class declaration for our PhantomX Pincher robot arm and the implementations of some of its major methods. It shows how those implementations are built on top of the Bioloid class methods. Note that we use the Arduino Serial object to report what the arm is doing at any point.

For example, line 85 simply shows that we are getting ready to start the servo scanning process. We show the complete code for the robot arm in Chapter 11.

The constructor, which is not shown in Listing 10.10, sets up a baud rate of 9600 and a connection between the arm and the serial port on the microcontroller. We use the Bioloid class by making method calls. For example:

 245        bioloid.loadPose(Center);
 246        bioloid.readPose();

The calls on lines 245 and 246 are used to center the servos. The retrieveObject() method is implemented on lines 277 to 375 and shows examples of setting the positions of the AX servos (this robot arm has five servos), including opening and closing the gripper (Servo 5) using SetPosition(), as shown on lines 344 to 349:

 344        while(pos >= 175)
 345        {
 346              SetPosition(id,pos);
 347              pos--;
 348              delay(20);
 349        }
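
The constructor mentioned earlier is omitted from Listing 10.10. Here is a rough sketch of what it might contain, assuming only the members declared in the robot_arm class and the standard Arduino Serial API; the initial values are our own illustrative choices, not taken from the full listing:

// Assumes the robot_arm class declaration from Listing 10.10 (lines 7-24).
robot_arm::robot_arm(void)
{
    ServoCount      = 5;       // PhantomX Pincher: four joint servos plus the gripper
    id              = 1;
    pos             = 512;     // AX servo center position
    IDCheck         = true;    // scanServo() clears this if a servo is missing
    StartupComplete = false;
    Serial.begin(9600);        // serial monitor connection at 9600 baud
    delay(500);                // give the serial port a moment to settle
}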


Image Note

All the components—sensors, motors, actuators, servos, and end-effectors—are implemented as object-oriented classes.
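
To make the note concrete, here is one way a single component could be wrapped in its own class on top of the same ax12 SetPosition() call used in Listing 10.10. The gripper class, its method names, and the sweep helper are ours, sketched for illustration; only the servo ID and the position range are borrowed from the listing:

#include <ax12.h>     // ArbotiX AX/Dynamixel library used in Listing 10.10

class gripper{
    private:
        int id;                                      // Dynamixel ID of the gripper servo
    public:
        gripper(int servo_id) : id(servo_id) {}
        void sweepTo(int from, int to, int step_delay)
        {
            int step = (to >= from) ? 1 : -1;        // sweep direction
            for (int p = from; p != to; p += step){
                SetPosition(id, p);                  // one-unit steps keep the motion smooth
                delay(step_delay);
            }
            SetPosition(id, to);                     // land exactly on the target position
        }
        void open(void)  { sweepTo(175, 512, 20); }  // positions as in the gripper section of retrieveObject()
        void close(void) { sweepTo(512, 175, 20); }
};

// Usage: gripper pincher(5); pincher.close(); pincher.open();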


Object-Oriented Robot Code and Efficiency Concerns

You may be wondering whether there is an extra cost for using an object-oriented approach to programming a robot to perform autonomously versus not using one. This has been a long, hard-fought battle that is not likely to be settled anytime soon. Proponents of C, or of microcontroller assembly, are quick to point out how small the code footprint is in their language of choice, or how fast it runs compared to something like Java, Python, or C++. They may have a point if you aren't concerned about modeling the environment the robot has to work in or the scenario and situation the robot has to perform in.

Designing and building maintainable, extensible, and understandable environment and scenario models in microcontroller assembly or in C is considerably more difficult than doing so with the object-oriented approach. So it's a matter of how you measure cost. If the size of the robot program or its absolute speed is the only concern, then a well-optimized C or microcontroller assembly program is hard to beat (although it can be matched in C++).

However, if the goal is to represent in software the robot's environment, scenario, situations, intentions, and reasoning (which is what most autonomous approaches require), then the object-oriented approach is hard to beat. Well-designed robot, situation, and environment classes are easier to understand, change, and extend, and that pays off over the life of the project. Also, the bark is sometimes worse than the bite. For instance, Table 10.5 shows the Java classes and the byte-code size of each class needed for our extended robot scenario.


Table 10.5 The Java Classes and Byte-Code Sizes for the Extended Robot Scenario

The combined size of the classes that need to be uploaded to the EV3 microcontroller is 17,412 bytes, a little less than 18 KB. The EV3 microcontroller has 64 MB of main RAM and 16 MB of flash. So our object-oriented approach to the extended robot scenario, with all the STORIES and SPACES components, barely scratches the surface. However, it would not be accurate to say that our simple extended robot scenario is typical of a full-blown robot implementation of an autonomous task. Size and speed are definitely real concerns. But the trade-off of space and speed for the power of expression in the object-oriented approach is well justified.
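
To illustrate what that power of expression buys at the task level, consider a sketch along the following lines. The class and member names are hypothetical and are not part of the extended robot scenario code; the point is only that scenario-level reasoning can read almost like the scenario description itself:

// Hypothetical scenario-level classes (names are ours, for illustration only).
class situation{
    public:
        bool object_is_reachable;    // would be set by the robot's sensing code
        bool gripper_is_free;
};

class scenario_robot{
    public:
        void retrieveObject()
        {
            // In a real robot this would delegate to arm methods like those in Listing 10.10.
        }
        // The task-level decision reads almost like the scenario description.
        void handleSituation(const situation &s)
        {
            if (s.object_is_reachable && s.gripper_is_free){
                retrieveObject();
            }
        }
};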


Image Tip

After step 1 from Table 10.2 is completed, it's a good time to verify whether the robot actually has the capability to interact with the things and perform the actions specified in the ontology. In some cases, it is clear whether the robot is up to the task even before step 1 is completed. If you're not certain, checking the robot's functionality after breaking down the list of things in the scenario serves as a good checkpoint. Otherwise, effort could be wasted programming the robot to do something that it simply cannot do.


What’s Ahead?

In Chapter 11 we will discuss how the techniques presented in this book are used to solve Midamba’s Predicament.
