9. Robot SPACES

Robot Sensitivity Training Lesson #9: If you’re not intimate with the robot’s programming, then don’t invade its space.

In Chapter 8, "Getting Started with Autonomy: Building Your Robot's Softbot Counterpart," we programmed our robot (Unit1) to autonomously approach an object, determine its color, and then report. Unit1's scenario was simple, and so was its role. One of the primary approaches to autonomous robot programming is to keep the robot's tasks well defined, the scenario and situations as simple as possible, and the robot's physical environment predictable and under control.

There are approaches to robotics that attempt to program a robot to deal with unknown, uncontrolled, and unpredictable environments, surprises, and impromptu tasks. But so far, most of these approaches require some sort of remote control or teleoperation. Further, the nature of the tasks that the robot can perform under these conditions is limited for many reasons (chief among them safety).

At best this approach to robotics is advanced and not recommended for beginners. Our approach to robot programming relies on well-defined tasks, scenarios, situations, and controlled environments. This allows the robot to be autonomous. Under these circumstances the robot’s limits are dictated by its hardware capabilities and the skill of the robot programmer.

However, even when a robot's task and situation are well defined, things can go wrong. A robot's sensors may malfunction; motors and actuators may slip. The robot's batteries and power supplies may run down. The environment may not be exactly as expected. In Chapter 4, "Checking the Actual Capabilities of Your Robot," we discussed the process of discovering a robot's basic capabilities. Recall that even if a robot's sensors, actuators, and end-effectors are working, there is always a limit to their precision. A good approach to autonomous programming is to build these considerations into the robot's programming, tasks, and role. If we don't, we cannot reasonably expect the robot to complete its tasks.

For example, what if the object from Unit1’s scenario in Chapter 8 had a color that was outside the range (mauve or perhaps fuchsia) of the 16 colors that Unit1 could recognize? What would happen in this case? What if there was some obstacle between Unit1 and the object preventing Unit1 from getting close enough to determine the color, or preventing Unit1 from being able to scan the color even if it is close enough? Of course, we should know what the limitations of Unit1’s color sensor are, and we should not expect it to detect the color of an object that falls outside the range.

Further, in a well-defined environment, we should be aware of any potential obstacles between Unit1 and its target. But what if Unit1's actuators slip a little, and it only travels 15 cm when it was supposed to travel 20 cm? This could throw off our plans and programming for Unit1. Since no situation or scenario can be completely controlled and the unanticipated is inevitable, what do we do when things don't turn out exactly as planned? We use an approach to programming expectations that we refer to as SPACES.

A Robot Needs Its SPACES

SPACES is an acronym for Sensor Precondition/Postcondition Assertion Check of Environmental Situations. We use robot SPACES to verify whether it is okay for the robot to carry out its current and next task. Robot SPACES is an important part of our approach to programming a robot to execute its tasks autonomously. If a robot’s SPACES has been violated, corrupted, or unconfirmed, the robot is directed to report the SPACES violation and to stop task execution. If the robot’s SPACES check out, then it means it is okay for the robot to execute its current task and possibly begin the execution of its next task.
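Before diving into the extended scenario, it helps to see the shape of a SPACES check in code. The following is a minimal standalone Java sketch of the idea; the checkPreconditions(), checkPostconditions(), and performTask() names are hypothetical placeholders, not Unit1's actual methods:

import java.util.ArrayList;

// A minimal sketch of a SPACES gate around a task: verify the
// preconditions, execute the task, verify the postconditions, and
// report and stop on any violation.
public class SpacesGate {
    static ArrayList<String> Messages = new ArrayList<String>();

    static boolean checkPreconditions()  { return true; }  // e.g., is the robot at (0,0)?
    static boolean checkPostconditions() { return true; }  // e.g., was the color reported?
    static void performTask() { /* approach object, determine color, report */ }

    public static void main(String[] args) {
        if (!checkPreconditions()) {
            Messages.add("SPACES violation: precondition not met");
            return;  // report and stop; do not attempt the task
        }
        performTask();
        if (!checkPostconditions()) {
            Messages.add("SPACES violation: postcondition not met");
            return;  // report and stop; do not begin the next task
        }
        // SPACES check out: okay to begin the next task
    }
}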

The Extended Robot Scenario

In the robot scenario from Chapter 8, Unit1 is located in a small room containing a single object. Unit1 is playing the role of an investigator and is assigned the task of locating the object, determining its color, and reporting that color. In the extended scenario, Unit1 has the additional task of retrieving the object and returning it to the robot’s initial position. An RSVP is a visual plan of the robot’s scenario/situation and its task.

Recall from Chapter 3, “RSVP: Robot Scenario Visual Planning,” that an RSVP contains the following:

• A diagram of the physical layout of the robot's scenario

• A flowchart of the robot's execution routine(s)

• A statechart showing the situation transitions that take place in the scenario

Figure 9.1 is a layout of Unit1’s scenario. It shows the size of the area, the location of Unit1 within the area, and the location of the object Unit1 is to approach.


Figure 9.1 The layout of Unit1’s scenario.

The visual layout of the robot’s scenario should specify the proper shapes, sizes, distances, weights, and materials of anything that the robot has to interact with in its scenario, and anything that is relevant to the robot’s role in the scenario.

For example, the layout of Unit1’s physical area is 200 cm × 300 cm. The object that Unit1 has to approach is a sphere that weighs 75 grams and has a circumference of approximately 18 cm. The coordinate (0,0) is located in the southwest corner of the area and represents Unit1’s starting position and final position.

To start with, we use a simple two-dimensional coordinate system to describe the robot’s location relative to the object’s location. The robot is located at coordinate (0,0), and the object is located at coordinate (100,150), approximately in the center of the area. Why all the detail? Why do we need to specify locations, weights, sizes, distances, and so on? If this were a remote-controlled robot, the operator would effectively be the eyes and ears of the robot, and some of this detail would not be necessary. The operators would then use common sense and experience while controlling the robot with the remote control. But the robot does not have its own common sense and experience (at least not yet). Since we are programming our robot to be autonomous, the robot needs its own information and knowledge to be effective. Table 9.1 lists the four basic ways for an autonomous robot to obtain information and knowledge about its scenario.


Table 9.1 Four Basic Ways Autonomous Robots Obtain Information for a Scenario

Here we use a combination of methods 1 and 2 from Table 9.1 to enable the robot to have enough information to perform its task. Both methods require detail of the physical aspects of the robot’s scenario and environment. Part of the information and knowledge is explicitly given to Unit1 through programming, and part the robot experiences through the use of its sensors. For this scenario, Unit1 is only equipped with an ultrasonic range finder sensor, a color sensor, a method of movement (in this case tractors), and a robot arm. Figure 9.2 is a photo of Unit1.


Figure 9.2 A photo of Unit1.

The REQUIRE Checklist

Once you know what scenario the robot will be in and what role the robot will play, the next step is to determine whether the robot is actually capable of executing the task. The REQUIRE checklist can be used for this purpose. Table 9.2 shows a simple REQUIRE checklist.


Table 9.2 REQUIRE Checklist for the Extended Scenario


Note

Remember that REQUIRE stands for Robot Effectiveness Quotient Used in Real Environments.


In our extended scenario, the robot has to identify the color of an object. Since our robot does have a color sensor, we assume the robot sensors are up to the task. The robot has to retrieve a plastic sphere that weighs about 75 grams. Can Unit1 do this? Table 9.3 is a simple capability matrix for Unit1.


Table 9.3 A Simple Capability Matrix for Unit1

The capability matrix specifies that Unit1's robot arm can lift about 500 grams, so 75 grams should be no problem. Unit1 has tractors and can move around the designated area, so we have a "yes" for the actuator check. Unit1 has two microcontrollers, a Mindstorms EV3 controller and an Arduino Uno controller (for the robot arm), and both are up to their tasks. Recall that the robot's overall potential effectiveness can be measured by using a simple REQUIRE checklist.

• Sensors – 25%

• End-Effectors – 25%

• Actuators – 25%

• Controllers – 25%

Since we have a “yes” in every column on the REQUIRE checklist, we know before the robot tries to execute the task that it has a 100% potential to execute the tasks. However, the potential to execute the tasks and actually executing the tasks are not always the same thing. It is useful to look at the four REQUIRE indicators after the robot has attempted to execute the tasks to see how it performed in each area. Maybe sensors need to be changed or sensor programming needs to be tweaked. Perhaps the robot arm does not have the right DOF (Degrees of Freedom), or could lift 75 grams but not hold 75 grams long enough.
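Because each of the four indicators is worth 25%, tallying the checklist is simple arithmetic. Here is a minimal standalone Java sketch; the RequireChecklist class and its fields are our own illustration, not the book's code:

// A minimal sketch of tallying a REQUIRE checklist: each of the four
// indicators (sensors, end-effectors, actuators, controllers) is worth 25%.
public class RequireChecklist {
    boolean sensors, endEffectors, actuators, controllers;

    int score() {
        int total = 0;
        if (sensors)      total += 25;
        if (endEffectors) total += 25;
        if (actuators)    total += 25;
        if (controllers)  total += 25;
        return total;
    }

    public static void main(String[] args) {
        RequireChecklist unit1 = new RequireChecklist();
        unit1.sensors = unit1.endEffectors = true;
        unit1.actuators = unit1.controllers = true;
        System.out.println("Potential effectiveness: " + unit1.score() + "%");  // 100%
    }
}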

The REQUIRE checklist can be used in a before/after manner to determine whether the robot has the potential to perform the tasks and how well the robot actually performs the task. If the robot cannot pass the checklist initially, the indicators have to be adjusted, changed, or improved until the checklist is passed. Otherwise, we know the robot will not be capable of effectively performing the tasks. If the robot passes the checklist, and you have a diagram of the physical layout of the robot’s scenario, the next step is to produce a diagram of the procedure that shows step-by-step what actions the robot needs to execute to successfully perform its role in the scenario. This is an important part of the process for several reasons:

• It helps you figure out what the robot is doing prior to making the effort to write the code.

• It helps you discover aspects or details about the scenario that you had not thought about.

• You can use this diagram as a reference for future use for design and documentation purposes.

• You can use it as a communication tool when sharing the idea of the robot's task with others.

• It helps you find mistakes or errors in your robot's logic.

Figure 9.3 shows a simplified version of a flowchart for Unit1's extended scenario actions. We marked eight of the actions that the robot has to execute as steps 1 through 8. We use process boxes as a shortcut here. For example, the robot initialization in step 1 represents several steps: initializing the sensors and setting them into the proper modes (analog or digital, for example), initializing the motors and setting their initial speeds, and positioning the robot arm into its initial position.


Figure 9.3 A simplified version of Unit1's extended scenario actions flowchart.


Note

Note that the boxes for step 1 and step 3 have double lines. Remember from Chapter 3 that double lines mean that those boxes represent multiple steps or a subroutine that can be broken down into simpler steps.


Robot initialization includes the robot checking to see whether it has enough power supply to complete the task. The initialization also includes the robot checking its initial location. Hopefully the robot is starting out in the right place to begin with. We could have separate boxes for each of these steps, but for now, to keep things simple we use a single process box for initialization in step 1 and the robot travel in step 3.

The initialization done in step 1 is usually done in some sort of setup() or startup() routine, and in all the examples in this book those routines are part of a Java or C++ (Arduino) constructor. In step 2, step 4, and step 7 from Figure 9.3, we check Unit1's SPACES. That is, we check the preconditions, postconditions, and assertions about the robot's situation. A precondition is some condition that must be true at the start of, or prior to, some action taking place. A postcondition is a condition that should be true immediately after an action has taken place.

What Happens If Pre/Postconditions Are Not Met?

If the preconditions for a robot’s action have not been met or are not true, what should we do? If the postconditions for a robot’s action are not true, what does that mean? Should the robot continue to try to perform the action if the action’s preconditions are not true?

For instance, in step 2 from Figure 9.3, there is a postcondition check for the robot's initialization process. The postcondition checks whether the sensors have been put in the proper modes, whether the motors have been set to the proper speeds, whether the robot's arm is in the correct starting position, and so on.

If these conditions haven’t been met, should we send the robot on its mission anyway? Is that safe for the robot? Is that prudent for the situation? Notice that our flowchart says if the postconditions in step 2 are not met, we want the robot to stop. If the postconditions are not met, for all we know the robot is undergoing a significant malfunction and we don’t want the robot to even attempt to execute the task. When the pre/postconditions or assertions are not met or are not true, we call this violating the robot’s SPACES.

What Action Choices Do I Have If Pre/Postconditions Are Not Met?

What are the available choices for the robot’s next action if some precondition or postcondition is not met? We look at three basic actions the robot could take. Although there are more, they are essentially variations on these three:

1. The robot could ignore the pre/postcondition violation and continue trying to execute any action that it can still execute.

2. The robot could attempt to fix or rectify the situation in some way by retrying an action or adjusting some parameter or the position of one or more of its parts, actuators, or end-effectors. This would amount to the robot trying to make the pre/postcondition true if it can and then proceeding with the task.

3. The robot can report the nature of the pre/postcondition violation, put itself into a safe state, and then stop or shut down, as in the sketch that follows this list.
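A minimal Java sketch of option 3 follows; reportAndHalt() and its helpers are illustrative names, not part of Unit1's actual code:

import java.util.ArrayList;

// A sketch of option 3: log the violation, put the robot into a safe
// state, and stop. All names here are illustrative.
public class ViolationHandler {
    static ArrayList<String> Messages = new ArrayList<String>();

    static void stopMotors() { /* halt the left and right motors */ }
    static void parkArm()    { /* return the arm to its initial angle */ }

    static void reportAndHalt(String violation) {
        Messages.add("SPACES violation: " + violation);
        stopMotors();    // safe state: no movement
        parkArm();       // safe state: arm in a known position
        System.exit(1);  // stop; do not attempt any further tasks
    }

    public static void main(String[] args) {
        reportAndHalt("no object within 10 cm of stop position");
    }
}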

Much of the challenge of robot programming is related to the problem of what to do next if something does not go as planned or expected. Entire approaches to robot programming have been developed around the problems of unknown environments, surprises within the environment, a changing or evolving situation, and robot malfunction. But to get us started with programming a robot to be autonomous, we program for situations that are under our complete control, where the environment is well defined, and where we can anticipate, test for, and react to preconditions and postconditions that are met or not met. We start with closed situations and scenarios and learn how to give a robot level 1 through level 3 autonomy (refer to Table 9.1).

In Chapter 12, “Open Source SARAA Robots for All!,” we explain SARAA (Safe Autonomous Robot Application Architecture) in comparison with pure reactive approaches to robot autonomy.

A Closer Look at Robot Initialization Postconditions

The robot’s initialization routine is executed in step 1 in the flowchart shown in Figure 9.3. Since step 1 uses a process box, there are multiple steps or a subroutine of steps involved. We use our BURT translator to show the list of actions taken in the initialization routine. Listing 9.1 shows the BURT translation for Unit1’s constructor (i.e., initialization routine) from step 1 in Figure 9.3.

Listing 9.1 BURT Translation Initialization Routine/Constructor

BURT Translator INPUT


Softbot  Frame
Name:  Unit1

Initialization Routine Start:
Step 1: Initialize ultrasonic sensor
Step 2: Initialize color sensor
Step 3: Setup left and right motors
Step 4: Initialize arm servos
        Set arm to initial angle of 100
Step 5: Set robot's initial position to 0,0
Step 6: Set wheel diameter to  7
        Set track width to 32
        ...
If robot's startup routine is successful then continue, otherwise report problems,
and shutdown.
Initialization Routine  End:


BURT Translator OUTPUT


    1  public unit1() throws InterruptedException,Exception
    2  {
    3
    4     try{
    5            Exception SoftbotError;
    6            //Set up ultrasonic sensor
    7            Vision = new UltrasonicSensor(SensorPort.S4);
    8            if(Vision == null){
    9               Messages.add("Could Not Initialize Ultrasonic Sensor on Port 4");
   10               SoftbotError = new Exception("101");
   11               throw SoftbotError;
   12            }
   13            Vision.enable();
   14            //Set up color sensor
   15            ColorVision = new HiTechnicColorSensor(SensorPort.S2);
   16            if(ColorVision == null){
   17               Messages.add("Could Not Initialize Color Sensor on Port 2");
   18               SoftbotError = new Exception("100");
   19               throw SoftbotError;
   20            }
   21            // set up motors
   22            CF = new TetrixControllerFactory(SensorPort.S1);
   23            if(CF == null){
   24               Messages.add("Could Not Setup Servo Port");
   25               SoftbotError = new Exception("102");
   26               throw SoftbotError;
   27            }
   28            MC = CF.newMotorController(); //Create the motor controller before getting motors from it
   29            LeftMotor = MC.getRegulatedMotor(TetrixMotorController.MOTOR_1);
   30            RightMotor = MC.getRegulatedMotor(TetrixMotorController.MOTOR_2);
   31            if(LeftMotor == null || RightMotor == null){
   32               Messages.add("Could Not Initialize Motors");
   33               SoftbotError = new Exception("103");
   34               throw SoftbotError;
   35            }
   36
   37            LeftMotor.setReverse(true);
   38            RightMotor.setReverse(false);
   39            LeftMotor.resetTachoCount();
   40            RightMotor.resetTachoCount();
   41
   42          //Set up arm servos
   43          Messages.add("Tetrix Controller Factory Constructed");
   44
   45          SC = CF.newServoController();
   46          Gripper = SC.getServo(TetrixServoController.SERVO_2);
   47
   48
   49          Arm = SC.getServo(TetrixServoController.SERVO_1);
   50          if(Arm == null){
   51             Messages.add("Could Not Initialize Arm");
   52             SoftbotError = new Exception("104");
   53             throw SoftbotError;
   54          }
   55          Arm.setRange(750,2250,180);
   56          Arm.setAngle(100);
   57          // Set robot's initial position
   58          RobotLocation = new location();
   59          RobotLocation.X = 0;
   60          RobotLocation.Y = 0;
   61
   62
   63          //Set wheel diameter and track width
   64          WheelDiameter = 7.0f;
   65          TrackWidth = 32.0f;
   66
   67          Situation1 = new situation();  // creates new situation
   68
   69
   70     }catch(Exception E){throw E;} //A try needs a catch; rethrow so the caller can report and shut down
   71  }
//BURT Translation End Constructor


Power Up Preconditions and Postconditions

The first preconditions and postconditions are usually encountered in the constructor. Recall that the constructor is the first code that gets executed whenever an object is created. When the softbot (control code) for Unit1 starts up, the first thing executed is the constructor. The initialization, startup, or power-up routines are especially important for an autonomous robot. If something goes wrong in the startup sequence, all bets are off. The robot's future actions are not reliable if the power-up sequence fails in some way.

In the BURT translator for Listing 9.1, we show six of Unit1’s startup routine steps. Keep in mind that at this stage of the RSVP, when we are specifying the design of the steps the robot is to take, we want to express each step simply and make it easy to understand. It’s important for the list of instructions to be clear, complete, and correct. Once you understand the instructions you will give the robot in your language, it’s time to translate those instructions into the robot’s language. The output translation in Listing 9.1 shows what the steps will look like once they are translated into Java. Ideally we name the variables, routines, and methods so that they match the input design language as closely as practical.

The rule at the end of the translator input in Listing 9.1 (if the robot's startup routine is successful then continue; otherwise report problems and shut down) is our first postcondition. We call it a postcondition because we check whether it's true after some list of actions has been attempted or executed. In this case, the actions are steps 1 through 6. Our initial policy is that it's better to be safe than sorry. If any of the actions in steps 1 through 6 fail, we do not send the robot on its mission. For example, if the ultrasonic sensor could not attach to port 4, or if we could not set up the left and right motors, this would be a SPACES violation because one of the postconditions of our constructor requires that the startup routines be successful. So how do we code preconditions and postconditions?


Note

Notice the rule in the translator’s input: If a robot’s startup routine is successful then continue; otherwise report problems and shut down.


Coding Preconditions and Postconditions

Let’s look at lines 5 through 12 from the BURT translation in Listing 9.1:

    5                Exception SoftbotError;
    6                //Set up ultrasonic sensor
    7                Vision = new UltrasonicSensor(SensorPort.S4);
    8                if(Vision == null){
    9                   Messages.add("Could Not Initialize Ultrasonic Sensor on Port 4");
   10                  SoftbotError = new Exception("101");
   11                  throw SoftbotError;
   12                }

We instruct the robot to initialize the ultrasonic sensor on line 7. Lines 8 through 12 determine what happens if that action could not be performed. There is an instruction to initialize on line 7 and a condition to check afterward to see whether that action was taken. This is what makes it a postcondition. First the action is tried; then the condition is checked. There are some other interesting actions taken if the ultrasonic sensor is not initialized, that is, (Vision == null). We add the message

"Could Not Initialize Ultrasonic Sensor on Port 4"

to the Messages ArrayList. The Messages ArrayList is used to log all the important robot actions and any actions that the robot failed to execute. This ArrayList is later either saved for future inspection or transmitted over a serial, Bluetooth, or network connection to a computer so that it can be viewed. After the message is added, we create a SoftbotError exception object with Exception("101") and then throw it. Any code after line 11 in the constructor is not executed. Instead, robot control is passed to the first exception handler that can catch objects of type Exception, and the robot comes to a halt.
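Saving that log for later inspection takes only a few lines; the following standalone Java sketch uses standard file I/O, and the class and file names are our own illustration:

import java.io.PrintWriter;
import java.util.ArrayList;

// A sketch of the Messages log pattern: collect notable actions and
// failures during the run, then save them for inspection afterward.
public class MessageLog {
    static ArrayList<String> Messages = new ArrayList<String>();

    public static void main(String[] args) throws Exception {
        Messages.add("Could Not Initialize Ultrasonic Sensor on Port 4");
        try (PrintWriter log = new PrintWriter("unit1_log.txt")) {
            for (String entry : Messages) {
                log.println(entry);  // one logged action or failure per line
            }
        }
    }
}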

Notice lines 11, 19, 26, and 34 all throw an Exception object. If any one of these lines is executed, the robot comes to a halt and does not proceed any further. If any one of those lines is executed it means that a postcondition was not met, and the first missed postcondition ultimately causes the robot to come to a halt. Notice in Listing 9.1 earlier in the chapter that the constructor has five postcondition checks:

// Postcondition 1
8     if(Vision == null){
      ...
12    }

// Postcondition 2
16    if(ColorVision == null){
      ...
20    }

// Postcondition 3
23    if(CF == null){
      ...
27     }

// Postcondition 4
31    if(LeftMotor == null || RightMotor == null){
      ...
35    }

// Postcondition 5
50    if(Arm == null){
      ...
54    }

What is the effect of these five postcondition checks in the constructor? If there are any problems with the robot's vision, color vision, servos, motors, or arm, the robot's mission is cancelled, plain and simple. In this case, the checks are made using if-then control structures. Recall the control structures introduced in Chapter 3. We could use any of the five control structures shown in Figure 9.4 to check preconditions, postconditions, and assertions.


Figure 9.4 Basic control structures to check preconditions and postconditions.

Structure 1 in Figure 9.4 says if a condition is true, then the robot can take some action. If Structure 1 is used to check a condition after a set of actions have been performed, then Structure 1 is being used to check a postcondition.

If Structure 1 is being used to check a condition before one or more actions are taken, then Structure 1 is being used to check a precondition. Where Structure 1 is a one-time check, Structures 2 and 3 perform one or more actions while a condition is true (Structure 2) or until a condition becomes true (Structure 3). Structure 4 is used when a condition is to be selected from a group of conditions and a separate action (or actions) needs to be taken depending on which condition is true. Structure 5 is used to handle abnormal, unanticipated conditions.

What is important is that we decide which conditions, if not met, are showstoppers and which are not. The setup routine, startup routine, initialization routine, and constructor are the first places pre/postconditions should be used. If the robot does not power up or start out successfully, it's usually downhill from there. There are exceptions: we could include recovery and resumption routines, and we could endow our robot with fault tolerance and redundancy routines. We introduce some of these techniques later, but for now, we stick with the basics. The robot is only to proceed if the power-up sequence is successful; otherwise, "abort mission"!
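Here is a minimal standalone Java sketch of the five structures used as condition checks; the conditions and actions are placeholders rather than Unit1's real code:

// A sketch of the five basic control structures used to check
// conditions; conditionMet() and act() are placeholders.
public class ControlStructures {
    static boolean conditionMet() { return true; }
    static void act() { }

    public static void main(String[] args) {
        // Structure 1: a one-time check (pre- or postcondition).
        if (conditionMet()) { act(); }

        // Structure 2: perform action(s) while a condition is true.
        int retries = 3;
        while (retries > 0) { act(); retries--; }

        // Structure 3: perform action(s) until a condition becomes true.
        do { act(); } while (!conditionMet());

        // Structure 4: select one condition from a group of conditions.
        int colorCode = 1;
        switch (colorCode) {
            case 0:  act(); break;   // e.g., object is red
            case 1:  act(); break;   // e.g., object is blue
            default: break;          // color outside the expected range
        }

        // Structure 5: handle abnormal, unanticipated conditions.
        try {
            if (!conditionMet()) throw new Exception("SPACES violation");
            act();
        } catch (Exception e) {
            // report the violation and stop
        }
    }
}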

In step 3 of the flowchart from Figure 9.3 earlier in the chapter, the robot is given the instruction to travel to the object's location. On lines 58 through 60 in Listing 9.1, we set the robot's initial location. For the robot to travel to the object's location, the robot needs to know where it is starting from and where to travel to. This information is part of the robot's situation and scenario. The initial X,Y location of (0,0) is a precondition of the robot's travel action. If the robot is not at the proper location to start with, then any further directions that it follows will not be correct. Keep in mind that we are programming our robot to be autonomous. It will not be directed by remote control to the correct location. The robot will be following a set of preprogrammed directions. If the robot's starting position is incorrect (i.e., the precondition is not met), then the robot will not successfully reach its destination.
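A minimal Java sketch of that starting-position precondition might look like this; the names are our own illustration:

// A sketch of the travel precondition: verify the robot is at its
// expected starting position (0,0) before it follows any directions.
public class StartCheck {
    static int robotX = 0, robotY = 0;  // would come from the robot's location object

    public static void main(String[] args) {
        if (robotX != 0 || robotY != 0) {
            System.out.println("SPACES violation: robot not at starting position");
            return;  // stop: directions followed from the wrong origin will not be correct
        }
        // precondition met: safe to begin traveling to the object
    }
}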

Once the robot gets to the object's location, the robot must determine the object's color. Is there a precondition here? Yes! The robot cannot determine the object's color if the object is not at the designated location. So the precondition is that the object must be at the designated location. Step 4 in Figure 9.3 performs this precondition check. Let's take a look at the actual programming of steps 3 and 4 to see what this looks like. In step 3, the robot is given the instruction to travel. The method of travel differs depending on the robot's actuators and whether the robot's build is

• Bipedal

• Quadruped/hexaped, and so on

• Tractors/wheels

• Underwater


Note

See Chapter 7, “Programming Motors and Servos,” for a closer look at programming robot motors.


At the lowest levels (level 1 and level 2 of the ROLL) of programming, robot traveling (or moving) is accomplished by directly programming the motors using microcontroller commands directed to motor/servo ports and pins.

Activating motors and servos is what causes a robot’s movement. Programming the motors to rotate in one direction causes wheels, legs, propellers, and so on to move in that direction. The accuracy of the movement depends on how much precision the motor has and how much control you have over the motor. Unregulated or DC motors may be appropriate for certain kinds of movement, and step-controlled motors may be more appropriate for others.

Low-level motor programming can be used to translate motor rotations or steps into distance. However, even low-cost robot environments, such as RS Media, Arduino, and Mindstorms EV3 and NXT, have motor/movement class libraries that already handle a lot of the low-level detail of programming motors and servos to move. Table 9.4 shows examples of some of the commonly used class libraries for robot motor control.


Table 9.4 Examples of Commonly Used Motor and Servo Classes for Low-Cost Robots
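Whichever library you use, the arithmetic that relates motor rotation to distance is the same: one wheel rotation covers one wheel circumference. Here is a minimal standalone Java sketch using the 7 cm wheel diameter set in Listing 9.1 (the class name is our own):

// A sketch of translating travel distance into motor rotation using
// the wheel diameter (7 cm, as set in Listing 9.1).
public class DistanceToRotation {
    public static void main(String[] args) {
        double wheelDiameter = 7.0;                      // centimeters
        double circumference = Math.PI * wheelDiameter;  // ~21.99 cm covered per rotation
        double distance = 100.0;                         // centimeters to travel

        double rotations = distance / circumference;     // ~4.55 wheel rotations
        double degrees   = rotations * 360.0;            // ~1637 degrees of rotation
        System.out.printf("%.1f cm = %.2f rotations = %.0f degrees%n",
                          distance, rotations, degrees);
    }
}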

At higher robot programming levels (for example, levels 3 through 5 of the ROLL), programming the robot to travel() involves using classes such as the ones shown in Table 9.4 and calling methods or functions provided by those classes. For example, the Arduino environment has a Servo class. The Servo class has a write() method. If we wanted to make an Arduino servo move 90 degrees, we could code the following:

#include <Servo.h>

Servo Servo1;    // Create an object of type Servo called Servo1
int Angle = 90;

void setup()
{
    Servo1.attach(9);  // Attach the servo on pin 9
    if (Servo1.attached()) {
        Servo1.write(Angle);
    }
}

void loop()
{
    // Nothing to repeat; the servo is positioned once in setup()
}

This moves the servo to the 90-degree position. And if the servo is connected to the robot's wheels, tractors, legs, and so on, it causes some kind of 90-degree movement. But what does this type of program have to do with traveling or walking? General travel(), walk(), and move() procedures can be built based on the methods and functions that classes like the Arduino Servo or the leJOS TetrixRegulatedMotor provide. So you would build your routines on top of the built-in class methods. Before using a class, it is always a good idea to familiarize yourself with the class's methods and basic functionality. For instance, some of the commonly used member functions (or methods) of the Arduino Servo class are shown in Table 9.5.


Table 9.5 Commonly Used Methods of the Arduino Servo Class
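Building on class methods like these, you can construct your own higher-level movement routines. The following minimal Java sketch shows the idea; the Drivetrain interface stands in for whatever motor class your library actually provides (such as those in Table 9.4), and all the names are our own illustration rather than a real library API:

// A sketch of generic movement routines built on top of a library's
// motor class. Drivetrain is a stand-in for the real motor API, and
// all distances are assumed to be in centimeters.
interface Drivetrain {
    void rotateMotors(int degrees);  // placeholder for the library's rotate call
    void halt();
}

public class Mobility {
    static final double WHEEL_DIAMETER_CM = 7.0;
    private final Drivetrain drive;

    public Mobility(Drivetrain drive) { this.drive = drive; }

    // Travel a distance in centimeters; negative distance moves backward.
    public void travel(double cm) {
        int degrees = (int) Math.round(cm / (Math.PI * WHEEL_DIAMETER_CM) * 360.0);
        drive.rotateMotors(degrees);
    }

    public void moveForward(double cm)  { travel(Math.abs(cm)); }
    public void moveBackward(double cm) { travel(-Math.abs(cm)); }
    public void stop()                  { drive.halt(); }
}

Keeping the unit of measure and the wheel geometry in one wrapper means that changing wheels, or changing from centimeters to meters, is a one-line change rather than a hunt through the whole program.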

If your robot is intended to be mobile, you should include some kind of generic travel(), moveForward(), moveBackward(), reverse(), and stop() routines that are built on top of the class methods provided by the library for your microcontroller. Routines involving sensor measurements or motor or servo movement ultimately use some unit of measure. Our travel routine from step 3 in Figure 9.3 has to assume some unit of measure. Will the robot's moveForward(), travel(), or reverse() routines use kilometers, meters, centimeters, and so on? When you program a robot to move, you should have specific units of measure in mind. In our programming examples, we use centimeters. In steps 3 through 5 of Figure 9.3, we program the robot to travel to the object, check to see whether the object is there, and take some action. The BURT translation in Listing 9.2 shows our level 5 instructions and their level 3 Java implementations.

Listing 9.2 BURT Translation Traveling to Object

BURT Translator INPUT


Softbot  Frame
Name:  Unit1 Level 5

Travel to the object Algorithm Start:
if the object is there  {this is a precondition}
determine its color.
Travel to the object Algorithm End.


BURT Translator OUTPUT: Level 3


//Begin Translation

1  Unit1 = new softbot();
2  Unit1.moveToObject();
    ...
5  Thread.sleep(2000);
6  Distance = Unit1.readUltrasonicSensor();
7  Thread.sleep(4000);
8  if(Distance <= 10.0){
9     Unit1.getColor();
10    Thread.sleep(3000);
11  }

//Translation End.


The code in Listing 9.2 directs the robot to moveToObject() and then take a reading with the ultrasonic sensor. Keep in mind that ultrasonic sensors measure distance. The assumption here is that the robot will move to within 10 cm of the object. Notice on line 8 that we check to see whether Distance from the object is <= 10.0. This is the precondition that we referred to in step 4 of Figure 9.3. If the precondition has not been met, what do we do?

If the robot is at the correct location and there is no object within 10 cm, then how can it determine the object’s color? If there is an object located 12, 15, or 20 centimeters from the robot, how do we know that is the object we want to measure? In our case, we specify an object that is 10 cm or less from the robot’s stopping position. If there is no object there, we want the robot to stop and report the problem. It’s not a requirement that the robot always stop when its SPACES have been violated. However, it is important to have some consistent well-thought-out plan of action for the robot if its SPACES are violated during the execution of its program.
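As an illustration, the violated-precondition branch that Listing 9.2 leaves implicit might look like the following fragment; the else branch and its message are our own sketch, not the book's code:

// A sketch extending lines 8 through 11 of Listing 9.2: if no object is
// within 10 cm, log a SPACES violation and stop instead of guessing.
if(Distance <= 10.0){
   Unit1.getColor();       // precondition met: determine the color
   Thread.sleep(3000);
}
else{
   Messages.add("SPACES violation: no object within 10 cm of stop position");
   // stop here; an object 12, 15, or 20 cm away may not be the right one
}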

Where Do the Pre/Postconditions Come From?

How do we know the robot should stop within 10 cm of the object? When we give the robot the command to moveToObject() in Listing 9.2, what object are we referring to? Where is it? Recall that SPACES is an acronym for Sensor Precondition/Postcondition Assertion Check of Environmental Situations. Situations is a keyword in this acronym. In our approach to programming robots to be autonomous, we require the robot to be programmed for specific scenarios and situations. The situation is given to the robot as part of its programming. The pre/postconditions are a natural part of the situation or scenario that has been given to the robot. That is, the situation or scenario dictates what the preconditions or postconditions are. Recall the high-level overview of our robot’s situation from Chapter 8:

The Extended Robot Scenario

Unit1 is located in a small room containing a single object. Unit1 is playing the role of an investigator and is assigned the task of locating the object, determining its color, and reporting that color. In the extended scenario, Unit1 has the additional task of retrieving the object and returning it to the robot’s initial position.

By the time this high-level situation is translated into details the robot can act upon, a lot of questions are answered. For example, the extended robot scenario makes several statements that immediately pose certain questions:

• Statement: Unit1 is located in a small room.

• Question(s): Where in the room?

• Statement: ...containing a single object.

• Question(s): Where is the object? How big is it?

• Statement: ...returning it to the robot's initial position.

• Question(s): Where is the initial position? Where should the object be placed?

These kinds of statements and questions help to shape the preconditions and postconditions. Let's take a look at our Java implementation of Unit1.moveToObject() shown in Listing 9.3.

Listing 9.3 BURT Translation of moveToObject() method

BURT Translator OUTPUT: Java Implementation


    1    public void moveToObject() throws Exception
    2    {
    3         RobotLocation.X = (Situation1.TestRoom.SomeObject.getXLocation() -
                                RobotLocation.X);
    4         travel(RobotLocation.X);
    5         waitUntilStop(RobotLocation.X);
    6         rotate(90);
    7         waitForRotate(90);
    8         RobotLocation.Y = (Situation1.TestRoom.SomeObject.getYLocation() -
                                RobotLocation.Y);
    9         travel(RobotLocation.Y);
   10         waitUntilStop(RobotLocation.Y);
   11         Messages.add("moveToObject");
   12
   13    }


The robot has an X and a Y location. The robot’s X location is determined on line 3 in Listing 9.3, and the robot’s Y location is determined on line 8 of Listing 9.3. The instruction

RobotLocation.X = (Situation1.TestRoom.SomeObject.getXLocation() - RobotLocation.X);

subtracts the robot's current X position from the X location of the object in the test room. This gives the distance in centimeters the robot is to travel east or west. If RobotLocation.X is positive after the subtraction, the robot travels east, and if RobotLocation.X is negative, the robot travels west. For example, with the robot at (0,0) and the object at (20,50), RobotLocation.X works out to 20, so the robot travels 20 cm east. A similar calculation is used to determine how far the robot travels north or south with the instruction:

RobotLocation.Y = (Situation1.TestRoom.SomeObject.getYLocation() - RobotLocation.Y);

If RobotLocation.Y is positive after the subtraction, the robot travels north, and if RobotLocation.Y is negative, the robot travels south. But notice the following object construction:

Situation1.TestRoom.SomeObject

This means there is an object called Situation1 that has a component named TestRoom, and TestRoom has a component named SomeObject. The component SomeObject has getXLocation() and getYLocation() methods that return the X,Y coordinates (in centimeters) of SomeObject. We look closer at the technique of programming the robot with one or more situations in Chapter 10, "An Autonomous Robot Needs STORIES." Here, we show in Listing 9.4 the class declarations for situation, x_location, room, and something.

Listing 9.4 BURT Translations for situation, x_location, room, and something Classes

BURT Translator OUTPUT: Java Implementations


    1      class x_location{
    2         public int X;
    3         public int Y;
    4         public x_location()
    5         {
    6
    7             X = 0;
    8             Y = 0;
    9         }
   10
   11      }
   12
   13
   14      class something{
   15         x_location Location;
   16         int Color;
   17         public something()
   18         {
   19             Location = new x_location();
   20             Location.X = 0;
   21             Location.Y = 0;
   22             Color = 0;
   23         }
   24         public void setLocation(int X,int Y)
   25         {
   26
   27             Location.X = X;
   28             Location.Y = Y;
   29
   30         }
   31         public int getXLocation()
   32         {
   33             return(Location.X);
   34         }
   35
   36         public int getYLocation()
   37         {
   38             return(Location.Y);
   39
   40         }
   41
   42         public void setColor(int X)
   43         {
   44
   45             Color = X;
   46         }
   47
   48         public int getColor()
   49         {
   50            return(Color);
   51         }
   52
   53      }
   54
   55      class room{
   56         protected int Length = 300;
   57         protected int Width = 200;
   58         protected int Area;
   59         public something SomeObject;
   60
   61         public  room()
   62         {
   63             SomeObject =  new something();
   64             SomeObject.setLocation(20,50);
   65         }
   66
   67
   68         public int  area()
   69         {
   70             Area = Length * Width;
   71             return(Area);
   72         }
   73
   74         public  int length()
   75         {
   76
   77             return(Length);
   78         }
   79
   80         public int width()
   81         {
   82
   83             return(Width);
   84         }
   85      }
   86
   87      class situation{
   88
   89         public room TestRoom;
   90         public situation()
   91         {
   92             TestRoom = new room();
   93
   94         }
   95     }


With Listing 9.4 in mind, we declare

situation  Situation1;

to be a component of Unit1’s softbot frame. Classes like room, situation, something, and x_location provide the basis for the pre/postconditions that make up our robot’s SPACES. If we look at line 64 in Listing 9.4, we see that the object is located at (20,50). If the precondition

if(Distance <= 10.0)

from line 8 in Listing 9.2 is satisfied, this means that once the robot has executed its moveToObject() instruction it should be <= 10 centimeters from the location (20,50).

SPACES Checks and RSVP State Diagrams

Recall the introduction to state diagrams in Chapter 3. We use each state to represent a situation in the scenario. So we can say a scenario consists of a set of situations. A good rule of thumb is to have one or more pre/postcondition checks before every situation the robot can be in during the scenario. If the SPACES are not violated, it is safe for the robot to proceed from its current situation to the next situation. Figure 9.5 shows the seven situations in the object color recognition and retrieval scenario.


Figure 9.5 A state diagram that shows the seven situations of the robot’s scenario.

Drawing the type of diagram in Figure 9.5 helps you plan your robot’s autonomy. It helps to identify and clarify what situations the robot will be in during the scenario. This kind of diagram helps you see where the robot’s SPACES are. Typically, at least one pre/postcondition needs to be met in every situation. For autonomous robots, there is usually more than one pre/postcondition per situation. Let’s revisit the three basic tools of the RSVP we used for our extended robot scenario. The first tool is a physical layout of the robot’s scenario from Figure 9.1.

This layout gives us some idea where the robot starts out, where it has to go, where the object is located, what size the area is, and so on. At this level the layout is a snapshot of the robot's initial situation within the scenario.

The second tool is the flowchart of the set of instructions the robot is to carry out from Figure 9.3. This tool shows the primary actions the robot is to perform autonomously and where the preconditions and postconditions occur within the robot's set of instructions. This tool gives us a picture of the planned actions and decisions the robot must execute autonomously.

The final tool is the situation/scenario state diagram from Figure 9.5, which shows how the scenario is broken down into a set of situations and how the robot can legally move between situations. For example, in Figure 9.5, the robot cannot go from a traveling situation directly to a grabbing situation, or from a grabbing situation to a sensing situation. The situation/scenario statechart shows us how the situations are connected in the scenario and where the SPACES are likely to occur.
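One way to make those legal moves explicit in code is a transition table. The following standalone Java sketch uses illustrative situation names, not the book's exact seven states:

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

// A sketch of encoding the legal moves between situations from a state
// diagram like Figure 9.5. The situation names and transitions here are
// illustrative approximations, not the book's exact seven states.
public class ScenarioStates {
    enum Situation { IDLE, TRAVELING, SENSING, GRABBING, RETURNING }

    static final Map<Situation, EnumSet<Situation>> LEGAL = new EnumMap<>(Situation.class);
    static {
        LEGAL.put(Situation.IDLE,      EnumSet.of(Situation.TRAVELING));
        LEGAL.put(Situation.TRAVELING, EnumSet.of(Situation.SENSING));
        LEGAL.put(Situation.SENSING,   EnumSet.of(Situation.GRABBING));
        LEGAL.put(Situation.GRABBING,  EnumSet.of(Situation.RETURNING));
        LEGAL.put(Situation.RETURNING, EnumSet.of(Situation.IDLE));
    }

    // A pre/postcondition check would guard each legal transition.
    static boolean canMove(Situation from, Situation to) {
        return LEGAL.getOrDefault(from, EnumSet.noneOf(Situation.class)).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(Situation.TRAVELING, Situation.GRABBING));  // false
        System.out.println(canMove(Situation.TRAVELING, Situation.SENSING));   // true
    }
}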

Although the RSVP is used as a set of graphic plans that support the identification and planning of the robot's SPACES and REQUIRE checklist, keep in mind that the components depicted in those three diagrams must have C++ or Java counterparts (depending on which language you are using for implementation). Listing 9.4 contains the Java classes used to implement the situations of our robot's extended scenario, and the BURT translations in Listing 9.1 and Listing 9.2 are examples of how some of the robot's SPACES are implemented in Java. We also showed how SPACES maps to particular C++ or Java class components and methods, such as the Arduino Servo class. From this we can see that the visual tools and techniques of the RSVP are helpful both for visualizing and for planning our robot's autonomy.

What’s Ahead?

In Chapter 10, “An Autonomous Robot Needs STORIES,” we show the complete program for our robot’s extended scenario. We take a closer look at SPACES and exception handling. We also introduce you to robot STORIES, the final piece of the puzzle to programming a robot to execute tasks autonomously.
