8. Getting Started with Autonomy: Building Your Robot’s Softbot Counterpart

Robot Sensitivity Training Lesson #8: It’s not about the robot’s hardware; it’s “who” the robot is on the inside that matters.

By definition, all real robots have some kind of end-effectors, sensors, actuators, and one or more controllers. While all these are necessary components, a collection of these components is not sufficient to be called a robot. These components can be used as standalone parts and can be put into many other kinds of machines and devices. It’s how these components are combined, connected, and coordinated through programming that moves them in the direction of being a robot.

A robot is only as useful as its programming. Every useful autonomous robot has a softbot counterpart that ultimately gives the robot purpose, direction, and definition. For every programmable piece of hardware within or attached to a robot, there is a set of instructions that must be built to control it. Those instructions are used to program what functions each component performs given some particular input. Collectively, this set of instructions is the robot’s softbot counterpart and captures the robot’s potential behavior. For autonomous robots, the softbot counterpart controls the robot’s behavior.


Note

We use the term softbot because it helps us avoid the confusion over the robot’s microcontroller versus the robot controller, or the robot’s remote control.


The softbot plays the part of the robot controller. For our autonomous robots, the softbot is the set of instructions and data that control the robot. There are many approaches to building the softbot counterpart, ranging from explicitly programming every action the robot takes to providing only reactive or reflexive programming, which allows the robot to choose its own actions as it reacts to its environment. So the robot can have a proactive softbot, a reactive softbot, or some hybrid combination of the two. This difference in softbot construction gives us three kinds of autonomous robots:

• Proactive autonomous robots

• Reactive autonomous robots

• Hybrid (proactive and reactive) autonomous robots

The range from completely proactive to completely reactive gives us the five basic levels of autonomy control shown in Table 8.1.


Table 8.1 Five Basic Levels of Robot Autonomy Control
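To make the distinction concrete, here is a rough sketch, purely for illustration (the class, the method names, and the stubbed sensor and motor calls below are hypothetical, not part of any robot build in this book), of the difference in shape between a proactive softbot, which scripts every action in advance, and a reactive softbot, which chooses each action based on what its sensors currently report:


class softbot_styles{

    // Proactive: every action is scripted in advance.
    public void proactiveRun()
    {
        moveForward(100);      // centimeters
        turnLeft(90);          // degrees
        report();
    }

    // Reactive: each action is chosen in response to the current sensor reading.
    public void reactiveRun()
    {
        for(int i = 0; i < 100; i++){
            if(distanceToObstacle() < 20) turnLeft(45);
            else moveForward(10);
        }
    }

    // Stub actions standing in for real sensor and motor code.
    protected int distanceToObstacle(){ return(50); }
    protected void moveForward(int Centimeters){ }
    protected void turnLeft(int Degrees){ }
    protected void report(){ }
}


A hybrid softbot mixes the two: it follows a plan but lets its sensor readings interrupt or reorder the planned steps.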


Note

In this book, we show you how to build simple level 1 and level 2 softbots. For a detailed look at level 3 through level 5 control strategies, see Embedded Robotics by Thomas Bräunl and Behavior-Based Robotics by Ronald C. Arkin.


Softbots: A First Look

To see how this works, we use a simple robot build. Our simple robot, which we call Unit1, can move forward, move backward, and turn in any direction. Unit1 has only two sensors:

• Ultrasonic sensor: A sensor that measures distance

• 16-color light sensor: A sensor that identifies color

To get started, we created a scenario where there is a single object in the room with the robot. The robot’s task is to locate the object, identify its color, and then report back. We want our robot to execute this task autonomously (i.e., without the aid of a remote control or human operator). The five essential ingredients for successful robot autonomy are:

• A robot with some capabilities

• A scenario or situation

• A role for the robot to play in the scenario or situation

• Task(s) that can be accomplished with the robot’s capabilities and that satisfy the robot’s role

• Some way to program the robot to autonomously execute the tasks

Table 8.2 lists these five essential ingredients for a successful autonomous robot and who or what supplies each ingredient. Now that we are sure we have the five essential ingredients, the next step is to lay out the basic sections of Unit1’s softbot frame. A softbot frame has at least four sections:

• Parts

• Actions

• Tasks

• Scenarios/situations


Table 8.2 The Five Essential Ingredients with Unit1

We describe these sections in simple English and then use our BURT translator to show what the simple English looks like translated into a computer language that supports object-oriented programming. In this simple example, we use the object-oriented language Java in the BURT translator. Listing 8.1 shows our first cut layout of Unit1’s softbot frame.

Listing 8.1 First Cut Layout of Unit1 Softbot Frame

BURT Translation INPUT


Softbot Frame
Name:  Unit1
Parts:
Sensor Section:
Ultrasonic Sensor
Light Sensor
Actuator Section:
Motors with decoders (for movement)

Actions:
Step 1: Move forward some distance
Step 2: Move backward some distance
Step 3: Turn left some degrees
Step 4: Turn right some degrees
Step 5: Measure distance to object
Step 6: Determine color of object
Step 7: Report

Tasks:
Locate the object in the room, determine its color, and report.

Scenarios/Situations:
Unit1 is located in a small room containing a single object. Unit1 is playing the
role of an investigator and is assigned the task of locating the object, determining
its color, and reporting that color.

End Frame.


The Parts Section

The Parts section should contain a component for each sensor, actuator, and end-effector the robot has. In this case, Unit1 only has two sensors.

The Actions Section

The Actions section contains the list of basic behaviors or actions that the robot can perform—for example, lifting, placing, moving, taking off, landing, walking, scanning, gliding, and so on. All these actions will be tied to some capability provided by the robot’s build, its sensors, actuators, and end-effectors. Think about the basic actions the robot can perform based on its physical design and come up with easy-to-understand names and descriptions for these actions. An important part of robot programming is assigning practical and meaningful names and descriptions for robot actions and robot components. The names and descriptions that we decide on ultimately make up the instruction vocabulary for the robot. We discuss the instruction vocabulary and the ROLL model later in this chapter.

The Tasks Section

Whereas the Actions section is particular to the robot and its physical capabilities, the Tasks section describes activities specific to a particular scenario or situation. Like the Actions and Parts sections, the names and descriptions of tasks should be easy to understand, descriptive, and ultimately make up the robot’s vocabulary. The robot uses functionality from the Actions section to accomplish scenario-specific activities listed in the Tasks section.

The Scenarios/Situations Section

The Scenarios/Situations section contains a simple (but reasonably complete) description of the scenario that the robot will be placed in, the robot’s role in the scenario, and the tasks the robot is expected to execute. These four sections make up the basic parts of a softbot frame. As we see later, a softbot frame can occasionally have more than these four sections. But these represent the basic softbot skeleton. The softbot frame acts as a specification for both the robot and the robot’s scenario. The softbot frame in its entirety must ultimately be translated into instructions that the robot’s microcontroller can execute.

The Robot’s ROLL Model and Softbot Frame

The softbot frame is typically specified using language from level 4 through level 7 (see Figure 8.2). During the specification of the softbot frame, we are not trying to use any specific programming language, but rather easy-to-understand names and descriptions relevant to the task at hand. The softbot frame is a design specification for the actual softbot and is described using the task vocabulary, the robot’s basic action vocabulary, and terminology taken directly from the scenario or situation the robot will be placed in.


Figure 8.2 The robot’s ROLL model introduced in Chapter 2

Using language, vocabulary, and instructions from levels 4 through 7 allows you to initially focus on the robot’s role, tasks, and scenario without thinking about the details of the microcontroller programming language or programming API. The softbot frame allows you to think about the robot’s instructions from the point of view of the scenario or situations as opposed to the view of the microcontroller or any other piece of robot hardware.

Of course, the softbot frame must ultimately be translated into level 3 or level 2 instructions with some programming language. But specifying the robot and the scenario using language levels 4 through 7 helps to clarify both the robot’s actions and the expectations of the role the robot is to perform in the scenario and situation.

We start with a level 4 through level 7 specification and we end with a level 3 (sometimes 2) specification, and the compiler or interpreter takes the specification to level 1. Figure 8.3 shows which ROLL model levels to use with the sections of the softbot frame.


Figure 8.3 Softbot frame levels and corresponding ROLL model levels

We can develop our robot’s language starting at the lowest level hardware perspective and moving up to the scenario or situation the robot will be used in, or we can approach the design of our robot language starting with the scenario down to the robot’s lowest level hardware.

During the initial design phases of the robot’s programming, it is advantageous to approach things from the scenario to the hardware. You should specify the softbot frame using names and descriptions that are meaningful and appropriate for levels 4 through 7 initially. Once the scenario is well understood and the expectations of the robot are clear, then we can translate levels 4 through 7 into a level 3 specification or a level 2 specification. In this book, we assume that you are responsible for each level of the robot’s softbot frame specification.

In some robot applications the responsibilities are divided. The individual programming device drivers for the robot’s parts may be different from the individual writing the level 3 robot instructions. The individual writing the level 3 instructions might not be responsible for writing the levels 4 through 7 specification, and so on. This is usually the case when a team of robot programmers is involved, or when robot software components originate from different sources. However, we describe the complete process, starting with the high-level (natural-language-like) specification of the softbot frame using levels 4 through 7 and ending with the translation of the softbot frame to Java or C++.


Tip

Notice for Ingredient #5 in Table 8.2 that Unit1 uses Java. In particular, Unit1 uses Java and the leJOS Java class library running on Linux on the Mindstorms EV3 microcontroller. We use our BURT translator to show the translation between the softbot frame and Java.


BURT Translates Softbot Frames into Classes

The Tasks section of the softbot frame represents what is known as the agent loop, and the Parts, Actions, Situations, and Scenarios are implemented as objects and object methods. In Chapters 8 through 10, we present only introductory-level material on objects and agents as they relate to robot programming. For a more detailed discussion of these subjects, see The Art of Agent-Oriented Modeling by Leon S. Sterling and Kuldar Taveter.
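As a rough sketch of what we mean by an agent loop (the class and method names below are hypothetical placeholders, not code from Unit1), the Tasks section typically becomes a loop that repeatedly senses the situation, decides which task step applies, and then acts through the robot’s action methods:


class agent_loop_sketch{

    public void agentLoop()
    {
        while(!taskComplete()){
            senseSituation();     // read sensors, update the situation objects
            decideNextStep();     // choose the next task step to perform
            act();                // carry it out using the robot's basic actions
        }
    }

    // Placeholder helpers standing in for sensor, decision, and action code.
    protected boolean taskComplete(){ return(true); }
    protected void senseSituation(){ }
    protected void decideNextStep(){ }
    protected void act(){ }
}


Unit1’s performTask(), shown later in this chapter, is the simplest possible version of such a loop: its steps are fixed in advance, so it simply runs through them once.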


Note

It is important to note that each softbot frame is translated into one or more class specifications using an object-oriented language such as Java or C++. Technically, a softbot frame is an object- and agent-oriented approach to specifying a robot controller.


A BURT Translation from Softbot Frame to Java Code

The BURT Translator consists of an Input section and an Output section. The Input section contains a natural-language or higher-level description of robot specifications and instructions, and the Output section contains a translation to a lower-level description of the robot specification or instructions. The Output level is always a lower level than the Input level. The Input section might contain ROLL model level 4 instructions, and the Output section might contain the level 3 translation. The Input level could be level 5, and the Output level could be level 2, and so on. In some cases BURT shows a translation from one high level to another, for instance, level 6 instructions (robot task vocabulary) translated into level 4 (robot base vocabulary). The BURT Translation in Listing 8.2 shows an initial translation of Unit1’s softbot frame to Java code.

Listing 8.2 BURT Translation of Unit1’s Softbot Frame to Java

BURT Translation INPUT


Softbot Frame
Name:  Unit1
Parts: SECTION 1
Sensor Section:
Ultrasonic Sensor
Light Sensor
Actuator Section:
Motors with decoders (for movement)

Actions:  SECTION 2
Step 1: Move forward some distance
Step 2: Move backward some distance
Step 3: Turn left some degrees
Step 4: Turn right some degrees
Step 5: Measure distance to object
Step 6: Determine color of object
Step 7: Report

Tasks: SECTION 3
Locate the object in the room, determine its color, and report.

Scenarios/Situations: SECTION 4
Unit1 is located in a small room containing a single object. Unit1 is playing the
role of an investigator and is assigned the task of locating the object, determining
its color, and reporting that color.

End Frame.


BURT Translation Output: Java Implementation


class basic_robot{

//PARTS: SECTION 1
// Sensor Section
   protected EV3UltrasonicSensor Vision;
   protected HiTechnicColorSensor ColorVision;
// Actuators
   protected TetrixRegulatedMotor LeftMotor;
   protected TetrixRegulatedMotor RightMotor;
   DifferentialPilot D1R1Pilot;
//Situations/Scenarios: SECTION 4
   PrintWriter Log;
   situation Situation1;
   location RobotLocation;


//ACTIONS: SECTION 2
   basic_robot() throws Exception
   {
       Vision = new EV3UltrasonicSensor(SensorPort.S3);
       Vision.enable();
       Situation1 = new situation();
       RobotLocation = new location();
       RobotLocation.X = 0;
       RobotLocation.Y = 0;
       ColorVision = new HiTechnicColorSensor(SensorPort.S2);
       Log = new PrintWriter("basic_robot.log");
       Log.println("Sensors  constructed");
       //...
   }

   public void travel(int Centimeters)
   {
       D1R1Pilot.travel(Centimeters);
   }

   public int getColor()
   {
       return(ColorVision.getColorID());
   }

   public void rotate(int Degrees)
   {
       D1R1Pilot.rotate(Degrees);
   }

//TASKS: SECTION  3

   public void moveToObject() throws Exception
   {
       travel(Situation1.TestRoom.TestObject.Location.X);
        waitUntilStop(Situation1.TestRoom.TestObject.Location.X);
       rotate(90);
       waitForRotate(90);
       travel(Situation1.TestRoom.TestObject.Location.Y);
        waitUntilStop(Situation1.TestRoom.TestObject.Location.Y);

   }

   public void identifyColor()
   {
      Situation1.TestRoom.TestObject.Color = getColor();
   }

   public void reportColor()
   {
       Log.println("color = " +  situation1.TestRoom.TestObject.Color);
   }

   public void  performTask() throws Exception
   {
       moveToObject();
       identifyColor();
       reportColor();
   }

   public static void main(String [] args)  throws Exception
   {
        basic_robot Unit1 = new basic_robot();
       Unit1.performTask();
   }

}


The softbot frame is divided into four sections:

• Parts

• Actions

• Tasks

• Scenarios/situations

By filling out each of these four sections, you are developing a complete idea of:

• What robot you have

• What actions the robot can take

• What tasks you expect the robot to perform

• What scenario/situation the robot will perform the task in

Recall criterion #2 of our seven criteria for defining a true robot in Chapter 1, “What Is a Robot Anyway?”:

Criterion #2 Programmable Actions and Behavior

There must be some way to give a robot a set of instructions detailing

• What actions to perform

• When to perform actions

• Where to perform actions

• Under what situations to perform actions

• How to perform actions

The four sections of the softbot frame allow us to completely describe the robot’s programmable actions and behavior. Usually, if you do not have all the information to fill out these four sections, there is something about the robot’s role and responsibilities in the given situation that is not understood, has not been planned for, or has not been considered. On the other hand, once these sections are complete, we have a roadmap for how to implement the robot’s autonomy.

The specification of the softbot frame should not be in a programming language; it should be stated completely in a natural language like Spanish, Japanese, English, and so on. Some pseudocode can be used if needed to clarify some part of the description. Each of the four sections of the softbot frame should be complete so that there is no question what, when, where, and how the robot is to perform. Once the sections of the softbot frame are complete and well understood, we can code them in an appropriate object-oriented language. We specify an object-oriented language for the softbot frame implementation because classes, inheritance, and polymorphism all play critical parts in the actual implementation.
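For example, inheritance lets a new scenario reuse an existing softbot implementation. The following sketch is hypothetical (we define no unit2_robot class in this chapter); it only illustrates how a subclass could keep Unit1’s parts and actions while redefining part of its behavior for a different situation:


class unit2_robot extends basic_robot{

    unit2_robot() throws Exception
    {
        super();    // reuse Unit1's sensor, motor, and log construction
    }

    // Override one action for a scenario that also wants the room's area.
    public void reportColor()
    {
        super.reportColor();
        Log.println("room area = " + Situation1.TestRoom.area());
    }
}


Because performTask() calls reportColor(), the overridden version runs automatically for a unit2_robot; this kind of polymorphism is one reason we implement the softbot frame in an object-oriented language.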

For Every Softbot Frame Section There Is an Object-Oriented Code Section

If we look at BURT Translation Listing 8.2, there is a Java code section for every softbot frame section. Although some of the low-level detail is left out of our initial draft, we have shown the Java code for all the major components of the softbot frame. BURT Translation Listing 8.2 shows the actual Java code uploaded to our robot for execution.

Section 1: The Robot Parts Specification

Section 1 contains the declarations of the components that the robot has: the sensors, motors, end-effectors, and communication components. Any programmable hardware component can be specified in Section 1, especially if that component is going to be used for any of the situations or scenarios the robot is being programmed for. For our first autonomous robot example, we have a basic_robot class with modest hardware:

• Ultrasonic sensor

• Light color sensor

• Two motors

Whatever tasks we have planned for the robot have to be accomplished with these parts. Notice that in the specification in the softbot frame, we only have to list the fact that the robot has an ultrasonic sensor and a light color sensor. And if we look at the BURT Translation of Section 1, we see the declarations of the actual parts and Java code for those parts:

protected EV3UltrasonicSensor Vision;
protected HiTechnicColorSensor ColorVision;

We have the same situation with the motors. In the softbot frame, we simply specify two motors with decoders, and that eventually is translated into the following appropriate Java code declarations:

protected TetrixRegulatedMotor LeftMotor;
protected TetrixRegulatedMotor RightMotor;

In this case, the sensor and motor components are part of the leJOS library. leJOS is a Java-based firmware replacement for the LEGO Mindstorms robot kits. It provides a JVM-compatible class library that has Java classes for much of the Mindstorms robotic functionality. A softbot frame can be, and usually is best kept, platform independent. That way, particular robot parts, sensors, and effectors can be selected after it is clear what the robot actually needs to perform. For instance, the softbot frame could be implemented using Arduino and C++, and we might have code like the following, which is used by the Arduino environment:

Servo LeftMotor;
Servo RightMotor;


Tip

Look at the softbot frame as part of a design technique that allows you to design the set of instructions that your robot needs to execute in some particular scenario or situation, without having to initially worry about specific hardware components or robot libraries.


Once we have specified the softbot frame, it can be implemented using different robot libraries, sensor sets, and microcontrollers. In this book, we use Java for our Mindstorms EV3 and NXT-based robot examples as well as for the RS Media robot examples, and the Arduino platform for our C++ examples.

Section 2: The Robot’s Basic Actions

Section 2 has a simple description of the basic actions the robot can perform. We don’t want to list tasks that the robot is to perform in this section. We do want to list basic actions that are independent of any particular task and represent the fundamental capabilities of the robot, for example:

• Walk forward

• Scan left, scan right

• Lift arm, and so on

BURT Translation Listing 8.1 shows that our basic_robot has seven basic actions:

• Move forward some distance

• Move backward some distance

• Turn left some degrees

• Turn right some degrees

• Measure distance to object

• Determine color of object

• Report (Log)

After specifying the robot’s basic capabilities, we should have some hint as to whether the robot will be able to carry out the tasks it will be assigned. For example, if our scenario required the robot to climb stairs, or achieve various altitudes, the list of actions that our basic_robot can perform would appear to come up short. The robot capabilities listed in the Actions section are a good early indicator of whether the robot is up to the task.

The tasks that the robot is assigned to perform must ultimately be implemented as a combination of one or more of the basic actions listed in the Actions section. The actions are then in turn implemented by microcontroller code. Figure 8.4 shows the basic relationship between tasks and actions.


Figure 8.4 Basic relationship between tasks and actions

The Log action in Section 2 allows the robot to save its sensor values, motor settings, or other pieces of information that are accumulated during its actions and execution of tasks. There should always be a code counterpart for anything in the softbot frame, but there is not always a softbot frame component for every piece of code that is used. Take a look at the following code, shown previously in Section 2 of the Java code in Listing 8.2, containing the basic_robot() constructor:

   basic_robot() throws Exception
   {
       Vision = new EV3UltrasonicSensor(SensorPort.S3);
       Vision.enable();
       Situation1 = new situation();
       RobotLocation = new location();
       RobotLocation.X = 0;
       RobotLocation.Y = 0;
       ColorVision = new HiTechnicColorSensor(SensorPort.S2);
       Log = new PrintWriter("basic_robot.log");
       Log.println("Sensors  constructed");
       //...
   }

The softbot frame does not mention this action.

The constructor is responsible for the power-up sequence of the robot. It can be used to control what happens when the robot is initially turned on. This includes any startup procedures for any components, port settings, variable initializations, calibrations, speed settings, power checks, and so on. A lot of hardware and software startup housekeeping takes place when a robot is first powered on. This is best put in the constructor but can sometimes be put into initialization routines like the setup() procedure the Arduino programming environment uses. This level of detail can be put into the softbot frame depending on the level of autonomy design being specified. Here we omit it to keep our initial softbot frame as simple as practical.
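To make this concrete, here is a minimal sketch of the kind of housekeeping that could be added to the constructor. It assumes the motors have already been constructed; the wheel diameter, track width, and speed values are illustrative guesses rather than Unit1’s actual measurements, and DifferentialPilot and Battery come from the leJOS class library:


// Hypothetical startup housekeeping (assumed dimensions and speeds)
D1R1Pilot = new DifferentialPilot(5.6, 12.0, LeftMotor, RightMotor);
D1R1Pilot.setTravelSpeed(10);     // centimeters per second
D1R1Pilot.setRotateSpeed(45);     // degrees per second
Log.println("battery voltage = " + Battery.getVoltage());
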

Notice that the list of basic actions in Section 2 of the softbot frame is translated into implementations in the Actions code section. For example:

Softbot Frame Section 2:
Determine color of object

is ultimately translated into the following basic_robot Java method in Section 2 as a full implementation:

public int getColor()
{
    return(ColorVision.getColorID());
}

Here the getColor() method uses the ColorVision object to scan the color of an object and return its color ID. This code is the implementation of the getColor() instruction. Notice that the softbot frame descriptions and the names of the Java code member functions and methods convey the same idea, for example:

Determine Color of Object and getColor()

This is not a coincidence. Another important use of the softbot frame is to give the robot programmer hints, indicators, and ideas for how to name routines, procedures, object methods, and variables. Remember the robot’s ROLL model. We try to keep level 3 method, routine, and variable names as close as practical to the corresponding level 4 and level 5 descriptions. By comparing each section in the BURT Translation with its Java code equivalent, we can see how this can be accomplished.
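One basic action from Section 2, Measure distance to object, has no method in this first-cut listing. The following is a minimal sketch of how it might be implemented inside basic_robot with the leJOS sample API; the name getDistance() and the conversion to centimeters are our own assumptions, not code from Listing 8.2:


// Hypothetical implementation of "Measure distance to object" for
// basic_robot; requires lejos.robotics.SampleProvider. The EV3
// ultrasonic sensor reports distance in meters, so we convert to cm.
public int getDistance()
{
    SampleProvider Distance = Vision.getDistanceMode();
    float [] Sample = new float[Distance.sampleSize()];
    Distance.fetchSample(Sample, 0);
    return((int)(Sample[0] * 100));
}
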

Section 3: The Robot’s Situation-Specific Tasks

Whereas the Actions section is used to describe the robot’s basic actions and capabilities independent of any particular task or situation, the Tasks section is meant to describe tasks that the robot will execute for a particular scenario or situation.


Note

Actions are robot specific; tasks are situation/scenario specific.


In our sample scenario, the robot’s tasks involve approaching some object and reporting its color. The softbot frame lists the tasks as:

Locate the object in the room, determine its color, and report.

The BURT Translation shows the actual implementation for the three tasks:

Move to object
Determine its color
Report

Tasks are implemented using the basic_robot actions described in Section 2. Listing 8.3 shows the implementation of moveToObject() shown in the BURT Translation.

Listing 8.3 Definition of moveToObject() Method

BURT Translation Output: Java Implementation


     public void moveToObject() throws Exception
     {
         travel(Situation1.TestRoom.TestObject.Location.X);
          waitUntilStop(Situation1.TestRoom.TestObject.Location.X);
         rotate(90);
         waitForRotate(90);
         travel(Situation1.TestRoom.TestObject.Location.Y);
          waitUntilStop(Situation1.TestRoom.TestObject.Location.Y);
     }


Notice that the travel() and rotate() actions were defined in the Actions section and are used to help accomplish the tasks.

Synchronous and Asynchronous Robot Instructions

BURT Translation Listing 8.3 contains two other actions that we haven’t discussed yet: waitUntilStop() and waitForRotate(). In some robot programming environments, a list of instructions can be considered totally synchronous (sometimes referred to as blocking). That is, the second instruction won’t be executed until the first instruction has completed. But in many robot programming environments, especially for robots consisting of many servos, effectors, sensors, and motors that can operate independently, the instructions may be executed asynchronously (sometimes referred to as nonblocking). This means that the robot may try to execute the next instruction before the previous instruction has finished. In Listing 8.3, waitUntilStop() and waitForRotate() force the robot to wait until the travel() and rotate() commands have completely executed. The moveToObject() method does not take any arguments, so the question might be asked: move to what object? Where? Look at the arguments of the travel() calls in Listing 8.3. They tell the robot exactly where to go:

Situation1.TestRoom.TestObject.Location.X
Situation1.TestRoom.TestObject.Location.Y

What are Situation1, TestRoom, and TestObject? Our approach to programming autonomous robots requires that the robot be instructed to play some specific role in some scenario or situation. The softbot frame must contain a specification for that scenario and situation.
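Before we look at that specification, note that neither waitUntilStop() nor waitForRotate() is defined in our first-cut listing. The following is a minimal sketch of what such helpers could look like; it assumes the DifferentialPilot is doing the driving and simply polls its isMoving() method. (In leJOS, travel() and rotate() block by default; helpers like these only matter when the nonblocking forms, such as travel(distance, true), are used.) The parameters are kept only to match the call sites in Listing 8.3:


// Hypothetical blocking helpers (not shown in the first-cut listing).
public void waitUntilStop(int TargetCoordinate) throws Exception
{
    while(D1R1Pilot.isMoving()){   // wait for the current travel to finish
        Thread.sleep(50);
    }
}

public void waitForRotate(int Degrees) throws Exception
{
    while(D1R1Pilot.isMoving()){   // wait for the current rotation to finish
        Thread.sleep(50);
    }
}
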

Section 4: The Robot’s Scenario and Situation

Section 4 of our softbot frame specifies the robot’s scenario as:

Unit1 is located in a small room containing a single object. Unit1 is playing the
role of an investigator and is assigned the task of locating the object, determining
its color, and reporting that color.

The idea of specifying a particular situation for the robot to perform a particular task and play a particular role is at the heart of programming robots to act autonomously. Figure 8.5 shows three important requirements for autonomous robot program design.


Figure 8.5 Three important requirements for autonomous robot program design

BURT Translation Listing 8.2 shows the declarations necessary for our situation:

situation Situation1;
location RobotLocation;

However, it doesn’t show the actual implementation. First, it is important to note the relationship between the basic_robot class and the robot’s situation. For a robot to act autonomously, every robot class has one or more situation classes as part of its design. We can model situations using object-oriented languages such as C++ and Java. So the basic_robot class is used to describe our robot in software, and a situation class is used to describe our robot’s scenario or situation in software. We describe the process of capturing the details of a scenario/situation in a class more fully in Chapter 9, “Robot SPACES.” Let’s take a first look at a simple definition of our robot’s situation, shown in Listing 8.4.

Listing 8.4 Definition of situation Class

BURT Translation Output: Java Implementation


class situation{
   public room TestRoom;
   public situation()
   {
       TestRoom = new room();
   }
}


The situation class consists of a single data element of type room named TestRoom and a constructor that creates an instance of room. So BURT Translation Listing 8.4 shows us that our basic_robot class has a situation that consists of a single room. But what about the object that the robot is supposed to approach and determine the color of? It’s not a direct part of the situation; the object is located in the room. So we also have a room class, shown in BURT Translation Listing 8.5.

Listing 8.5 The room Class

BURT Translation Output: Java Implementation


class room{
   protected int Length = 300;
   protected int Width = 200;
   protected int Area;
   public something TestObject;

   public  room()
   {
       TestObject = new something();
   }

   public int area()
   {
       Area = Length * Width;
       return(Area);
   }

   public int length()
   {
       return(Length);
   }

   public int width()
   {
       return(Width);
   }
}


The basic_robot has a situation. The situation consists of a single room. The room has a Length, Width, Area, and TestObject. The robot can find out information about the TestRoom by calling the area(), length(), or width() methods. The TestObject is part of the room class and is implemented as an object of type something. BURT Translation Listing 8.6 shows the definitions of the something and location classes.

Listing 8.6 Definitions of the something and location Classes

BURT Translation Output: Java Implementation


class location{
   public int X;
   public int Y;
}

class something{
   public int Color;
   public location Location;
   public something()
   {
       Location = new location();
       Location.X = 20;
       Location.Y = 50;
   }
}


These classes allow us to talk about the object and its location. Notice in Listing 8.6 that a something object has a color and a location. In this case, we know where the object is located, at coordinates (20, 50), but we do not know what color the object is. It is Unit1’s task to identify and report the color using its color sensor. So the situation, room, something, and location classes shown in BURT Translation Listings 8.4 through 8.6 allow us to describe the robot’s scenario/situation as Java code. Once we describe the robot, its tasks, and its situation in code, we can direct the robot to execute that code without the need for further human intervention or interaction.

The Java code in BURT Translation Listing 8.7 shows us how the robot will implement its tasks for the given situation.

Listing 8.7 BURT Translation for moveToObject(), identifyColor(), reportColor(), and performTask() Methods

BURT Translation Output: Java Implementation


public void moveToObject() throws Exception
{
    travel(Situation1.TestRoom.TestObject.Location.X);
     waitUntilStop(Situation1.TestRoom.TestObject.Location.X);
    rotate(90);
    waitForRotate(90);
    travel(Situation1.TestRoom.TestObject.Location.Y);
     waitUntilStop(Situation1.TestRoom.TestObject.Location.Y);
}

public void identifyColor()
{
    Situation1.TestRoom.TestObject.Color = getColor();
}

public void reportColor()
{
    Log.println("color = " + situation1.TestRoom.TestObject.Color);
}

public void performTask() throws Exception
{
    moveToObject();
    identifyColor();
    reportColor();
}


To have the robot execute its task autonomously, all we need to do is issue the commands:

      public static void main(String [] args)  throws Exception
      {
           basic_robot Unit1 = new basic_robot();
           Unit1.performTask();
      }

and Unit1 executes its task.

Our First Pass at Autonomous Robot Program Designs

Having Unit1 approach an object and report its color was our first pass at an autonomous robot design. In this first pass, we presented an oversimplified softbot frame, scenario, and robot task so that you have some idea of the basic steps and parts of autonomous robot programming. But these oversimplifications leave many questions: What if the robot can’t find the object? What if the robot goes to the wrong location? What happens if the robot’s color sensor cannot detect the color of the object? What if there is an obstacle between the robot’s initial position and the position of the object? How do we know the robot’s initial position? In the next couple of chapters, we add more detail to our simplified softbot frame, and we expand on techniques for representing situations and scenarios using classes.

We ask Unit1 not only to report the object’s color but also to retrieve the object. And we take a closer look at scenario and situation modeling and how they are implemented in Java or C++ environments. Keep in mind that all our designs are implemented as Arduino, RS Media, and NXT Mindstorms-based robots. Everything you read can be and has been successfully applied in those environments.

What’s Ahead?

SPACES is an acronym for Sensor Precondition/Postcondition Assertion Check of Environmental Situations. In Chapter 9, we discuss how SPACES can be used to verify that it’s okay for the robot to carry out its autonomous tasks.
