11. Putting It All Together: How Midamba Programmed His First Autonomous Robot

Robot Sensitivity Training Lesson #11: A robot needs power to maintain control.

We started our robotic journey with poor, unfortunate Midamba who found himself marooned on a deserted island. Let’s review Midamba’s predicament.

Midamba’s Initial Scenario

When we last saw Midamba in our robot boot camp, his electric-powered Sea-Doo had run low on battery power. Midamba had a spare battery, but the spare had sat for a long time and had started to leak. The battery had corrosion on its terminals and was not working properly.

With one dying battery and one corroded battery, Midamba had just enough power to get to a nearby island where he might possibly find help. Unfortunately for Midamba, the only thing on the island was an experimental chemical facility totally controlled by autonomous robots.

Midamba figured if there was some kind of chemical in the facility that could neutralize the corrosion, he could clean his spare battery and be on his way. There was a front office to the facility, but it was separated from the warehouse where most of the chemicals were stored, and the only entrance to the warehouse was locked.

Midamba could see robots on the monitors moving about in the warehouse transporting containers, marking containers, lifting objects, and so on, but there was no way to get in. All that was in the front office was a computer, a microphone, and a manual titled How to Program Autonomous Robots by Cameron Hughes and Tracey Hughes. There were also a few robots and some containers and beakers, but they were all props. Perhaps with a little luck Midamba could find something in the manual that would allow him to instruct one of the robots to find and retrieve the chemical he needed.

Midamba Becomes a Robot Programmer Overnight!

Midamba finished reading the book and understood all he could. Now all that was left was to somehow get the robots to do his bidding. He watched the robots work and then stop at what appeared to be some kind of program reload station. He noticed each robot had a bright yellow and black label: Unit1, Unit2, and so on. Also, each robot seemed to have different capabilities. So Midamba quickly sketched out his situation.

Midamba’s Situation #1

“I’m in what appears to be a control room. The robots are in a warehouse full of chemicals, and some of those chemicals can probably help me neutralize the corrosion on my battery. But I’m not quite sure what kind of corrosion I have. If the battery is alkaline-based, I will need some kind of acidic chemical to clean the corrosion. If the battery is nickel-zinc or nickel-cadmium-based, I will need some kind of alkaline or base chemical to clean the corrosion. If I’m lucky there might even be some kind of battery charger in the warehouse. According to the book, each of these robots must have a language and a specific set of capabilities. If I could find out what each robot’s capabilities are and what language each robot uses, maybe I could reprogram some of them to look for the material I need. So first I have to find out what languages the robots use and second find out what capabilities the robots have.”

As Midamba viewed each robot, he recognized grippers and end-effectors he had seen in the book. He noticed that some robots were using what must be distance-measuring sensors, and some robots appeared to be sampling or testing chemicals in the warehouse. Other robots seemed to be sorting containers based on colors or special markings. But that wasn’t enough information. So Midamba ransacked the control room looking for more information about the robots, and as luck would have it, he hit the jackpot! He found the capability matrix for each robot. After a quick scan he was especially interested in the capabilities of the robots labeled Unit1 and Unit2. Table 11.1 is the capability matrix for Unit1 and Unit2.


Table 11.1 Capability Matrix for Unit1 and Unit2

The robots had ARM7, ARM9, and Arduino UNO controllers. Midamba could see that Unit2 had distance and color sensors. Unit1 had robotic arms, cameras, and some kind of chemical measurement capabilities.

He used the matrix to see what languages the robots could be programmed in and discovered that most of the robots could use languages such as Java and C++. Now that Midamba knew what languages were involved and what the robots’ basic capabilities were, all he needed to do was put together some programming that would allow the robots to get those batteries working!

Midamba quickly read the book and managed to get some of the basics for programming robots to execute a task autonomously. As far as he could see, the basic process boiled down to five steps:

1. Write out the complete scenario that includes the robot’s role in simple-to-understand language, making sure that the robots have the capabilities to execute that role.

2. Develop a ROLL model to use for programming the robot.

3. Make an RSVP for the scenario.

4. Identify the robot’s SPACES from the RSVP.

5. Develop and upload the appropriate STORIES to the robot.

Although he didn’t understand everything in the book, these did seem to be the basic steps. So we will follow Midamba and these five steps and see where it gets us.

Step 1. Robots in the Warehouse Scenario

In a nutshell, Midamba’s solution to the problem requires a robot to give him a basic inventory of what kind of chemicals are in the warehouse. After he gets the inventory, he can determine which (if any) chemicals could be useful in neutralizing the battery corrosion. If there are potentially useful chemicals, he needs to verify that by having a robot analyze the chemical and then in some way have the robot deliver the chemical to him so it can be used. The basic scenario can be summarized using the following simple language.

Midamba’s Facility Scenario #1

Using one or more robots in the facility, scan the facility and return an inventory of the containers that are found. Determine whether any of the containers hold substances that can be used to neutralize and clean the kind of corrosion that can occur with alkaline or nickel-based batteries. If such containers are found, retrieve them and return them to the front office.

This type of high-level summary is the first step in the process. It clarifies what the main objectives are and what role the robot(s) will play in accomplishing those objectives. Once you have a high-level summary of the scenario, it is useful to consult an existing robot capability matrix, or construct one, to see immediately whether the robots meet the REQUIRE aspects of the scenario. If they don’t, the robots must be upgraded, or you must stop at this point, recognizing that they will not be able to complete the tasks. The whole point of writing this type of description is to make it clear what the goals are and exactly what to expect the robot to do. Ultimately, the goal in this situation is to give the robot simple instructions that accomplish the task. For example:

Robot, go find something that will clean the corrosion off the battery.

In fact, that’s precisely the command Midamba wants to give the robot. But robots don’t yet understand language at this level. So the ROLL model is used as a translation mechanism to translate between the human language used to express the initial instructions and the language the robot will ultimately use to execute the instructions.

Make the high-level summary of the situation as detailed as necessary to capture all major objects and actions. The simplified overview does not have all the details; in practice, the description is refined until it is complete. The description is complete when it supplies enough detail to describe the robot’s role and how the robot will execute that role within the scenario. Now that Midamba understands exactly what he wants the robot to do, he should extract the vocabulary used in the scenario. For example, in Facility Scenario #1, some of the vocabulary is:

• Scan

• Facility

• Inventory

• Return

• Such

• Containers

• Retrieve, and so on

Remember from Chapter 2, “Robot Vocabularies,” these types of words represent the scenario and situation vocabulary at the human level and must ultimately be translated into vocabulary that the robot can process. Identifying and removing ambiguity is one of the benefits that can be realized when spelling out the complete scenario.

What does this phrase mean?

If such containers are found retrieve them.

Can it be a little clearer?

Scan the facility.

Is this clear enough to begin the process of converting it to robot language, or is more detail needed? These are subjective questions, and the answers will differ depending on your experience with converting between human language and robot language.

Breaking Down the Scenario into Situations

In some cases it’s easier to first break down a scenario into its sequence of situations and then figure out the particulars for each situation. For example, in Facility Scenario #1, there are several initial situations:

• Robots must report their location within the facility.

• One robot must scan the facility to see what chemicals are available and then report what is found.

• One robot must determine whether one of the chemicals is a match for the job.

• If useful chemicals are found, one of the robots must retrieve the chemicals.

• Code must be uploaded to the robot to change its current programming.

Once you have the scenario broken down into situational components, refine each situation and then identify an appropriate set of commands, variables, and actions (i.e., a vocabulary) for each one. Let’s look at the situation refinement for the Facility Scenario. Table 11.2 shows the initial description and the first cut at refinement.


Table 11.2 Initial Situation Refinement

Step 2. The Robot’s Vocabulary and ROLL Model for Facility Scenario #1

Conceptually, if the robot can be instructed to carry out its task in each situation, it will be able to execute its role in the scenario in its entirety. Now that there is a first cut at the scenario/situation breakdown, the robot’s initial level 5 through level 7 ROLL model should be easier to develop. Remember that the robot’s vocabulary at this level is a compromise between human natural language and the robot’s level 1 and level 2 microcontroller language.


Tip

The situations taken together make up the scenario.


Table 11.3 is an initial (and partial) draft of the robot’s level 5 through level 7 vocabulary.


Table 11.3 Draft of the Robot’s Level 5 Through Level 7 Vocabulary for Facility Scenario #1

Midamba will need a more detailed vocabulary as he progresses, but this is a good start. Identifying a potential robot level 5 to level 7 vocabulary at this point serves many purposes. First, it will help him complete the RSVP process for programming the robot. These terms can be used in the flowcharts, statecharts, and area descriptions.

Second, each vocabulary term will ultimately be represented by a variable, class, method, function, or set of procedures in the robot’s STORIES code component. So the robot’s initial vocabulary gives Midamba a first look at important aspects of the program. Finally, it will help clarify precisely what the robot is to do by removing ambiguity and fuzzy ideas. If the robot’s instructions and role are not clear, the robot cannot reasonably be expected to execute its role by following those instructions.
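For example, here is a minimal sketch (our own illustration with hypothetical names, not one of the book’s BURT listings) of how vocabulary terms such as scan, inventory, and retrieve might eventually surface as softbot methods:

// Hypothetical sketch: scenario vocabulary surfacing as method names.
// The signatures are illustrative only; the real methods belong to the
// softbot and STORIES classes developed later in this chapter.
interface FacilityVocabulary {
    void scanArea(String areaName) throws Exception;   // "scan the facility"
    void reportInventory() throws Exception;           // "return an inventory"
    boolean containerFound();                          // "if such containers are found"
    void retrieveContainer() throws Exception;         // "retrieve them"
    void returnToFrontOffice() throws Exception;       // "return them to the front office"
}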

Step 3. RSVP for Facility Scenario #1

Midamba is in a front office, or observational room, for what appears to be a warehouse. In the process of ransacking the office looking for details on the robots, Midamba came across several floorplan layouts for the facility. He noticed that there were sections marked chemicals in the northwest corner of the facility as well as a section of chemicals in the southeast corner. A practical first step of any RSVP is to obtain or generate a visual layout of the area(s) where the robot is to perform its tasks.


Tip

All the RSVP components and capability matrices for this book were generated with LibreOffice. The area layouts, robot POVs, flowcharts, and statecharts were generated with LibreOffice Draw. The scenario and situation descriptions were written in LibreOffice Writer, and we used LibreOffice Calc to lay out the capability matrix for each robot.


Figure 11.1 is a visual layout for the Facility Scenario #1 where the robots are located.


Figure 11.1 Facility Scenario #1 Visual Layout


Note

Each component of the RSVP plays an important role in programming a robot to execute a new task or set of tasks. Once a visual layout of the area and objects the robot is to interact with is developed, one of the most critical steps in the RSVP process is to convert the basic visual layout of the area into what we call a robot point of view (POV) diagram.


Visual Layouts of a Robot POV Diagram

To see what we mean, let’s consider the robot’s sensors. If the robot has only an ultrasonic sensor and perhaps a color sensor, it can sense things only through distance and reflected light. This means the robot can only interact with objects based on their distance or color.

Yes, the robot’s interaction with its environment is limited to its sensors, end-effectors, and actuators. One of the primary purposes of generating a layout of the area and objects is to visualize it from your perspective so that you can later represent it from the robot’s perspective.

So the robot’s POV diagram of the visual layout represents everything the robot interacts with from the perspective of the robot’s sensors and capabilities. If all the robot has is a magnetic field sensor, the diagram has to represent everything as some aspect of a magnetic field. If the robot can only take steps in increments of 10 cm, then distances that the robot has to travel within the area have to be represented as some number relative to 10 cm.

If a robot has to retrieve objects made of different materials and different sizes and the robot’s end-effectors only have weight, width, pressure, and resistance parameters, all the objects that the robot will interact with in the area have to be described in terms of weight, width, pressure, and resistance. The generation of the visual layout is basically a two-step process:

1. Generate a visual layout from the human perspective.

2. Convert or mark that layout with the robot’s POV for everything in the visual layout.

These steps are shown in Figure 11.2.


Figure 11.2 Layout conversion from human to robot’s POV
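To make the robot’s POV concrete in code, the following is a small illustrative sketch (ours, not from the book’s listings) of an object and a distance expressed purely in terms a robot with a weight-limited gripper, a color sensor, and a 10 cm step size can work with:

// Illustrative only: an object and a distance described strictly in
// terms the robot can sense and act on.
class PovObject {
    float weightGrams;   // compared against the arm's lifting limit
    float widthCm;       // compared against the gripper's opening
    int colorId;         // a raw color sensor value, not a human color name

    PovObject(float weightGrams, float widthCm, int colorId) {
        this.weightGrams = weightGrams;
        this.widthCm = widthCm;
        this.colorId = colorId;
    }
}

class PovDistance {
    static final int STEP_CM = 10;        // the robot moves in 10 cm increments

    static int steps(int distanceCm) {    // a distance becomes a step count
        return (distanceCm + STEP_CM - 1) / STEP_CM;
    }
}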

Once a robot POV diagram has been generated and the robot’s action flowchart has been constructed, specifying the robot’s SPACES is easier. Figure 11.3 takes the initial visual layout and marks the areas recognizable from the robot’s POV.


Figure 11.3 The initial robot POV diagram

Notice in Figure 11.3 that there are no longer areas marked chemicals on the layout because the robot currently has no programming designed to identify chemicals. The areas are marked with dotted lines indicating the robot’s POV. These areas are marked because that’s the only information on the original floorplan that can potentially be recognized by the robot’s sensors at this point.

Midamba’s Facility Scenario #1 (Refined)

So one of the initial useful situations that can be implemented as a program is the following: “Using a robot that is mobile and that can navigate the entire facility, have the robot travel to each area in the facility and systematically take a series of photos of each area that has containers of chemicals. Once the robot has traveled to each area and taken the necessary photos, it should use a reporting procedure to make the photos available.”

Graphical Flowchart Component of the RSVP

If Midamba implements this situation, he will have more information about the warehouse areas. Since Unit2 has a camera attached, Unit2 will be put to work. There is already a rough layout of the area (refer to Figure 11.3); what is needed now is a sequence of instructions that Unit2 will execute to accomplish the task. Figure 11.4 is an excerpt from the flowchart (the second component of the RSVP) of the actions that Unit2 should perform.


Figure 11.4 A flowchart excerpt of Unit2’s tasks

Figure 11.4 is a simplification of the complete flowchart constructed for Unit2, but it does show the main processing and how Midamba should approach some of the instructions the robot has to execute. From the capability matrix, Unit2 is a bipedal robot equipped with infrared sensors, touch sensors, and a camera. It is driven by a 200 MHz ARM9 and a custom 16-bit processor, which have 64 MB and 32 MB of memory, respectively. Figure 11.5 is a photo of Unit2.


Figure 11.5 Photo of Unit2 (RS Media)

The system check is implemented in the constructor. Notice in Figure 11.4 that the SPACES requirement will shut Unit2 down if the system check in the constructor does not pass. As Figure 11.4 shows, the main task of Unit2 is to take photos of what’s actually in the warehouse area. Unit2 travels to the specified location, turns to face the area where chemicals are supposed to be stored, and takes a photo. Unit2 runs Linux and uses a Java library for the programming. Although a Java library is used in this instance to program the RS Media, there is a C development toolchain available for the RS Media that can be downloaded from rsmediadevkit.sourceforge.net.

This is the Java command used to instruct Unit2 to take a photo:

System.out.println(Unit2.CAMERA.takePhoto(100));

Unit2 has a component named CAMERA, and CAMERA has a method named takePhoto(). The argument 100 specifies how long the robot pauses before actually taking the snapshot. The photo is captured in standard JPEG format. On Unit2, the output of System.out.println() has been connected to the root of the SD card, so the photo is stored on the robot’s SD card and can be retrieved from there.

Notice in the flowchart in Figure 11.4 that the robot’s head is adjusted to take photos at different levels. The task that Unit2 performs is often part of the mapping phase of a robotics project. In some cases, the area that the robot will perform in has already been mapped, or there may be floorplans or blueprints that describe the physical area. In other cases, the robot(s) may perform a preliminary surveillance of the area to provide the programmer with enough information to program the primary tasks. In either case, scenario/situation programming requires a detailed understanding of the environment where the robot will operate.
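The complete Unit2 program is available with the book’s source code; purely to illustrate the flow in Figure 11.4, a survey routine might be organized as follows. The moveTo(), turnTowardShelf(), and HEAD.tilt() calls are hypothetical placeholders for whatever navigation and head-positioning methods the RS Media Java library provides; only CAMERA.takePhoto() is the documented call shown earlier.

// A sketch only: every call except CAMERA.takePhoto() is a hypothetical
// placeholder, not a documented RS Media API.
class SurveyTask {
    // Warehouse areas that may hold chemical containers (x, y coordinates).
    static final int[][] WAYPOINTS = { {180, 125}, {45, 195} };

    public static void surveyWarehouse() throws Exception {
        for (int[] point : WAYPOINTS) {
            Unit2.moveTo(point[0], point[1]);      // travel to the area
            Unit2.turnTowardShelf();               // face the shelving
            for (int tilt = 0; tilt <= 30; tilt += 15) {
                Unit2.HEAD.tilt(tilt);             // photograph at several levels
                // The photo is written to the SD card as described above.
                System.out.println(Unit2.CAMERA.takePhoto(100));
            }
        }
    }
}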

Since Midamba couldn’t physically go out into the warehouse area, he needed some idea of what was out there. Unit2 provided Midamba additional information about the warehouse through the photos that it took and uploaded to the computer in the front office. Now Midamba could clearly see that at the floor level there were glass containers along the northwest corner and southeast corner of the building. The containers seemed to be partially filled with liquids of some type. The containers in the northwest corner had blue labels and some kind of geometric figure, and the containers in the southeast corner had yellow labels and geometric figures.

Luckily for Midamba, the containers had no lids and would provide easy access for Unit1’s chemical sensors. Midamba also noticed in the northwest corner there appeared to be some kind of electronics on shelves right above the chemicals. If his luck held out, one of the components might be some kind of battery charger. So now that Midamba had a more complete picture of the area, all he needed to do was plan a sequence of instructions for Unit1. This would involve investigating and analyzing the chemicals and the electronic components and retrieving anything that proved to be useful. Figure 11.6 shows a refined robot POV diagram that includes information Unit2 retrieved through photos.


Figure 11.6 A refined POV diagram of the warehouse

Figure 11.6 shows an area that the robot will be able to navigate and interact with based on distance, color, container sizes, container contents, compass, locations, and level. Notice that the containers, one weighing 119 grams and the other 34 grams, are within the weight capability of Arm1. The diameter of each container, 10 cm and 6 cm, is within the range of Arm1’s end-effector grip.

Unit1’s Tools for the Job

Based on the equipment listed in the capability matrix in Table 11.1, Unit1 can use:

• EV3 ultrasonic sensor to measure distance

• Modified Pixy CMUcam5 Arduino camera for object recognition based on color, location, and shape

• Vernier pH sensor to analyze the liquids

• Vernier magnetic field sensor in an attempt to find a battery charger

• PhantomX Pincher robotic arm (Arm2) to manipulate the sensors

• Tetrix robotic arm (Arm1) to retrieve any useful chemicals or electronic components

Figure 11.7 is the flowchart of the instructions that make up the task Unit1 is to execute.


Figure 11.7 The flowchart of Unit1’s tasks


Note

Figure 11.7 is a simplification of Unit1’s set of instructions. The actual diagram spans 10 pages and contains far more detail. The complete flowcharts for Figure 11.4 and Figure 11.7 are available (along with the complete designs and source code for all the examples in this book) at www.robotteams.org.


Here, we highlight some of the more basic instructions and decisions that the robot has to make. We want to bring particular attention to several of the SPACES checks contained in Figure 11.7:

• Did the robot pass the system check?

• Is the robot at the correct location?

• Is the pH within range?

• Has the blue object been located?

These types of precondition and postcondition (SPACES) checks are at the heart of programming robots to execute tasks autonomously. We show a few of these decision points in Figure 11.7 for exposition purposes. There were actually dozens for this application. Depending on the application, an autonomous robot program might have hundreds of precondition/postcondition checks.

These decision points represent the critical areas in the robot’s programming. If they are not met, it is simply not possible for the robot to execute its tasks. So an important part of the RSVP flowchart component is to highlight the SPACES checks and critical decision points.

Robot “Holy Wars,” conflicting schools of robot design and programming, and robotics engineering careers made and destroyed all center on approaches to handling these kinds of decisions made within an autonomous robot’s programming. Table 11.4 generalizes some of the most challenging areas and some of the common approaches to them.


Table 11.4 Challenging Areas of Autonomous Robot Programming

The approaches in Table 11.4 can generally be divided into deliberative and reactive. When we use the term deliberative in this book, we mean programming explicitly written by hand (by nonautomatic means). We use the term reactive to mean instructions learned through machine learning techniques, various puppet-mode approaches, and bio-inspired programming techniques. And of course, there are hybrids of these two basic approaches. The areas listed in Table 11.4 are focus points of robot programming, where handling the Sensor Precondition/Postcondition Assertion Check of Environmental Situations (SPACES) determines whether the robot can accomplish its tasks.


Tip

There is no one-size-fits-all rule for handling pre/postconditions and assertions.


Everything is dependent on the robot build, the scenario/situation that the robot is in, and the role that the robot will play in the scenario. However, we offer two useful techniques here:

1. At every decision point, use multiple sensors (if possible) for situation verification.

2. Check those sensors against the facts that the robot has established and stored for the situation.

The first technique should involve different types of sensors if possible. For instance, if the robot is supposed to acquire a blue object, you might use a color sensor or camera that identifies the fact that the object is blue. A touch or pressure sensor can be used to verify the fact that the robot actually has grabbed the object, and a compass can be used to determine the robot’s position.

The second technique involves using the facts the robot has established about the environment. These facts are also called its knowledgebase. If the robot’s facts say that the object is supposed to be located at a certain GPS location, is blue, and weighs 34 grams, these facts should be compared against the collective sensor measurements.

Collectively, we call these two techniques PASS (propositions and sensor states). The propositions are statements or facts that the robot has established to be true about the environment (usually from its original programming or ontology), and the sensor states are measurements that have been taken by the robot’s sensors and end-effectors. Applying PASS to a situation does not necessarily guarantee that the robot is correct in its localization, navigation, object recognition, and so on, but it does add one more level of confidence to the robot’s autonomy.
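As a minimal sketch of PASS (our own illustration, not one of the BURT listings), a check that compares a color reading, a weight reading, and a touch sensor state against the robot’s stored facts might look like this:

// Illustration only: compare live sensor states against the propositions
// (stored facts) the robot holds about the object it should have acquired.
class ObjectFacts {
    int expectedColorId;         // proposition: the container is blue
    float expectedWeightGrams;   // proposition: the container weighs 34 grams

    ObjectFacts(int colorId, float weightGrams) {
        expectedColorId = colorId;
        expectedWeightGrams = weightGrams;
    }
}

class PassCheck {
    // The readings are passed in; how they are obtained depends on the robot.
    static boolean verifyAcquisition(ObjectFacts facts, int colorReading,
                                     float weightReadingGrams, boolean touchPressed) {
        boolean colorMatches = (colorReading == facts.expectedColorId);
        boolean weightMatches =
            Math.abs(weightReadingGrams - facts.expectedWeightGrams) < 5.0f;
        // Every proposition must agree with the sensor states for the check to pass.
        return colorMatches && weightMatches && touchPressed;
    }
}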

When looking at the decision points and SPACES in Figure 11.7, keep in mind that in practice it is helpful to consult as many relevant sensors as practical, as well as the robot’s knowledgebase, at every critical decision point. In our approach to storing information about the scenario in the robot, the facts are stored in the STORIES component introduced in Chapter 10, “An Autonomous Robot Needs STORIES.”

State Diagram Component of the RSVP

The state diagram for the RSVP is made easier by the scenario/situation breakdown. Figure 11.8 is the scenario state diagram for Facility Scenario #1.


Figure 11.8 The Facility Scenario #1 state diagram.

Each circle in Figure 11.8 represents a major situation in the scenario. The robot cannot effectively proceed to the next situation until the SPACES in the current situation are successfully met. We recommend using the PASS technique at each of the SPACES. One situation ordinarily leads to the next, and the situations are usually sequentially dependent, so the robot does not have the option to complete some and not complete others. The state diagram gives us a clear picture of how the robot’s autonomy must progress through the scenario as well as what all the major succeed/fail points will be.

Midamba’s STORIES for Robot Unit1 and Unit2

Once Midamba was done with the RSVP for his predicament, all he needed to do was develop STORIES for Unit1 and Unit2, upload them, and hopefully he would be on his way. Recall from Chapter 10 one of the major functions of the STORIES component is to capture a description of the situation, objects in the situation, and actions required in the situation as object-oriented code that can be uploaded to the robot. Here we highlight six of the object types:

• Scenario/situation object

• pH sensor object

• Magnetic field sensor object

• Robotic arm object

• Camera object

• Bluetooth/serial communication object

Midamba had to build each of these objects with either Arduino C/C++ code/class libraries or leJOS Java code/class libraries depending on the sensors used.1 All these objects are part of the STORIES component code that had to be uploaded to Unit1 and Unit2 for the robots to execute our Facility Scenario. These STORIES objects are an improvement over our extended scenario objects discussed in Chapter 10. One of the first improvements is the addition of the scenario class and object shown in BURT Translation Listing 11.1.

BURT Translation Listing 11.1 The scenario Class

BURT Translations Output: Java Implementations


//Scenario/Situation: SECTION 4
  269       class scenario{
  270          public ArrayList<situation> Situations;
  271          int SituationNum;
  272          public scenario(softbot Bot)
  273          {
  274              SituationNum = 0;
  275              Situations = new ArrayList<situation>();
  276              situation Situation1 = new situation(Bot);
  277              Situations.add(Situation1);
  278
  279          }
  280          public situation nextSituation()
  281          {
  282              if(SituationNum < Situations.size())
  283              {
  284                 situation Current = Situations.get(SituationNum);
  285                 SituationNum++;
  286                 return(Current);
  287              }
  288              else{
  289
  290                      return(null);
  291              }
  292          }
  293
  294       }


In Chapter 10, we introduced a situation class. A situation may have multiple actions, but what if we have multiple situations? Typically, a scenario can be broken down into multiple situations. Recall from Figure 11.8 that our state diagram divided our warehouse scenario into multiple situations. Recall our situation refinements from Table 11.2. Line 270 in Listing 11.1 is as follows:

  270       public ArrayList<situation> Situations;

This shows that our scenario class can have multiple situations. Using this technique, we can upload complex scenarios, consisting of multiple situations, to the robot. In fact, for practical applications the scenario object is the primary object declared as belonging to the softbot; all other objects are components of the scenario object. The ArrayList shown on line 270 of Listing 11.1 can contain multiple situation objects. The scenario object accesses the next situation by using the nextSituation() method. In this case, we simply increment the index to get the next situation, but this need not be the case. The nextSituation() method can be implemented using whatever selection criterion is appropriate for retrieving the object from the Situations ArrayList. The short sketch below shows how a softbot might drive a scenario through its situations; BURT Translation Listing 11.2 is then an example of one of our situation classes.
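This driver is our own illustration rather than one of the book’s BURT listings; it assumes a softbot object constructed as in Chapter 10 and uses the numTasks() and nextAction() methods defined in Listing 11.2.

class ScenarioDriver {
    // Illustration only: step through the scenario's situations and let
    // each situation run its actions in order.
    public static void runFacilityScenario(softbot unitBot) throws Exception {
        scenario facilityScenario = new scenario(unitBot);
        situation current = facilityScenario.nextSituation();
        while (current != null) {
            for (int n = 0; n < current.numTasks(); n++) {
                current.nextAction();             // runs the next action's task()
            }
            current = facilityScenario.nextSituation();
        }
    }
}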

BURT Translation Listing 11.2 A situation Class

BURT Translations Output: Java Implementations


//Scenario/Situation: SECTION 4
  228       class situation{
  229
  230          public room Area;
  231          int ActionNum = 0;
  232          public ArrayList<action>  Actions;
  233          action RobotAction;
  234          public situation(softbot  Bot)
  235          {
  236              RobotAction = new action();
  237              Actions = new ArrayList<action>();
  238              scenario_action1 Task1 = new scenario_action1(Bot);
  239              scenario_action2 Task2 = new scenario_action2(Bot);
  240              scenario_action3 Task3 = new scenario_action3(Bot);
  241              scenario_action4 Task4 = new scenario_action4(Bot);
  242              Actions.add(Task1);
  243              Actions.add(Task2);
  244              Actions.add(Task3);
  245              Actions.add(Task4);
  246              Area = new room();
  247
  248          }
  249          public void nextAction() throws Exception
  250          {
  251
  252              if(ActionNum < Actions.size())
  253              {
  254                 RobotAction = Actions.get(ActionNum);
  255                 RobotAction.task();
  256                 ActionNum++;
  257              }
  258
  259
  260          }
  261          public int numTasks()
  262          {
  263              return(Actions.size());
  264
  265          }
  266
  267       }


Notice that this situation object consists of an area and a list of actions. Where do we get details for our situations? The RSVP components and the ROLL model Levels 3 through 7 are the sources for the objects that make up each situation. Notice that the situation has an Area declared on line 230 and initialized on line 246. But what does Area consist of? Recall our refined robot POV diagram from Figure 11.6. This diagram gives us the basic components of this situation. BURT Translation Listing 11.3 shows the definition of the room class.

BURT Translation Listing 11.3 Definition of the room Class

BURT Translations Output: Java Implementations


//Scenario/Situation: SECTION 4
  174       class room{
  175          protected int Length = 300;
  176          protected int Width = 200;
  177          protected int Area;
  178          public something BlueContainer;
  179          public something YellowContainer;
  180          public something  Electronics;
  181
  182          public  room()
  183          {
  184              BlueContainer =  new something();
  185              BlueContainer.setLocation(180,125);
  186              YellowContainer = new something();
  187              YellowContainer.setLocation(45,195);
  188              Electronics = new something(25,100);
  189          }
  190
  191
  192
  193          public int  area()
  194          {
  195              Area = Length * Width;
  196              return(Area);
  197          }
  198
  199          public  int length()
  200          {
  201
  202              return(Length);
  203          }
  204
  205          public int width()
  206          {
  207
  208              return(Width);
  209          }
  210       }


Here we show only some of the components of the room class for exposition purposes. The room has a size; it contains a blue container, a yellow container, and some electronics; and each thing has a location. The containers are declared using the something class from Listing 10.5 in Chapter 10. The scenario, situation, and room classes are major parts of the STORIES component because they describe the physical environment where the robot executes its tasks and the objects that the robot interacts with. Here, we show enough detail for the reader to understand how these classes must be constructed. Keep in mind that many more details would be needed to fill in the scenario, situation, and area classes. For example, the something class from BURT Translation Listing 10.5 has far more detail. Consider the following:

class something{
     x_location Location;
     int Color;
     float  Weight;
     substance  Material;
     dimensions  Size;
}

These attributes, their getter and setter methods, and the basic error checking make up only the basics of this class. The more autonomous the robot is, the more detail the scenario, situation, and something classes require. Here the scenarios and situations are kept simple so that the beginner can see and understand the basic structures and coding techniques being used. Every scenario and situation has one or more actions in addition to the things within the scenario. Notice lines 232 to 245 of Listing 11.2; these lines define the actions of the situation. BURT Translation Listing 11.4 shows the declaration of the action class.

BURT Translation Listing 11.4 Declaration of the action Class

BURT Translations Output: Java Implementations


//ACTIONS: SECTION 2
   31       class action{
   32          protected softbot Robot;
   33          public action()
   34          {
   35          }
   36          public action(softbot Bot)
   37          {
   38              Robot = Bot;
   39
   40          }
   41          public void task() throws Exception
   42          {
   43          }
   44       }
   45
   46       class scenario_action1  extends action
   47       {
   48
   49          public scenario_action1(softbot Bot)
   50          {
   51              super(Bot);
   52          }
   53          public void task() throws Exception
   54          {
   55              Robot.moveToObject();
   56
   57          }
   58       }
   59
   60
   61
   62       class scenario_action2 extends action
   63       {
   64
   65          public  scenario_action2(softbot Bot)
   66          {
   67
   68              super(Bot);
   69          }
   70
   71          public  void task() throws Exception
   72          {
   73              Robot.scanObject();
   74
   75          }
   76       }
   77
   78
   79
   80       class scenario_action3  extends  action
   81       {
   82
   83          public  scenario_action3(softbot Bot)
   84          {
   85
   86              super(Bot);
   87          }
   88
   89          public  void task() throws Exception
   90          {
   91              Robot.phAnalysisOfObject();
   92
   93          }
   94       }
   95
   96
   97
   98
   99       class scenario_action4  extends  action
  100       {
  101
  102          public  scenario_action4(softbot Bot)
  103          {
  104
  105              super(Bot);
  106          }
  107
  108          public  void task() throws Exception
  109          {
  110              Robot.magneticAnalysisOfObject();
  111
  112          }
  113
  114
  115       }


There are four basic actions that we show for this situation; keep in mind there are more. Action 3 and Action 4 are particularly interesting because they are remotely executed. The code in Listing 11.4 is Java and runs on an EV3 microcontroller, but the phAnalysisOfObject() and magneticAnalysisOfObject() methods are actually implemented by Arduino Uno-based components on Unit1, so these methods simply send signals over Bluetooth to the Arduino components. Although the implementation language changes, the idea of classes and objects that represent the things in the scenario remains the same. A sketch of the EV3 side of this exchange follows; BURT Translation Listing 11.5 then shows the Arduino C++ code used to implement the pH analysis, magnetic field analysis, and Bluetooth communication.
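For illustration, the EV3 (Java) side of that exchange can be as simple as writing a single character to the Bluetooth stream and reading back the value the Arduino prints. The ArduinoLink class below is our own sketch; how the underlying Bluetooth streams are opened depends on the leJOS version and the Bluetooth hardware, so that setup is not shown here.

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;

// Illustration only: send a one-character request to the Arduino and read
// back the sensor reading it prints over the same Bluetooth serial link
// (see the loop in Listing 11.6).
class ArduinoLink {
    private final OutputStream out;
    private final BufferedReader in;

    ArduinoLink(OutputStream out, InputStream in) {
        this.out = out;
        this.in = new BufferedReader(new InputStreamReader(in));
    }

    float requestReading(char command) throws Exception {
        out.write(command);
        out.flush();
        return Float.parseFloat(in.readLine().trim());
    }
}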

BURT Translation Listing 11.5 Arduino C++ Code for pH and Magnetic Field Analysis and Bluetooth Communication

BURT Translations Output: Arduino C++ Implementations


//PARTS: SECTION 1
//Sensor Section
    2    #include <SoftwareSerial.h>   //Software Serial Port
    3    #define RxD 7
    4    #define TxD 6
    5
    6    #define DEBUG_ENABLED  1
    7
    8    SoftwareSerial blueToothSerial(RxD,TxD);
    9    // 0.3 setting on Vernier Sensor
   10    // measurement unit is Gauss
   11
   12    class analog_sensor{
   13       protected:
   14           int Interval;
   15           float Intercept;
   16           float Slope;
   17           float Reading;
   18           float Data;
   19           float Voltage;
   20       public:
   21           analog_sensor(float I, float S);
   22           float readData(void);
   23           float voltage(void);
   24           float sensorReading(void);
   25
   26    };
   27    analog_sensor::analog_sensor(float I,float S)
   28    {
   29        Interval = 3000; // in ms
   30        Intercept = I; // in mT
   31        Slope = S;
   32        Reading = 0;
   33        Data = 0;
   34    }
   35    float analog_sensor::readData(void)
   36    {
   37        return(Data = analogRead(A0));
   38    }
   39    float analog_sensor::voltage(void)
   40    {
   41        return(Voltage = readData() / 1023 * 5.0);
   42    }
   43
   44    float analog_sensor::sensorReading(void)
   45    {
   46        voltage();
   47        Reading = Intercept + Voltage * Slope;
   48        delay(Interval);
   49        return(Reading);
   50    }
   51


This is the code executed to check the chemicals and electronics that Midamba is searching for. Figure 11.9 shows a photo of Unit1’s robot arm holding the pH sensor and analyzing the material in one of the containers. This code is designed to work with Vernier analog sensors, using the Vernier Arduino Interface Shield and an Arduino Uno. In this case, we used a SparkFun RedBoard with an R3 Arduino layout.


Figure 11.9 (a) A photo of Unit1 analyzing a liquid in a container using a pH sensor held by its robotic arm (Arm2).

Alkaline battery corrosion is neutralized with substances that have a pH below 7 (acids), and nickel-based battery corrosion is neutralized with substances that have a pH above 7 (bases). Figure 11.10 shows a photo of Unit1’s robot arm holding the magnetic field sensor used to check the electronics for live battery chargers. We used the 0.3 mT setting on the Vernier magnetic field sensor, with the readings reported in gauss.


Figure 11.10 (a) A photo of Unit1’s robotic arm (Arm2) holding a magnetic field sensor.
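Expressed in code, the neutralization rule stated above reduces to a simple pH threshold test. The class below is our own illustrative sketch (hypothetical names), written in Java like the EV3-side code:

// Illustration only: decide whether a measured pH can neutralize the
// corrosion, following the rule stated above.
class NeutralizerCheck {
    enum BatteryType { ALKALINE, NICKEL_BASED }

    static boolean canNeutralize(BatteryType battery, float ph) {
        if (battery == BatteryType.ALKALINE) {
            return ph < 7.0f;   // alkaline corrosion calls for an acid
        }
        return ph > 7.0f;       // nickel-based corrosion calls for a base
    }
}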

The Arduino code takes the measurement and then sends the measurement over Bluetooth back to the EV3 controller where the information is stored as part of the robot’s knowledgebase. We used a Bluetooth shield with the Arduino RedBoard and the Vernier shield to accomplish the Bluetooth connection. Figure 11.11 is a photo of the sensor array component that Unit1 uses.


Figure 11.11 The sensor array component connected to the three boards along with the other sensors.

BURT Translation Listing 11.6 shows the main loop of the Arduino controller and how sensor readings are sent both to the serial port and to the Bluetooth connection.

BURT Translation Listing 11.6 Main Loop of the Arduino Controller

BURT Translations Output: Arduino C++ Implementations


//PARTS: SECTION 1
//Sensor Section
   53    analog_sensor  MagneticFieldSensor(-3.2,1.6);
   54    analog_sensor  PhSensor(13.720,-3.838);
   55
   56    int ReadingNumber=1;
   57
   58
   59    void setup()
   60    {
   61        Serial.begin(9600); //initialize serial communication at 9600 baud
   62        pinMode(RxD, INPUT);
   63        pinMode(TxD, OUTPUT);
   64        setupBlueToothConnection();
   65    }
//TASKS: SECTION 3
   66    void loop()
   67    {
   68        float Reading;
   69        char InChar;
   70        Serial.print(ReadingNumber);
   71        Serial.print(" ");
   72        Reading = PhSensor.sensorReading();
   73        Serial.println(Reading);
   74        blueToothSerial.println(Reading);
   75        delay(3000);
   76        blueToothSerial.flush();
   77        if(blueToothSerial.available()){
   78           InChar = blueToothSerial.read();
   79           Serial.print(InChar);
   80        }
   81        if(Serial.available()){
   82           InChar  = Serial.read();
   83           blueToothSerial.print(InChar);
   84        }
   85        ReadingNumber++;
   86    }
   87    void setupBlueToothConnection()
   88    {
   89        Serial.println("setting up bluetooth connection");
   90        blueToothSerial.begin(9600);
   91
   92        blueToothSerial.print("AT");
   93        delay(400);
   94        //Restore all setup values to factory setup
   95        blueToothSerial.print("AT+DEFAULT");
   96        delay(2000);
   97        //set the Bluetooth name as "SeeedBTSlave",the Bluetooth
             //name must be less than 12 characters.
   98        blueToothSerial.print("AT+NAMESeeedBTSlave");
   99        delay(400);
  100        // set the pair code to connect
  101        blueToothSerial.print("AT+PIN0000");
  102        delay(400);
  103
  104        blueToothSerial.print("AT+AUTH1");
  105        delay(400);
  106
  107        blueToothSerial.flush();
  108    }



Note

The Bluetooth connection is set up in lines 87 to 107. The transmit pin is set to pin 7 on the Bluetooth shield, and the receive pin is set to pin 6. We used version 2.1 of the Bluetooth shield.


Autonomous Robots to Midamba’s Rescue

We highlighted Midamba’s RSVP and some of the major components of the robots and softbots he built. The robot, the softbot components, and this approach to programming robots to be autonomous ultimately helped Midamba out of his predicament.

Figure 11.14 shows the story of how Midamba programmed the Unit1 and Unit2 robots to retrieve the neutralizer he needed to clean the corrosion from his battery.


Figure 11.14 How Midamba programmed the Unit1 and Unit2 robots to retrieve the neutralizer.


Endnote

1. Although it is possible to program a robot using a single language, our robot projects almost always result in a combination of C++ robot libraries (mostly Arduino) and Java libraries (e.g., Android, leJOS), using sockets and Bluetooth for communication.

What’s Ahead?

In Chapter 12, “Open Source SARAA Robots for All!”, we wrap up the book by discussing the open source, low-cost robot kits and components that were used in the book. We review the techniques that were used throughout the book and also discuss SARAA (Safe Autonomous Robot Application Architecture), our approach to developing autonomous robots.
