Robot Sensitivity Training Lesson #12: Machine learning is no substitute for spending quality programming time with your robot.
This book provided you with an introductory approach to instructing a robot to execute tasks autonomously using deliberative, scenario-based programming techniques. In addition to how to represent the robot’s list of instructions, these approaches focused on:
How to represent the robot’s physical environment within the robot’s programming
How to represent the scenario the robot is in within the robot’s programming
How to code the robot’s role and actions within the scenario
We introduced you to simple object-oriented and agent-oriented programming techniques as a starting point to address each of the preceding focus areas. We emphasized robot autonomy only within well-understood and predefined scenarios. In particular, we avoided the notion of attempting to program a robot to act autonomously in an environment unknown to the robot and unknown to the programmer.
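As a reminder of the flavor of those object-oriented and agent-oriented techniques, here is a minimal sketch of a robot agent class that keeps an internal model of its predefined environment. The class and method names are illustrative assumptions for this sketch, not the book’s actual code:

```java
// Hypothetical sketch of an agent-oriented robot class. The class and
// method names are illustrative and are not the book's actual API.
import java.util.ArrayList;
import java.util.List;

public class SoftbotSketch {
    // The softbot's internal model of its environment: a simple list
    // of named objects the robot knows about in its scenario.
    private final List<String> knownObjects = new ArrayList<>();

    // Represent part of the physical environment within the robot's programming.
    public void learnObject(String name) {
        knownObjects.add(name);
    }

    // The robot acts only on objects that are part of its predefined scenario.
    public boolean canActOn(String name) {
        return knownObjects.contains(name);
    }

    public static void main(String[] args) {
        SoftbotSketch bot = new SoftbotSketch();
        bot.learnObject("battery");
        System.out.println(bot.canActOn("battery"));           // known object
        System.out.println(bot.canActOn("unknown_chemical"));  // not in scenario
    }
}
```

The point of the sketch is the design choice itself: the robot’s world is limited to what the programmer has represented, which is exactly why we avoid unknown environments in an introductory setting.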
Introducing you to the concept of programming a robot to execute tasks autonomously has its own set of challenges without complicating matters by adding surprises. Although there are approaches to programming a robot to execute tasks autonomously in an unknown environment, these approaches require advanced robotics knowledge and are beyond the scope of an introductory book.
If you are interested and want to know more about programming robots to handle the unknown, we recommend Ronald Arkin’s Behavior-Based Robotics and Thomas Bräunl’s Embedded Robotics: Mobile Robot Design and Applications with Embedded Systems. If you think you are ready for an intermediate to advanced discussion of our deliberative approach to robot programming, we recommend Agent Technology from a Formal Perspective by Christopher A. Rouff et al.
The robots used in this book were low-cost, entry-level robots. We used the following, as seen in Figure 12.1:
LEGO EV3 Mindstorms robot controller
Arduino Uno
SparkFun RedBoard Arduino robot controller
Trossen’s PhantomX Pincher robotic arm
Arduino-compatible ArbotiX robot controller
WowWee’s RS Media robot with embedded Linux
Arduino Bluetooth shield for communication between controllers
Pixy (CMUcam5) camera
Servos and parts from Tetrix
We used a combination of Vernier, HiTechnic, and LEGO Mindstorms sensors, as depicted in Figure 12.2. Our goal was to introduce you to the basics of programming autonomous robots using low-cost, entry-level robots, parts, and sensors. Although we did not use any Raspberry Pi or BeagleBone-based robot builds, the ideas in this book can be used with any true robot (recall our robot definition from Chapter 1, “What Is a Robot, Anyway?”) that has a controller and supports an object-oriented programming language.
We advocate programming robots to act autonomously only within predefined scenarios and situations. If the scenario and situation in which the robot is to perform are well known and understood, safety precautions can be built in from the start. This makes the robot safer when interacting with humans, its environment, and other machines. Those of us who program robots have a responsibility to build in as many safeguards as necessary to prevent harm to life, the environment, and property. Scenario/situation-based programming helps the programmer identify and avoid safety pitfalls. While scenario/situation programming alone is not sufficient to prevent safety mishaps, it is a step in the right direction. Regardless of a robot’s ultimate set of tasks, if autonomy is involved, safety must be taken into consideration.
In this book, we introduced you to seven techniques for programming a robot to execute its tasks autonomously:
Softbot frames
ROLL models
REQUIRE
RSVP
SPACES
STORIES
PASS
Collectively these programming techniques make up what we call SARAA (Safe Autonomous Robot Application Architecture). We call the robots that have this architecture SARAA robots. When implemented correctly, these programming techniques produce a knowledge-based robot controller. Therefore, a SARAA robot is a knowledge-based robot that can act autonomously within preprogrammed scenarios and situations. At Ctest Laboratories (www.ctestlabs.org), SARAA is being designed to work specifically within open source robotics platforms such as Arduino, Linux, and the ROS (Robot Operating System). If the scenarios and situations that SARAA robots are programmed for are well understood and properly defined, then a SARAA robot design helps promote robot safety.
This is true in part because the SPACES and PASS components are specifically designed to address sensor, actuator, end-effector, and robot logic malfunctions, misconstructions, failures, and faults. SARAA robots are context-sensitive by definition. Figure 12.3 shows the basic architecture of a SARAA robot.
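As a simple illustration of the kind of checking that makes this safety posture possible, the sketch below shows a precondition verified from a sensor reading before a task is attempted, and a postcondition verified after it completes. The sensor names, thresholds, and method names are assumptions made for this sketch, not the book’s actual SPACES or PASS implementation:

```java
// Minimal sketch of precondition/postcondition checking in the style of
// SPACES. All names and thresholds here are illustrative assumptions.
public class SpacesSketch {
    // Precondition: the target object must be within the arm's reach,
    // based on a (simulated) range-sensor reading in centimeters.
    static boolean preconditionHolds(double rangeCm) {
        return rangeCm > 0 && rangeCm <= 20.0;
    }

    // Postcondition: after a pick-up, the gripper should be closed
    // and reporting a nonzero load.
    static boolean postconditionHolds(boolean gripperClosed, double weightG) {
        return gripperClosed && weightG > 0;
    }

    public static void main(String[] args) {
        double range = 15.0;  // simulated sensor reading
        if (preconditionHolds(range)) {
            // ... command the arm to pick up the object here ...
            boolean ok = postconditionHolds(true, 42.0);  // simulated result
            System.out.println(ok ? "task verified" : "task failed");
        } else {
            System.out.println("precondition failed; task not attempted");
        }
    }
}
```

Checks of this kind are what allow the controller to detect a malfunction or fault and refuse to proceed, rather than acting blindly on a task whose conditions were never met.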
To perform useful tasks, a robot typically needs multiple microcontrollers to be fully functional. That is not because we have any specific robot design in mind, but because tasks such as robot vision, robotic arm control, and robot navigation often require their own dedicated microcontrollers. This means that the softbot component shown in Figure 12.3 must have some way to communicate with and coordinate the multiple controllers. Figure 12.4 shows the communications architecture for the multiple microcontrollers used.
In our robotics lab, we rely primarily on Bluetooth, XBee, and serial communications between the components. All these technologies have open-source implementations. The entire SARAA architecture can be implemented completely within an open source hardware/software environment.
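To make the coordination concrete, here is a sketch of how a softbot component might route simple text commands to the controller that owns each subsystem before writing them to a serial or Bluetooth link. The subsystem names, controller labels, and message framing are assumptions for illustration only:

```java
// Hypothetical sketch of routing commands to multiple microcontrollers.
// Controller labels, subsystem names, and the framing are assumptions.
import java.util.HashMap;
import java.util.Map;

public class ControllerDispatchSketch {
    // Map each subsystem to the controller that owns it.
    private final Map<String, String> controllerFor = new HashMap<>();

    public ControllerDispatchSketch() {
        controllerFor.put("arm", "arbotix");  // robotic arm controller
        controllerFor.put("vision", "pixy");  // camera controller
        controllerFor.put("drive", "ev3");    // navigation controller
    }

    // Build a newline-terminated command frame addressed to the
    // controller responsible for the given subsystem.
    public String frame(String subsystem, String command) {
        String target = controllerFor.get(subsystem);
        if (target == null) {
            throw new IllegalArgumentException("unknown subsystem: " + subsystem);
        }
        return target + ":" + command + "\n";  // e.g. "arbotix:grip\n"
    }

    public static void main(String[] args) {
        ControllerDispatchSketch d = new ControllerDispatchSketch();
        // In a real build these frames would be written to a serial,
        // Bluetooth, or XBee connection rather than printed.
        System.out.print(d.frame("arm", "grip"));
        System.out.print(d.frame("drive", "forward 20"));
    }
}
```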
This book was written as a light introduction to SARAA and to programming autonomous robots in general. We kept Midamba’s scenario and the other example scenarios/situations simple so that the reader would not get lost in too many details. To be sure, these were extremely simplified scenarios and situations.
You are encouraged to start with a small project and specific situation/scenario and practice fully implementing (with as much detail as necessary) the RSVP and then the STORIES component for your robot. Start with a single task and then build. Start with a single situation and then add another situation. In this way, you build a library of situations.
Once you have a scenario and all its situations defined and tested, then add another scenario to the robot. Using this approach, you eventually will have a robot that can handle multiple scenarios and many situations. But the key is to start small and build. Be patient. Be thorough.
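The incremental approach above can be sketched as a small data structure: one scenario at a time, one situation at a time, growing into a library the robot can consult. The class, scenario, and situation names below are illustrative assumptions:

```java
// Sketch of incrementally building a library of scenarios and situations.
// All scenario and situation names here are illustrative.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScenarioLibrarySketch {
    // Each scenario maps to the list of situations the robot can handle.
    private final Map<String, List<String>> library = new HashMap<>();

    // Start with a single scenario...
    public void addScenario(String scenario) {
        library.putIfAbsent(scenario, new ArrayList<>());
    }

    // ...then add situations to it one at a time.
    public void addSituation(String scenario, String situation) {
        library.computeIfAbsent(scenario, s -> new ArrayList<>()).add(situation);
    }

    // The robot attempts a task only if the situation is in its library.
    public boolean canHandle(String scenario, String situation) {
        List<String> situations = library.get(scenario);
        return situations != null && situations.contains(situation);
    }

    public static void main(String[] args) {
        ScenarioLibrarySketch lib = new ScenarioLibrarySketch();
        lib.addScenario("warehouse");                       // one scenario
        lib.addSituation("warehouse", "object_on_shelf");   // one situation
        lib.addSituation("warehouse", "object_on_floor");   // then another
        System.out.println(lib.canHandle("warehouse", "object_on_floor"));
        System.out.println(lib.canHandle("kitchen", "spill"));  // not yet defined
    }
}
```

Each tested situation you add widens what the robot can safely do, which is exactly the "start small and build" discipline described above.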
The complete RSVPs, STORIES, and source code for Midamba’s scenario can be downloaded from www.robotteams.org, along with the actual techniques for neutralizing corrosion on alkaline or nickel-based batteries. In addition to these, we have video of the Unit1 and Unit2 robots autonomously solving Midamba’s dilemma.