3. RSVP: Robot Scenario Visual Planning

Robot Sensitivity Training Lesson #3: Don’t instruct the robot to perform a task you can’t picture it performing.

As described in Chapter 2, “Robot Vocabularies,” the robot vocabulary is the language you use to assign a robot tasks for a specific situation or scenario. Once a vocabulary has been established, the next step is to work out the instructions the robot will execute using that vocabulary.

Making a picture, or a “visual representation,” of the scenario and instructions you want the robot to perform can be a great way to ensure your robot performs the tasks properly. A picture of the instructions the robot will perform allows you to think through the steps before translating them to code. Visuals help you understand the process, and studying them shows what has to be done and exposes steps that might otherwise pose a problem. We call this the RSVP (Robot Scenario Visual Planning). The RSVP is a visual that helps develop the plan of instructions for what the robot will do. The RSVP is composed of three types of visuals:

Image A floorplan of the physical environment of the scenario

Image A statechart of the robot and object’s states

Image Flowcharts of the instructions for the tasks

These visuals ensure that you have a “clear picture” of what has to be done to program a robot to do great feats that can save the world or light the candles on a cake. The RSVP visuals can be used in any combination. For some, flowcharts are more useful than statecharts; for others, statecharts are best. All we suggest is that a floorplan or layout is needed whether statecharts or flowcharts are used.

The saying “a picture is worth a thousand words” means that a single image can convey the meaning of a complex idea as well as a large amount of descriptive text. We grew up with this notion in grade school, especially when trying to solve word problems: “draw a picture” of the main ideas of the word problem and it magically becomes clear how to solve it. That notion still works. In this case, drawing a picture of the environment, a statechart, and flowcharts will be worth not only a thousand words but a thousand commands. Developing an RSVP allows you to plan your robot’s navigation through your scenario and work out the steps of the instructions for the tasks in the various situations. This avoids the trial and error of writing code directly.

Mapping the Scenario

The first part of the RSVP is a map of the scenario. A map is a symbolic representation of the environment where the tasks and situations will take place. The environment for the scenario is the world in which the robots operate. Figure 3.1 shows the classic Test Pad for the NXT Mindstorms robot.

Image

Figure 3.1 A robot world for NXT Mindstorms Test Pad

A Test Pad like the one shown in Figure 3.1 is part of the Mindstorms robot kits. This Test Pad is approximately 24 inches wide, 30 inches long, and has a rectangular shape. There are 16 colors on the Test Pad and 38 unique numbers with some duplicates. There is a series of straight lines and arcs on the pad. Yellow, blue, red, and green squares are on the Test Pad along with other colored shapes in various areas on the pad. It is the robot’s world or environment used for the initial testing of NXT Mindstorms robots’ color sensors, motors, and so on.

Like the Test Pad, a floorplan shows the locations of objects that are to be recognized, like colored squares, objects the robot will interact with, or obstacles to be avoided. If objects are too high or too far away, sensors may not be able to determine their location. The path the robot must navigate to reach those locations can also be planned using this map.

The dimensions of the space and of the robot (the robot footprint) may affect the capability of the robot to navigate the space and perform its tasks. For example, for our BR-1 robot, what is the location of the cake relative to the location of the robot? Is there a path? Are there obstacles? Can the robot move around the space? This is what the map helps determine.


Image Tip

Next to the actual robot, the robot’s environment is the most important consideration.


Creating a Floorplan

The map can be a simple 2D layout or floorplan of the environment using geometric shapes, icons, or colors to represent objects or robots. For a simple map of this kind, depicting an accurate scale is not that important, but objects and spaces should have some type of relative scale.

Use straight lines to delineate the area. Decide on the measurement system. Be sure the measurement system is consistent with the API functions. Use arrows and the measurements to mark the dimensions of the area, objects, and robot footprint. It’s best to use a vector graphics editor to create the map. For our maps we use LibreOffice Draw. Figure 3.2 shows a simple layout of a floorplan of the robot environment for BR-1.

Image

Figure 3.2 A layout of the floorplan for the BR-1 robot environment

In Figure 3.2, the objects of interest are designated: locations of the robot, the table, and the cake on the table. The floorplan marks the dimensions of the area and the footprint of the robot. The lower-left corner is marked (0,0) and the upper-right corner is marked (300,400). This shows the dimensions of the area in cm. It also marks distances between objects and BR-1. Although this floorplan is not to scale, lengths and widths have a relative relationship. BR-1’s footprint length is 50 cm and width is 30 cm.

BR-1 is to light the candles on the cake. The cake is located at the center of an area that is 400 cm × 300 cm. The cake has a diameter of 30 cm on a table that is 100 cm × 100 cm. That means the robot arm of BR-1 should have a reach of at least 53 cm from the edge of the table to reach the candle at the farthest point in the X dimension.

The maximum extension of the robot arm to the tip of the end-effector is 80 cm, and the length of the lighter adds an additional 10 cm. The task also depends on some additional considerations:

Image The height of the candle

Image The height of the cake

Image The length of BR-1 from the arm joint to the top of the candle wick

Image The location of the robot

Figure 3.3 shows how to calculate the required reach to light the candle. In this case, it is the hypotenuse of a right triangle. Leg “a” of the triangle is the height of the robot from the top of the wick to the robot arm joint, which is 76 cm, and leg “b” is the radius of the table plus the 3 cm to the location of the farthest candle on the cake, which is 53 cm.

Image

Figure 3.3 Calculating the length of the robot arm as the hypotenuse of a right triangle

So the required reach of the robot arm, end-effector, and lighter is around 93 cm. But the robot’s reach is only 90 cm. So BR-1 will have to lean a little toward the cake or get a lighter that is 3 cm longer to light the wick.
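To make the arithmetic concrete, the reach check can be worked out in a few lines of code. This is just a sketch in Python using the dimensions from the floorplan; it is not part of BR-1’s control program.

```python
import math

# Dimensions from the floorplan (in cm)
leg_a = 76   # height from the robot arm joint to the top of the wick
leg_b = 53   # table radius (50 cm) plus 3 cm to the farthest candle

required_reach = math.hypot(leg_a, leg_b)   # hypotenuse of the right triangle

arm_reach = 80       # maximum extension of the arm to the tip of the end-effector
lighter_length = 10
available_reach = arm_reach + lighter_length

print(f"Required reach:  {required_reach:.1f} cm")   # about 92.7 cm
print(f"Available reach: {available_reach} cm")      # 90 cm
print(f"Shortfall:       {required_reach - available_reach:.1f} cm")
```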


Image Note

Determining the positions and required extension of a robot arm is far more complicated than this simple example and is discussed in Chapter 9, “Robot SPACES.” But what is important in the example is how the layout/floorplan helps elucidate some important issues so that you can plan your robot’s tasks.


The Robot’s World

For the robot to be automated, it requires details about its environment. Consider this: If you are traveling to a new city you know nothing about, how well will you be able to do the things you want to do? You do not know where anything is. You need a map or someone to show you around and tell you “here is a restaurant” and “here is a museum.” A robot that is fully automated must have sufficient information about its environment. The more information the robot has, the more likely the robot can accomplish its goal.


Image Note

The robot’s world is the environment where the robot performs its tasks. It’s the only world the robot is aware of. Nothing outside that environment matters, and the robot is not aware of it.


Not all environments are alike. We know environments are dynamic. The robot’s environment can be partially or fully accessible to the robot. A fully accessible environment means all objects and aspects of the environment are within the reach of the robot’s sensors. No object is too high, too low, or too far away for the robot to detect or interact with. The robot has all the necessary sensors to receive input from the environment. If there is a sound, the robot can detect it with its sound sensor. If a light is on, the robot can detect it with its light sensor.

A partially accessible environment means there are aspects of the environment the robot cannot detect, or there are objects the robot cannot detect or interact with because it lacks the end-effector to pick them up or the location sensor to detect them. An object that is 180 cm from the ground is out of the reach of a robot with an 80 cm arm extension and a height of 50 cm. What if BR-1 is to light the candles once the singing begins and it does not have a sound sensor? Sound is part of the environment; therefore, BR-1 will not be able to perform the task. So when creating the floorplan for a partially accessible environment, consider the “robot’s perspective.” For example, for objects that are not accessible by the robot, use some visual indicator to distinguish them from the objects the robot can access. Use color or even draw a broken line around them.

Deterministic and Nondeterministic Environments

What about control? Does the robot control every aspect of its environment? Is the robot the only force that controls or manipulates the objects in its environment? This is the difference between a deterministic and nondeterministic environment.

With a deterministic environment, the next state is completely determined by the current state and the actions performed by the robot(s). This means if the BR-1 robot lights the candles, they will stay lit until BR-1 blows them out. If BR-1 removes the dishes from the table, they will stay in the location where they are placed.

With a nondeterministic environment, like the one for the birthday party scenario, BR-1 does not blow out the candles. (It would be pretty mean if it did.) Dishes can be moved around by the attendees of the party, not just BR-1. What if there are no obstacles between BR-1 and its destination and then a partygoer places an obstacle there? How can BR-1 perform its tasks in a dynamic nondeterministic environment?

Each environment type has its own set of challenges. With a dynamic nondeterministic environment, the robot is required to consider the previous state and the current state before a task is attempted and then make a decision whether the task can be performed.

Table 3.1 lists some of the types of environments with a brief description.

Image

Table 3.1 Some Types of Environments with a Brief Description

RSVP READ SET

Many aspects of the environment are not part of the layout or floorplan but should be recorded somehow to be referenced when developing the instructions for the tasks. For example, the color, weight, height, and even surface type of objects are detectable characteristics that are identified by sensors or that affect motors and end-effectors. The environment type, any identified outside forces, and their impact on objects should also be recorded.


Image Note

A READ (Robot Environmental Attribute Description) set is a construct that contains a list of objects that the robot will encounter, control, and interact with within the robot’s environment. It also contains characteristics and attributes of the objects detectable by the robot’s sensors or that affect how the robot will interact with that object.


Some of these characteristics can be represented in the floorplan. But a READ set can contain all the characteristics. Each type of environment should have its own READ set.

For example, color is a detectable characteristic identified by a color or light sensor. The object’s weight determines whether the robot can lift, hold, or carry the object to another location based on the torque of the servos. The shape, height, and even the surface determine whether the object can be manipulated by the end-effector.

Any characteristic of the environment is part of the READ set, such as dimensions, lighting, and terrain. These characteristics can affect how well sensors and motors work. The lighting of the environment, whether sunlight, ambient room light, or candlelight, affects the color and light sensors differently. A robot traveling across a wooden floor is different from one traveling across gravel, dirt, or carpet. Surfaces affect wheel rotation and distance calculations.

Table 3.2 is the READ set for the Mindstorms NXT Test Pad.

Image

Table 3.2 READ Set for the Mindstorms NXT Test Pad

The READ set for the Test Pad describes the workspace, including its type (fully accessible and deterministic), all the colors, and the symbols. It describes what a robot will encounter when performing a search, such as identifying the blue square. The set lists the attributes and values of the physical workspace, colors, and symbols on the Test Pad.

For a dynamic environment such as our birthday party scenario, the READ set can contain information pertaining to the outside forces that might interact with the objects. For example, there are initial locations for the dishes and cups on the table, but the partygoers may move their dishes and cups to new locations on the table. The new locations should be represented in the READ set along with the time or condition under which this occurred. Once the party is over and BR-1 is to remove those dishes and cups, each location should be updated. Table 3.3 is the READ set for the birthday party for the BR-1.

Image
Image

Table 3.3 READ Set for the Birthday Party Scenario

This READ set has three additional columns:

Image Force

Image Time/Condition

Image New Value

Force is the source of the interaction with the object; this force is anything working in the environment that is not the robot. The Time/Condition denotes when or under what condition the force interacts with the object. The New Value is self-explanatory.
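A READ set can also be kept in code so the task instructions can reference it directly. The sketch below is one possible Python representation; the field names and sample values are illustrative and are not part of the tables above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ReadEntry:
    """One object in a READ set and its detectable attributes."""
    name: str
    color: Optional[str] = None
    location: Optional[Tuple[int, int]] = None    # (x, y) in cm on the floorplan
    weight_g: Optional[float] = None
    force: Optional[str] = None        # outside force that may act on the object
    condition: Optional[str] = None    # time/condition under which the force acts
    new_value: Optional[Tuple[int, int]] = None   # updated location after the force acts

# A few illustrative entries for the birthday party scenario
read_set = [
    ReadEntry("cake", color="white", location=(150, 200)),
    ReadEntry("cup", color="blue", location=(120, 220), weight_g=30,
              force="partygoer", condition="during party", new_value=(140, 230)),
]
```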

Pseudocode and Flowcharting RSVP

Flowcharting is an RSVP used to work out the flow of control, whether of a single object or of the whole system. It is a linear sequence of lines of instructions that can include any kind of looping, selection, or decision-making. A flowchart explains the process by using special box symbols that represent a certain type of work. Text displayed within the boxes describes a task, process, or instruction.

Flowcharts are a type of statechart (discussed later in this chapter) since they also contain states that are converted to actions and activities. Things like decisions and repetitions are easily represented, and what happens as the result of a branch can be simply depicted. Some suggest flowcharting before writing pseudocode. Pseudocode has the advantage of being easily converted to a programming language or utilized for documenting a program. It can also be easily changed. A flowchart requires a bit more work to change when using flowcharting software.

Table 3.4 lists advantages and disadvantages of pseudocode and flowcharting. Both are great tools for working out the steps. It is a matter of personal taste which you will use at a particular time in a project.

Image

Table 3.4 Advantages and Disadvantages of Pseudocode and Flowcharting

The four common symbols used in flowcharting are

Image Start and stop: The start symbol represents the beginning of the flowchart with the label “start” appearing inside the symbol. The stop symbol represents the end of the flowchart with the label “stop” appearing inside the symbol. These are the only symbols with keyword labels.

Image Input and output: The input and output symbol contains data that is used for input (e.g., provided by the user) and data that is the result of processing (output).

Image Decisions: The decision symbol contains a question or a decision that has to be made.

Image Process: The process symbol contains brief descriptions (a few words) of a rule or some action taking place.

Figure 3.4 shows the common symbols of flowcharting.

Image

Figure 3.4 The common symbols of flowcharting

Each symbol has an inbound or outbound arrow leading to or from another symbol. The start symbol has only one outbound arrow, and the stop symbol has only one inbound arrow. The decision symbol has one inbound arrow and two outbound arrows. Each outbound arrow represents a decision path through the process starting from that symbol:

Image TRUE/YES

Image FALSE/NO

The process, input, and output symbols have one inbound and one outbound arrow. The symbols contain text that describes the rule or action, input or output. Figure 3.5 shows the “Lighting candles” flowchart.

Image

Figure 3.5 The lighting candles flowchart

Notice at the beginning of the flowchart, below the start symbol, BR-1 is to wait until the singing begins. A decision is made on whether the singing has started. There are two paths: If the singing has not started, there is a FALSE/NO answer to the question and BR-1 continues to wait. If the singing has started, there is a TRUE/YES answer and BR-1 enters a loop or decision.

The decision is whether there are candles to light. If yes, BR-1 gets the position of the next candle, positions the robot arm appropriately to ignite the wick, and then ignites the wick. An input symbol is used to receive the position of the next candle to light. BR-1 is to light all the candles and stops once complete.
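The same flow can be expressed as code before any robot-specific API is chosen. The Python sketch below follows the flowchart; the helper functions (singing_started, get_next_candle_position, move_arm_to, ignite_wick) are hypothetical stand-ins for BR-1’s real sensor and arm commands.

```python
import time

# Hypothetical stand-ins for BR-1's real sensor and arm functions
def singing_started() -> bool:
    return True                       # pretend the singing has begun

def get_next_candle_position():
    return (150, 200)                 # pretend candle coordinates

def move_arm_to(position):
    print("moving arm to", position)

def ignite_wick():
    print("igniting wick")

def light_candles(candle_count: int) -> None:
    # Wait until the singing begins (the FALSE/NO path loops back to waiting)
    while not singing_started():
        time.sleep(1)
    # While there are candles to light, get the next position and ignite the wick
    while candle_count > 0:
        pos = get_next_candle_position()   # input: position of the next candle
        move_arm_to(pos)
        ignite_wick()
        candle_count -= 1
    # Stop once all candles are lit

light_candles(5)
```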

Flow of Control and Control Structures

The task a robot executes can be a series of steps performed one after another, a sequential flow of control. The term flow of control details the direction the process takes, which way program control “flows.” Flow of control determines how a computer responds when given certain conditions and parameters. An example of sequential flow of control is in Figure 3.6. Another robot in our birthday scenario is BR-3. Its task is to open the door for the guests. Figure 3.6 shows the sequential flow of control for this task.

Image

Figure 3.6 The flowchart for BR-3

The robot goes to the door, opens it, says, “Welcome,” and then closes the door and returns to its original location. This would look like a rather inconsiderate host. Did the doorbell ring, signaling BR-3 that guests were at the door? If someone was at the door, after saying “Welcome,” did BR-3 allow the guest to enter before closing the door? BR-3 should be able to act in a predictable way at the birthday party. That means making decisions based on events and doing things in repetition.

A decision symbol is used to construct branching for alternative flow controls. Decision symbols can be used to express decision, repetition, and case statements. A simple decision is structured as an if-then or if-then-else statement.

A simple if-then decision for BR-3 is shown in Figure 3.7 (a): “If the doorbell rings, then travel to the door and open it.” Now BR-3 will wait until the guest(s) enters before it says “Welcome.” Notice the alternative action to be taken if the guest(s) has not entered: BR-3 will wait 5 seconds and then check whether the guest(s) has entered yet. If yes, then BR-3 says “Welcome” and closes the door. This if-then-else is shown in Figure 3.7 (b); the alternative action is to wait.

Image

Figure 3.7 The flowchart for if-then and if-then-else decisions
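In code, the two flowcharts map onto an if-then statement and an if-then-else inside a loop. The sketch below is illustrative; the sensor and actuator functions are hypothetical stand-ins for whatever BR-3 actually provides.

```python
import time

# Hypothetical stand-ins for BR-3's sensors and actuators
def doorbell_rang() -> bool: return True
def guest_entered() -> bool: return True
def travel_to_door():  print("traveling to door")
def open_door():       print("opening door")
def close_door():      print("closing door")
def say(message):      print("saying:", message)

# if-then (Figure 3.7 (a)): open the door only when the doorbell rings
if doorbell_rang():
    travel_to_door()
    open_door()

# if-then-else (Figure 3.7 (b)): wait for the guest to enter before closing the door
while True:
    if guest_entered():
        say("Welcome")
        close_door()
        break
    else:
        time.sleep(5)   # wait 5 seconds, then check again
```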

In Figure 3.7, the question (or condition test) to be answered is whether the doorbell has rung. What if there is more than one question/condition test that has to be met before BR-3 is to open the door? What about BR-1? What if there were multiple conditions that had to be met before lighting the candles:

Image “If there is singing AND the lighter is lit, then light the candles.”

Image In this case, both conditions have to be met. This is called a nested decision or condition.

What if there is a question or condition in which there are many different possible answers and each answer or condition has a different action to take? For example, what if, as our BR-1 or BR-3 travels across the room, it encounters an object and has to maneuver around the object to reach its destination? It could check the range of the object in its path to determine the action to take to avoid it. If the object is within a certain range, BR-1 and BR-3 turn to the left either 90 degrees or 45 degrees, travel a path around the object, and then continue on their original path to their destinations as shown in Figure 3.8.

Image

Figure 3.8 Robots obstacle avoidance

Using the flowchart, this can be expressed as a series of decisions or a case statement. A case is a type of decision where there are several possible answers to a question. With the series of decisions, the same question is asked three times, each with a different answer and action. With a case statement, the question is expressed only one time. Figure 3.9 contrasts the series of decisions with the case statement, which is simpler to read and makes it easier to understand what is going on.

Image

Figure 3.9 Contrast case statement from a series of decisions
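In Python, the series of decisions is a chain of if-elif tests, while the case statement corresponds to a match statement (Python 3.10 and later). The range thresholds below are illustrative; the chapter does not specify the actual distances.

```python
def avoidance_turn(distance_cm: float) -> int:
    """Series of decisions: the same question (how far?) is asked repeatedly."""
    if distance_cm < 20:
        return 90        # turn left 90 degrees
    elif distance_cm < 50:
        return 45        # turn left 45 degrees
    else:
        return 0         # far enough away: stay on the original path

def avoidance_turn_case(distance_cm: float) -> int:
    """Case-style version: the question is expressed only once."""
    match distance_cm:
        case d if d < 20:
            return 90
        case d if d < 50:
            return 45
        case _:
            return 0

print(avoidance_turn(15), avoidance_turn_case(30))   # 90 45
```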

Repetition or looping is shown in Figure 3.10. In a loop, a simple decision is coupled with an action that is performed before or after the condition test. Depending on the result, the action is performed again. In Figure 3.10 (a), the action will be performed at least once. If the condition is not met (singing has not started; maybe everyone is having too much fun), the robot must continue to wait. This is an example of a do-until loop: “do” this action “until” this condition is true. A while loop performs the condition test first, and if the condition is met, the action is performed. This is depicted in Figure 3.10 (b): while singing has not started, wait. BR-1 will loop and wait until singing starts, as in the do-until loop. The difference is that the condition is tested before the wait is performed. Another type is the for loop, shown in Figure 3.10 (c), where the condition test controls the specific number of times the loop is executed.

Image

Figure 3.10 Repetition flowcharts for (a) do-until, (b) while, and (c) for loops
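The three kinds of loops look like this in code. The sketch is again hypothetical; singing_started stands in for a real sensor check.

```python
import time

def singing_started() -> bool:   # hypothetical sensor check
    return True

# (a) do-until: perform the action at least once, then test the condition
while True:
    time.sleep(1)                # wait (the action)
    if singing_started():        # until the condition is true
        break

# (b) while: test the condition first, then perform the action
while not singing_started():
    time.sleep(1)                # wait

# (c) for: the condition test controls a specific number of repetitions
for candle in range(5):          # for example, five candles
    print("lighting candle", candle + 1)
```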

Subroutines

When thinking about what role your robot is to play in a scenario or situation, the role is broken down into a series of actions. BR-1’s role is to be a host at a birthday party. This role is broken down into five states:

Image Idle

Image Traveling

Image Lighting candles

Image Waiting

Image Removing dishes

This can be broken down into a series of actions or tasks:

1. Wait until singing begins.

Image Travel to birthday cake table.

Image Light the candles on the cake.

Image Travel to the original location.

2. Wait until party is over.

Image Remove dishes from cake table.

Image Travel back to original location.

These are short descriptions of tasks. Each task can be further broken down into a series of steps or subroutines. “Lighting candles” is a composite state that is broken down into other substates:

Image Locating wick

Image Igniting wick

Actually, “Remove dishes from cake table” and “Travel back to original location” should also be broken down into subroutines. Removing dishes from the cake table requires subroutines for positioning the robot arm to remove each plate and cup, and traveling requires subroutines for rotating the motors.

Figure 3.11 shows the flowcharting for LightingCandles and its subroutines LocatingWick and IgnitingWick.

Image

Figure 3.11 Flowcharting for LightingCandles and its subroutines LocatingWick and IgnitingWick

A subroutine symbol is the same as a process symbol, but it contains the name of the subroutine with a vertical line on each side of the name. The name of a subroutine can be a phrase that describes the purpose of the subroutine.

Flowcharts are then developed for those subroutines. What’s great about using subroutines is that the details don’t have to be figured out immediately. Figuring out how the robot will perform a task can be put off for a while. The highest-level processes can be worked out first, and the actions/tasks can be broken down later.

A subroutine can be identified and generalized from similar steps used at different places in the robot’s process. Instead of repeating a series of steps or developing different subroutines, the process can be generalized and placed in one subroutine that is called when needed. For example, the traveling procedure started out as a series of steps for BR-1 to travel to the cake table (TableTravel) and then a series of steps to travel back to its original location (OriginTravel). These are the same tasks with different starting and ending locations. Instead of two separate subroutines, a single Travel subroutine that takes the robot’s current and final locations can be used.
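A generalized Travel subroutine might look like the following sketch. The path planner and motor calls are hypothetical placeholders; the point is that TableTravel and OriginTravel collapse into two calls to the same routine.

```python
origin = (50, 50)        # BR-1's original location (illustrative coordinates)
cake_table = (150, 200)  # location of the cake table

def plan_path(start, goal):      # hypothetical path planner
    return [start, goal]

def move_to(waypoint):           # hypothetical motor control
    print("moving to", waypoint)

def travel(current_location, final_location):
    """Generalized travel subroutine replacing TableTravel and OriginTravel."""
    for waypoint in plan_path(current_location, final_location):
        move_to(waypoint)

travel(origin, cake_table)   # TableTravel
travel(cake_table, origin)   # OriginTravel
```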

Statecharts for Robots and Objects

A statechart is one way to visualize a state machine.


Image Note

A state machine models the behavior of a single robot or object in an environment. The states are the transformations the robot or object goes through when something happens.


For example, a “change of state” can be as simple as a change of location. When the robot travels from its initial location to the location next to the table, this is a change of the robot’s state. Another example is that the birthday candles change from an unlit state to a lit state. The state machine captures the events, transformations, and responses. A statechart is a diagram of these activities. The statechart is used to capture the possible situations for that object in that scenario. As you recall from Chapter 2, a situation is a snapshot of an event in the scenario. Possible situations for the BR-1 are

Image Situation 1: BR-1 waiting for signal to move to new location

Image Situation 2: BR-1 traveling to cake table

Image Situation 3: BR-1 next to cake on a table with candles that have not yet been lit

Image Situation 4: BR-1 positioning the lighter over the candles, and so on

All these situations represent changes in the state of the robot. Changes in the state of the robot or object take place when something happens, an event. That event can be a signal, the result of an operation, or just the passing of time. When an event happens, some activity happens, depending on the current state of the object. The current state determines what is possible.

The event works as a trigger or stimulus causing a condition in which a change of state can occur. This change from one state to another is called a transition. The object is transitioning from stateA, the source state, to stateB, the target state. Figure 3.12 shows a simple state machine for BR-1.

Image

Figure 3.12 State machine for BR-1

Figure 3.12 shows two states for BR-1: Idle and Traveling. When BR-1 is in the Idle state, it is waiting for an event to occur. This event is a signal that contains the new location for the robot. Once the robot receives this signal, it transitions from the Idle to the Traveling state. BR-1 continues to travel until it reaches its target location. Once the target is reached, the robot transitions from the Traveling state back to the Idle state. Signals, actions, and activities may be performed or controlled by the object or by outside forces. For example, the new location will not be generated by BR-1 but by another agent. BR-1 does have the capability to check its location while traveling.
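One way to realize this two-state machine in code is sketched below. The event names and location handling are illustrative; they are not BR-1’s actual interface.

```python
class BR1StateMachine:
    """Minimal sketch of the Idle/Traveling state machine in Figure 3.12."""

    def __init__(self, location=(0, 0)):
        self.state = "Idle"
        self.location = location
        self.target = None

    def on_new_location(self, target):
        # Event: a signal containing the new location arrives while Idle
        if self.state == "Idle":
            self.target = target
            self.state = "Traveling"

    def on_position_update(self, position):
        # While Traveling, BR-1 checks its location against the target
        self.location = position
        if self.state == "Traveling" and position == self.target:
            self.state = "Idle"    # target reached: transition back to Idle

br1 = BR1StateMachine()
br1.on_new_location((150, 200))
br1.on_position_update((150, 200))
print(br1.state)   # back to Idle after reaching the target
```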

Developing a Statechart

As discussed earlier, a state is a condition or situation of an object that represents a transformation during the life of the object. A state machine shows states and the transitions between states. There are many ways to represent a state machine. In this book, we represent a state machine as a UML (Unified Modeling Language) statechart. Statecharts have additional notations for events, actions, conditions, parts of transitions, and types and parts of states.


Image Note

In a statechart, the nodes are states and the arcs are transitions. The states are represented as circles or as rounded-corner rectangles in which the name of the state is displayed. The transitions are arcs that connect the source and the target state with an arrow pointing to the target state.


There are three types of states:

Image Initial: The default starting point for the state machine. It is a solid black dot with a transition to the first state of the machine.

Image Final: The ending state, meaning the object has reached the end of its lifetime. It is represented as a solid dot inside a circle.

Image Composite state and substate: A substate is a state contained inside another state; the containing state is called a superstate or composite state.

States have different parts. Table 3.5 lists the parts of states with a brief description. A state node displaying its name can also display the parts listed in this table. These parts can be used to represent processing that occurs when the object transitions to the new state. There may be actions to take as soon as the object enters and leaves the state. There may be actions that have to be taken while the object is in a particular state. All this can be noted in the statechart.

Image

Table 3.5 Parts of a State

Figure 3.13 shows a state node and format for actions, activities, and internal transition statements.

Image

Figure 3.13 A state node and the format of statements

The entry and exit action statements have this format:

Image Entry/action or activity

Image Exit/action or activity

This is an example of an entry and exit action statement for a state called Validating:

Image entry action: entry / validate(data)

Image exit action: exit / send(data)

Upon entering the Validating state, the validate(data) function is called. Upon exiting this state, the exit action send(data) is called.

Internal transitions occur inside the state. They are events that take place after the entry actions and before the exit actions, if there are any. Self-transitions are different from internal transitions. With a self-transition, the entry and exit actions are performed: the state is left and the exit action is performed; then the same state is reentered and the entry action is performed. The action of the self-transition is performed after the exit action and before the entry action. Self-transitions are represented as a directed line that loops and points back to the same state.

An internal or self-transition statement has this format:

Image Name/action or function

For example:

Image do / createChart(data)

“do” is the label for the activity; the function createChart(data) is executed.

A transition, the relationship between two states, has several parts. We know that triggers cause transitions to occur, and actions can be coupled with triggers. A met condition can also cause a transition. Table 3.6 lists the parts of a transition.

Image

Table 3.6 Parts of a Transition

An event trigger has a format similar to a state action statement:

Image Name/action or function

Image name [Guard] / action or function


Image Note

A guard condition is a boolean value or expression that evaluates to True or False. It is enclosed in square brackets. The guard condition has to be met for the function to execute. It can be used in a state or transition statement.


For example, for the internal transition statement, a guarded condition can be added:

Image do [Validated] / createChart(data)
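In code, a guard condition is simply a boolean test that must pass before the action runs. This is a hypothetical translation of the statement above; create_chart and the data are placeholders.

```python
validated = True            # the guard condition [Validated]

def create_chart(data):     # hypothetical action
    print("chart created from", data)

data = [1, 2, 3]
if validated:               # do [Validated] / createChart(data)
    create_chart(data)
```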

Figure 3.14 is the statechart for BR-1.

Image

Figure 3.14 Statechart for BR-1

There are five states: Idle, Traveling, Lighting candles, Waiting, and Removing dishes. When transitioning from Idle to Traveling, BR-1 gets the new location and knows its mission:

Image do [GetPosition] / setMission()


Image Note

Validated is a boolean value. It is a condition that has to be met for createChart() to execute.


There are two transitions from the Traveling state:

Image Traveling to Lighting candles

Image Traveling to Removing dishes

Traveling transitions to LightingCandles when the target is reached and the Mission is candles. Traveling transitions to RemovingDishes when the target is reached and the Mission is dishes. To transition from LightingCandles, the “candles” mission must be complete. To transition from RemovingDishes to the final state, all missions must be completed.

LightingCandles is a composite state that contains two substates: LocatingWick and IgnitingWick. Upon entering the Lighting candles state, the boolean value Singing is evaluated. If there is singing, then the candles are to be lit. First the wick has to be located, then the arm is moved to that location, and finally the wick can be lit. In the LocatingWick state, the entry action evaluates the expression:

Image Candles > 0

If True, the state exits when the position of the first or next candle is retrieved and then the robot arm is moved to the position (Pos).

The position of the wick is retrieved, so BR-1 transitions to IgnitingWick. Upon entry, the lighter is checked to see whether it is lit. If it is lit, the candle wick is ignited (an internal transition). To exit this state, if Candles > 0, the LocatingWick state is reentered. If Candles = 0, BR-1 transitions to the Waiting state. BR-1 waits until the party is over; then BR-1 can remove all the dishes.

In the Waiting state, there is a self-transition, PartyNotOver. Remember, with a self-transition, the exit and entry actions are performed as the state is exited and reentered. In this case, there are no exit actions, but there is an entry action, “wait 5 minutes.” The guard condition PartyNotOver is checked. If the party is not over, the state is reentered and the entry action is executed; BR-1 waits for 5 minutes. Once the party is over, BR-1 transitions to RemovingDishes. This is the last state. If the boolean value AllMissionsCompleted is True, BR-1 transitions to the final state. But some objects may not have a final state and work continuously.

Statecharts are good for working out the changing aspects of an object during its lifetime. They show the flow of control from one state of the object to another.

What’s Ahead?

In Chapter 4, “Checking the Actual Capabilities of Your Robot,” we discuss what your robot is capable of doing. This means checking the capabilities and limits of the microcontroller, sensors, motors, and end-effectors of the robots.
