# 3.1. Modeling the State of the Vacuum Cleaning Robot

We introduce discrete states as an abstraction of navigating through continuous spaces.

In this section, we describe the state space for our vacuum cleaning robot.
Like the trash sorting robot, the vacuum cleaning robot has a discrete state space. However, unlike the trash sorting robot, the state at the current time depends on the state at the previous time. To model this, we introduce the notion of a **discrete-time system**, which we will use extensively throughout this book. In the next section we will then see how we can use actions to (more or less) control the state of the robot over time.

To make our vacuum-cleaning robot example a bit more realistic, we will also make it clear that transitions between arbitrary states are not possible: teleportation for mobile robots has unfortunately not been invented yet! We will use a *graph* to encode this aspect of the state space for the vacuum cleaning robot, which establishes a deep connection with the notion of graph search in classical AI. In Section 3.5 we will use not plain graph search but a probabilistic version of it to plan in such state spaces.

## 3.1.1. Defining the Robot’s State

Just rooms, states are…

The representation of a robot’s state should include all information that is necessary for the robot to act effectively in its environment to achieve its goals. For a vacuum cleaning robot, this might include the exact location of the robot (e.g., x-y coordinates in a map of the house), the heading direction of the robot, whether the floor is carpeted, or the location of any pets or children that might be moving throughout the house.

In this chapter, we will abstract away many of these details by making the following assumptions:

- The robot can move in any direction, which means that its heading angle is not important.

- The robot is equipped with low-level navigation software that will allow it to move from one room to another, through doorways, etc. (though not with 100% reliability).

- The robot is equipped with path planning software to clean the floor in a particular room (e.g., execute random motions, or follow a boustrophedon path).

- The robot is equipped with collision-avoidance software, so that it need not worry about the presence of obstacles (e.g., furniture, small dogs, or children).

Taken together, these assumptions allow us to abstract away all of the geometry of the vacuum cleaning problem,
and to *define the state of the robot as the room* of the house in which it is located.

As a running example, we will consider a simple house floor plan that includes five rooms: the living room, a kitchen, an office, a hallway, and the dining room. The floor plan is illustrated below, in Figure 3.1.

As we did for the trash sorting robot, we define the state to be a discrete random variable with a prior probability distribution.

In this chapter, we will consider the case of **discrete time systems**.
In particular, rather than considering the time as a continuous variable, we will consider the system evolution
at a set of discrete moments in time, \(t \in \{ t_1, t_2 \dots \}\).
Hence, we can refer to time using the index \(k \in \{ 1, 2 \dots \}\), knowing that each \(k\) corresponds to a known time stamp \(t_k\), and we denote the random state at time \(t_k\) by \(X_k\).
This approach is appropriate for systems whose state changes qualitatively at distinct
moments of time (e.g., as when the robot moves from the living room to the kitchen),
rather than evolving continuously as time passes (e.g., the position of a robot that rolls along the floor).
While the physical vacuum cleaning robot rolls along the floor to reach its destination,
our representation of state includes only the room in which the robot is currently located.
Therefore, the only interesting moments in time are when the robot traverses from one room to the next, or, as we will see later in this chapter, when the robot executes an action or takes a sensor measurement.

Since the robot will execute a sequence of actions over time, our prior distribution \(P(X_1)\) will only encode knowledge about the **initial state** \(X_1\) of the robot, and the probability distribution for future states must be determined using
probabilistic inference.
In this chapter, we assume that the robot *always* starts out in the office, where its charging station is located,
i.e., \(P(X_1 = \text{Office}) = 1.\)

The following code defines a variable and encodes this prior probability distribution, which we can then pretty-print:

```
# vacuum.rooms = ["Living Room", "Kitchen", "Office", "Hallway", "Dining Room"]
X = VARIABLES.discrete_series("X", [1], vacuum.rooms)
prior = gtsam.DiscreteDistribution(X[1], "0/0/1/0/0")
pretty(prior)
```

*P(X1):*

| X1 | value |
|---|---|
| Living Room | 0 |
| Kitchen | 0 |
| Office | 1 |
| Hallway | 0 |
| Dining Room | 0 |

We use this discrete distribution to convey our knowledge that the robot *always* starts out in the office.
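To make the spec-string notation concrete, here is a plain-Python sketch (not using GTSAM) of how the `"0/0/1/0/0"` string lines up with the room names; the room list mirrors `vacuum.rooms` from the code above, and GTSAM likewise normalizes the given ratios into a distribution.

```
# Plain-Python sketch (not GTSAM) of how the spec string "0/0/1/0/0"
# lines up with the room names, in the same order as vacuum.rooms.
rooms = ["Living Room", "Kitchen", "Office", "Hallway", "Dining Room"]
weights = [float(w) for w in "0/0/1/0/0".split("/")]
prior_pmf = {room: w / sum(weights) for room, w in zip(rooms, weights)}
print(prior_pmf["Office"])  # 1.0
```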

## 3.1.2. The State Space for our Vacuum Cleaning Robot

Connected rooms, un-directionally.

For the trash sorting robot, the state was defined as the category of the current item in the work space. This category had no dependence on the category of the previous piece of trash, nor did it have any effect on the category of the subsequent piece of trash. There were no constraints on the transition from the state at time \(t_k\) to the state at time \(t_{k+1}\). Therefore, it was sufficient to merely enumerate the set of possible states; there were no important relationships between states that required representation.

This is not the case for our vacuum cleaning robot.
For example, as can be seen from
Figure 3.1,
if the robot is currently in the office, it cannot transition directly to the living room;
it must first transition to the hallway before it can transition to the living room.
For the vacuum cleaning robot, room adjacency is an important relationship, and therefore
it should be encoded into our representation.
This can be accomplished using a connectivity graph.
Each vertex of this graph represents a state (i.e., a specific room in the house), and two
vertices, \(x_i, x_j\) are connected by an edge if and only if the transition between
these two states is possible. Since our robot can move in any direction, if it
can transition from \(x_i\) to \(x_j\), then it can also transition from \(x_j\) to \(x_i\)
(as is the case, e.g., for \(x_i = \text{Hallway}\) and \(x_j = \text{Dining Room}\)).
Therefore, we represent the state space by an *undirected* graph.
Taken together, the collection of states along with connectivity information is referred to as the **state space**,
which in this case can be represented by the connectivity graph shown in
Figure 3.2.
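Such a connectivity graph can be sketched as an adjacency map in plain Python. Note that only the Office-Hallway, Hallway-Living Room, and Hallway-Dining Room edges follow from the text; the Kitchen edge below is a placeholder assumption, so consult Figure 3.2 for the actual graph.

```
# Undirected connectivity graph as an adjacency map. The first three
# edges follow from the text; the Kitchen edge is an assumed placeholder.
edges = [
    ("Office", "Hallway"),
    ("Hallway", "Living Room"),
    ("Hallway", "Dining Room"),
    ("Living Room", "Kitchen"),  # assumption, see Figure 3.2
]
neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)  # undirected: add both directions

# The robot cannot move directly from the Office to the Living Room:
print("Living Room" in neighbors["Office"])  # False
```

An adjacency map like this is all that graph search, or the probabilistic planning of Section 3.5, needs in order to enumerate the legal transitions from each state.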

## 3.1.3. Bayesian vs. Frequentist Interpretations of Probability

Believing with probabilities.

In this book, we take a **Bayesian view** of probability, rather than a frequentist one.
This means that we view probabilities as describing our *knowledge* about
events, rather than tallying up the frequencies by which they occur.
For example, think of the weather forecaster talking about the probability of rain tomorrow: this represents a belief about a future state, rather than statistics about previous rainy days.
Probabilities viewed this way can be used to describe knowledge about
the state of the world, and how actions affect the state of an agent and
the world.

This is to be contrasted with a **frequentist view**, where probabilities
are used to describe the frequencies of events in a series of repeated
trials. A Bayesian, instead, might quantify belief about an
event that has not even happened yet, let alone multiple times. Of
course, in most cases this belief is based on experience, i.e., lots of
repeated events in the past, and so it can be seen that perhaps these
views are not so different after all.

In this chapter we will get comfortable with the Bayesian view and learn how to calculate with such probabilities. This will allow us to fuse knowledge from different sources, e.g., how uncertain actions and noisy measurements can nevertheless yield enough information about our current state to make reasonable decisions.

## 3.1.4. GTSAM 101

The code above should feel familiar, but we used a new `Variables` method called `discrete_series` to define a *time series* of state variables. The signature of this method is

```
def discrete_series(self, character: str, indices: Iterable[int],
                    domain: List[str]) -> Dict[int, DiscreteKey]:
    """Create several discrete variables with Symbol names.

    Args:
        character (str): a single character.
        indices (Iterable[int]): a set of integer indices.
        domain (List[str]): names for the different values.

    Returns:
        Dict[int, DiscreteKey], i.e., [(gtsam.Key, cardinality)]
    """
```

For example, the following creates a series of three state variables:

```
states = VARIABLES.discrete_series('X', [1, 2, 3], vacuum.rooms)
```

```
print(states)
```

```
{1: (6341068275337658369, 5), 2: (6341068275337658370, 5), 3: (6341068275337658371, 5)}
```

When we print the results, we see that we now get a dictionary of DiscreteKeys, i.e., integer tuples of the form *(Key, cardinality)*. However, the “Keys” now seem to be very large integers. This is because for series of variables we use the `gtsam.Symbol` type, composed of a single character and an integer index:

```
symbol = gtsam.Symbol('X', 1)
print(symbol.string())
```

```
X1
```

GTSAM internally stores symbols as a 64-bit integer key, with the 8-bit character in the most significant bits, which explains the large integer value:

```
print(symbol.key())
```

```
6341068275337658369
```

You can see that this corresponds to the first state above. However, as before, pretty printing translates these into nicer-looking strings wherever it matters.
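As a quick sanity check on this encoding, the key can be reproduced with plain integer arithmetic, assuming the 8-bit character occupies the most significant byte of the 64-bit key and the index the remaining 56 bits:

```
# Reproduce the 64-bit Symbol key: 8-bit character in the most
# significant byte, 56-bit index in the remainder.
def symbol_key(character, index):
    return (ord(character) << 56) | index

print(symbol_key('X', 1))  # 6341068275337658369, matching symbol.key() above
```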