The bumper can determine whether a collision has occurred and can sense whether the contacted item slides (like a puck) or is fixed (like a wall).
The light sensors are ‘mounted’ higher than the pucks, but walls block light.
The walls and pucks reflect infrared radiation (IR).
The light sources hang from the ceiling.
From pg 4, Robot Programming, Joseph L. Jones, McGraw-Hill, 2004.
This is a graphical depiction of the system design, aka the architecture
Note the divisions devoted to sensing, intelligence, and actuation
Intelligence takes input from the sensors and sends its output to the actuators
The intelligence section is composed of several PRIMITIVE behaviors and an arbiter
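The primitive-behaviors-plus-arbiter structure above can be sketched in a few lines of Python. This is a hedged illustration, not the book's actual code: the behavior names (Cruise, Escape), the sensor dictionary, and the wheel-speed tuples are all hypothetical.

```python
# Minimal sketch of sensing -> intelligence (behaviors + arbiter) -> actuation.
# All names here (Cruise, Escape, Arbiter, "bump") are illustrative assumptions.

class Behavior:
    """A primitive behavior: reads sensor data and may request an action."""
    def act(self, sensors):
        """Return an actuator command (left, right wheel speeds) or None."""
        raise NotImplementedError

class Cruise(Behavior):
    # Lowest-level behavior: always wants to drive forward.
    def act(self, sensors):
        return (1.0, 1.0)

class Escape(Behavior):
    # Fires only when the bumper reports a collision.
    def act(self, sensors):
        if sensors.get("bump"):
            return (-1.0, -0.5)   # back up while turning
        return None

class Arbiter:
    """Fixed-priority arbiter: the first behavior (highest priority)
    that requests an action wins control of the actuators."""
    def __init__(self, behaviors):
        self.behaviors = behaviors      # ordered highest priority first
    def run_once(self, sensors):
        for b in self.behaviors:
            cmd = b.act(sensors)
            if cmd is not None:
                return cmd
        return (0.0, 0.0)               # no behavior fired: stop

arbiter = Arbiter([Escape(), Cruise()])
print(arbiter.run_once({"bump": False}))  # Cruise drives forward: (1.0, 1.0)
print(arbiter.run_once({"bump": True}))   # Escape overrides: (-1.0, -0.5)
```

Note that each behavior is independent and knows nothing about the others; only the arbiter's priority ordering decides who reaches the actuators.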
In a sense, this robot is an artificial creature with intelligence
It serves as a contrast between the methods and approaches of 2 branches of AI:
GOFAI and BB-AI (behavior-based AI)
AI asks “how do we accomplish tasks?” (or “do things”)
In the GOFAI view, the types of tasks are mental games or essentially cognitive in nature
The “we” is human-centric
Symbolic reasoning assumes that we know how our mind works through introspection!
Often these systems forgo perception and action, considering them add-on modules to be attached later through a clean, easy-to-use interface
When instantiating these tasks in a robot - such as the Stanford Cart or SHAKEY - researchers noted something similar to what Marvin Minsky states in his book “Society of Mind”
One can also note the relative time periods over the course of evolution and what amount was spent acquiring different skills and competence levels.
A vast proportion of time was from single cell to fish and insects (about 2.5 billion years)
Another billion or so to evolve to mammals… lots of effort to get some basic stuff right, so it formed a good foundation for further adaptation
The great apes only evolved starting 18 million years ago!
Humans around 1.5 million years ago… etc.
Note the incremental nature of sophistication and probable dependencies.
These observations about evolution and the complexity of everyday tasks (for robot builders) suggest an alternate route to constructing intelligence - one that relies on:
- Neurobiology: look at how brains, sensors, and muscles are wired up
- Ethology: look at natural animal behavior and how it achieves performance
- Forgoing introspection and relying instead on psychophysics
Build an autonomous agent - physically an autonomous mobile robot - that carries out tasks in a real, unstructured environment
Let’s consider some realizations that guide BB-AI:
From Brooks, Science, Sept 91:
The robots are situated in the world - they do not deal with abstract descriptions, but with the “here” and “now” of the environment that directly influences the behavior of the system.
Airline reservation system: situated but not embodied
Factory robot: embodied but not situated
See pg 6 of AI Memo 864, A Robust Layered Control System for a Mobile Robot
The horizontal decomposition into vertical slices forms a chain through which information flows: from the robot’s environment, via sensing, through the robot, and back to the environment via action
Each piece must have one instance built for the robot to run at all.
If you change a vertical piece, you must either preserve its interface or propagate the change, altering the functionality of its neighbours
In the ’80s, robots built with this decomposition could not deliver real-time performance in a dynamic world
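The serial chain described above can be sketched as a pipeline of functional modules. This is an illustration only (the function names and data shapes are invented): the point is that every stage must exist, and every interface is load-bearing.

```python
# Illustrative sketch of the traditional functional decomposition:
# a serial chain sense -> model -> plan -> execute. Every module must
# be built for the robot to run at all, and changing one module's
# interface forces changes on its neighbours. All names are hypothetical.

def sense(world):
    # Turn raw environment state into percepts.
    return {"readings": world}

def model(percepts):
    # Build an internal world model from percepts.
    return {"map": percepts["readings"]}

def plan(world_model):
    # Decide on a sequence of actions from the model.
    return ["go"] if world_model["map"] else ["stop"]

def execute(plan_steps):
    # Drive the actuators with the first planned step.
    return plan_steps[0]

def control_cycle(world):
    # One pass through the whole chain; information flows strictly
    # left to right, so the slowest stage bounds the reaction time.
    return execute(plan(model(sense(world))))

print(control_cycle([1, 2, 3]))   # non-empty world -> "go"
print(control_cycle([]))          # empty world -> "stop"
```

The real-time problem noted above follows directly from this shape: the robot cannot act until the entire chain has run.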
Task-based vs. sequential flow of information.
A vertical decomposition slices the problem on the basis of the desired external manifestations of the robot control system - that is, “behaviors”. Behaviors are incremental layers of competence, with upper layers depending on lower ones but all running at once,
with some scheme controlling which gets to direct the actuators (priority, arbitration, or subsumption).
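The layered-competence idea can be sketched as follows. This is a hedged toy example, not Brooks's actual subsumption machinery: both layers run every cycle, and the higher layer may subsume (override) the lower layer's output on its way to the actuators. Layer names and sensor keys are assumptions.

```python
# Toy sketch of layered competences: the lower layer always runs, and a
# higher layer can subsume its output. Names ("wander", "avoid",
# "obstacle") are illustrative, not from the source.

def wander(sensors):
    """Level 0 competence: just move around."""
    return "forward"

def avoid(sensors, lower_cmd):
    """Level 1 competence: subsume the lower layer's command when an
    obstacle is near; otherwise let it pass through unchanged."""
    if sensors["obstacle"]:
        return "turn-left"
    return lower_cmd

def step(sensors):
    # Both layers run each cycle; the higher layer gets the last word.
    cmd = wander(sensors)        # level 0 always produces a command
    cmd = avoid(sensors, cmd)    # level 1 may override it
    return cmd

print(step({"obstacle": False}))   # lower layer passes through: forward
print(step({"obstacle": True}))    # higher layer subsumes: turn-left
```

Note the contrast with the serial chain: removing the `avoid` layer still leaves a working (if less competent) robot, which is the incremental-layers property the notes describe.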
Arose from (Brooks, Science, 91, p 1228):
Realization (Agre and Chapman) that most of what people do is not problem-solving or planning but benign, routine activity in a dynamic world. The agent doesn’t have to name the objects in the world and manipulate their symbols; the objects are defined through the interactions of the agent with the world
Rosenschein and Kaelbling: you can talk legitimately about an agent’s beliefs and goals, but the agent may not be manipulating symbolic data structures at run time
Brooks: internal world models are not necessary in addition to being impossible to obtain. Coherent intelligence emerges from separable actions of an agent as they all independently interact with the world and are arbitrated by some priority scheme.
So how does one develop a BB-architecture for one’s own robot - given this compelling argument to do so:
From pg 173, Jones, “Robot Programming: A Practical Guide to BB Robotics”
From Jones, pg 130-132