Intelligent Mobile Robots
I followed the course Intelligent Mobile Robots at De Montfort University as part of my Computational Intelligence and Robotics education. To revise the material and to share some of it, a brief summary is provided on this page.
Introduction
The course covers:
- Analysis and evaluation of a wide range of sensor processing, architectures and control mechanisms for mobile robots.
- Various approaches to incorporating intelligence in mobile robot behaviour.
- Critical evaluation of the most important paradigms currently proposed for mobile robot behaviour, including the ability to reason.
The Basics
ActivMedia Pioneers
- Industrial-quality robots
- Indoor and outdoor models
- Both internal and external sensors
- Several add-ons and expansion modules
- Integrated onboard computers in addition to a microcontroller
The basic components of a robot are sensors and control (and possibly actuators).
The subsumption architecture is a reactive robot architecture often applied within the field of behaviour based robotics.
A subsumption architecture is a way of decomposing complicated intelligent behaviour into many "simple" behaviour modules, which are in turn organized into layers. Each layer implements a particular goal of the agent, and higher layers are increasingly more abstract. Each layer's goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer.
The main advantages of the methodology are:
- the modularity,
- the emphasis on iterative development & testing of real-time systems in their target domain, and
- the emphasis on connecting limited, task-specific perception directly to the expressed actions that require it.
These innovations allowed the development of the first robots capable of animal-like speeds.
Main disadvantages of this model are:
- the inability to have many layers, since the goals begin interfering with each other,
- the difficulty of designing action selection through a highly distributed system of inhibition and suppression, and
- the consequent rather low flexibility at runtime.
(Wikipedia, 2008)
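As a rough, hypothetical illustration of the layered idea (a minimal sketch in Python, not Brooks' original formulation), the snippet below arbitrates between two behaviour modules: a higher layer that, when it produces a command, subsumes the output of the lowest obstacle-avoidance layer. All sensor names and thresholds are made up.

    # Minimal sketch of a subsumption-style arbiter (illustrative only).
    # Layers are ordered from lowest (0) to highest; a higher layer that
    # produces a command subsumes (overrides) the output of the layers below it.

    def avoid_obstacles(sensors):
        """Lowest layer: turn away if something is too close."""
        if sensors["front_range"] < 0.3:       # metres, hypothetical reading
            return ("turn", 90)                 # turn away from the obstacle
        return ("forward", 0.2)                 # otherwise creep forward

    def seek_food(sensors):
        """Higher layer: head towards a detected food source, if any."""
        if sensors.get("food_bearing") is not None:
            return ("turn", sensors["food_bearing"])
        return None                             # no opinion -> do not subsume

    LAYERS = [avoid_obstacles, seek_food]       # index = priority (low to high)

    def arbitrate(sensors):
        command = None
        for layer in LAYERS:                    # evaluate lowest to highest
            output = layer(sensors)
            if output is not None:              # an active higher layer subsumes
                command = output                # the commands produced below it
        return command

    print(arbitrate({"front_range": 1.0, "food_bearing": 45}))   # ('turn', 45)
    print(arbitrate({"front_range": 0.1}))                        # ('turn', 90)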
A Mobile Robot consists of four subsystems, namely: Locomotion, Sensing, Reasoning and Communication.
Locomotion
Locomotion is the process of causing an autonomous robot or vehicle to move. Dynamics is the study of motion in which the causes of motion (forces) are modelled; kinematics is the study of the mathematics of motion without considering its causes. An understanding of dynamics is important for programmers and engineers, while an understanding of kinematics is important for engineering robots. Mobile robots can be categorized in relation to locomotion by their application domains: terrestrial (wheeled and walking robots), aquatic (water robots), airborne (mimicking aircraft or birds) and space (operation in outer-space microgravity).
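As a small kinematics example, the sketch below integrates the standard differential-drive (unicycle) kinematic model in Python; the wheel radius, track width and time step are made-up values.

    import math

    def diff_drive_step(x, y, theta, w_l, w_r, r=0.05, track=0.3, dt=0.1):
        """One Euler-integration step of the differential-drive kinematic model.

        x, y, theta : current pose (m, m, rad)
        w_l, w_r    : left/right wheel angular velocities (rad/s)
        r           : wheel radius (m); track : distance between the wheels (m)
        """
        v = r * (w_r + w_l) / 2.0          # forward velocity of the robot
        omega = r * (w_r - w_l) / track    # rotational velocity
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        return x, y, theta

    # Drive with both wheels at equal speed: the robot moves straight ahead.
    pose = (0.0, 0.0, 0.0)
    for _ in range(10):
        pose = diff_drive_step(*pose, w_l=2.0, w_r=2.0)
    print(pose)   # roughly (0.1, 0.0, 0.0)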
Sensing
Sensing is about measuring properties of the robot itself and of its surrounding environment. Sensors can be categorized into internal and external sensors. They can also be categorized by sensing type and capabilities, e.g. visual and nonvisual sensors.
Reasoning
Reasoning is the robot's first step towards acting within the environment. It is also the next step after data fusion. Reasoning involves representation and inferencing. Representation includes: actions and plans, environment and space, knowledge and beliefs. Localization is a particular type of reasoning about the robot itself.
Communication
Communication is about how the robot may communicate with an outside operator or with other robots. Communications can be categorised as tethered or untethered. Tethered communication devices limit mobility and range, and take time to set up and maintain.
Sensors
Sensors are highly important for robotics because they are the main source of information about the external environment (and the internal state).
Sensors can be categorized as visual or nonvisual, internal (proprioceptive) or external, and active or passive. External sensors can be divided into contact sensors (like touch) and noncontact sensors (e.g. a camera). Active sensors emit energy into the environment in order to obtain information, whereas passive sensors just read data from the environment.
Based on the data returned, sensors can be: (1) range sensors, (2) absolute position sensors, (3) environmental sensors or (4) inertial sensors.
The process of sensing (usually) involves the following flow: information in the environment → sensor/transducer → observation or image → perceptual schema → percept → motor schema → robot action, which in turn influences the environment again.
The relationship between a physical property e of interest in the environment and the sensor reading r is described by the sensor model [pmath]r = f(e)[/pmath]. The sensor model should include a model for dealing with internal and external noise. Sensor models reduce the modality or dimensionality of the environment, so recovering e from r is an inverse problem.
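To make the sensor model concrete, here is a small hypothetical example in Python: a range sensor whose reading r = f(e) is the true distance corrupted by Gaussian noise, and a crude solution to the inverse problem that estimates e by averaging repeated readings.

    import random

    def sensor_model(true_distance, noise_std=0.05):
        """r = f(e): map the true distance e to a noisy reading r."""
        return true_distance + random.gauss(0.0, noise_std)

    def estimate_distance(true_distance, n_readings=50):
        """Crude inverse problem: recover e from repeated noisy readings."""
        readings = [sensor_model(true_distance) for _ in range(n_readings)]
        return sum(readings) / len(readings)

    print(estimate_distance(2.0))   # close to 2.0, but never exact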
Important characteristics of sensors are: speed of operation, cost, error rate, robustness, computational requirements and power, weight and size requirements.
Real sensors are noisy, real sensors return an incomplete description of the environment, and real sensors cannot usually be modelled completely.
Inertial sensors, like gyroscopes and accelerometers, measure derivatives of the robot's position.
Visual sensors are based on light. They consist of two main components, namely an optical system (which focuses the light) and an optoelectrical system (which transforms the light into a digital/electrical signal). Two important aspects of vision systems are feature extraction and obtaining depth.
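As a small feature-extraction example (edge detection also comes up under data fusion below), the sketch applies the standard Sobel operator to a tiny synthetic image using NumPy; it is illustrative only, not a complete vision pipeline.

    import numpy as np

    # Standard Sobel kernels for horizontal and vertical intensity gradients.
    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def sobel_edges(image):
        """Return the gradient magnitude of a 2-D grayscale image."""
        h, w = image.shape
        mag = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = image[i:i + 3, j:j + 3]
                gx = np.sum(SOBEL_X * patch)
                gy = np.sum(SOBEL_Y * patch)
                mag[i, j] = np.hypot(gx, gy)
        return mag

    # Synthetic image: dark left half, bright right half -> a vertical edge.
    img = np.zeros((6, 6))
    img[:, 3:] = 1.0
    print(sobel_edges(img))   # large values in the columns around the edge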
Data Fusion
Data fusion is generally defined as the use of techniques that combine data from multiple sources and gather that information in order to achieve inferences, which will be more efficient than if they were achieved by means of a single source. (Wikipedia, 2008)
Data fusion is important for the interpretation of the sensor readings. The interpretation enables the robot to understand the environment in order to plan, navigate, react or interact with that environment. Data fusion is related to perception.
A robot usually has more than one sensor. The sensors can be of one type (single modality) or of different types (multi-modality). The collation of the data coming from the robot's sensors is called data fusion. Data fusion has to be accomplished within three domains:
- sensor interpretation: collation of measurements from different sensors.
- representing space: collation of measurements from different positions.
- temporal logic and reasoning: collation of measurements from different times.
In the processes of knowledge representation and reasoning these three domains can be collated together. Some data needs preprocessing (which is part of data fusion) before it can be used in the actual fusion (e.g. edge detection in vision systems).
Related topics: state-space models, Kalman filtering and mapping.
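As a hedged illustration of the first two topics, the sketch below runs a one-dimensional (scalar) Kalman filter on noisy range readings; the full state-space and extended (EKF) versions used for pose estimation follow the same predict/update pattern with matrices. The noise variances and readings are invented.

    import random

    def kalman_1d(measurements, process_var=1e-4, meas_var=0.04,
                  x0=0.0, p0=1.0):
        """Scalar Kalman filter: estimate a (nearly) constant quantity
        from noisy measurements. Returns the sequence of estimates."""
        x, p = x0, p0                     # state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + process_var           # predict: uncertainty grows slightly
            k = p / (p + meas_var)        # update: Kalman gain
            x = x + k * (z - x)           # correct the estimate with the residual
            p = (1.0 - k) * p             # shrink the uncertainty
            estimates.append(x)
        return estimates

    true_distance = 2.0
    zs = [true_distance + random.gauss(0, 0.2) for _ in range(30)]
    print(kalman_1d(zs)[-1])   # converges towards 2.0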
Knowledge Representation
There are representation techniques such as frames, rules and semantic networks, which have originated from theories of human information processing. Since knowledge is used to achieve intelligent behaviour, the fundamental goal of knowledge representation is to represent knowledge in such a manner as to facilitate inferencing (i.e. drawing conclusions) from it.
Behaviour Representation
Situation calculus is the formal basis of most symbolic AI planners. It suffers from the frame problem (which conditions are not affected by an action?) and the qualification problem.
STRIPS is a practical implementation derived from situation calculus. STRIPS operators take the following format: Operator name, Pre-conditions, Delete list, Add list. Delete and Add lists are replaced with post-conditions in more recent STRIPS implementations.
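A hypothetical operator in that format, written as a small Python structure with an applicability check and state update (the predicate and room names are made up):

    # A STRIPS-style operator: name, pre-conditions, delete list, add list.
    move_op = {
        "name":   "move(robot, roomA, roomB)",
        "pre":    {"at(robot, roomA)", "connected(roomA, roomB)"},
        "delete": {"at(robot, roomA)"},
        "add":    {"at(robot, roomB)"},
    }

    def applicable(op, state):
        """An operator is applicable when all its pre-conditions hold."""
        return op["pre"] <= state

    def apply_op(op, state):
        """Apply the operator: remove the delete list, add the add list."""
        return (state - op["delete"]) | op["add"]

    state = {"at(robot, roomA)", "connected(roomA, roomB)"}
    if applicable(move_op, state):
        state = apply_op(move_op, state)
    print(state)   # now contains at(robot, roomB) instead of at(robot, roomA)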
Mapping and Localization
Planning and Control
In artificial intelligence, STRIPS (Stanford Research Institute Problem Solver) is an automated planner developed by Richard Fikes and Nils Nilsson in 1971. The same name was later used to refer to the formal language of the inputs to this planner. This language is the base for most of the languages for expressing automated planning problem instances in use today. (Wikipedia, 2008)
Navigation and Path Planning
Pose Maintenance
Servoing is based on using sensor readings to relate the robot to a target state (goal).
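A minimal sketch of the idea, assuming a simple proportional controller that servos the robot's heading towards a goal bearing; the gain, time step and units are made up.

    def servo_heading(current, goal, gain=0.5):
        """Proportional servoing: steer in proportion to the heading error."""
        error = goal - current
        return gain * error              # commanded turn rate

    heading = 0.0
    for step in range(50):
        heading += servo_heading(heading, goal=90.0) * 0.2   # dt = 0.2 s
    print(round(heading, 1))   # about 89.5, converging on the goal of 90 degrees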
Adaptive Localization at Multiple Scales (ALMS) is a process whereby a robot estimates its pose with respect to a mapped environment. ALMS is an example of Extended Kalman Filtering (EKF).
Machine Learning
Reinforcement Learning
Reinforcement Learning is a machine learning technique that is closely associated with robotics because it relies on sensing the environment and learning control strategies to deal with the different states the robot may occupy.
The Markov Decision Process (MDP) is the formalism that underlies reinforcement learning: a set of states, actions, transition probabilities and rewards.
Q-learning is a reinforcement learning technique that works by learning an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter. A strength with Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. (Wikipedia, 2008)
In many learning applications the goal is to establish a control policy that maps a discrete input space onto a discrete output space such that maximum cumulative reinforcement (reward) is obtained. Q-learning is one mechanism that is applicable to such learning situations.
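A minimal tabular Q-learning sketch for a toy corridor world (the environment, reward and parameters are invented for illustration):

    import random

    N_STATES = 5                 # corridor cells 0..4; cell 4 is the goal
    ACTIONS = (-1, +1)           # move left or right
    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Toy environment: move along the corridor, reward 1.0 at the goal."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

    def choose_action(s):
        """Epsilon-greedy selection with random tie-breaking."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            a = choose_action(s)
            nxt, reward = step(s, a)
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            # Q-learning update: nudge Q(s, a) towards reward + gamma * max_b Q(s', b)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = nxt

    # The learned greedy policy should print [1, 1, 1, 1]: always move right.
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])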
Adaptive Systems and Feedback Controllers
Communication
Multiple Robots
Cooperation between distributed robotic systems
Applications of Mobile Robots
The Future of Mobile Robotics
The future is uncertain 😉
More computing resources, higher autonomy
Important Topics
Locomotion, Sensors (non-visual, visual; data fusion), Knowledge Representation (Mapping, Pose Maintenance), Control and Planning (Machine Learning and Reinforcement Learning, Neural Networks and Genetic Algorithms), Communication and Applications.
- Landmark techniques, topological mapping, localization
- Pose maintenance: dead reckoning, servo control
- State-space models
- Locomotion, sensors, control and communications for swarm robotics
- Usage, advantages and disadvantages of STRIPS for planning and control
- Potential fields for navigation and path planning
- Q-learning
- Data fusion, mapping
- Neural network control for obstacle avoidance
- Complex environments
- Spatial occupancy representations and sensorial maps
- Techniques for pose maintenance
- Terminology: visual and non-visual sensors, mechatronics, manipulators
- Knowledge representation, reasoning, inferencing, mapping
- Planning, scheduling, partial-order planning, STRIPS, UCPOP, distributed control
- Machine Learning, adaptive behaviour
- Multi-robots, robotic societies
- Cognitive robots, alternative robots, evolutionary robots
The main outline is taken from the lecture hand-outs by Alladin Ayesh; other content is taken from Wikipedia or from books (usually referenced). A useful book is Computational Principles of Mobile Robotics by Gregory Dudek and Michael Jenkin.