I  Motivation, Approaches, and Outstanding Issues
1  Why Multiple Robots?
  1.1  Advantages
  1.2  Major Themes
  1.3  Agents and Multi-Agent Systems
  1.4  Multi-Agent Robotics
2  Toward Cooperative Control
  2.1  Cooperation-Related Research
    2.1.1  Distributed Artificial Intelligence
    2.1.2  Distributed Systems
    2.1.3  Biology
  2.2  Learning, Evolution, and Adaptation
  2.3  Design of Multi-Robot Control
3  Approaches
  3.1  Behavior-Based Robotics
  3.2  Collective Robotics
  3.3  Evolutionary Robotics
  3.4  Inspiration from Biology and Sociology
  3.5  Summary
4  Models and Techniques
  4.1  Reinforcement Learning
    4.1.1  Markov Decision Process
    4.1.2  Reinforcement Learning Algorithms
    4.1.3  Temporal Differencing Techniques
    4.1.4  Q-Learning
    4.1.5  Multi-Agent Reinforcement Learning
  4.2  Genetic Algorithms
  4.3  Artificial Life
  4.4  Artificial Immune System
  4.5  Probabilistic Modeling
  4.6  Related Work on Multi-Robot Planning and Coordination
5  Outstanding Issues
  5.1  Self-Organization
  5.2  Local vs. Global Performance
  5.3  Planning
  5.4  Multi-Robot Learning
  5.5  Coevolution
  5.6  Emergent Behavior
  5.7  Reactive vs. Symbolic Systems
  5.8  Heterogeneous vs. Homogeneous Systems
  5.9  Simulated vs. Physical Robots
  5.10  Dynamics of Multi-Agent Robotic Systems
  5.11  Case Studies in Learning
6  Multi-Agent Reinforcement Learning: Technique
  6.1  Autonomous Group Robots
    6.1.1  Overview
    6.1.2  Sensing Capability
    6.1.3  Long-Range Sensors
    6.1.4  Short-Range Sensors
    6.1.5  Stimulus Extraction
    6.1.6  Primitive Behaviors
    6.1.7  Motion Mechanism
  6.2  Formulation of Reinforcement Learning
    6.2.2  Behavior Selection Mechanism
7  Multi-Agent Reinforcement Learning: Results
  7.1  Measurements
    7.1.1  Stimulus Frequency
    7.1.2  Behavior Selection Frequency
  7.2  Group Behaviors
    7.2.1  Collective Surrounding
    7.2.2  Cooperation among Ranger Robots
      7.2.2.1  Moving away from Spatially Cluttered Locations
      7.2.2.2  Changing a Target
      7.2.2.3  Cooperatively Pushing Scattered Objects
      7.2.2.4  Collective Manipulation of Scattered Objects
    7.2.3  Concurrent Learning in Different Groups of Robots
      7.2.3.1  Concurrent Learning in Predator and Prey
      7.2.3.2  Chasing
      7.2.3.3  Escaping from a Surrounding Crowd
8  Multi-Agent Reinforcement Learning: What Matters?
  8.1  Collective Sensing
  8.2  Initial Spatial Distribution
  8.3  Inverted Sigmoid Function
  8.4  Emergence of a Periodic Motion
  8.7  Macro-Stable but Micro-Unstable Properties
  8.8  Dominant Behavior
9  Evolutionary Multi-Agent Reinforcement Learning
  9.1  Robot Group Example
    9.1.1  Target Spatial Distributions
    9.1.2  Target Motion Characteristics
    9.1.3  Behavior Learning Mechanism
  9.2  Evolving Group Motion Strategies
    9.2.1  Chromosome Representation
    9.2.2  Fitness Functions
    9.2.3  The Algorithm
    9.2.4  Parameters in the Genetic Algorithm
  9.3  Examples
  9.4  Case Studies in Adaptation
10  Coordinated Maneuvers in a Dual-Agent System
  10.1  Issues
  10.2  Dual-Agent Learning
  10.3  Specialized Roles in a Dual-Agent System
  10.4  The Basic Capabilities of the Robot Agent
  10.5  The Rationale of the Advice-Giving Agent
    10.5.1  The Basic Actions: Learning Prerequisites
    10.5.2  Genetic Programming of General Maneuvers
    10.5.3  Genetic Programming of Specialized Strategic Maneuvers
  10.6  Acquiring Complex Maneuvers
    10.6.1  Experimental Design
    10.6.2  The Complexity of Robot Environments
    10.6.3  Experimental Results
    10.6.4  Lightweight or Heavyweight Flat Posture
    10.6.5  Lightweight Curved Posture
    10.6.6  Lightweight Corner Posture
    10.6.7  Lightweight Point Posture
11  Collective Behavior
  11.1  Group Behavior
    11.1.1  What is Group Behavior?
    11.1.2  Group Behavior Learning Revisited
  11.2  The Approach
    11.2.1  The Basic Ideas
    11.2.2  Group Robots
    11.2.3  Performance Criterion for Collective Box-Pushing
    11.2.4  Evolving a Collective Box-Pushing Behavior
    11.2.5  The Remote Evolutionary Computation Agent
  11.3  Collective Box-Pushing by Applying Repulsive Forces
    11.3.1  A Model of Artificial Repulsive Forces
    11.3.2  Pushing Force and the Resulting Motion of a Box
    11.3.3  Fitness Function
    11.3.5  Task Environment
      11.3.5.2  Simulation Results
      11.3.5.3  Generation of Collective Pushing Behavior
      11.3.5.4  Adaptation to New Goals
      11.3.5.5  Discussions
  11.4  Collective Box-Pushing by Exerting External Contact Forces and Torques
    11.4.1  Interaction between Three Group Robots and a Box
    11.4.2  Case 1: Pushing a Cylindrical Box
      11.4.2.1  Pushing Position and Direction
      11.4.2.2  Pushing Force and Torque
    11.4.3  Case 2: Pushing a Cubic Box
      11.4.3.1  The Coordinate System
      11.4.3.2  Adaptation to Dynamically Changing Goals
      11.4.6.5  Convergence Analysis for the Fittest-Preserved Evolution
    11.5.1  The Transition Matrix of a Markov Chain
    11.5.2  Characterizing the Transition Matrix Using Eigenvalues
  11.6  Case Studies in Self-Organization
12  Multi-Agent Self-Organization
  12.1  Artificial Potential Field (APF)
    12.1.1  Motion Planning Based on Artificial Potential Field
    12.1.2  Collective Potential Field Map Building
  12.2  Overview of Self-Organization
  12.3  Self-Organization of a Potential Field Map
    12.3.1  Coordinate Systems for a Robot
    12.3.2  Proximity Measurements
    12.3.3  Distance Association in a Neighboring Region
    12.3.4  Incremental Self-Organization of a Potential Field Map
    12.3.5  Robot Motion Selection
      12.3.5.1  Directional1
      12.3.5.2  Directional2
      12.3.5.3  Random
  12.4  Experiment 1
    12.4.1  Experiment 2
13  Evolutionary Multi-Agent Self-Organization
  13.1  Evolution of Cooperative Motion Strategies
    13.1.1  Representation of a Proximity Stimulus
    13.1.2  Stimulus-Response Pairs
    13.1.3  Experiments
    13.2.1  Comparison with a Non-Evolutionary Mode
    13.2.3  Evolution of Group Behaviors
    13.3.2  Cooperation among Robots
  13.4  An Exploration Tool
14  Toolboxes for Multi-Agent Robotics
  14.1  Toolbox for Multi-Agent Reinforcement Learning
    14.2.1  Architecture
    14.2.2  File Structure
    14.2.3  Function Description
    14.2.4  User Configuration
    14.2.5  Data Structure
  14.3  Toolbox for Evolutionary Multi-Agent Reinforcement Learning
    14.3.1  Toolboxes for Evolutionary Collective Behavior Implementation
    14.4.1  Toolbox for Collective Box-Pushing by Artificial Repulsive Forces
      14.4.1.1  Toolbox for Implementing Cylindrical/Cubic Box-Pushing Tasks
      14.4.2.1  Toolbox for Multi-Agent Self-Organization
    14.5.1  Toolbox for Evolutionary Multi-Agent Self-Organization
    14.6.1  Example
    14.7.1  True Map Calculation
    14.7.2  Initialization
    14.7.3  Start-Up
    14.7.4  Result Display
References
Index