
Robot Modeling and Control

First Edition

Mark W. Spong, Seth Hutchinson, and M. Vidyasagar

JOHN WILEY & SONS, INC. New York / Chichester / Weinheim / Brisbane / Singapore / Toronto




1 INTRODUCTION
1.1 Mathematical Modeling of Robots
1.1.1 Symbolic Representation of Robots
1.1.2 The Configuration Space
1.1.3 The State Space
1.1.4 The Workspace
1.2 Robots as Mechanical Devices
1.2.1 Classification of Robotic Manipulators
1.2.2 Robotic Systems
1.2.3 Accuracy and Repeatability
1.2.4 Wrists and End-Effectors
1.3 Common Kinematic Arrangements of Manipulators
1.3.1 Articulated Manipulator (RRR)
1.3.2 Spherical Manipulator (RRP)
1.3.3 SCARA Manipulator (RRP)
1.3.4 Cylindrical Manipulator (RPP)
1.3.5 Cartesian Manipulator (PPP)
1.3.6 Parallel Manipulator
1.4 Outline of the Text
1.5 Chapter Summary
Problems

2 RIGID MOTIONS AND HOMOGENEOUS TRANSFORMATIONS
2.1 Representing Positions
2.2 Representing Rotations
2.2.1 Rotation in the Plane
2.2.2 Rotations in Three Dimensions
2.3 Rotational Transformations
2.3.1 Similarity Transformations
2.4 Composition of Rotations
2.4.1 Rotation with Respect to the Current Frame
2.4.2 Rotation with Respect to the Fixed Frame
2.5 Parameterizations of Rotations
2.5.1 Euler Angles
2.5.2 Roll, Pitch, Yaw Angles
2.5.3 Axis/Angle Representation
2.6 Rigid Motions
2.7 Homogeneous Transformations
2.8 Chapter Summary

3 FORWARD AND INVERSE KINEMATICS
3.1 Kinematic Chains
3.2 Forward Kinematics: The Denavit-Hartenberg Convention
3.2.1 Existence and Uniqueness Issues
3.2.2 Assigning the Coordinate Frames
3.2.3 Examples
3.3 Inverse Kinematics
3.3.1 The General Inverse Kinematics Problem
3.3.2 Kinematic Decoupling
3.3.3 Inverse Position: A Geometric Approach
3.3.4 Inverse Orientation
3.3.5 Examples
3.4 Chapter Summary
3.5 Notes and References
Problems

4 VELOCITY KINEMATICS – THE MANIPULATOR JACOBIAN
4.1 Angular Velocity: The Fixed Axis Case
4.2 Skew Symmetric Matrices
4.2.1 Properties of Skew Symmetric Matrices
4.2.2 The Derivative of a Rotation Matrix
4.3 Angular Velocity: The General Case
4.4 Addition of Angular Velocities
4.5 Linear Velocity of a Point Attached to a Moving Frame
4.6 Derivation of the Jacobian
4.6.1 Angular Velocity
4.6.2 Linear Velocity
4.6.3 Combining the Angular and Linear Jacobians
4.7 Examples
4.8 The Analytical Jacobian
4.9 Singularities
4.9.1 Decoupling of Singularities
4.9.2 Wrist Singularities
4.9.3 Arm Singularities
4.10 Inverse Velocity and Acceleration
4.11 Manipulability
4.12 Chapter Summary
Problems

5 PATH AND TRAJECTORY PLANNING
5.1 The Configuration Space
5.2 Path Planning Using Configuration Space Potential Fields
5.2.1 The Attractive Field
5.2.2 The Repulsive Field
5.2.3 Gradient Descent Planning
5.3 Planning Using Workspace Potential Fields
5.3.1 Defining Workspace Potential Fields
5.3.2 Mapping Workspace Forces to Joint Forces and Torques
5.3.3 Motion Planning Algorithm
5.4 Using Random Motions to Escape Local Minima
5.5 Probabilistic Roadmap Methods
5.5.1 Sampling the Configuration Space
5.5.2 Connecting Pairs of Configurations
5.5.3 Enhancement
5.5.4 Path Smoothing
5.6 Trajectory Planning
5.6.1 Trajectories for Point to Point Motion
5.6.2 Trajectories for Paths Specified by Via Points
5.7 Historical Perspective
Problems

6 DYNAMICS
6.1 The Euler-Lagrange Equations
6.1.1 One Dimensional System
6.1.2 The General Case
6.2 General Expressions for Kinetic and Potential Energy
6.2.1 The Inertia Tensor
6.2.2 Kinetic Energy for an n-Link Robot
6.2.3 Potential Energy for an n-Link Robot
6.3 Equations of Motion
6.4 Some Common Configurations
6.5 Properties of Robot Dynamic Equations
6.5.1 The Skew Symmetry and Passivity Properties
6.5.2 Bounds on the Inertia Matrix
6.5.3 Linearity in the Parameters
6.6 Newton-Euler Formulation
6.7 Planar Elbow Manipulator Revisited
Problems

7 INDEPENDENT JOINT CONTROL
7.1 Introduction
7.2 Actuator Dynamics
7.3 Set-Point Tracking
7.3.1 PD Compensator
7.3.2 Performance of PD Compensators
7.3.3 PID Compensator
7.3.4 Saturation
7.4 Feedforward Control and Computed Torque
7.5 Drive Train Dynamics
7.6 State Space Design
7.6.1 State Feedback Compensator
7.6.2 Observers
Problems

8 MULTIVARIABLE CONTROL
8.1 Introduction
8.2 PD Control Revisited
8.3 Inverse Dynamics
8.3.1 Task Space Inverse Dynamics
8.4 Robust and Adaptive Motion Control
8.4.1 Robust Feedback Linearization
8.4.2 Passivity Based Robust Control
8.4.3 Passivity Based Adaptive Control
Problems

9 FORCE CONTROL
9.1 Introduction
9.2 Coordinate Frames and Constraints
9.2.1 Natural and Artificial Constraints
9.3 Network Models and Impedance
9.3.1 Impedance Operators
9.3.2 Classification of Impedance Operators
9.3.3 Thevenin and Norton Equivalents
9.4 Task Space Dynamics and Control
9.4.1 Static Force/Torque Relationships
9.4.2 Task Space Dynamics
9.4.3 Impedance Control
9.4.4 Hybrid Impedance Control
Problems


10 GEOMETRIC NONLINEAR CONTROL
10.1 Introduction
10.2 Background
10.2.1 The Frobenius Theorem
10.3 Feedback Linearization
10.4 Single-Input Systems
10.5 Feedback Linearization for n-Link Robots
10.6 Nonholonomic Systems
10.6.1 Involutivity and Holonomy
10.6.2 Driftless Control Systems
10.6.3 Examples of Nonholonomic Systems
10.7 Chow’s Theorem and Controllability of Driftless Systems
Problems

11 COMPUTER VISION
11.1 The Geometry of Image Formation
11.1.1 The Camera Coordinate Frame
11.1.2 Perspective Projection
11.1.3 The Image Plane and the Sensor Array
11.2 Camera Calibration
11.2.1 Extrinsic Camera Parameters
11.2.2 Intrinsic Camera Parameters
11.2.3 Determining the Camera Parameters
11.3 Segmentation by Thresholding
11.3.1 A Brief Statistics Review
11.3.2 Automatic Threshold Selection
11.4 Connected Components
11.5 Position and Orientation
11.5.1 Moments
11.5.2 The Centroid of an Object
11.5.3 The Orientation of an Object
Problems

12 VISION-BASED CONTROL
12.1 Approaches to Vision-Based Control
12.1.1 Where to Put the Camera
12.1.2 How to Use the Image Data
12.2 Camera Motion and the Interaction Matrix
12.2.1 Interaction Matrix vs. Image Jacobian
12.3 The Interaction Matrix for Points
12.3.1 Velocity of a Fixed Point Relative to a Moving Camera
12.3.2 Constructing the Interaction Matrix
12.3.3 Properties of the Interaction Matrix for Points
12.3.4 The Interaction Matrix for Multiple Points
12.4 Image-Based Control Laws
12.4.1 Computing Camera Motion
12.4.2 Proportional Control Schemes
12.5 The Relationship Between End Effector and Camera Motions
12.6 Partitioned Approaches
12.7 Motion Perceptibility
12.8 Chapter Summary
Problems

Appendix A Geometry and Trigonometry
A.1 Trigonometry
A.1.1 Atan2
A.1.2 Reduction Formulas
A.1.3 Double Angle Identities
A.1.4 Law of Cosines

Appendix B Linear Algebra
B.1 Differentiation of Vectors
B.2 Linear Independence
B.3 Change of Coordinates
B.4 Eigenvalues and Eigenvectors
B.5 Singular Value Decomposition (SVD)

Appendix C Lyapunov Stability
C.0.1 Quadratic Forms and Lyapunov Functions
C.0.2 Lyapunov Stability
C.0.3 Lyapunov Stability for Linear Systems
C.0.4 LaSalle’s Theorem

Appendix D State Space Theory of Dynamical Systems
D.0.5 State Space Representation of Linear Systems

References

Index


1 INTRODUCTION

Robotics is a relatively young field of modern technology that crosses traditional engineering boundaries. Understanding the complexity of robots and their applications requires knowledge of electrical engineering, mechanical engineering, systems and industrial engineering, computer science, economics, and mathematics. New disciplines of engineering, such as manufacturing engineering, applications engineering, and knowledge engineering have emerged to deal with the complexity of the field of robotics and factory automation.

This book is concerned with fundamentals of robotics, including kinematics, dynamics, motion planning, computer vision, and control. Our goal is to provide a complete introduction to the most important concepts in these subjects as applied to industrial robot manipulators, mobile robots, and other mechanical systems. A complete treatment of the discipline of robotics would require several volumes. Nevertheless, at the present time the majority of robot applications involve industrial robot arms operating in structured factory environments, so a first introduction to the subject must include a rigorous treatment of the topics in this text.

The term robot was first introduced into our vocabulary by the Czech playwright Karel Capek in his 1920 play Rossum’s Universal Robots, the word robota being the Czech word for work. Since then the term has been applied to a great variety of mechanical devices, such as teleoperators, underwater vehicles, autonomous land rovers, etc. Virtually anything that operates with some degree of autonomy, usually under computer control, has at some point been called a robot. In this text the term robot will mean a computer controlled industrial manipulator of the type shown in Figure 1.1. This type of robot is


Fig. 1.1 The ABB IRB6600 Robot. Photo courtesy of ABB.

essentially a mechanical arm operating under computer control. Such devices, though far from the robots of science fiction, are nevertheless extremely complex electro-mechanical systems whose analytical description requires advanced methods, presenting many challenging and interesting research problems.

An official definition of such a robot comes from the Robot Institute of America (RIA):

A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks.

The key element in the above definition is the reprogrammability of robots. It is the computer brain that gives the robot its utility and adaptability. The so-called robotics revolution is, in fact, part of the larger computer revolution.

Even this restricted version of a robot has several features that make it attractive in an industrial environment. Among the advantages often cited in favor of the introduction of robots are decreased labor costs, increased precision and productivity, increased flexibility compared with specialized machines, and more humane working conditions as dull, repetitive, or hazardous jobs are performed by robots.

The robot, as we have defined it, was born out of the marriage of two earlier technologies: teleoperators and numerically controlled milling machines. Teleoperators, or master-slave devices, were developed during the second world war to handle radioactive materials. Computer numerical control (CNC) was developed because of the high precision required in the machining of certain items, such as components of high performance aircraft. The first robots essentially combined the mechanical linkages of the teleoperator with the autonomy and programmability of CNC machines.

The first successful applications of robot manipulators generally involved some sort of material transfer, such as injection molding or stamping, where the robot merely attends a press to unload and either transfer or stack the finished parts. These first robots could be programmed to execute a sequence of movements, such as moving to a location A, closing a gripper, moving to a location B, etc., but had no external sensor capability. More complex applications, such as welding, grinding, deburring, and assembly require not only more complex motion but also some form of external sensing such as vision, tactile, or force-sensing, due to the increased interaction of the robot with its environment.

It should be pointed out that the important applications of robots are by no means limited to those industrial jobs where the robot is directly replacing a human worker. There are many other applications of robotics in areas where the use of humans is impractical or undesirable. Among these are undersea and planetary exploration, satellite retrieval and repair, the defusing of explosive devices, and work in radioactive environments. Finally, prostheses, such as artificial limbs, are themselves robotic devices requiring methods of analysis and design similar to those of industrial manipulators.


1.1 Mathematical Modeling of Robots

While robots are themselves mechanical systems, in this text we will be primarily concerned with developing and manipulating mathematical models for robots. In particular, we will develop methods to represent basic geometric aspects of robotic manipulation, dynamic aspects of manipulation, and the various sensors available in modern robotic systems. Equipped with these mathematical models, we will be able to develop methods for planning and controlling robot motions to perform specified tasks. Here we describe some of the basic ideas that are common in developing mathematical models for robot manipulators.

1.1.1 Symbolic Representation of Robots

Robot manipulators are composed of links connected by joints to form a kinematic chain. Joints are typically rotary (revolute) or linear (prismatic). A revolute joint is like a hinge and allows relative rotation between two links. A prismatic joint allows a linear relative motion between two links. We denote revolute joints by R and prismatic joints by P, and draw them as shown in Figure 1.2. For example, a three-link arm with three revolute joints is an RRR arm.

Each joint represents the interconnection between two links. We denote the axis of rotation of a revolute joint, or the axis along which a prismatic joint translates, by zi if the joint is the interconnection of links i and i + 1. The joint variables, denoted by θ for a revolute joint and d for a prismatic joint, represent the relative displacement between adjacent links. We will make this precise in Chapter 3.

Fig. 1.2 Symbolic representation of robot joints.
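Under this convention, a kinematic chain can be summarized by its joint-type string, with one joint variable per joint: θi for a revolute joint and di for a prismatic joint. The following is a minimal illustrative sketch (the function name and variable-name scheme are our own, not from the text):

```python
def joint_variables(chain):
    """Map a joint-type string such as 'RRP' to joint-variable names:
    theta_i for a revolute (R) joint, d_i for a prismatic (P) joint."""
    if not set(chain) <= {"R", "P"}:
        raise ValueError("joints must be 'R' (revolute) or 'P' (prismatic)")
    return [f"theta{i}" if j == "R" else f"d{i}"
            for i, j in enumerate(chain, start=1)]

# An RRP (spherical) arm has two joint angles and one joint offset:
print(joint_variables("RRP"))  # ['theta1', 'theta2', 'd3']
```

Reading the chain left to right from the base, joint i connects links i and i + 1, which is why the subscripts simply count joints.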

1.1.2 The Configuration Space

A configuration of a manipulator is a complete specification of the location of every point on the manipulator. The set of all possible configurations is called the configuration space. In our case, if we know the values of the joint variables (i.e., the joint angle for a revolute joint, or the joint offset for a prismatic joint), then it is straightforward to infer the position of any point on the manipulator, since the individual links of the manipulator are assumed to be rigid and the base of the manipulator is assumed to be fixed. Therefore, in this text, we will represent a configuration by a set of values for the joint variables. We will denote this vector of values by q, and say that the robot is in configuration q when the joint variables take on the values q1, ..., qn, with qi = θi for a revolute joint and qi = di for a prismatic joint.

An object is said to have n degrees-of-freedom (DOF) if its configuration can be minimally specified by n parameters. Thus, the number of DOF is equal to the dimension of the configuration space. For a robot manipulator, the number of joints determines the number of DOF. A rigid object in three-dimensional space has six DOF: three for positioning and three for orientation (e.g., roll, pitch, and yaw angles). Therefore, a manipulator should typically possess at least six independent DOF. With fewer than six DOF the arm cannot reach every point in its work environment with arbitrary orientation. Certain applications, such as reaching around or behind obstacles, may require more than six DOF. A manipulator having more than six links is referred to as a kinematically redundant manipulator. The difficulty of controlling a manipulator increases rapidly with the number of links.
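As a concrete instance of inferring a point's position from the joint variables, consider a planar two-link arm with configuration q = (θ1, θ2) and link lengths l1, l2. The sketch below uses names of our own choosing; forward kinematics is developed properly in Chapter 3:

```python
import math

def planar_2r_endpoint(q, l1=1.0, l2=1.0):
    """End-of-arm position of a planar RR arm in configuration q = (theta1, theta2).
    Because the links are rigid and the base is fixed, q alone determines
    the location of every point on the arm."""
    t1, t2 = q
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# Fully stretched along the x-axis:
print(planar_2r_endpoint((0.0, 0.0)))  # (2.0, 0.0)
```

Here the configuration space is two-dimensional (two revolute joints, hence two DOF), matching the dimension count given above.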
