STAR73 - Vision and Control in Robotics Based on MATLAB


The practice of robotics and computer vision both involve the application of computational algorithms to data. Over the fairly recent history of these fields a very large body of algorithms has been developed. However, this body of knowledge is something of a barrier for anybody entering the field, or even looking to see whether they want to enter it: What is the right algorithm for a particular problem? And, importantly, how can I try it out without spending days coding and debugging it from the original research papers?

The author has maintained two open-source MATLAB Toolboxes for more than 10 years: one for robotics and one for vision. The key strength of the Toolboxes is that they provide a set of tools that allow the user to work with real problems, not trivial examples. For the student, the book makes the algorithms accessible: the Toolbox code can be read to gain understanding, and the examples illustrate how it can be used, giving instant gratification in just a couple of lines of MATLAB code. The code can also be the starting point for new work, for researchers or students, by writing programs based on Toolbox functions, or by modifying the Toolbox code itself.
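A minimal sketch of what such a couple of lines might look like with the Robotics Toolbox (assuming the Toolbox is installed and on the MATLAB path; mdl_puma560 is its script that defines the example Puma 560 model p560 and a nominal joint configuration qn):

mdl_puma560          % load the Puma 560 arm model and example joint configurations
T = p560.fkine(qn)   % forward kinematics: the end-effector pose at joint angles qn
p560.plot(qn)        % draw the arm in that configuration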

The purpose of this book is to expand on the tutorial material provided with the Toolboxes, add many more examples, and weave this into a narrative that covers robotics and computer vision both separately and together. The author shows how complex problems can be decomposed and solved using just a few simple lines of code, and hopes to inspire up-and-coming researchers. The topics covered are guided by the real problems observed over many years as a practitioner of both robotics and computer vision. The book is written in a light but informative style, is easy to read and absorb, and includes many MATLAB examples and figures. It is a real walk through the fundamentals of robot kinematics, dynamics and joint-level control, then camera models, image processing, feature extraction and epipolar geometry, bringing it all together in a visual servo system.


Representing Position and Orientation

Time and Motion

Mobile Robot Vehicles

Navigation

Localization

Robot Arm Kinematics

Velocity Relationships

Dynamics and Control

Light and Color

Image Formation

Image Processing

Image Feature Extraction

Using Multiple Images

Vision-Based Control

Advanced Visual Servoing


The practice of robotics and machine vision involves the application of computational algorithms to data. The data comes from sensors measuring the velocity of a wheel, the angle of a robot arm’s joint or the intensities of millions of pixels that comprise an image of the world that the robot is observing. For many robotic applications the amount of data that needs to be processed, in real-time, is massive. For vision it can be of the order of tens to hundreds of megabytes per second.

Progress in robots and machine vision has been, and continues to be, driven by more effective ways to process data. This is achieved through new and more efficient algorithms, and the dramatic increase in computational power that follows Moore’s law. When I started in robotics and vision, in the mid 1980s, the IBM PC had been recently released – it had a 4.77 MHz 16-bit microprocessor and 16 kbytes (expandable to 256 k) of memory. Over the intervening 25 years computing power has doubled 16 times which is an increase by a factor of 65 000. In the late 1980s systems capable of real-time image processing were large 19 inch racks of equipment such as shown in Fig. 0.1. Today there is far more computing in just a small corner of a modern microprocessor chip.

Over the fairly recent history of robotics and machine vision a very large body of algorithms has been developed – a significant, tangible, and collective achievement of the research community. However, its sheer size and complexity present a barrier to somebody entering the field. Given the many algorithms from which to choose the obvious question is:

What is the right algorithm for this particular problem?

One strategy would be to try a few different algorithms and see which works best for the problem at hand, but this raises the next question:

How can I evaluate algorithm X on my own data without spending days coding and debugging it from the original research papers?

Two developments come to our aid. The first is the availability of general purpose mathematical software which makes it easy to prototype algorithms. There are commercial packages such as MATLAB®, Mathematica and MathCad, and open source projects include SciLab, Octave, and PyLab. All these tools deal naturally and effortlessly with vectors and matrices, can create complex and beautiful graphics, and can be used interactively or as a programming environment. The second is the open-source movement. Many algorithms developed by researchers are available in open-source form. They might be coded in one of the general purpose mathematical languages just mentioned, or written in a mainstream language like C, C++ or Java.

For more than fifteen years I have been part of the open-source community and maintained two open-source MATLAB® Toolboxes: one for robotics and one for machine vision. They date back to my own PhD work and have evolved since then, growing features and tracking changes to the MATLAB® language (which have been significant over that period). The Robotics Toolbox has also been translated into a number of different languages such as Python, SciLab and LabView.

The Toolboxes have some important virtues. Firstly, they have been around for a long time and used by many people for many different problems so the code is entitled to some level of trust. The Toolbox provides a “gold standard” with which to compare new algorithms or even the same algorithms coded in new languages or executing in new environments.

Secondly, they allow the user to work with real problems, not trivial examples. For real robots, those with more than two links, or for real images with millions of pixels, the computation is beyond unaided human ability. Thirdly, they allow us to gain insight which is otherwise lost in the complexity. We can rapidly and easily experiment, play what if games, and depict the results graphically using MATLAB®’s powerful display tools such as 2D and 3D graphs and images.
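As an illustrative sketch of this kind of interactive experimentation with the Machine Vision Toolbox (function names as documented in the Toolbox; 'street.png' is just a placeholder for any image of your own):

im = iread('street.png', 'grey', 'double');   % load an image as a matrix of grey values
C = icorner(im, 'nfeat', 100);                % detect up to 100 corner features
idisp(im); C.plot();                          % display the image and overlay the corners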

Fourthly, the Toolbox code makes many common algorithms tangible and accessible. You can read the code, you can apply it to your own problems, and you can extend it or rewrite it. At the very least it gives you a headstart.

The Toolboxes were always accompanied by short tutorials as well as reference material. Over the years many people have urged me to turn this into a book and finally it has happened! The purpose of this book is to expand on the tutorial material provided with the Toolboxes, add many more examples, and to weave it into a narrative that covers robotics and computer vision separately and together. I want to show how complex problems can be decomposed and solved using just a few simple lines of code.

By inclination I am a hands on person. I like to program and I like to analyze data, so it has always seemed natural to me to build tools to solve problems in robotics and vision. The topics covered in this book are based on my own interests but also guided by real problems that I observed over many years as a practitioner of both robotics and computer vision. I hope that by the end of this book you will share my enthusiasm for these topics.

I was particularly motivated to present a solid introduction to machine vision for roboticists. The treatment of vision in robotics textbooks tends to concentrate on simple binary vision techniques. In the book we will cover a broad range of topics including color vision, advanced segmentation techniques such as maximally stable extremal regions and graphcuts, image warping, stereo vision, motion estimation and image retrieval. We also cover non-perspective imaging using fisheye lenses and catadioptric optics. These topics are growing in importance for robotics but are not commonly covered. Vision is a powerful sensor, and roboticists should have a solid grounding in modern fundamentals. The last part of the book shows how vision can be used as the primary sensor for robot control.

This book is unlike other text books, and deliberately so. Firstly, there are already a number of excellent text books that cover robotics and computer vision separately and in depth, but few that cover both in an integrated fashion. Achieving this integration is a principal goal of this book.

Secondly, software is a first-class citizen in this book. Software is a tangible instantiation of the algorithms described – it can be read and it can be pulled apart, modified and put back together again. There are a number of classic books that use software in this illustrative fashion for problem solving. In this respect I’ve been influenced by books such as LaTeX: A document preparation system (Lamport 1994), Numerical Recipes in C (Press et al. 2007), The Little Lisper (Friedman et al. 1987) and Structure and Interpretation of Classical Mechanics (Sussman et al. 2001). The many examples in this book illustrate how the Toolbox software can be used and generally provide instant gratification in just a couple of lines of MATLAB® code.

Thirdly, building the book around MATLAB® and the Toolboxes means that we are able to tackle more realistic and more complex problems than other books. The emphasis on software and examples does not mean that rigour and theory are unimportant, they are very important, but this book provides a complementary approach. It is best read in conjunction with standard texts which provide rigour and theoretical nourishment. The end of each chapter has a section on further reading and provides pointers to relevant textbooks and key papers.

Writing this book provided a good opportunity to look critically at the Toolboxes and to revise and extend the code. In particular I’ve made much greater use of the ever-evolving object-oriented features of MATLAB® to simplify the user interface and to reduce the number of separate files within the Toolboxes.

The rewrite also made me look more widely at complementary open-source code. There is a lot of great code out there, particularly on the computer vision side, so rather than reinvent some wheels I’ve tried to integrate the best code I could find for particular algorithms. The complication is that every author has their own naming conventions and preferences about data organization, from simple matters like the use of row or column vectors to more complex issues involving structures – arrays of structures or structures of arrays. My solution has been, as much as possible, to not modify any of these packages but to encapsulate them with light weight wrappers, particularly as classes.
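A sketch of that lightweight-wrapper idea in MATLAB (illustrative only: ThirdPartyDetector and third_party_detect are hypothetical stand-ins for an external package with its own data conventions):

% Hypothetical wrapper class; third_party_detect stands in for the external code,
% which is called unmodified while the wrapper hides its data-layout conventions.
classdef ThirdPartyDetector
    properties
        opts = {}   % options passed straight through to the wrapped package
    end
    methods
        function obj = ThirdPartyDetector(varargin)
            obj.opts = varargin;    % remember the caller's options
        end
        function features = detect(obj, im)
            raw = third_party_detect(double(im), obj.opts{:});   % package's own convention
            features = raw.';       % convert back to the Toolbox's row-vector convention
        end
    end
end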

I am grateful to the following for code that has been either incorporated into the Toolboxes or which has been wrapped into the Toolboxes. Robotics Toolbox contributions include: mobile robot localization and mapping by Paul Newman at Oxford and a quadcopter simulator by Paul Pounds at Yale. Machine Vision Toolbox contributions include: RANSAC code by Peter Kovesi; pose estimation by Francesco Moreno-Noguer, Vincent Lepetit, Pascal Fua at the CVLab-EPFL; color space conversions by Pascal Getreuer; numerical routines for geometric vision by various members of the Visual Geometry Group at Oxford (from the web site of the Hartley and Zisserman book; Hartley and Zisserman 2003); the k-means and MSER algorithms by Andrea Vedaldi and Brian Fulkerson; the graph-based image segmentation software by Pedro Felzenszwalb; and the SURF feature detector by Dirk-Jan Kroon at U. Twente. The Camera Calibration Toolbox by Jean-Yves Bouguet is used unmodified.

Along the way I got interested in the mathematicians, physicists and engineers whose work, hundreds of years later, is critical to the science of robotics and vision today. Some of their names have become adjectives like Coriolis, Gaussian, Laplacian or Cartesian; nouns like Jacobian, or units like Newton and Coulomb. They are interesting characters from a distant era when science was a hobby and their day jobs were as doctors, alchemists, gamblers, astrologers, philosophers or mercenaries. In order to know whose shoulders we are standing on I have included small vignettes about the lives of these people – a smattering of history as a backstory.

In my own career I have had the good fortune to work with many wonderful people who have inspired and guided me. Long ago at the University of Melbourne John Anderson fired my interest in control and Graham Holmes encouraged me to “think before I code” – excellent advice that I sometimes heed. Early on I spent a life-direction-changing ten months working with Richard (Lou) Paul in the GRASP laboratory at the University of Pennsylvania in the period 1988–1989. The genesis of the Toolboxes was my PhD research and my advisors Malcolm Good (University of Melbourne) and Paul Dunn (CSIRO) asked me good questions and guided my research. Laszlo Nemes provided sage advice about life and the ways of organizations and encouraged me to publish more and to open-source my software. Much of my career was spent at CSIRO where I had the privilege and opportunity to work on a diverse range of real robotics projects and to work with a truly talented set of colleagues and friends. Mid book I joined Queensland University of Technology which has generously made time available to me to complete the project. My former students Jasmine Banks, Kane Usher, Paul Pounds and Peter Hansen taught me a lot about stereo, non-holonomy, quadcopters and wide-angle vision respectively.

I would like to thank Paul Newman for generously hosting me several times at Oxford where significant sections of the book were written, and Daniela Rus for hosting me at MIT for a burst of intense writing that was the first complete book draft. Daniela, Paul and Cédric Pradalier made constructive suggestions and comments on early drafts of the material. I would also like to thank the MathWorks, the publishers of MATLAB® for the support they offered me through their author program. Springer have been enormously supportive of the whole project and a pleasure to work with. I would specially like to thank Thomas Ditzinger, my editor, and Armin Stasch for the layout and typesetting which has transformed my manuscript into a book.

I have tried my hardest to eliminate errors but inevitably some will remain. Please email me bug reports as well as suggestions for improvements and extensions.

Finally, it can’t be easy living with a writer – there are books and websites devoted to this topic. My deepest thanks are to Phillipa for supporting and encouraging me in the endeavour and living with “the book” for so long and in so many different places.
