
Vision Based Collision Avoidance of Industrial Robots

Alexander Winkler∗ Jozef Suchý∗∗

∗ Department of Robotic Systems, Chemnitz University of Technology, 09107 Chemnitz, Germany (e-mail: alexander.winkler@e-technik.tu-chemnitz.de)

∗∗Department of Robotic Systems, Chemnitz University of Technology, 09107 Chemnitz, Germany (e-mail: jozef.suchy@etit.tu-chemnitz.de)

Abstract: This article presents an approach to collision avoidance of industrial robots. It is based on the method of artificial potential or force fields. The field is generated by virtual charges which are placed on obstacles. The virtual force acts on the robot which results in the modification of the manipulator path to avoid collisions. Robot path modification is performed by means of impedance control. The positions of the obstacles are determined continuously by image processing using a simple USB camera, therefore collision avoidance is also able to deal with moving obstacles, e.g. other manipulators. All algorithms are implemented with a real robot system and experimental results are presented.

Keywords: Robot control, robot vision, impedance control, collision avoidance, visual servoing, potential field.

1. INTRODUCTION

Collision avoidance of industrial robots is a very important issue, especially when the environment within the robot's work envelope changes. Ongoing efforts to make robot work cells more and more flexible will reinforce this need, particularly when the work cell is free of safety fences.

Some advanced approaches to path planning and collision avoidance of robots working in dynamic environments have been developed and published (Khatib [1986], Warren [1989], Quinlan and Khatib [1993], Brock and Khatib [2002], Seraji and Bon [1999], Bosscher and Hedman [2009]).

In this paper we investigate an approach to dynamic collision avoidance which combines the ideas of artificial potential fields, visual servoing and impedance control. The intention of dynamic collision avoidance is the modification of the robot path to avoid obstacles in situations where this is necessary. The algorithm should work with stationary and moving obstacles. The basic approach is that the obstacles emit virtual forces which influence the robot path by generating position offsets. One way to set the dynamic relationship between virtual force and position offset is by a target impedance.

The generation of artificial forces is performed using the idea of virtual charges. Every charge can be regarded as an electric charge with an electrostatic field in its neighborhood. Between two or more charges electrostatic forces act. In this article the virtual charges are placed on the end-effector of the robot and on the surfaces of obstacles.

When dealing with moving obstacles, the positions of the charges have to be continuously updated in real time. In the approach presented here, the charge positions are determined from camera information via image processing.

This integration of the camera signal into the robot position control loops can also be understood as a kind of robot visual servoing (Corke [1996], Hashimoto [2003]).

This paper is organized as follows: In the next section the setup of an adequate robot system is described. Besides, a test scenario is defined which is used later for the experiments. In Section 3 some basics of image processing for obstacle detection are presented along with an example. Section 4 deals with possibilities of collision avoidance of industrial robots based on virtual force fields together with impedance control. After this, in Section 5 the implementation of the test scenario and some experimental results are presented. Finally, in the last section a short conclusion is given and some directions for further work are outlined.

2. EXPERIMENTAL SETUP

The robot system used for the experiments consists of two KUKA KR6/2 manipulators. They are six-axis articulated robot arms with a nominal payload of 6 kg. Each robot is controlled by its own KUKA Robot Controller KRC2 based on an industrial PC. On this PC the real time operating system VxWorks runs together with Windows. VxWorks is used for the real time tasks and Windows for program design and visualization. User robot applications can be programmed using the KUKA Robot Language (KRL), which has all features of common robot programming languages.

For practical demonstration of vision based collision avoidance, Robot I executes a handling task. It transports a workpiece in an endless loop from start position A to target position B and back, see Fig. 1. During the linear motion along the y-axis of the world frame, Robot I is disturbed by Robot II, which comes up to Robot I and simulates an obstacle.

Fig. 1. Experimental setup (Robot I, Robot II, positions A and B, camera, colored marker, world frame x-y-z)

Robot II is observed continuously by a simple USB web cam of type Logitech Webcam Pro 9000. It is connected to a Windows PC. The program which runs on the PC calculates the Cartesian end-effector position of Robot II. For this purpose the end-effector is equipped with a colored marker, which can also be seen in Fig. 1. The position is then sent to the controller of Robot I via serial connection. Since the end-effectors are regarded as artificial charges, the interaction force between them can be calculated. This is performed by Robot controller I. As a result of the artificial force the path of Robot I is modified using the principle of impedance control.

3. OBSTACLE DETECTION

3.1 Basic Algorithms

The detection of obstacles within the robot workspace is performed by image processing. For this purpose the robot work cell is supervised by a camera. In the experiment presented in this paper it is sufficient to use a simple USB web cam. Its resolution for image capturing is 640×480 pixels. The moving obstacle is represented by the end-effector of Robot II. It is equipped with a circular, blue colored marker. The camera is connected to a Windows PC which carries out the image processing. To this end a software module was developed in Delphi. It includes the following steps: After capturing the picture of the robot work cell, it is processed by a color filter. In the next step edge detection is performed. Thereafter, the marker is found by the Hough transform. Finally, the position of the obstacle is calculated. The individual algorithms are described in somewhat more detail below.

Color filter  In the first step the captured picture is transformed into a binary image with respect to the color of the obstacle marker. This is achieved by the color filter, which works in the HSV color space. For this purpose the red, green and blue values (RGB) of every pixel of the original image are converted to hue, saturation and value (HSV). If the values of an individual pixel are in a predefined range, the corresponding pixel of the binary image is set to 1, otherwise it is set to 0.
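As an illustration, this filtering step might be sketched in Python with OpenCV as follows. This is a minimal sketch, not the authors' Delphi module; the HSV bounds for the blue marker are assumed values which would have to be tuned to the actual marker and lighting.

```python
import cv2
import numpy as np

def color_filter(image_bgr):
    """Binarize a BGR camera image with respect to a blue marker.

    Works in the HSV color space as described above. The hue,
    saturation and value bounds are illustrative assumptions.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([100, 80, 80])    # assumed lower HSV bound (blue)
    upper = np.array([130, 255, 255])  # assumed upper HSV bound (blue)
    # Pixels inside the predefined range are set to 1, all others to 0.
    return (cv2.inRange(hsv, lower, upper) > 0).astype(np.uint8)
```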

Edge detection  Edge detection is performed by the Laplacian. The Laplace operator Δ in two Cartesian dimensions applied to the image function f is defined as follows (Siegwart and Nourbakhsh [2004]):

\Delta f(x, y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \qquad (1)

For the discrete space of image pixels the discrete Laplace operator can be expressed by the convolution kernel D:

D = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \qquad (2)

It is applied to the color filtered binary image C. The resulting picture E represents the edges:

E_{xy} = \begin{cases} 1, & 4C_{xy} - C_{x-1,y} - C_{x+1,y} - C_{x,y-1} - C_{x,y+1} > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (3)
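A direct transcription of (2) and (3), sketched here with SciPy's 2-D convolution rather than the original Delphi implementation:

```python
import numpy as np
from scipy.signal import convolve2d

# Discrete Laplace kernel D from (2).
D = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]])

def edge_image(C):
    """Apply the discrete Laplacian to the binary image C and
    threshold as in (3): E = 1 where 4*C_xy minus the four
    neighbors is positive, 0 otherwise."""
    lap = convolve2d(C.astype(int), D, mode="same", boundary="fill")
    return (-lap > 0).astype(np.uint8)   # -lap = 4*C_xy - neighbors
```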

Hough transform  For the detection of the marker mounted on the robot end-effector the circular Hough transform is used (Sonka et al. [1999]). It leads to the position and the diameter of the circular marker. To save computation time, the minimum and the maximum diameter of the searched circle can be preset in pixels.
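With OpenCV this step could be sketched as below; the radius bounds correspond to the preset minimum and maximum marker diameter mentioned above, and all numeric parameters are assumptions that would need tuning.

```python
import cv2

def find_marker(gray, min_radius=10, max_radius=60):
    """Search a grayscale image for the circular marker.

    Returns (x, y, r) in pixels for the strongest circle, or None.
    Bounding the radius saves computation time, as noted above.
    """
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=50, param1=100, param2=20,
                               minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    x, y, r = circles[0, 0]   # candidates are sorted by accumulator value
    return float(x), float(y), float(r)
```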

Obstacle position  Once the diameter of the marker is known, the coordinates of the obstacle can easily be calculated with respect to the camera coordinate frame, P_C = [x_C y_C z_C]^T. Using the homogeneous transformation matrix T_{CW} (McKerrow [1995]), which describes the relative pose between the camera and Robot I, the Cartesian coordinates of the obstacle in the world frame, P = [x y z]^T, can be determined:

P = T_{CW} P_C \qquad (4)
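Equation (4) is a standard homogeneous transformation. A minimal numpy sketch, assuming T_CW is known from a prior camera calibration:

```python
import numpy as np

def camera_to_world(p_camera, T_CW):
    """Transform the obstacle position from the camera frame into the
    world frame of Robot I according to (4).

    p_camera : (3,) vector [xC, yC, zC] in the camera frame
    T_CW     : (4, 4) homogeneous matrix of the camera pose relative
               to the world frame (assumed known from calibration)
    """
    p_h = np.append(p_camera, 1.0)   # homogeneous coordinates
    return (T_CW @ p_h)[:3]
```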

3.2 Example of Image Recognition

The previously described steps to calculate the obstacle position from the web cam image can be followed in the example in Fig. 2. It starts with the original snapshot of the robot workspace (Fig. 2a). Fig. 2b shows the result of the color filter, which extracts the relevant regions with respect to their color in HSV color space. Afterwards, edge detection is performed by the Laplacian (Fig. 2c). The Hough transform provides a maximum which leads to the marker found in Fig. 2d.

4. COLLISION AVOIDANCE OF ROBOTS

4.1 Artificial Force Field

The approach of artificial potential fields is a well-known method for path planning of mobile or stationary robots (Khatib [1986]). It may be used as well for collision avoidance between multiple robots or between a robot and its dynamic environment. The artificial force emitted by the moving obstacle influences the robot motion to move the manipulator arm away from it. In this case the main problem is the on-line generation of the virtual potential or force field of the moving obstacle (robot). For this purpose we propose an approach based on artificial charges.

Fig. 2. Example of image processing: a) original snapshot, b) color filter result, c) edge detection, d) detected marker

Its idea comes from electric charges which generate an electrostatic field in their neighborhood. The electrostatic force F_{12} between two charges Q_1 and Q_2 arises according to:

F_{12} = -\frac{1}{4\pi\varepsilon} \frac{Q_1 Q_2}{\|r\|^2} \frac{r}{\|r\|} \qquad (5)

In (5), ε represents the electric permittivity and r is the position vector between both charges. The absolute force between the charges is inversely proportional to the square of their distance. However, for the realization of virtual force fields in robotics this particular form of dependence is not obligatory, and (5) can be generalized by introducing the so-called force function F:

F_{12} = F(\|r\|) \frac{r}{\|r\|} \qquad (6)

Hence, the function F describes the relationship between distance and virtual force.

In the scenario presented in this paper the virtual force acts only between the end-effectors of both manipulators.

This means that Robot II, which has the role of an obstacle, emits the virtual force and Robot I responds to it. The Cartesian position p_I of Robot I with respect to the world frame is available in its robot controller, and the position p_{II} of the obstacle is detected by the image processing already described. So, (6) can be written as:

F_V = [F_x F_y F_z]^T = F(\|p_I - p_{II}\|) \frac{p_I - p_{II}}{\|p_I - p_{II}\|} \qquad (7)

To generate virtual force fields surrounding complex obstacles many charges are necessary. In this case the principle of superposition is valid and can be used to calculate the resulting force (Winkler and Suchý [2009]). However, in this paper only one virtual charge is used.
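A sketch of the force generation per (7) is given below. The inverse-distance force function, the 300 mm sphere of action and the 15 N limit anticipate the experimental values of Section 5.4 and are illustrative choices, not part of the general method.

```python
import numpy as np

def virtual_force(p_I, p_II, gain=750.0, max_dist=300.0, max_force=15.0):
    """Repulsive virtual force on the end-effector of Robot I per (7).

    Positions in mm, force in N. gain/max_dist/max_force follow the
    force function (10) used in the experiments (Section 5.4).
    """
    r = np.asarray(p_I, float) - np.asarray(p_II, float)
    d = np.linalg.norm(r)
    if d == 0.0 or d >= max_dist:
        return np.zeros(3)                # outside the sphere of action
    magnitude = min(gain / d, max_force)  # F(||r||), saturated
    return magnitude * r / d              # direction along p_I - p_II
```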

4.2 Robot Motion Control

To implement collision avoidance by manipulator path correction it is necessary to influence the robot motion.

It is an advantage of the chosen robot system that the robot motion can be influenced in real time on the level of the position control loops. For this purpose KUKA's Robot Sensor Interface (RSI) is utilized (KUKA Roboter GmbH [2007]). RSI allows the realization of sensor guided robot motions (Winkler and Suchý [2006a]). It is an additional module which realizes real time signal processing and access to the position control loops. The Robot Sensor Interface provides special KRL expressions to generate RSI objects which are defined in the object library. There are three kinds of them. Objects with outputs only provide measured values or signals, e.g. digital/analogue input signals, current joint angles, the end-effector position or measured values from a force/torque sensor. Objects with inputs only receive values which influence the robot motion or write to the output periphery of the robot controller. Objects with inputs and outputs are signal processing objects like proportional controllers, integrators and summators.

Connections between RSI objects may be generated by using special KRL expressions. In this way it is possible to create relatively complex controller structures. After the whole RSI structure has been created, it runs in real time with the interpolation cycle. This means that it is executed periodically every 12 ms in parallel with the standard KRL program. Changes in the RSI program structure are possible during program execution. The functional scheme of RSI can be seen in Fig. 3.

Fig. 3. Functional scheme of RSI (non real time side: KRL user programs and motion commands; real time side: sensor drivers, object library and position controller access for signal processing and motion control)

4.3 Path Modification via Impedance Control

The robot system used for the experiments in this work allows corrections of the end-effector path by the Robot Sensor Interface (RSI) already mentioned in the previous section. Path corrections can be performed on the level of the position control loops. For this purpose we propose a kind of impedance control. It is one of the main approaches to robot control (Hogan [1985], Zeng and Hemami [1997]) and is also used for manual guidance of manipulator arms (Winkler and Suchý [2006b]).

In the context presented here, impedance control is used to determine the position correction offset depending on the vector of the virtual force F_V. The target impedance, e.g. for the x direction, is then given as follows:

\frac{F_{Vx}(s)}{\Delta X(s)} = M_x s^2 + D_x s + C_x \qquad (8)

In (8), F_{Vx} is the value of the virtual repulsive force in the x direction, ΔX is the resulting path correction along the x direction, and M_x, D_x and C_x represent the mass, damping and spring constants, respectively.

Fig. 4. Signal flow diagram of the complete RSI structure for robot path modification (the virtual force components F_x, F_y, F_z are received by ST_SENPREA objects, passed through ST_PT1 first order systems and applied as offsets Δx, Δy, Δz to the ST_PATHCORR object for motion control in Cartesian space)

Fig. 5. Structure of the robot system (the web cam observes Robot II acting as obstacle; the external PC sends the obstacle position to Robot controller I, which moves Robot I; Robot controller II moves Robot II)

It may be convenient to reduce the target impedance to a spring-damper system. This results in the dynamic behavior of a first order system between virtual force and position offset. Hence, the transfer characteristic, e.g. for the x direction, can be expressed as follows:

\frac{\Delta X(s)}{F_x(s)} = \frac{1}{D_x s + K_x} \qquad (9)

It can be easily realized using the RSI object ST_PT1.

The complete RSI structure to influence the current robot path in all translational Cartesian degrees of freedom can be seen in Fig. 4. The virtual force values (F_x, F_y, F_z) calculated in a background process are received by ST_SENPREA objects. The desired position offsets (Δx, Δy, Δz) are finally attached to the ST_PATHCORR object, which performs the inverse kinematics and the connection to the robot joint position control loops.

5. IMPLEMENTATION AND EXPERIMENTAL RESULTS

5.1 Robot System

The structure of the complete robot system used for the experiments is shown in Fig. 5. The web cam observes the end-effector of Robot II. The image processing runs on the external Windows PC. When the computation of the current robot/obstacle position is finished, its values are sent to the controller of Robot I via serial connection. Afterwards, the virtual force is calculated by the internal PLC of Robot controller I and used for path modification in the robot program.

5.2 Implementation

For all parts of the robot system already described the corresponding software modules have been developed.

Robot I  For Robot I the program of a handling task was developed. During the linear motions along the y axis of the world frame from position A to B and from B to A (Fig. 1) the RSI structure is active. It includes the reception of the virtual force values from the internal PLC and the path modification according to the target impedance.

Additionally, the controller of Robot I includes an internal PLC. It is freely programmable and independent of the main robot program. The PLC receives the Cartesian obstacle position, which is represented by the end-effector of Robot II. Taking the end-effector position of Robot I into account, the PLC calculates the virtual force vector. It is transferred to the main robot program via special RSI variables for data exchange.

Robot II  Robot II represents the moving obstacle. As one possibility, it can be controlled by a human operator using the teach pendant. The operator moves the gripper equipped with the colored marker into the current workspace of Robot I to disturb its motion. Another way is the development of a simple program for the controller of Robot II to perform the disturbance task in an automatic mode.

PC Program  The program which runs on the Windows PC performs the image processing. It was developed using Delphi. After the position of the marker has been calculated, its values are sent to Robot I via serial connection.

5.3 Results of Vision Based Obstacle Detection

Initially, the operation of the vision based obstacle detection is verified. For this purpose the obstacle, which is simulated by the gripper of Robot II, is moved parallel to the coordinate axes of the world frame, see also Fig. 1. Its origin is located in the base of Robot I. The Cartesian position of the tool center point is continuously logged by the robot program. Additionally, the workspace is supervised by the camera, and the gripper position is determined and logged by the PC program. The plots of the Cartesian coordinates are compared in Fig. 6.

It can be seen that the coordinates calculated from the web cam image match the measured values quite well. Some differences can be found especially in the y coordinate, which is the depth axis with respect to the camera. However, the measurement accuracy of the camera is quite sufficient for the application presented here.

The computation time of the whole image processing cycle (image acquisition, color filter, edge detection, Hough transform) depends on several adjustments, primarily on the settings of the Hough transform. Values between 200 ms and 500 ms are possible. The computation time could easily be reduced using multi-core programming. However, the achieved values are quite adequate for the scenario presented in this work.

Fig. 6. Comparison between the real and the vision estimated obstacle position (x, y and z coordinates in mm over time in s; obstacle position from the robot vs. obstacle position estimated by the web cam)

5.4 Results of Collision Avoidance

The estimated obstacle position is ultimately used to generate the virtual force vector F_V which acts on the end-effector of Robot I. The force function representing the relationship between the absolute value of the repulsive force and the distance between the gripper (TCP Robot I) and the obstacle (TCP Robot II) is chosen as follows:

F(\|p_I - p_{II}\|) = 750 \, \|p_I - p_{II}\|^{-1} \;\mathrm{N\,mm} \qquad (10)

Its sphere of action is bounded by a distance of 300 mm and the absolute force value is limited to 15 N.

The artificial repulsive force leads to a modified robot path with respect to the originally planned path when executing the work task. The aim is to move around the obstacle and avoid collisions. The path modification is realized by the approach of impedance control. The desired target impedance for every translational degree of freedom is realized in discrete time. Thus, e.g. for the x direction, equation (9) has been transformed from the Laplace domain to the discrete time domain:

\frac{\Delta X(s)}{F_x(s)} = \frac{K_x^{-1}}{1 + D_x K_x^{-1} s} = \frac{k_{Fx}}{1 + T_x s}, \qquad \frac{\Delta X(z)}{F_x(z)} = k_{Fx} \frac{b_1 z^{-1}}{1 - a_1 z^{-1}} \qquad (11)

Taking into account the interpolation time of the robot controller of 12 ms, which is identical to the sampling time of RSI, a time constant T_x of approximately 1 s is reached with the following transfer function:

\frac{\Delta X(z)}{F_x(z)} = k_{Fx} \frac{0.01 \, z^{-1}}{1 - 0.99 \, z^{-1}} \qquad (12)

The proportional gain k_{Fx} is set to 10 mm N^{-1}. The parameters of the target impedances for the y and z directions are chosen in a similar way. It might be possible to merge the force function and the desired target impedance. However, they are realized separately to accentuate the basic approach of impedance control.
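For reference, the discrete target impedance (11), (12) is a simple first order IIR filter executed once per 12 ms interpolation cycle. A sketch in Python with the coefficients from (12); in the real system this filtering is done by the ST_PT1 object inside the RSI structure, so the class below only mirrors its difference equation:

```python
class ImpedanceFilter:
    """Discrete first order target impedance per (11)/(12):
    dx[k] = a1*dx[k-1] + kF*b1*F[k-1], run every 12 ms cycle.

    With b1 = 0.01 and a1 = 0.99 the time constant is about 1 s at
    the 12 ms sampling time; kF = 10 mm/N as in the experiment.
    """

    def __init__(self, kF=10.0, b1=0.01, a1=0.99):
        self.kF, self.b1, self.a1 = kF, b1, a1
        self.dx = 0.0       # current path offset in mm
        self.f_prev = 0.0   # force sample of the previous cycle in N

    def update(self, force):
        """Advance one cycle with the new virtual force value (N)."""
        self.dx = self.a1 * self.dx + self.kF * self.b1 * self.f_prev
        self.f_prev = force
        return self.dx      # path correction offset in mm
```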

Fig. 7. Disturbance of Robot I by the obstacle (from top to bottom: obstacle position from the camera in mm, virtual force components in N, position of Robot I in mm, and x position of Robot I in mm, over time in s)

The functioning of the vision based collision avoidance is shown in Fig. 7. Robot I performs the handling task. The values of interest were logged only during the linear motions parallel to the y axis of the world frame. The motions for grasping and placing the workpiece were not considered. In this context two cycles of the handling task can be seen in Fig. 7. In the first cycle the obstacle (Robot II) is far away from the path of Robot I. The values of the virtual force components are small when the y coordinates of Robot I and Robot II become close, at times 8 s and 35 s. For that reason the evasive movement of Robot I is relatively small, which can be seen clearly in the x coordinate.

After 42 s, within the second cycle, Robot II moves closer to the path of Robot I. The changing obstacle position is detected by the camera and sent to the PLC of Robot controller I, which calculates the virtual force vector. The larger force results in a larger evasive motion of Robot I (see times 58 s and 85 s).

The robot work cell with respect to the camera view is shown in Fig. 8a for the first cycle and in Fig. 8b for the second cycle of the handling task, respectively.

6. CONCLUSION

This article has presented an approach to collision avoidance of industrial robots. It is based on artificial forces emitted by virtual charges, along with image processing for charge placement. When the robot comes close to a detected obstacle, its path is modified by means of impedance control. The whole approach may be classified as visual servoing.

Fig. 8. Camera view of the robot work cell: a) first cycle, b) second cycle of the handling task

For the implementation a robot system consisting of two six-axis articulated manipulators was chosen. Robot I performs a handling task, while Robot II, equipped with a marker, represents the moving obstacle and enters the workspace of Robot I. The workspace is supervised by a USB web cam, and the obstacle position (end-effector of Robot II) is calculated by image processing. As a result of this disturbance the path of Robot I is modified by the virtual force. For this purpose the well known approach of impedance control has been used.

The practical realization was successfully accomplished, and some results were presented in this paper. More results can be seen in a video.

Based on the approaches and results presented here, several developments seem promising for further research.

For simplification of the image processing a colored marker was attached to the obstacle. It would be preferable to do without it. This would require more sophisticated algorithms for image processing.

Up to now the generation of the virtual force field has been simplified in the sense that it acts only between the end-effectors, which means that one charge is placed at the center of the marker and another one on the gripper of Robot I. It would be a challenging task to determine the posture of the moving Robot II (obstacle) via image processing and to cover its whole body with virtual charges.

An interesting scenario for further research would also be the situation when a human enters the robot workspace. 3D camera information could again be used. The human is then covered with virtual charges in real time and he/she emits the repulsive force field. In this case additional safety aspects have to be taken into consideration (Som [2006]).

REFERENCES

P. Bosscher and D. Hedman. Real-time collision avoidance algorithm for robotic manipulators. In Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, pages 113–121, 2009.

Oliver Brock and Oussama Khatib. Elastic strips: A framework for motion generation in human environments. The International Journal of Robotics Research, 21(12):1031–1052, 2002.

P. I. Corke. Visual Control of Robots. Research Studies Press Ltd., 1996.

Koichi Hashimoto. A review on vision-based control of robot manipulators. Advanced Robotics, 17(10):969–991, 2003.

N. Hogan. Impedance control: An approach to manipulation: Part I, II, III. ASME Journal of Dynamic Systems, Measurement and Control, 107:1–24, 1985.

O. Khatib. Real-time obstacle avoidance for manipula- tors and mobile robots. The International Journal of Robotics Research, 5(1):90–98, 1986.

KUKA Roboter GmbH. KUKA Robot Sensor Interface (RSI) 2.1, 2007.

P. J. McKerrow. Introduction to Robotics. Addison Wesley, 1995.

S. Quinlan and O. Khatib. Elastic bands: Connecting path planning and control. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 802–807, 1993.

H. Seraji and B. Bon. Real-time collision avoidance for position-controlled manipulators. IEEE Transactions on Robotics and Automation, 15(4):670–676, 1999.

R. Siegwart and I. R. Nourbakhsh. Introduction to Au- tonomous Mobile Robots. MIT Press, 2004.

F. Som. Innovative robot control offers more operator ergonomics and personnel safety. In Proc. of Joint Conference on Robotics - 37th International Symposium on Robotics and 4th German Conference on Robotics, 2006.

M. Sonka, V. Hlavac, and R. Boyle. Image Processing, Analysis, and Machine Vision. Brooks/Cole Publishing Company, 1999.

C. W. Warren. Global path planning using artificial potential fields. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 316–321, 1989.

A. Winkler and J. Suchý. An approach to compliant motion of an industrial manipulator. In Proceedings of the 8th International IFAC Symposium on Robot Control, 2006a.

Alexander Winkler and Jozef Suchý. Force-guided motions of a 6-d.o.f. industrial robot with a joint space approach. Advanced Robotics, 20(9):1067–1084, 2006b.

Alexander Winkler and Jozef Suchý. Intuitive collision avoidance of robots using charge generated virtual force fields. In Torsten Kröger and Friedrich M. Wahl, editors, Advances in Robotics Research, pages 77–87. Springer, 2009.

G. Zeng and A. Hemami. An overview of robot force control. Robotica, 15(5):473–482, 1997.
