
Universidade de Aveiro
Departamento de Electrónica, Telecomunicações e Informática
2017

Miguel Ângelo Dias Horta

Robô Joga Jogos de Tabuleiro com Humano
Robot plays board games with human


Dissertation presented to the Universidade de Aveiro in fulfilment of the requirements for the Master's degree in Electronics and Telecommunications Engineering, carried out under the scientific supervision of Prof. Doutor Nuno Lau and Prof. Doutor Artur Pereira, Assistant Professors at the Department of Electronics, Telecommunications and Informatics of the Universidade de Aveiro.


the jury

president: Professor Doutor Luís Filipe de Seabra Lopes, Associate Professor at the Universidade de Aveiro

examiners committee: Professor Doutor Luís Paulo Gonçalves dos Reis, Associate Professor at the Escola de Engenharia da Universidade do Minho

Professor Doutor José Nuno Panelas Nunes Lau


acknowledgements

I want to thank my parents and my brothers, who unquestionably and immensely contributed to my being able to reach this point. I also want to thank professors Nuno Lau and Artur Pereira for the guidance they gave me and for the time they spent with me, fundamental to the realization of this thesis. Finally, I want to thank my colleague and friend André Sá, who accompanied me throughout this long and hard journey, constantly present to discuss problems and incessantly contributing good ideas.


Abstract

The robotics field has been developing at an outstanding pace. The day when robots will be commonplace within society is closer than ever. However, in popular culture robots are still perceived as a threat, and their benefits are disregarded. This irrational fear needs to be demystified.

One of the options to improve humanity's perception of robots may rely on the dissemination of entertainment robots. This type of robot, developed with the sole purpose of bringing joy to its human peers, provides a safe environment where humans can interact with robots, attesting to the robots' safety, predictability, functionality, reliability, and robustness, while positively improving the masses' perception of robots. The end goal is that such experiences also contribute to the acceptance of more complex robots within society.

It is within this set of ideas that this dissertation is written. Using some of the most recent innovations in the field, an environment was developed where a human can experience playing board games with or against a robotic arm. Specifically, an environment was developed where it is possible to play tic-tac-toe with the robot. This robot is formed by the interconnection of a Kinova JACO arm with a Microsoft Kinect motion camera. Furthermore, the same environment was replicated in a simulated world using the Gazebo simulator.

Contents

List of Figures
List of Tables

1 Introduction
1.1 Motivation and goals
1.2 Thesis structure

2 State of the art
2.1 Hobbit, a care robot supporting independent living at home
2.2 Robot Vision
2.2.1 Dlib
2.2.2 OpenCV
2.2.3 PCL
2.3 Mobile manipulation

3 Environment
3.1 Hardware
3.1.1 JACO Arm
3.1.2 Kinect
3.2 Software
3.2.1 ROS - Robot Operating System
3.2.2 Mobile manipulation
3.2.3 Robot Vision
3.2.4 Simulation

4 Architectural approach
4.1 Logical division of responsibilities
4.1.1 Perception Node
Note about the robot self-representation in the world
4.1.2 Manipulation Node
Pick
Place
4.1.3 Game resolution
4.1.4 General status
4.2 Simulated environment

5 Results
5.1 Table detection
5.2 Cup detection and individualization
5.3 Board detection
5.4 Pick up cup
5.5 Place cup
5.6 Board's grid state detection

6 Conclusions
6.1 Future Work

References

A Environment replication

List of Figures

2.1 Hobbit v2
3.1 Environment
3.2 Jaco in its roots
3.3 Microsoft's Kinect
4.1 Example of one of the captured point clouds
4.2 Perception node's table extraction
4.3 Perception node's prismatic extraction
4.4 Perception node's Euclidean segmentation
4.5 RQT CloudColorFilter tool
4.6 Perception node's board extraction; color extraction only
4.7 Perception node's board extraction; color and proximity conditions
4.8 Manipulation node; critical areas
4.9 Manipulation node; detail on the critical areas
4.10 Manipulation node pick action frames
4.11 Manipulation node place action frames
4.12 Control tool GUI; Setup State
4.13 Control tool GUI; during game
4.14 General status node's state machine
4.15 Real environment vs simulated environment

List of Tables

4.1 Example of the obtained plane coefficients
5.1 Board detection in real environment
5.2 10 runs of pick actions
5.3 10 runs of place actions

Chapter 1

Introduction

1.1 Motivation and goals

This master thesis was carried out in the infrastructure of, and in collaboration with, the research unit IRIS-Lab (IEETA) of the University of Aveiro.

The inspiration for this work came from the will to extend some of the work developed within the EuRoC project, in which the IRIS-Lab participated. EuRoC (short for European Robotics Challenges) is a consortium initiative (consisting of leading robotics companies and research institutions) which aims to “develop competitive solutions to keep the European manufacturing industry's global leadership in products and services” [1]. In one of its challenges, the TIMAIRIS team, involving the IRIS-Lab and IMA S.p.A., developed a robotic agent able to solve a pentomino puzzle. It was then noted how this particular problem could be extended into a research platform to foster the advancements achieved at the IRIS-Lab, while also serving as an excellent setting capable of easily demonstrating to the public the progress achieved in the robotics area.

While the solution achieved in the EuRoC project worked quite well [2], it was developed on completely different hardware from that available at the IRIS-Lab. It was therefore necessary to adapt the previous solution to the new reality. Given that most of the work had to be redone, the opportunity was seized to introduce some changes:

• Switch the game/puzzle.

The pentomino puzzle, while an excellent challenge for robotic manipulation, presents some shortcomings for the intended demonstrative purpose: it is barely recognized by the general public and presents fewer possible interactions with an eventual human agent;

• Upgrade the underlying software.

Since 2014 a lot of progress has been made in the robotics and computer vision fields, with the software accompanying this trend. Most of the software on which the solution was based has been superseded, or has seen inclusion in a new library able to provide increased robustness.

Given that some of the hardware on which this thesis relies has to be shared with other projects within the IRIS-Lab, it was decided to replicate the real environment on a simulated platform, so that future work will not be completely dependent on the availability of the hardware.

The end result shall be a robotic agent capable of playing tic-tac-toe in three different ways: playing against itself, playing independently against a human, and playing as instructed by a human.

1.2 Thesis structure

This thesis is organized into six chapters, the current one being the first.

Chapter 2 presents an overview of the state of the art. It contains a curated list of the available software that could prove useful in the resolution of this dissertation. For each piece of software, its possible usages were studied and its pros and cons weighed against the problems dealt with in this dissertation.

Chapter 3 describes in depth the software that was ultimately chosen and used in the realization of this dissertation.

Chapter 4 describes the achieved solution. It contains all the steps necessary to replicate the work done, the logic behind all choices, and the results of each task necessary for the end result.

Chapter 5 contains a reliability analysis of the developed solution. It extends what is explained in Chapter 4, but with every task repeated multiple times and its behavior registered for each attempt.

Finally, Chapter 6 contains the conclusions that can be derived from the work described in the dissertation. It also contains an analysis of areas that can be improved, and future extensions that can be added to this work.


Chapter 2

State of the art

Given the sizable multidisciplinary depth of the robotics field, and consequently of this master thesis, it is almost impossible to possess full in-depth knowledge of all the underlying theory. Nevertheless, one needs to be aware of the solutions developed, abstracted and provided by one's peers. It is of special interest to study the pros and cons of each solution, without having to be familiar with the implementation details.

The following sections provide an overview of the software solutions currently available to tackle the challenges presented by the problem dealt with in this thesis. For practical reasons, only open-source solutions, as defined by the Open Source Initiative [3], were considered.

Also included is an overview of a research robot (section 2.1), Hobbit [4], which shares some similarity with the one developed in this thesis. The insights provided by the Hobbit team through the post-mortem of their first prototype were highly beneficial for the development of this thesis.

2.1 Hobbit, a care robot supporting independent living at home

As described in the post-mortem of the first prototype: “The Hobbit project combines research from robotics, gerontology, and human–robot interaction to develop a care robot which is capable of fall prevention and detection as well as emergency detection and handling. Moreover, to enable daily interaction with the robot, other functions are added, such as bringing objects, offering reminders, and entertainment. The interaction with the user is based on a multimodal user interface including automatic speech recognition, text-to-speech, gesture recognition, and a graphical touch-based user interface.” [4]

Some of the results obtained in the Hobbit project proved highly inspirational for this thesis. In a testing environment this robot was able to show off its features and build acceptance among its users.

In the technical area, while the most innovative part of the Hobbit project, the multimodal user interface, was not used in this thesis, other technical decisions were taken into account. The conclusions reached in manipulation and grasping proved useful, simplifying and guiding the work developed in this thesis.

Figure 2.1: Hobbit v2, a care robot supporting independent living at home.

2.2 Robot Vision

Robot vision is a hot topic in computer science. There are high stakes in its development, both in academia and in the business world, provoking an exponential rise in the number of available software solutions for this area. However, most of these efforts originate in an entrepreneurial environment, and are therefore behind huge paywalls. The available solutions with more permissive usage requirements, while also great in number, are similar among themselves. The majority build upon, or attempt to improve on, these three libraries:

• Dlib

• OpenCV

• Point Cloud Library

2.2.1 Dlib

Dlib is a general-purpose library, available under the BSL-1.0 license, that implements a set of machine learning algorithms [5]. This set includes some algorithms whose purpose falls under the robot vision category.

Its most prominent feature for this thesis is its object detection. It is based on a convolutional neural network, offering great flexibility and high robustness.

It saw no use in this thesis, mainly due to its steep learning curve and complexity, while barely adding novel functionality when compared to other solutions. However, it does seem to produce more robust results and could therefore prove useful in future derived work, even more so if complex object or motion detection is needed.


2.2.2 OpenCV

OpenCV, short for Open Source Computer Vision Library, is an efficient library that contains state-of-the-art algorithms for image processing. It follows the BSD license.

It offers robust, fast and reliable solutions for image processing. However, its tooling is mostly aimed at the processing of 2D images; while it does contain features for 3D environments, they are clearly lacking.

While it was ultimately not used during this work, it must be recognized that there are tasks whose behavior could have been improved by its usage, mainly when dealing with planar surfaces. In derived work, this library could prove highly useful.

2.2.3 PCL

Point Cloud Library (PCL) is a C/C++ library for 2D/3D image and point cloud processing. It follows the BSD license. It was extensively used in this dissertation; as such, and to avoid repetition, a more in-depth description can be found in the following chapter, section 3.2.3.

2.3 Mobile manipulation

Mobile manipulation is a really broad subject that encompasses many disciplines. The normal approach is to have a library for each sub-problem and connect them to obtain the desired result. Nevertheless, a really ambitious project is currently being developed that attempts to include all the necessary features and tooling in a single library: MoveIt!

Using such a library greatly reduces the effort required from the programmer and leaves most of the work to the library. Since it includes all the requirements that a mobile manipulation stage encompasses, it can internally use the results of each module to reduce the complexity to which the programmer is exposed.

MoveIt! is one of the few libraries that can offer this set of features while being open-source, easily extensible and highly integrated with ROS. All of these characteristics are desirable; consequently, the choice was virtually reduced to MoveIt!, and it was used as the sole mobile manipulation library in this dissertation. Once again, to avoid repetition, a more detailed description can be found in section 3.2.2.


Chapter 3

Environment

Figure 3.1 presents the environment used in this work. It is constituted by a 6-DOF robotic arm (Kinova's JACO 3 FINGERS), to which a three-finger gripper is attached; an RGB-D camera (Microsoft's Kinect v1); the board for the tic-tac-toe game, with cups as X's and O's; and the necessary infrastructure to hold the apparatus in place.

Figure 3.1: Environment.

The following sections give a short description of each choice and the reasons that led to it. While on the hardware side the choice was limited to the material present at the IRIS-Lab, on the software side there was more freedom.

3.1 Hardware

3.1.1 JACO Arm

Kinova's JACO 3 FINGERS, figure 3.2, is a lightweight robotic arm with 6 degrees of freedom (6-DOF). It was the first product released by the Canadian company Kinova Robotics [6]. While originally it was only designed and used as an assistive robotic arm [7], improving the daily life of those with reduced upper-body mobility, its outstanding features did not go unnoticed for long. It promptly gained adoption within the robotics research community [8][9], persuading Kinova to also explore this application field. It has since seen numerous new features, mainly on the software side; in particular, it saw the development of a new and comprehensive high-level C++ API and also a high-quality ROS interface.

Figure 3.2: Jaco in its roots.

In this thesis it was used as the main actuator. Its 6-DOF, combined with its 90 cm reach and high-precision joint control, allows ample manoeuvrability within a vast three-dimensional workspace. Combined with its three fingers, able to withstand a maximum payload of 1.3 kg [10], it is able to effectively grasp any board game playing piece, proving a great solution for the work intended in this master thesis.

3.1.2 Kinect

Microsoft's Kinect v1, figure 3.3, was originally developed as a motion controller for Microsoft's entertainment system, the Xbox 360 [11]. Nevertheless, due to its ability to capture depth and motion information, combined with its low price, once it got open-source drivers it quickly saw numerous researchers employing it in various research projects and experiments [12]. It is able to output RGB and depth information at a resolution of 640x480 pixels, reaching a rate of approximately 30 Hz. Its practical depth range is limited to the interval between 1.2 and 3.5 meters, and its angular field of view is 58° horizontally, 45° vertically and 70° diagonally [13].

In this thesis the Kinect is the main and sole perception device. Given its characteristics, it is fully capable of performing the required tasks.

Figure 3.3: Microsoft's Kinect.

3.2 Software

3.2.1 ROS - Robot Operating System

“The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behaviour across a wide variety of robotic platforms” [14]. ROS at its lowest level is a middleware that offers a message-passing interface, both abstracting the underlying heterogeneous hardware and providing inter-process communication mechanisms. In addition, it provides a comprehensive library that eases the tracking of all the robot's different moving parts and their respective frames. This transform library can also be used with range and depth sensors, making it possible to transform the received data as if it were perceived from a different frame, as in the sketch below.
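As an illustration, the following minimal sketch uses the tf transform library to express a point captured in the Kinect frame in the arm's base frame. The frame names ("camera_link", "jaco_base") are assumptions for illustration, not the names used in the thesis code.

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PointStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_example");
  tf::TransformListener listener;

  geometry_msgs::PointStamped in, out;
  in.header.frame_id = "camera_link"; // frame of the Kinect data (assumed name)
  in.header.stamp = ros::Time(0);     // ask for the latest available transform
  in.point.x = 0.5;

  // Wait until the transform is known, then express the point in the arm frame.
  listener.waitForTransform("jaco_base", "camera_link", ros::Time(0), ros::Duration(3.0));
  listener.transformPoint("jaco_base", in, out);
  return 0;
}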

Another major strength is its introspection tools, able to provide three-dimensional visualization of many sensor data types, which, in conjunction with the transform library, provides an immersive environment that allows one to make sense of all the data the robot is receiving [15].

ROS was the obvious choice for this master thesis. The work on which this thesis builds was already developed in ROS, and all the hardware used in this thesis provides excellent interfaces for ROS: to interface with the Microsoft Kinect camera, the openni ROS package was used; for the JACO arm, the kinova-ros package provided directly by Kinova was used.

Also in favor of the ROS choice is the fact that all authors have in-depth knowledge of ROS, acquired in previous projects, and are perfectly comfortable working with it. Finally, all code provided by ROS is licensed under a permissive license, BSD, allowing a better understanding of the underlying abstraction layer and even, if needed, modification of its code.

(28)

3.2.2 Mobile manipulation

As stated in the previous chapter, MoveIt! was used as the sole mobile manipulation library. “MoveIt! is state of the art software for mobile manipulation, incorporating the latest advances in motion planning, manipulation, 3D perception, kinematics, control and navigation. It provides an easy-to-use platform for developing advanced robotics applications, evaluating new robot designs and building integrated robotics products for industrial, commercial, R&D and other domains.” [16]

MoveIt! facilitates immensely the task of planning and evaluating arm motions. With minimal configuration it can produce robust motion plans, even when obstacles are present. It abstracts away the robot representation, the robot state, the motion planning, both forward and inverse kinematics, the world representation and the collision checking. While abstracted, each of the previously listed tasks can be interacted with and adapted for better control or to perform more complex interactions.

It has a simple interface: in its most basic form it produces and executes a motion plan with the only input being the goal pose, as the sketch below illustrates. But it can also perform more complex tasks. It can accept goal poses in both joint and Cartesian space; it can compute Cartesian paths; it can form motion plans based only on constraints; it can take input from 3D cameras to map the world and consequently plan around obstacles; it can calculate motions where joints must obey constraints; it can recognize objects and surfaces, and pick and place objects; and it can work in dual-arm configurations.
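A minimal sketch of this basic form, assuming a planning group named "arm" (the real group name comes from the robot's MoveIt! configuration, so this is an illustrative placeholder):

#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <moveit/move_group_interface/move_group_interface.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "move_example");
  ros::AsyncSpinner spinner(1); // MoveGroupInterface requires a running spinner
  spinner.start();

  // "arm" is an assumed planning-group name.
  moveit::planning_interface::MoveGroupInterface group("arm");

  geometry_msgs::Pose goal;     // the goal pose is the only required input
  goal.orientation.w = 1.0;
  goal.position.x = 0.3;
  goal.position.z = 0.4;

  group.setPoseTarget(goal);
  group.move();                 // plan to the goal pose and execute the plan
  return 0;
}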

In this thesis it was not used to its full potential, mostly because it is still an immature technology and some of its features still behave unexpectedly. Therefore its usage within this thesis was reduced to motion planning and object collision detection.

3.2.3 Robot Vision

As described in the previous chapter, PCL was the library chosen to handle the robot vision. “PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for: filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. PCL is supported by an international community of robotics and perception researchers” [17].

PCL is an advanced tool, but it is surprisingly easy to use. With very few instructions it is possible to deconstruct a scene and obtain its constituents. It integrates seamlessly with the Microsoft Kinect camera and with ROS.

It was chosen because it is the best-known and most widely utilized open-source library that works with 3D environments and is oriented towards the robot vision context.

3.2.4 Simulation

As described on the Gazebo webpage, its objective is to provide a simulated environment in which to easily test robots: “Robot simulation is an essential tool in every roboticist's toolbox. A well-designed simulator makes it possible to rapidly test algorithms, design robots, perform regression testing, and train AI system using realistic scenarios. Gazebo offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments. At your fingertips is a robust physics engine, high-quality graphics, and convenient programmatic and graphical interfaces. Best of all, Gazebo is free with a vibrant community” [18].

Gazebo is also highly integrated with ROS, presenting a great solution to replace the real environment component seamlessly, without it being necessary to write more code to integrate this new scenario.

For this thesis Gazebo was the only simulator considered, since it is the only one that already had models for all the hardware used in this thesis.


Chapter 4

Architectural approach

4.1 Logical division of responsibilities

Following the ROS development philosophy, the solution developed within this thesis is subdivided into a few nodes. A node is a ROS concept: essentially, a process that performs computation. ROS nodes are easily connected among themselves using ROS' built-in IPC capabilities. These IPC capabilities can be handled in either a synchronous or an asynchronous manner. Since the asynchronous option allows the parallelization of some sub-routines, potentially improving the overall performance of the program, all nodes in this thesis used this approach, as sketched below.
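A minimal sketch of a node using this asynchronous approach, through ros::AsyncSpinner (the topic name and thread count are illustrative assumptions):

#include <ros/ros.h>
#include <std_msgs/String.h>

void callback(const std_msgs::String::ConstPtr& msg)
{
  ROS_INFO("received: %s", msg->data.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "async_node");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("chatter", 10, callback);

  // Asynchronous handling: callbacks are served by a pool of threads,
  // so a long-running subroutine does not block the other callbacks.
  ros::AsyncSpinner spinner(4); // 4 worker threads
  spinner.start();
  ros::waitForShutdown();
  return 0;
}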

Nodes are weakly coupled: the malfunction of one node does not propagate to the other nodes. The remaining nodes can continue to perform their tasks provided, of course, they do not rely on functions provided by the faulty node. This approach is advisable since, with reduced computation overhead, one is able to develop solutions with better fault tolerance, reduced code complexity, and improved division of responsibilities. Also, when things fail, it is easier to isolate the issue.

The implementation developed for this thesis (excluding the nodes externally provided and used to expose the Kinect and JACO functionality) was subdivided into four nodes: one responsible for actuation, another for perception, a third for game resolution, and finally one that manages the general status and inter-node logic.

4.1.1 Perception Node

This section describes in depth the functionality provided by the perception node. This node has as its main input the data provided by the Kinect sensor. This sensor captures the scene through a few different techniques which, when blended together, produce a detailed machine representation of the captured scene. It is presented as a point cloud: a structure that aggregates every captured point, each point representing a location in the 3D environment (XYZ) and its color. Figure 4.1 presents an example of such a cloud.

Figure 4.1: Example of one of the captured point clouds.

Having a representation of the scene, it is necessary to make sense of it. The scene contains a few objects of interest, which need to be located and isolated from all the rest: one has to find the subset of points, usually referred to as a cluster, within the point cloud that constitutes the object to be found. The more points a cloud has, the harder this task is; hence, the first step is to remove all the points that present no value.

The input cloud's precision is somewhere around a millimeter (it depends on the object's distance from the sensor), far more than required: all of the objects and perceived surfaces used in this work have dimensions of at least a few centimeters. The input cloud therefore has more points than required. In order to reduce them, a downsampling algorithm is applied: using PCL's VoxelGrid interface, a voxel grid is built, with each voxel measuring 5x5x5 mm, and the points contained in each voxel are averaged towards its centroid, as sketched below. The resulting cloud possesses a precision of 1 ± 0.5 cm. This step also has the advantage of reducing some of the input noise; in particular, it is especially efficient at reducing the number of isolated points.
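A minimal sketch of this downsampling step with PCL's VoxelGrid, following the 5x5x5 mm voxel size given above:

#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& input)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr output(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::VoxelGrid<pcl::PointXYZRGB> grid;
  grid.setInputCloud(input);
  grid.setLeafSize(0.005f, 0.005f, 0.005f); // 5 mm voxel edge
  grid.filter(*output);                     // points in each voxel -> centroid
  return output;
}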

Since some knowledge of the scene is possessed beforehand, this fact can be exploited to further ease the task. It is known that, from the position and angle in which the camera is placed, it will capture a table, on top of which sit the game board and the playing pieces. Unless the camera has its view obstructed, the table will always be the biggest plane in the scene. The board is a shade of white, and represents the biggest surface in that color. On this board a 3x3 grid is painted with thin black lines; its 9 squares are around the same size. The cups that replace the X's and O's are roughly shaped like a truncated cone, with a base radius of 7 cm, a top radius of 4.5 cm and a height of 11 cm. Specifically, the cups that correspond to the X's are in a red shade, while the O's are painted in a blue shade.

With the description given above, one can easily pinpoint the region of interest; points outside it are useless: any point below, or more than 11 cm above, the table is unnecessary. Once the table pose is estimated, such points can be easily removed. Fortunately the table is the dominant plane in the scene, and PCL provides an easy way to find such a plane: using the SACSegmentation interface, with a plane as the model and RANSAC as the estimator, it is possible to find the coefficients of the plane model and the points contained by the model. Knowing the points that belong to the table surface is of special interest because it is also there that the game board is located. Figure 4.2 presents the result obtained, and table 4.1 gives some insight into the obtained plane coefficients; the coefficients follow the equation of a plane, ax + by + cz + d = 0.

Figure 4.2: Perception node’s table extraction.

Run    a      b      c      d
1      0.004  0.470  0.883  -0.900
2      0.009  0.470  0.883  -0.900
3      0.008  0.470  0.883  -0.900

Table 4.1: Example of the obtained plane coefficients.
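A sketch of this plane fit; the 1 cm inlier distance threshold is an assumed value, not one taken from the thesis:

#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>

void findTablePlane(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud,
                    pcl::ModelCoefficients::Ptr coefficients, // a, b, c, d
                    pcl::PointIndices::Ptr inliers)           // table points
{
  pcl::SACSegmentation<pcl::PointXYZRGB> seg;
  seg.setModelType(pcl::SACMODEL_PLANE); // fit a plane model...
  seg.setMethodType(pcl::SAC_RANSAC);    // ...with RANSAC as the estimator
  seg.setDistanceThreshold(0.01);        // 1 cm inlier threshold (assumed)
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);
}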

Now that the table plane's coefficients have been obtained, the points outside the range 4-11 cm (taking the table plane as the origin) can be removed, reducing the overall number of points in the scene representation and therefore the computation cost. The range starts at 4 cm because, if it started at 0 cm, it could also include portions of the table plane; with this margin it is guaranteed that no point belonging to the table is included, making the process more resilient to noise. It also has the advantage of removing clusters that could introduce false matches. Once again resorting to the Point Cloud Library, the ExtractPolygonalPrismData interface was used. It takes a planar hull, projects it within the desired height bounds, and registers all points inside the newly formed prism. The end result is shown in figure 4.3, and a sketch of this step follows.

Figure 4.3: Perception node's prismatic extraction.
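A sketch of the prism extraction, assuming the table's convex hull has already been computed from the plane inliers:

#include <pcl/point_types.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/extract_polygonal_prism_data.h>

void extractAboveTable(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud,
                       const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& table_hull,
                       pcl::PointIndices::Ptr inside)
{
  pcl::ExtractPolygonalPrismData<pcl::PointXYZRGB> prism;
  prism.setInputCloud(cloud);
  prism.setInputPlanarHull(table_hull);
  prism.setHeightLimits(0.04, 0.11); // keep points 4-11 cm above the table
  prism.segment(*inside);            // indices of points inside the prism
}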

After the two previously described transformations, what is expected to remain is the upper part of the cups. It is still necessary to obtain the individual cluster for each one. One possible solution, and the one used, is to perform a Euclidean segmentation [19]. This algorithm, in short, takes the Euclidean distance between two points: if they are close enough, they belong to the same cluster; otherwise, they are part of different clusters. For this use case in particular it was defined that each cluster is more than 2 cm apart from any other. Applying PCL's EuclideanClusterExtraction interface, the result contained in figure 4.4 is obtained; the colors were altered to make evident that each cup belongs to a different cluster.

Figure 4.4: Perception node’s Euclidean segmentation.
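A sketch of this clustering step; the 2 cm tolerance is the value given above, while the minimum and maximum cluster sizes are assumptions:

#include <vector>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

std::vector<pcl::PointIndices>
clusterCups(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
  tree->setInputCloud(cloud);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
  ec.setClusterTolerance(0.02); // points closer than 2 cm share a cluster
  ec.setMinClusterSize(50);     // assumed bounds for a valid cup cluster
  ec.setMaxClusterSize(5000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(clusters);         // one PointIndices set per cup
  return clusters;
}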

After each individual cup cluster is obtained, its characteristics are determined: color and pose in relation to the robot base. The color is taken as the average of all hues that constitute the cluster; if it is less than 30 or greater than 300 the cup is considered red, otherwise blue.

Finally, it is necessary to find the game board. For this task, the table cluster found before is used. As described, the board has a shade of white that is unique among everything sitting on the table surface; therefore, it is expected that applying a color filter makes it possible to extract exclusively the board. In reality, even after the best possible color filter some rogue points will remain; when dealing with whites, for example, reflections are a common source of this type of error. To eliminate such artifacts, after the color filter a proximity filter is also applied: any point that does not contain a minimum number of other points in its closest vicinity is not considered to belong to the main cloud. These parameters, vicinity length and minimum number of other points, were determined through trial and error.

The RGB color format, the one used by the Kinect camera and consequently in the point clouds, is not the most appropriate for color filtering, since different lighting conditions are hard to express in this system. The HSV system is a better solution: Hue represents the base color, Saturation its different shades, and Value the different lighting conditions. In this system, shades of white appear when S is near 0% and V is near 100%, independently of the H value. With the HSV system, determining the values for the color filtering process is more intuitive, but the task of finding the best values for the filter remains as tedious as before. The process is mostly based on trial and error, and is affected by the current environment conditions. Consequently, a minimalist tool was developed to help with this task, figure 4.5. It eliminates the need for recompilation to test each new condition set, while also providing real-time feedback on the applied filters. Since these values are so volatile, they are not hard-coded; they can be changed via parameters set during initialization.

Figure 4.5: RQT CloudColorFilter tool.

Once again, PCL was used to reach the end result. For the color filter, the ConditionalRemoval interface was used. It takes a set of conditions, in this case the ranges for H, S and V, and removes (or keeps) the points that satisfy them (figure 4.6). For the proximity filter, the RadiusOutlierRemoval interface was used. It takes a search radius and the minimum number of points that ought to lie within it, removing the points that do not reach such a minimum. Figure 4.7 shows the end result; when compared with figure 4.6, which shows the result after filtering the desired color alone, the shortcoming of only using color filters is clearly noticeable. A sketch of both filters follows.

Figure 4.6: Perception node's board extraction; color extraction only.

Figure 4.7: Perception node's board extraction; with both color and proximity conditions.
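A sketch of the two filters chained together; the HSV thresholds and the neighbourhood parameters are placeholders, since the real values were tuned with the RQT CloudColorFilter tool:

#include <pcl/point_types.h>
#include <pcl/point_types_conversion.h>
#include <pcl/filters/conditional_removal.h>
#include <pcl/filters/radius_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZHSV>::Ptr
extractBoard(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& table)
{
  // Convert RGB to HSV, where "white" is easy to express: low S, high V.
  pcl::PointCloud<pcl::PointXYZHSV>::Ptr hsv(new pcl::PointCloud<pcl::PointXYZHSV>);
  pcl::PointCloudXYZRGBtoXYZHSV(*table, *hsv);

  // Color filter: keep points with S < 0.2 and V > 0.8 (placeholder values).
  pcl::ConditionAnd<pcl::PointXYZHSV>::Ptr cond(new pcl::ConditionAnd<pcl::PointXYZHSV>);
  cond->addComparison(pcl::FieldComparison<pcl::PointXYZHSV>::ConstPtr(
      new pcl::FieldComparison<pcl::PointXYZHSV>("s", pcl::ComparisonOps::LT, 0.2f)));
  cond->addComparison(pcl::FieldComparison<pcl::PointXYZHSV>::ConstPtr(
      new pcl::FieldComparison<pcl::PointXYZHSV>("v", pcl::ComparisonOps::GT, 0.8f)));

  pcl::PointCloud<pcl::PointXYZHSV>::Ptr white(new pcl::PointCloud<pcl::PointXYZHSV>);
  pcl::ConditionalRemoval<pcl::PointXYZHSV> color_filter;
  color_filter.setCondition(cond);
  color_filter.setInputCloud(hsv);
  color_filter.filter(*white);

  // Proximity filter: drop points without enough neighbours nearby.
  pcl::PointCloud<pcl::PointXYZHSV>::Ptr board(new pcl::PointCloud<pcl::PointXYZHSV>);
  pcl::RadiusOutlierRemoval<pcl::PointXYZHSV> proximity_filter;
  proximity_filter.setInputCloud(white);
  proximity_filter.setRadiusSearch(0.02);       // 2 cm vicinity (placeholder)
  proximity_filter.setMinNeighborsInRadius(10); // minimum count (placeholder)
  proximity_filter.filter(*board);
  return board;
}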

Note about the robot self-representation in the world

As described earlier, the robot is constituted by two individual and disconnected parts, the arm and the camera. Being two separate parts, their relative position can change from time to time; it is therefore necessary to determine the pose between them. The plan was to use the camera's capabilities, together with knowledge of the arm's geometry and characteristics, to flawlessly determine the arm's pose in relation to the camera. The fact that the arm is mostly constituted by highly reflective surfaces, with few clearly distinctive features, transformed this task into a hellish challenge. Unfortunately, no solution could be developed that would achieve the desired result.

In order not to compromise the whole work, it was decided to measure a priori the pose relation between the parts, at the cost that no displacement can occur afterwards.

4.1.2 Manipulation Node

The manipulation node consists of the logic required to pick and place cups while avoiding collisions with external entities. All of the solutions presented in this section rely on the MoveIt! software. In the majority of cases the defaults used by MoveIt! yielded the best results, meaning that unless otherwise stated the planner used by MoveIt! is its default one, RRTConnect.

A pick and place motion plan begins once a play is defined. The object to be picked will always be a cup and will always be outside the playing grid; the destination of the cup will always be a square of the 3x3 grid. The determination of which cup to pick and which square to target is outside the scope of this node; this information is given to it once a pick and place action is requested.

Pick

The pick motion is divided into 3 main steps: isolate the critical areas, approach the cup and grasp it, and retreat to a safe position.

In order to isolate the critical areas, spaces where a collision can occur, a set of rectangular cuboids is built which contains all the elements that could produce a collision. First, a rectangular cuboid is defined that encompasses the structure that holds the camera, as seen in figure 4.8. For the remaining obstacles, once again the knowledge of the scene is used, as previously described in the Perception section 4.1.1. Exploiting such knowledge, a set of rectangular cuboids that isolates all the collision elements can be built without adding too much computation strain. On top of the table surface a rectangular cuboid is placed, occupying it entirely and spanning 11 cm in height. An aperture is then left open where the target cup is located, allowing the gripper to reach it. Figure 4.8 illustrates the solution reached.

MoveIt! does not possess any interface to directly construct the collision region in the same manner as described in the previous paragraph. In order to accomplish the intended effect, the region is instead formed by 4 smaller rectangular cuboids, declared as sketched below. Figure 4.9 illustrates the position of each by having them painted in different colors: blue, purple, red, and yellow. They are arranged in such a manner that every cup is enclosed by them except the cup that is intended to be picked.

Figure 4.8: Manipulation node; critical areas.
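A sketch of how one of these collision cuboids might be declared through MoveIt!'s planning scene interface; the frame name, identifier and x/y dimensions are placeholders, while the 11 cm height follows the text:

#include <vector>
#include <moveit/planning_scene_interface/planning_scene_interface.h>
#include <moveit_msgs/CollisionObject.h>
#include <shape_msgs/SolidPrimitive.h>
#include <geometry_msgs/Pose.h>

void addCuboid(moveit::planning_interface::PlanningSceneInterface& scene)
{
  moveit_msgs::CollisionObject box;
  box.header.frame_id = "jaco_base"; // assumed planning frame
  box.id = "table_cover_1";          // one of the four cuboids

  shape_msgs::SolidPrimitive primitive;
  primitive.type = primitive.BOX;
  primitive.dimensions.resize(3);
  primitive.dimensions[0] = 0.60; // x size in meters (placeholder)
  primitive.dimensions[1] = 0.40; // y size in meters (placeholder)
  primitive.dimensions[2] = 0.11; // 11 cm tall, as described above

  geometry_msgs::Pose pose;       // centre of the cuboid
  pose.orientation.w = 1.0;
  pose.position.z = 0.055;        // sitting on the table surface

  box.primitives.push_back(primitive);
  box.primitive_poses.push_back(pose);
  box.operation = box.ADD;

  scene.addCollisionObjects(std::vector<moveit_msgs::CollisionObject>{box});
}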

Figure 4.9: Manipulation node; detail on the critical areas.

Once it is safe to traverse the environment, the arm end-effector is ordered to place itself just above the target cup. The gripper is then set to an open configuration, wide enough to envelop the cup as the arm is lowered. The gripper is then closed to a configuration that guarantees the cup is grasped; this configuration was determined a priori, through a trial-and-error process. Finally, the arm retreats to a safe position, always holding the cup, awaiting the place motion. Figure 4.10 gives an overview of the process, and a sketch of the sequence follows.

Figure 4.10: Manipulation node pick action frames.
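A sketch of the pick sequence just described; the group handles, named gripper configurations and approach offsets are assumptions, and the closure that guarantees the grasp was, as stated, found by trial and error:

#include <moveit/move_group_interface/move_group_interface.h>
#include <geometry_msgs/Pose.h>

void pickCup(moveit::planning_interface::MoveGroupInterface& arm,
             moveit::planning_interface::MoveGroupInterface& gripper,
             const geometry_msgs::Pose& cup_pose)
{
  geometry_msgs::Pose hover = cup_pose;
  hover.position.z += 0.15;        // hover above the cup (assumed offset)
  arm.setPoseTarget(hover);
  arm.move();

  gripper.setNamedTarget("open");  // assumed named configuration
  gripper.move();

  geometry_msgs::Pose grasp = cup_pose;
  grasp.position.z += 0.05;        // lower the gripper around the cup (assumed)
  arm.setPoseTarget(grasp);
  arm.move();

  gripper.setNamedTarget("grasp"); // closure found a priori by trial and error
  gripper.move();

  arm.setPoseTarget(hover);        // retreat, still holding the cup
  arm.move();
}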

Place

The place motion is in all aspects similar to the pick motion. It is also performed in 3 main steps: isolate the critical areas, approach the target square in the grid and release the cup, and retreat to the position held just before the pick and place motion started.

The rectangular cuboids that enclose the obstacles are built in the same manner as described in the Pick section. The only difference is that now the aperture is located over the grid square where the cup is going to be placed. The arm is programmed to move to a position that leaves the cup hovering over the target square. It then releases the cup, retracts the arm, and disassembles the anti-collision cuboids, concluding the pick and place action. Figure 4.11 illustrates the process.

Figure 4.11: Manipulation node place action frames.

4.1.3 Game resolution

The game resolution follows the Minimax algorithm [20]. An in-depth explanation is out of scope for this thesis; if the reader is interested in the algorithm's details, the book Game Theory, by Michael Maschler, Eilon Solan and Shmuel Zamir, provides a great explanation.

The solution used in this thesis is closely based on the open-source one provided by Jason Fox at https://github.com/jasonrobertfox/tictactoe [21].

The Minimax algorithm guarantees the best possible play. Due to tic-tac-toe's inherent characteristics, if the best play is always followed, the best result the adversary can achieve is a draw. Therefore, to simulate different degrees of difficulty, there is a random chance that the robot chooses a different valid play. The probability of ignoring the result of the Minimax algorithm is larger the easier the difficulty setting.
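For reference, a self-contained minimax sketch for tic-tac-toe (not the thesis code, which follows Jason Fox's implementation); the robot is assumed to play 'X':

#include <algorithm>
#include <array>

using Board = std::array<char, 9>; // cells hold 'X', 'O' or ' '

char winner(const Board& b)
{
  static const int lines[8][3] = {{0,1,2},{3,4,5},{6,7,8},{0,3,6},
                                  {1,4,7},{2,5,8},{0,4,8},{2,4,6}};
  for (const auto& l : lines)
    if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
      return b[l[0]];
  return ' ';
}

// Score a position from the robot's point of view: +1 win, -1 loss, 0 draw.
int minimax(Board& b, bool robot_turn)
{
  char w = winner(b);
  if (w == 'X') return 1;
  if (w == 'O') return -1;
  if (std::count(b.begin(), b.end(), ' ') == 0) return 0; // full board: draw

  int best = robot_turn ? -2 : 2;
  for (int i = 0; i < 9; ++i) {
    if (b[i] != ' ') continue;
    b[i] = robot_turn ? 'X' : 'O';       // try the move...
    int score = minimax(b, !robot_turn); // ...and evaluate the best reply
    b[i] = ' ';
    best = robot_turn ? std::max(best, score) : std::min(best, score);
  }
  return best;
}

int bestMove(Board& b)
{
  int best_score = -2, move = -1;
  for (int i = 0; i < 9; ++i) {
    if (b[i] != ' ') continue;
    b[i] = 'X';
    int score = minimax(b, false);
    b[i] = ' ';
    if (score > best_score) { best_score = score; move = i; }
  }
  return move; // lower difficulties would randomly override this choice
}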

4.1.4 General status

The purpose of this node is to coordinate the work done by all the other nodes. It is the main node in the sense that it receives all the processed data and decides which action to take based on it. It is also the node responsible for keeping track of the current state of the game.

Before it is possible to analyze the board state, it is necessary to locate the board in the world. This step happens in the Setup State. Once the board is found, it is possible to start the game. It is also during this state that the initial configuration occurs; the user can modify the behavior of the robot through the tool represented in figure 4.12.

Figure 4.12: Control tool GUI; Setup State.

The GamePlayMode combo box allows the user to choose with whom the robot plays. The options are:

• Adversary moves directly

The robot is expected not to interfere during the adversary's turn; it is the adversary who, by their own means, moves the cup to the desired location.

• Adversary instructs intention

Not implemented. The intention was to have the robot interpret the adversary's gestures and perform the adversary's play according to the interpreted instructions.

• Robot plays with itself

The robot plays both turns, but its decision logic is not affected: its decisions are the same as if the adversary were a human. There is no human interaction besides that required in the setup state.

The GameDifficulty combo box sets the expected difficulty. It increases gradually in the following order: Easy, Medium, Hard, Impossible. In the last setting, Impossible, it is impossible for the human adversary to win; the best achievable result is a draw.

The Robot moves second check box defines who plays first. By default, the robot moves first.

Finally, the Finish Setup button, once pressed, signals that the game has begun. During the game, the human operator can observe the robot's perception through the second panel in the GUI. Figure 4.13 presents this panel. On the left is the current board state: white for an empty cell, red for X, and blue for O. On the right is a list of the cups detected outside the board, once again red for X and blue for O.

The logic followed for the remaining states is presented in figure 4.14.

The Invalid State can occur in any phase. It is used to signal that the current environment presents an unrecoverable situation.

The Waiting Environment State happens when the order to move the robot has already been given, but the robot still needs to find itself within the scene, and/or the board. Once all the conditions are met, it changes to the next state.


Figure 4.13: Control tool GUI. During game.

The Ready State is the state where a new play is formed. It requires the existence of an extra cup (defined as a cup that is not currently within the play grid but is in a region where it can be reached) to move to the next state. While this condition is not met, the robot remains in this state.

The Waiting Robot Play State ends once the perception node finds a new valid cup in the playing grid. A new cup is considered valid once all the movements have finished and the perception node reports the same configuration three times. This approach was chosen because it allows invalid states to be detected as soon as they happen.

There is a special case in which a transition not clearly represented in the diagram can happen. It occurs when the robot is playing against itself: the flags that dictate whose turn it is and which color belongs to the robot are toggled, resulting in the robot always transitioning back to the Ready State.

The Waiting Adversary Play State is in all aspects similar to the Waiting Robot Play State; it also only ends when the perception node finds a new valid cup in the playing grid. The only differences are that it does not contain the special case that occurs when the robot is playing with itself, and that the color considered valid is different.

Finally, in the Restarting State, each cup is individually picked and moved away from the playing grid. Once the grid is clear, the game is ready to restart. A simplified sketch of the whole state machine follows.
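A condensed sketch of this state machine; the state names follow the text, while the boolean transition conditions are simplified placeholders for the logic of figure 4.14:

enum class State {
  Setup, WaitingEnvironment, Ready,
  WaitingRobotPlay, WaitingAdversaryPlay, Restarting, Invalid
};

State step(State current, bool setup_done, bool board_found, bool extra_cup,
           bool new_valid_cup, bool game_over, bool unrecoverable)
{
  if (unrecoverable) return State::Invalid; // can happen in any phase
  switch (current) {
    case State::Setup:              // initial configuration through the GUI
      return setup_done ? State::WaitingEnvironment : current;
    case State::WaitingEnvironment: // needs its own pose and/or the board
      return board_found ? State::Ready : current;
    case State::Ready:              // an extra, reachable cup must exist
      return extra_cup ? State::WaitingRobotPlay : current;
    case State::WaitingRobotPlay:   // same configuration reported three times
      return new_valid_cup ? State::WaitingAdversaryPlay : current;
    case State::WaitingAdversaryPlay:
      if (game_over) return State::Restarting;
      return new_valid_cup ? State::Ready : current;
    case State::Restarting:         // cups cleared away from the grid
      return State::Ready;
    default:
      return current;
  }
}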

4.2 Simulated environment

Everything that was said for the real environment holds in the simulated world, except that human interaction is yet to be modeled. The simulated environment was carefully crafted to be as similar as possible to the real one; a side-by-side comparison is presented in figure 4.15.

While it did not change the end result, there is a major difference between the real and the simulated environments that could not be worked out. In the real environment the cups are grasped with only two of the three available fingers. This configuration proved much more stable than using all three fingers; when using the latter there were instances where the cup would slip and fall from the grasp. In the simulated environment, however, the observed behavior is exactly the opposite: the three-finger grasp is more stable. It is suspected that this difference stems from the inability of the simulator to properly simulate object deformation. To handle this aspect it was necessary to define two different kinematic chains, and two different grasping points, one for each type of environment. The chains differ only in their endings: in the simulated environment it ends in-between the two fingers; in the real environment it ends in the centroid defined by the three fingers. Those points are also used as the grasping points. It is worth noting that while the same executable is valid for both scenarios (i.e. no recompilation is required), it is necessary to pass a parameter indicating whether the robot is in the real or in the simulated environment.

Even though no attempt was made to map the noise of the real scenario onto the simulated one, all the solutions described above proved robust enough to also work in the absence of noise.


Figure 4.15: Real environment vs simulated environment


Chapter 5

Results

This chapter presents an in-depth analysis of the reliability of the proposed solutions. While some of the results could already be observed in the previous chapter, here a more high-level analysis is performed: the effectiveness of the proposed solution is studied across a few different scenarios, including how many runs it can complete without doing anything unexpected. Unless explicitly noted, the results presented in this chapter are valid for both the real and simulated environments.

5.1 Table detection

When the environment conditions described in section 4.1.1 are met, an instance where the developed solution failed to find the table is yet to be observed. However, during normal execution there are a few moments where the camera is obstructed and it is therefore impossible to find the table. These instances are not regarded as failures, but as expected behavior, since there is not enough information in the perception input alone to extract the table plane by any means.

5.2 Cup detection and individualization

Once again there are two different environments, resulting from complementary sets of conditions, that need to be analyzed: the environment where the conditions described in section 4.1.1 are met, and the environment that does not meet such conditions.

If it is guaranteed that a minimum distance of 2 cm is maintained between the upper parts of the cups, the developed solution is always able to find and individualize the cup clusters. If this restriction is not respected, undefined behavior can occur. The most frequent consequence observed when the environment disrespects the expected conditions is two different cups being interpreted as a single cluster which, while theoretically detectable, would be dropped during segmentation due to how Euclidean segmentation is implemented in PCL (it requires an upper and a lower bound to determine cluster validity).

Regarding the colors: if the lighting conditions do not change substantially after the initial configuration, the system is able to distinguish between the two colors with a 100% success rate. Lighting changes such as a cloud passing in front of the Sun are enough to trigger undefined behavior, and no guarantee about the perceived color can be made. There is a relation between the number of wrongly perceived colors and the amount of change in the lighting conditions; however, no attempt was made to measure this relation, since there were no means to accurately measure the lighting changes.

5.3 Board detection

Within the environment described in section 4.1.1, and when the lighting conditions are maintained (as explained in the previous section), the board detection has a success rate of 100%. It was always possible to find the correct position of the board within a 2 cm tolerance, the upper limit up to which the system is guaranteed to perform correctly (a fourth of a grid slot's size). The board detection process yielded even better results: no instance was observed where the error exceeded 1 cm. Table 5.1 shows the results of some runs done in the real environment; the error was measured as the Euclidean distance between where the board's corner closest to the arm effectively is and where it was detected.

Run   Correctly detected?   Error (cm)
1     Yes                   0.114
2     Yes                   0.388
3     Yes                   0.208
4     Yes                   0.142
5     Yes                   0.071
6     Yes                   0.702
7     Yes                   0.240
8     Yes                   0.521
9     Yes                   0.187
10    Yes                   0.092

Table 5.1: Board detection in real environment.

5.4 Pick up cup

Within a normal interaction with the game, every observed pick-up action worked flawlessly: in every run performed, no occurrence was observed where the arm failed to pick the cup. There are, however, some actions that can introduce undefined behavior: the cup position is determined at the very beginning of the pick action, so if the cup is moved during the pick-up action, no guarantee of success can be made.

Even during normal execution there were some occurrences of undesired behavior: the arm slightly collided with other cups. It bumped into other cups, but without enough force to produce a displacement that could alter the game state; therefore, such occurrences were disregarded in the overall success rate of the action. Table 5.2 summarizes the observed occurrences.

Run   Correctly grasped?   Collided with any other element?
1     Yes                  No
2     Yes                  No
3     Yes                  No
4     Yes                  No
5     Yes                  No
6     Yes                  No
7     Yes                  No
8     Yes                  Yes, but without change of the state
9     Yes                  No
10    Yes                  No

Table 5.2: 10 runs of pick actions.

5.5 Place cup

The resilience results obtained for the Place cup action were very similar to those of the Pick up cup action. The action was performed successfully in every attempt, but there were also cases where minor collisions occurred. Table 5.3 summarizes the occurrences registered over 10 place actions.

Run   Correctly placed?   Collided with any other element?
1     Yes                 No
2     Yes                 No
3     Yes                 No
4     Yes                 Yes, but without change of the state
5     Yes                 No
6     Yes                 No
7     Yes                 No
8     Yes                 No
9     Yes                 No
10    Yes                 No

Table 5.3: 10 runs of place actions.

5.6 Board's grid state detection

The resolution of this task relies on the success of both the task of finding and individualizing the cups and the task of board extraction. These two tasks, as described previously, are yet to be seen failing under normal operating conditions. Therefore, finding the current game state was also observed to work in 100% of the attempts.


Chapter 6

Conclusions

Building a robust robot, mainly when perception is involved, is very challenging. There are millions of small variations, tiny errors almost imperceptible to the sensors but enough to produce a significant offset in the expected behaviour. This thesis suffered hugely from such errors; the work presented here is far from the initial goal. Most of the time dedicated to this work was spent improving its robustness to noise: devising approaches to solve the required goals was trivial, whereas making them resistant to noise was the challenging part. Another aspect that made reaching the proposed goals harder was the computation cost of perception operations. While there are a number of robust solutions that could be used to reach near-perfect results, even in the presence of noise, the computation cost of such operations is prohibitively high: they could take up to a few minutes, compromising the desired interactivity with the robot. It was therefore necessary to reach a compromise between efficiency and interactivity, while also using as many shortcuts as possible. One of the shortcuts used was the parallelization of certain operations; although it yielded the desired results, it also greatly increased the complexity of the code. Another shortcut was the reduction of the density of the input point cloud; while it noticeably reduced the computation cost of the perception operations, it also decreased resilience against noise.

Given the current work, one of the things it clearly lacks is resilience. If something goes slightly wrong during execution, it has no means to perform any kind of recovery, rendering the whole game progress useless and requiring a cold restart and operator interference before it is possible to interact with it again.

The usage of MoveIt! significantly reduced the complexity of the manipulation task; however, its apparently simple API barely hides the overwhelming complexity of its implementation. Programmers new to the field and to the library can interpret some of the obtained results as unexpected, when in reality it is just the implementation leaking out. MoveIt! aims to act as a black box: it is given a request and it produces a result. However, for that result to fall under predictable behavior, it still requires extensive configuration and in-depth knowledge of the underlying inner workings, defeating the main reason that led to its choice. Nevertheless, it still decreased the time required to reach a working solution.


6.1 Future Work

Even though the work presented here is a reduced version of what was initially planned, it is still a solid base for eventual derived works. It is built on top of state-of-the-art software, in its latest releases, meaning that whoever comes after will have to dedicate little to no time to bringing the software up to date.

One of the interesting extensions that could result from this work is having the robot play under human instruction: the human would communicate his intentions through hand signs, and the robot would interpret and follow the orders. Likewise, it would be interesting to implement other games.


References

[1] EuRoC consortium. EuRoC, a €7M Grant Robotics Challenge Programme has Launched. http://www.euroc-project.eu/fileadmin/articles/EuRoC_PR_2014-04-01.pdf, 2014. [Online].

[2] IRIS Lab / IEETA, Universidade de Aveiro. EuRoC Stage I, Challenge 2, IRIS Task 5. https://www.youtube.com/watch?v=qyTO1Nf3L2U, 2016. [Online].

[3] Open Source Initiative. The Open Source Definition. https://opensource.org/osd. [Online].

[4] D. Fischinger, P. Einramhof, K. Papoutsakis, W. Wohlkinger, P. Mayer, P. Panek, S. Hofmann, T. Koertner, A. Weiss, A. Argyros, and M. Vincze. Hobbit, a care robot supporting independent living at home: First prototype and lessons learned. Robotics and Autonomous Systems, 75:60–78, 2016.

[5] Davis E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755–1758, 2009.

[6] Kinova Robotics. About us. http://www.kinovarobotics.com/about-us/. [Online].

[7] F. Boucher. From need to innovation [industrial activities]. IEEE Robotics & Automation Magazine, 22(3):18–19, Sept 2015.

[8] V. Modugno, G. Neumann, E. Rueckert, G. Oriolo, J. Peters, and S. Ivaldi. Learning soft task priorities for control of redundant robots. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 221–226, May 2016.

[9] C. Bousquet-Jette, S. Achiche, D. Beaini, Y. S. Law-Kam Cio, C. Leblond-Ménard, and M. Raison. Fast scene analysis using vision and artificial intelligence for object prehension by an assistive robot. Engineering Applications of Artificial Intelligence, 63:33–44, 2017.

[10] Kinova Robotics. User guide. http://www.kinovarobotics.com/wp-content/uploads/2017/06/JACO%C2%B2-User-Guide-Asstive-Robotics-April-2017.pdf. [Online].

[11] Brier Dudley. E3: New info on Microsoft's Natal – how it works, multiplayer and PC versions. http://old.seattletimes.com/html/technologybrierdudleysblog/2009296568_e3_new_info_on_microsofts_nata.html. [Online].

[12] OpenKinect. History. https://openkinect.org/wiki/History. [Online].

[13] OpenKinect. Imaging Information. https://openkinect.org/wiki/Imaging_Information. [Online].

[14] Open Source Robotics Foundation. About ROS. http://www.ros.org/about-ros/. [Online].

[15] Open Source Robotics Foundation. Core Components. http://www.ros.org/core-components/. [Online].

[16] Ioan A. Sucan and Sachin Chitta. MoveIt!. http://moveit.ros.org/. [Online].

[17] Radu Bogdan Rusu and Steve Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9–13 2011.

[18] Open Source Robotics Foundation. Gazebo. http://gazebosim.org/. [Online].

[19] Radu Bogdan Rusu. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. PhD thesis, Computer Science department, Technische Universitaet Muenchen, Germany, October 2009.

[20] Michael Maschler, Eilon Solan, and Shmuel Zamir. Game Theory. Cambridge University Press, 2013.

[21] Jason Fox. tictactoe. https://github.com/jasonrobertfox/tictactoe, 2013. [Online].


Appendix A

Environment replication

• Install Ubuntu 16.04 LTS on your computer. The installation medium and documentation of the procedure can be found here: http://releases.ubuntu.com/16.04/.

• Update the system.

$ sudo apt update
$ sudo apt dist-upgrade

• Install ROS Kinetic, desktop-full version. You can follow this guide: http://wiki.ros.org/kinetic/Installation/Ubuntu.

• Create the catkin workspace: http://wiki.ros.org/catkin/Tutorials/create_a_workspace. Note: don't forget to add the appropriate sourcing to your ~/.bashrc file.

• Clone the kinova-ros package, available at: https://github.com/MiguelHorta/kinova-ros.

$ cd ~/catkin_ws
$ git clone https://github.com/MiguelHorta/kinova-ros.git

• Install the driver for the JACO arm:

$ sudo cp $HOME/catkin_ws/src/kinova-ros/kinova_driver/udev/10-kinova-arm.rules /etc/udev/rules.d/

• Clone the iris-jacokinect package, available at: https://github.com/MiguelHorta/iris-jacokinect/.

$ cd ~/catkin_ws
$ git clone https://github.com/MiguelHorta/iris-jacokinect.git

• Install MoveIt!.

$ sudo apt-get install ros-kinetic-moveit

• Install MoveIt! visual tools.

$ sudo apt install ros-kinetic-moveit-visual-tools

• It should compile now. However it will not yet run.

$ cd ~/catkin_ws
$ catkin_make

• Install Kinect support software.

$ sudo apt install ros-kinetic-openni-launch

• Install the packages for the inverse kinematics.

$ sudo apt install ros-kinetic-trac-ik-kinematics-plugin ros-kinetic-trac-ik

• Install the packages for the Gazebo simulator.

$ sudo apt install ros-kinetic-gazebo-ros-control ros-kinetic-ros-controllers*

• Everything is now installed. To run with the robot:

$ roslaunch ijk_ttt_solver setup.launch
$ roslaunch ijk_ttt_solver pos_setup.launch
$ roslaunch ijk_ttt_solver run.launch robot_connected:=true

• Without the robot, in the simulator:

$ roslaunch ijk_ttt_solver setup.launch not_gazebo:=false
$ roslaunch ijk_ttt_solver pos_setup_gazebo.launch
$ roslaunch ijk_ttt_solver run.launch robot_connected:=false

• The GUI to control the robot can be launched with:

$ rqt -s ijk_ttt/Controller
