Development of a vision system for humanoid robots


Universidade de Aveiro
Departamento de Electrónica, Telecomunicações e Informática
2011

Alina Liliana Trifan

Desenvolvimento de um sistema de visão para robôs humanoides

Development of a vision system for humanoid robots


Dissertation presented to the University of Aveiro in fulfilment of the requirements for obtaining the degree of Master in Electronics and Telecommunications Engineering, carried out under the scientific supervision of Doutor António José Ribeiro Neves, Invited Assistant Professor of the Department of Electronics, Telecommunications and Informatics of the University of Aveiro, and of Doutor Manuel Bernardo Salvador Cunha, Assistant Professor of the Department of Electronics, Telecommunications and Informatics of the University of Aveiro.


presidente / president
Armando José Formoso de Pinho
associate professor of the University of Aveiro (by delegation of the Rector of the University of Aveiro)

vogais / examiners committee
Alexandre José Malheiro Bernardino
assistant professor of the Instituto Superior Técnico of the Universidade Técnica de Lisboa

Manuel Bernardo Salvador Cunha
assistant professor of the University of Aveiro (co-supervisor)

António José Ribeiro Neves

acknowledgements
…orientadores António Neves e Bernardo Cunha por todo o apoio, paciência e disponibilidade que me mostraram durante o último ano. Agradeço também aos meus pais que nunca deixaram de acreditar em mim e que sempre me apoiaram. Por fim, um obrigado muito especial ao H.B. por me ter aturado nos meus piores momentos e por ter celebrado comigo todos os meus pequenos sucessos.

With all my gratitude and appreciation I would like to thank my supervisors António Neves and Bernardo Cunha for all the support, patience and availability that they showed me during the last year. I would also like to thank my parents, who always believed in me and gave me the strength to continue. Last but not least, a special thank you to H.B., who had to endure all my panic attacks in the moments of crisis and who also knew how to celebrate with me all my small successes.


…portante e que permite a interpretação do espaço a mais do que uma dimensão. No caso de robôs humanoides, cuja forma se assemelha a um humano, a utilização de visão é um enorme desafio devido às restrições que derivam da diferença entre o corpo humano e o robô. Um sistema de visão robusto deve ser capaz de disponibilizar informação precisa sobre o ambiente e uma correcta representação dos objectos de interesse. Esta dissertação apresenta uma solução para um sistema de visão de um robô humanoide. Os algoritmos desenvolvidos para todos os módulos do sistema, desde a aquisição da imagem até ao seu processamento e detecção dos objectos de interesse, foram testados num robô humanoide desenvolvido para jogar futebol. Neste tipo de aplicação, o mundo envolvente ao robô é simplificado, havendo um conjunto de cores com significado especial neste contexto. Nesta aplicação em particular, a correcta detecção em tempo real de linhas brancas num campo, balizas amarela e azul bem como bolas laranja, mostram a eficiência do sistema proposto. No entanto, o trabalho desenvolvido nesta dissertação pode facilmente ser exportado para outras plataformas humanoides com alterações mínimas, como apresentado nos resultados experimentais desta dissertação. Para além dos algoritmos referidos, esta dissertação apresenta um conjunto de aplicações que podem ser executadas num computador externo ao robô, desenvolvidas para uma melhor manipulação das imagens adquiridas pela câmara do robô.


…task of interpreting spatial data, indexed by more than one dimension. In the case of a humanoid robot, that is, a robot whose structure imitates the human body, vision is a very challenging area because of all the restrictions that come from the differences between the human body and the robotic one. A robust vision system should be able to provide accurate information about the environment and a precise description of the objects of interest. This thesis presents a solution for an accurate implementation of a vision system for a humanoid robot. From acquiring images to processing them and detecting the objects of interest, all the algorithms have been tested on a soccer-playing humanoid robot. For a soccer-playing robot the world is simplified to a number of colors that are meaningful in this context. In this particular case, the algorithms for real-time detection of the white field lines, the blue and yellow goals and the orange ball have proven their reliability. The work implemented can be easily exported to other humanoid platforms, as presented in the experimental results of this thesis. Moreover, this thesis presents several applications that can run on an external computer and that have been created for a better manipulation of the images acquired by the camera of the robot.


Contents i

List of Figures iii

List of Tables ix

1 Introduction 1
1.1 Objectives . . . 2
1.2 Humanoid Robots . . . 2
1.3 Robotic vision in soccer applications . . . 3
1.4 Portuguese Team . . . 4
1.5 Contribution and structure of the thesis . . . 6

2 Humanoid Soccer 9
2.1 NAO Robots . . . 11
2.2 SPL Teams . . . 14
2.2.1 B-Human . . . 14
2.2.2 TT-UT Austin Villa . . . 15
2.2.3 MRL . . . 16
2.2.4 Austrian Kangaroos . . . 16
2.2.5 CMurfs . . . 17
2.2.6 Cerberus . . . 17
2.2.7 Borregos . . . 18
2.2.8 Robo Eireann . . . 18
2.2.9 TJArk . . . 18
2.3 Critical comments . . . 19

3 Vision system architecture 21
3.1 Communications . . . 21
3.2 NaoCalib and the configuration file . . . 24
3.5 Communication among agents . . . 29

4 Vision process: from image acquisition to object detection 31
4.1 Accessing the device and acquiring images . . . 32
4.2 Conversions between color spaces . . . 34
4.3 Camera parameters . . . 36
4.3.1 Exposure . . . 37
4.3.2 Gain . . . 37
4.3.3 White Balance . . . 37
4.3.4 Brightness . . . 38
4.3.5 Contrast . . . 39
4.4 Self-calibration of the camera intrinsic parameters . . . 39
4.5 LUT and the index image . . . 44
4.6 Color analysis using scan lines . . . 46
4.6.1 Orange segmentation . . . 48
4.6.2 White segmentation . . . 49
4.6.3 Yellow/blue segmentation . . . 50
4.7 Cluster formation . . . 51
4.8 Object Detection . . . 52

5 Experimental results 55
5.1 Results obtained with the NAO robot . . . 55
5.1.1 Ball detection . . . 58
5.1.2 Processing time . . . 62
5.2 Bioloid . . . 63
5.2.1 The Micro-Rato competition . . . 63
5.2.2 Bioloid vision system . . . 64
5.3 Bioloid results . . . 67

6 Conclusions and future work 69
6.1 Future work . . . 70

A Modules of the vision system 73

B User's manual 77


1.1 Several humanoid robots. . . . 3
1.2 An example of an image acquired by the NAO camera during a game, representing what the robot sees. . . . 4
1.3 Two consecutive frames acquired while the robot is moving. Due to its locomotion, consecutive frames acquired by the camera of the robot might represent different views of the surrounding world. . . . 5
1.4 Logo of the Portuguese Team. . . . 6
2.1 Image taken during an SPL game - the RoboCup 2010 final between the B-Human and rUNSWift teams. . . . 10
2.2 Several humanoid robots competing in the RoboCup Humanoid League. . . . 11
2.3 Several NAO platforms developed by Aldebaran Robotics. . . . 12
2.4 A computer-generated rendering shows Romeo doing chores at home [1]. . . . 13
2.5 Aldebaran NAO humanoid robots used in RoboCup SPL. . . . 13
2.6 An RGB conversion of the raw image sent by the camera. . . . 14
3.1 Vision system architecture including the applications that do not run on the robot. . . . 22
3.2 Typical client-server approach. . . . 23
3.3 Structure of the binary configuration file used in the proposed vision system. . . . 25
3.4 On the left, an original image acquired by the NAO camera. On the right, the same image with the colors of interest classified by means of the NaoCalib application. . . . 25
3.5 An illustration of the features of the NaoCalib application that can be accessed by pressing several keys of the PC keyboard. . . . 26
3.6 On the left, an original image acquired by the NAO camera and displayed by the NaoViewer application. On the right, the same image with the colors of interest classified by means of the NaoCalib application. . . . 27
3.7 On the left, the index image with all the classified colors labeled. On the right, …


3.9 Illustration of a TDMA round [3]. . . . 29
3.10 Illustration of a TDMA adaptive round [3]. . . . 30
4.1 Block diagram of the vision process that runs on the robot. . . . 32
4.2 Representation of the U and V components on a scale from -0.5 to 0.5 [4]. . . . 33
4.3 In (a) a YUV422 macropixel [5] and in (b) an illustration of chroma subsampling in the YUV color space [4]. . . . 34
4.4 On the left, the RGB cube and on the right, an example of additive color mixing: adding red to green yields yellow, adding all three primary colors together yields white [4]. . . . 35
4.5 The conical and cylindrical representations of the HSV color space [4]. . . . 36
4.6 On the left, an accurate image acquired with the camera parameters correctly calibrated. In the middle, an overexposed image and on the right, an underexposed image. . . . 37
4.7 On the left, an accurate image acquired with the camera parameters correctly calibrated. On the right, the same image after significantly increasing the gain value. . . . 38
4.8 On the left, an accurate image acquired with the camera parameters correctly calibrated. In the middle, an image in which the red chroma has a too elevated value, hence the reddish tonality of the image. On the right, an image in which the blue chroma is too high. . . . 38
4.9 On the left, an accurate image acquired with the camera parameters correctly calibrated. In the middle, an image that is too bright due to a high value of the brightness parameter of the camera. On the right, a too dark image captured when the brightness parameter is too low. . . . 39
4.10 On the left, an accurate image acquired with the camera parameters correctly calibrated. In the middle, an image acquired with a high value of the contrast parameter and on the right, an image acquired with a low value of the contrast parameter of the camera. . . . 39
4.11 An example of an image acquired with the camera working in auto-mode. The color of the carpet is different from the green we would expect to see and the white areas and the yellow goals are too bright. . . . 40
4.12 Block diagram of a PI controller. r(t), e(t) and u(t) represent the response, the error and the control signals, respectively. The constants K_p and T_i are experimentally determined coefficients associated with the gain and, respectively, the integration time.

… of the camera have converged. On the right, the histogram of the image. As expected, most of the image appears in the middle region of the histogram. . . . 42
4.14 Block diagram of the autocalibration process. . . . 42
4.15 On the left, an image acquired with the camera used in auto-mode. The white rectangle, in the top middle of the image, represents the white area used for calibrating the white balance parameters. In the middle, an image acquired after calibrating the gain and exposure parameters. On the right, the result of the self-calibration process, after having also the white balance parameters calibrated. . . . 43
4.16 On the left, a color calibration after the intrinsic parameters of the camera have converged. On the right, the result of color classification considering the same range for the colors of interest but with the camera working in auto-mode. Most of the colors of interest are lost (the blue, the yellow, the white and the black) and the shadow of the ball on the ground is now blue, which might be confusing for the robot when processing the information about the blue color. . . . 43
4.17 The mapping of the colors of interest for a NAO robot based on a one color to one bit relationship. . . . 45
4.18 An illustration of the type casting of the unsigned char buffer to an integer one, thus allowing the reading of four bytes at the same time. Using this approach a reduced resolution of the images can be obtained. Thus the processing time is significantly decreased and reliable results can still be attained. . . . 45
4.19 On the left, an illustration of the horizontal scan lines. On the right, the vertical scan lines. . . . 47
4.20 Transitions between green pixels (G) and pixels of one of the colors of interest (C). . . . 47
4.21 On the left, the RGB painted image in which the centers of all valid orange horizontal scan lines are marked with a black cross. On the right, the index image. . . . 49
4.22 On the left, the RGB painted image in which the centers of all valid orange vertical scan lines are marked with a black cross. On the right, the index image. . . . 49
4.23 On the left, the RGB painted image in which the centers of the scan lines are marked with black crosses. On the right, the corresponding index image. . . . 50
4.24 On the left, the RGB painted image in which the centers of all valid yellow horizontal scan lines are marked with a black cross. On the right, the index image. . . . 50
4.25 An example of cluster formation. The lower part of the yellow posts, as well as …

… robot at which it is found. . . . 53
4.27 Ball size at different distances from the robot. . . . 54
4.28 On the left, the detection of just one post of the goal. On the right, the detection of the goal when both posts are visible. . . . 54
5.1 On the left, the original image. In the middle, the equivalent index image and on the right, the image painted according to the labels in the 8-bit image. . . . 56
5.2 On the left, the index image corresponding to the previous original image. On the right, the equivalent image painted according to the labels in the grayscale image. Because of changes in the illumination, some information about the green is lost and only one yellow post is validated. . . . 57
5.3 On the left, the original image. In the middle, the index image and on the right, the painted image containing the markers for the ball and the blue goal. . . . 57
5.4 On the left, the index image. In the middle, the equivalent index image and on the right, an image containing only the markers of the objects of interest. . . . 57
5.5 On the left, the index image. On the right, the equivalent image painted according to the labels in the grayscale image and with the markers for all the valid orange blobs and for the blob that has been validated as being the ball. . . . 58
5.6 Images showing the ball being detected at distances from 50 cm to 3 m. . . . 59
5.7 Graphic of the ball area according to the distance from the robot in the situation when the robot is not moving and the ball is placed in front of it at different distances. . . . 59
5.8 Coordinates in pixels of the mass center of the ball according to the distance from the robot in the situation when the robot is not moving and the ball is placed in front of it at different distances. . . . 60
5.9 Coordinates in pixels of the mass center of the ball according to the distance from the robot acquired while the robot is moving towards a fixed ball. . . . 60
5.10 Coordinates in pixels of the mass center of the ball in the situation when the robot is not moving and the ball is being sent towards it. . . . 61
5.11 Several examples of robots constructed by means of the Bioloid robotic kit. . . . 63
5.12 An image from the Micro Rato 2011 competition. . . . 64
5.13 On the left, an image of the Micro Rato field. On the right, a graphical representation of the four corner posts and the beacon [6]. . . . 65
5.14 On the left, the original image acquired by the camera of the Bioloid robot, also containing the markers for the objects of interest. On the right, the corresponding color segmented image. . . . 67
5.15 On the left, the original image with the markers for all the posts. On the right, … also a mark for the mass center of each post. The blue/pink post is situated the furthest possible from the robot, the length of the maze, and it is still being …


4.1 Table presenting the calculation of the array indexing operation necessary for determining index_LUT and, as an example, the equivalent binary value for three RGB / YUV values corresponding to three colors of interest: red, green and blue, respectively. . . . 44
5.1 Percentage of ball detections compared to the total number of frames acquired under various scenarios. . . . 61
5.2 Processing times spent by the vision process. . . . 62
5.3 Processing times spent by the main tasks of the self-calibration module, when the camera is started at different values of the intrinsic parameters. In the first situation, the camera starts in auto-mode, in the second situation the camera starts with all parameters set to 0. Finally, in the third situation the camera …


Introduction

For both humans and robots vision is a very important sense, the latter representing the point of interest of this project. In the robotic world many different sensors can be used but, usually, the vision system is the main sensorial element, as it is capable of providing a great amount of information, such as spatial, temporal and morphological. Depending on the robot's purpose, the vision system has to perform different tasks. The majority of the systems must perform object recognition, object learning and localization perception. An efficient robotic vision system should also be able to perform its tasks in real time, which means that several constraints are imposed. First of all, real-time applications imply that algorithmic complexity is limited, thus allowing a small processing time.

One of the most interesting and popular branches of robotics nowadays is robotic soccer. For a soccer-playing robot, vision probably plays the most important part. In robotic soccer the environment is always changing; the ball and the robots are always moving, most of the time in an unpredictable way. The vision system is responsible for capturing all these fast-changing scenes, processing them and taking valid decisions in the smallest possible amount of time, thus allowing real-time operations.
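The real-time constraint above can be made concrete with a quick frame-budget calculation. The sketch below is purely illustrative: the frame rate and per-stage costs are assumed values, not measurements from this thesis.

```python
# Hypothetical illustration of the real-time constraint discussed above:
# every processing stage must fit inside one frame period. The frame rate
# and the per-stage costs are assumptions, not figures from this thesis.
def frame_budget_ms(fps: float) -> float:
    """Time available to process a single frame, in milliseconds."""
    return 1000.0 / fps

def within_budget(stage_costs_ms: list, fps: float) -> bool:
    """True if the summed cost of all processing stages fits one frame period."""
    return sum(stage_costs_ms) <= frame_budget_ms(fps)

# At 30 fps each frame must be handled in roughly 33 ms.
budget = frame_budget_ms(30)
# e.g. acquisition (5 ms) + segmentation (12 ms) + detection (8 ms) = 25 ms
fits = within_budget([5.0, 12.0, 8.0], 30)
```

In this hypothetical breakdown the three stages together take 25 ms, which fits the ~33 ms period of a 30 fps camera; any algorithmic change that pushes the sum past the period breaks the real-time requirement.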

This thesis focuses on the development of a modular vision system for a humanoid robot. The work that will be presented has been applied to the humanoid robots of the Portuguese Team [7] competing in the RoboCup Standard Platform League [8]. The goal of the project was the development of a generic vision system for humanoid robots, that is, a vision system that could be used with a wide range of humanoid robots. A platform for tests was needed and the first and most used available platform was the robotic soccer player, the NAO robot [1]. However, with a small number of adjustments, the same vision system has been successfully applied to a Bioloid humanoid robot [9]. This thesis focuses on the implementation of the vision system for the NAO robot, with emphasis on its modularity. One section will also be dedicated to the usage of the proposed vision system with the Bioloid robot in the chapter of

1.1 Objectives

The aim of this project is finding a solution for overcoming all the challenges and restrictions that robotic vision has to face and presenting an accurate implementation of several algorithms for detecting objects of interest for a humanoid soccer-playing robot. The specific goals of this thesis are:

Study of already existing algorithms for implementing a robotic vision system.

Development of algorithms that can be applied to the humanoid platforms that the University can provide.

Integration of the proposed algorithms with the given platforms.

Testing of the developed algorithms.

1.2 Humanoid Robots

Human-friendly robotics is nowadays more and more concerned about providing a variety of mechanically constructed solutions for some of the most common daily tasks of a human being. One of the many branches of robotics that best accomplishes the task of imitating human behaviours is humanoid robotics.

A humanoid robot is a robot whose overall appearance is very similar to the human body, thus allowing it to interact with objects and environments that were initially designed exclusively for human usage. The body of most humanoid robots consists of a head, a torso, two legs and two arms. There are also several models of robots whose bodies include a face with eyes and even a mouth. The idea behind the construction of a humanoid robot is to imitate different physical and mental tasks that humans undergo daily.

A humanoid robot that is fully autonomous should be able to:

Gather information about the environment.

Work for an extended period without human intervention.

Move either all or part of itself throughout its operating environment without human assistance.

Avoid situations that are harmful to people or itself, unless those are part of its design specifications.

Probably the most important feature of a humanoid robot is that it should be capable of learning new strategies for adapting to previously unknown situations in order to accomplish


robots are able to sense the environment, to classify objects of interest and to continuously learn new techniques for adapting to all the changes that might occur in the surrounding world. Despite their autonomy while accomplishing given tasks, humanoid robots, just like any other machine, require regular maintenance.

Humanoid robots are nowadays being used in a large number of scientific research branches. Among the most interesting and innovative examples is probably the robot NAO [1] (Fig. 1.1 (b)), a fully autonomous soccer-playing robot developed by Aldebaran Robotics. ASIMO (Fig. 1.1 (a)), the world's most advanced humanoid robot [11], is designed to operate in the real world, where people need to reach for things, pick things up, navigate along floors and even climb stairs. Last but not least, Robonaut 2 [12] (Fig. 1.1 (c)) is the first robot to enter Earth's orbit on a NASA space shuttle. These three examples represent only an infinitesimal percentage of what humanoid robotics stands for. It is a very wide research area through which technology can bring imagination to life.

(a) ASIMO (b) NAO (c) Robonaut 2 (d) Bioloid

Figure 1.1: Several humanoid robots.

1.3 Robotic vision in soccer applications

For any soccer robot the objects of interest are: the ball, the field lines, the goals, the team members and, of course, the obstacles they might encounter on the field, as well as the opponent team's members.

In humanoid robotics, the implementation of a vision system is further restricted by several other elements, of which one of the most important is the lack of stability of the digital camera due to the robot's type of locomotion. Another restriction comes from the limitation of the energetic autonomy and processing capabilities. These issues, along with the rest of the


might seem simplified, on a first approach, because of the color codification of the objects of interest, things are not that simple. Figure 1.2 represents an example of what a robot sees during a game. However, this would be an ideal situation, in which its vision is not affected by its locomotion. Such frames are usually captured while the robot is not moving any parts of its body.

Figure 1.2: An example of an image acquired by the NAO camera during a game, representing what the robot sees.

Figure 1.3 presents a more common acquisition of images during a soccer game. That is, apart from the objects that can be found outside the field but that are still included in the robot's representation of the surrounding world, the robot has to deal with the instability of the camera while walking. The surrounding objects can be a source of false positives in what concerns the detection of the objects of interest, while the movements of the camera influence the image acquisition process.

The research work done in the field of robotic vision is an ongoing process, since providing an accurate digital representation of the surrounding world is a very complex task when there is not any human sense to rely on. Moreover, with every year that passes, restrictions in the area of robotic soccer are lifted, meaning that the environment is becoming more and more generic, without the reliability of color information and that of the spatial constraint.

1.4 Portuguese Team

Portuguese Team represents the joint effort of two Portuguese universities, the University of Porto and the University of Aveiro, to build a new research-oriented and competitive SPL team. The project started in 2010 and it is composed of members of the three Portuguese teams that … CAMBADA [15] and 5DPO [16].

Figure 1.3: Two consecutive frames acquired while the robot is moving. Due to its locomotion, consecutive frames acquired by the camera of the robot might represent different views of the surrounding world.

FC Portugal is a joint project of the Universities of Porto and Aveiro in Portugal. The team has participated in several RoboCup competitions since 2000, including: Simulation 2D, Simulation 3D (humanoid simulation), Coach Competition, Rescue Simulation, Rescue Infrastructure and Physical Visualization/Mixed Reality. The team research is mainly focused on creating new coordination methodologies such as tactics, formations, dynamic role exchange, strategic positioning and coaching. The research-oriented development of FC Portugal has been pushing it to be one of the most competitive teams over the years (Simulation 2D World champion in 2000 and European champion in 2000 and 2001, Coach Champion in 2002, 2nd place in 2003 and 2004, Rescue Simulation European champion 2006, Simulation 3D European champions in 2006 and 2007 and World Champions in RoboCup 2006).

CAMBADA is the Middle-Size League RoboCup team and project from IEETA/University of Aveiro that started in 2003. The team's main research interests cover most of the Middle-Size League challenges such as robot vision, sensor fusion, multi-agent monitoring, and multi-robot team coordination. The team won RoboCup 2008, achieved 3rd place in the RoboCup 2009 and RoboCup 2010 World MSL competitions and was 2nd in the RoboCup German Open 2010. Moreover, for the last 5 years it has been the national champion of the MSL.

5DPO is a project of INESC-P/FEUP aiming at researching topics such as vision-based self-localization, data fusion, real-time control, decision and cooperation. The team has been active in RoboCup since 1998 in the Small-Size and Middle-Size leagues. The team results have been quite good in the Small-Size League. 5DPO won three European/German Open Small-Size League championships and achieved a 3rd place at RoboCup 1998 and a 2nd place at the RoboCup 2006 world championships in this league.


As a first approach and for testing purposes, the work presented in this thesis has been applied to the humanoid robots of the Portuguese Team.

Figure 1.4: Logo of the Portuguese Team.

1.5 Contribution and structure of the thesis

As stated in Section 1.4, the work described in this thesis has been applied in the context of robotic soccer, and all the algorithms that will be presented in Chapter 4 have been tested on the NAO humanoid robots of the Portuguese Team. Even though the goal of the project was to develop a generalized vision system for a wide range of humanoid robots, it was imperative to have an environment on which the work could be tested. The algorithms that were implemented had good results on the NAO robots and can easily be exported to other humanoid platforms with a minimum number of changes. In the case of the soccer-playing robot NAO, the vision system is able to acquire good quality images by calibrating the intrinsic parameters of the camera according to the environment. In the context of soccer games, the robot can detect the ball and the goals, as well as the lines of the field. Moreover, the software was also used, with a few adjustments, with a Bioloid robot, which is a proof of the modularity of the proposed vision system.

The thesis is structured in six chapters and two appendixes, each of them aiming to explain different issues that have been approached in the development of the project. The first chapter was an introductory one, stating the objectives of the thesis and presenting the fundamental elements of the project. In this regard, an overview was presented of state-of-the-art humanoid robots and also of the issues that robotic vision has to overcome nowadays. Moreover, a presentation of the SPL team Portuguese Team has been included, since it provided the environment for developing and testing the work described.

The second chapter gives a more detailed description of what humanoid robotic soccer means and also focuses on the work of the most important teams participating in RoboCup.


… (Section 3.3) and NaoCalib (Section 3.2). The first one, NaoViewer, is a tool used for monitoring and debugging purposes. It allows the user to follow in real time the results of the algorithms that run on the robot. NaoCalib is a very helpful tool in the process of calibrating the colors of interest. The fourth chapter is dedicated to the description of the implementation of the main vision process that runs on the robot. The vision process can be divided into three modules, which are: access of the device, calibration of the camera parameters and image acquisition, and, last but not least, image processing, which involves color segmentation and object detection. The details of the implementation of all three modules are presented.

Chapter 5 presents the results of the implemented algorithms and describes the usage of the developed vision system on a Bioloid [9] platform and, finally, Chapter 6 concludes the thesis, also stating several issues that might be approached in future work developments.

Appendix A presents the prototypes of all the methods that have been developed for the several modules of the vision system, while Appendix B represents the user's manual of the


Humanoid Soccer

RoboCup is an international initiative that fosters research in robotics and artificial intelligence, on multi-robot systems in particular, through competitions like RoboCup Robot Soccer, RoboCup Rescue, RoboCup@Home and RoboCupJunior. RoboCup is designed to meet the need of handling real-world complexities, though in a limited world, while maintaining an affordable problem size and research cost. RoboCup offers an integrated research task covering the broad areas of artificial intelligence and robotics. Such areas include: real-time sensor fusion, reactive behavior, strategy acquisition, learning, real-time planning, multi-agent systems, context recognition, vision, strategic decision-making, motor control, intelligent robot control, and many more.

The ultimate goal of the RoboCup initiative is stated as follows:

By mid-21st century, a team of fully autonomous humanoid robot soccer players shall win the soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.

Such a goal might sound overly ambitious given the state-of-the-art technology. Building humanoid soccer players requires an equally long period of time as well as extensive efforts of a broad range of research areas. Most probably the goal will not be met in any near term; however, it is important that such a long-range goal be claimed and pursued.

The main focus of the RoboCup competitions is the game of soccer, where the research goals concern cooperative multi-robot and multi-agent systems in dynamic adversarial environments. One of the most popular soccer leagues in RoboCup is the Standard Platform League (SPL). In this league all teams use identical, standard robots which are fully autonomous. Therefore, the teams concentrate on software development only, while still using state-of-the-art robots. Omnidirectional vision is not allowed, forcing decision-making to trade vision resources for self-localization and ball localization. The league replaced the highly successful Four-Legged League, based on Sony's AIBO dog robots [18], and is now based on Aldebaran's Nao humanoids.


… centre circle, corner arcs, and the lines surrounding the penalty areas) are 50 mm in width. The centre circle has an outside diameter of 1250 mm. In addition to this, the rest of the objects of interest are also color coded. The official ball is a Mylec orange street hockey ball. It is 65 mm in diameter and weighs 55 grams. The field lines are white and the two playing teams can have either red or blue markers. The red team will defend a yellow goal and the blue team a sky-blue goal. Figure 2.1 shows an image of a typical SPL game.

Figure 2.1: Image taken during an SPL game - the RoboCup 2010 final between the B-Human and rUNSWift teams.
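For reference, the measurements quoted above can be grouped into a small constants block. This is a sketch for illustration only: the constant names are ours, and only the numeric values come from the text.

```python
# SPL field and ball measurements as quoted in the text above; everything
# else in this sketch is derived from those numbers.
LINE_WIDTH_MM = 50                 # field lines, centre circle, corner arcs
CENTRE_CIRCLE_DIAMETER_MM = 1250   # outside diameter of the centre circle
BALL_DIAMETER_MM = 65              # Mylec orange street hockey ball
BALL_WEIGHT_G = 55

# Derived quantities a vision module might use when sizing detections.
CENTRE_CIRCLE_RADIUS_MM = CENTRE_CIRCLE_DIAMETER_MM / 2  # 625.0 mm
BALL_RADIUS_MM = BALL_DIAMETER_MM / 2                    # 32.5 mm
```

Keeping such specification values in one place makes it easy to check detected blob sizes against the known physical dimensions of the ball and field markings.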

The game is divided into two halves, each 10 minutes long, with an interval of 5 minutes between the two halves. The only accepted human intervention during the game is that of the referee. The decisions taken by the referee are transmitted to the robots wirelessly, by means of an application called GameController. This application implements the six states in which the robot can be: initial, ready, set, playing, penalized and finished. For the teams that cannot properly communicate with the GameController, the positioning and communication with the robots is done manually by the referee, who communicates his decisions to the robots with the help of the chest button and the foot bumpers.

After booting, the robots arein their initial state. In this state, thebutton interfa e for

manuallysetting the team olor andwhether theteam has ki k-ois a tive. The robotsare

not allowed moving inany fashion besides initiallystanding up. Pressing the leftfoot bump

sensorwill swit h theteam olor. Shortlypressing the hest button will swit h therobot to

the penalized state.

In the ready state, the robots walk to their legal ki k-o positions. They remain in this


In the set state, the robots stop and wait for kick-off. If they are not at legal positions, they will be placed manually by the assistant referees. They are allowed to move their heads before the game (re)starts but are not allowed to move their legs or locomote in any fashion. This state is not available if only the button interface is implemented. Robots that do not listen to the GameController will be placed manually. Until the game is (re)started, they are in the penalized state.

In the playing state, the robots are playing soccer. Shortly pressing the chest button will switch the robot to the penalized state. A robot is in the penalized state when it has been penalized. It is not allowed to move in any fashion, i.e. also the head has to stop turning. Shortly pressing the chest button will switch the robot back to the playing state.

The finished state is reached when a half is finished. This state is not available if only the button interface is implemented.
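The game states and the button/GameController transitions described above can be condensed into a small state machine. The following is an illustrative sketch only; the state and command names follow the text loosely and do not reproduce the actual GameController protocol.

```python
# Simplified sketch of the SPL game states and transitions described above.
# Names are illustrative, not the actual GameController message format.

STATES = {"initial", "ready", "set", "playing", "penalized", "finished"}

def chest_button(state):
    """A short chest-button press penalizes an active robot and
    un-penalizes a penalized one."""
    if state in ("initial", "playing"):
        return "penalized"
    if state == "penalized":
        return "playing"
    return state

def game_controller(state, command):
    """Apply a GameController command to the current state."""
    transitions = {
        ("initial", "ready"): "ready",      # walk to kick-off positions
        ("ready", "set"): "set",            # stop and wait for kick-off
        ("set", "play"): "playing",
        ("playing", "finish"): "finished",  # a half is over
    }
    return transitions.get((state, command), state)
```
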

Another league based on humanoid robots in the RoboCup competition is the Humanoid League. The main difference between the SPL and the Humanoid League is that in the latter teams can participate with their own robots; there is not a standard robot that has to be used. Just like in the SPL, in the Humanoid League autonomous robots with a human-like body plan and human-like senses play soccer against each other. Dynamic walking, running, and kicking the ball while maintaining balance, visual perception of the ball, other players, and the field, self-localization, and team play are among the many research issues investigated in the league. This league is divided into three subleagues, according to robot sizes: Kid Size (30-60 cm tall), Teen Size (100-120 cm tall) and Adult Size (over 130 cm). Examples of the robots used in these competitions can be seen in Fig. 2.2.

(a) Adult Size (b) Teen Size (c) Kid Size

Figure 2.2: Several humanoid robots competing in the RoboCup Humanoid League.

2.1 NAO Robots

The project NAO, launched in early 2005, is a humanoid robot developed by Aldebaran


cational institutions all over the world can purchase NAO robots at special prices for education and research purposes. As a result, NAO is at the moment the most used humanoid robot for academic purposes worldwide. From simple visual programming to elaborate embedded modules, the versatility of the NAO enables users to explore a wide variety of subjects at whatever level of complexity.

In July 2007 NAO (version H1/H25 - Fig. 2.3(c)) was nominated as the official platform for the standard league by the RoboCup Organizing Committee, and successor to Sony's Aibo robot dog [18]. NAO's first participation at the RoboCup occurred in July 2008 at Suzhou, China, where 15 universities each used 2 robots per team competing in soccer matches. By 2009, almost 100 NAOs competed at the RoboCup held in Graz, Austria. 24 teams (4 NAOs per team) made full use of the physical and cognitive capabilities of the robot.

(a) NAO T2 (b) NAO T14 (c) NAO H1/H25

Figure 2.3: Several NAO platforms developed by Aldebaran Robotics.

More recently, the fully autonomous NAO robots (Fig. 2.5(c)) also started being used in projects that develop behaviors for humanoid robots to be used in the environment of a home [19]. In this environment, the robot has to perform specific tasks like bringing a can of soda from the fridge to a human user. Moreover, in March 2011 Aldebaran Robotics launched a larger version of the NAO, the so-called robot Romeo (Fig. 2.4), designed exclusively for use at home.

The NAO robot used in the SPL (Fig. 2.5(a),(b)) is 57 centimeters tall and weighs 4.5 kilograms. It comes equipped with an x86 AMD GEODE 500 MHz CPU and 256 MB SDRAM / 2 GB flash memory. The robot was initially designed for entertainment and has mechanical, electronic, and cognitive features.

Reserved for the research and teaching fields, NAO Academics Edition is delivered with a


(a) (b)

Figure 2.5: Aldebaran NAO humanoid robots used in RoboCup SPL.

and the appropriate compilation and debugging tools. Apart from this, NAO Academics Edition has another standard feature called Choregraphe, a software that allows NAO users to interact with the robot in a very simple manner.

NAOqi is a framework property of Aldebaran Robotics that runs on the robots and acts as a server. Different modules can plug into NAOqi either as a library or as a broker, with the latter communicating over IP with NAOqi. It supplies bindings for C, C++, Python, Ruby and Urbi. NAOqi in NAO is automatically launched at startup if NAO is connected to the network. NAOqi itself comes with several notable modules designed to ease the interaction between the user and NAO. Such modules are: ALMemory, ALMotion, ALLeds, ALSonar, ALRobotPose, ALVideoDevice, ALVisionRecognition, ALVisionToolbox, etc. The names of the modules are generally relevant to their application, therefore only the ones related to vision


an access to images transformed in the requested format. ALVisionRecognition is a module which detects and recognizes learned pictures, like pages of a comic book, faces of objects or even locations. The ALVisionToolbox contains several different vision functionalities, like picture taking, video recording and white balance setting. However, even though these modules were used in the first steps of this project, in its final form they are no longer used due to real-time constraints, as is described in Section 4.1.

From the point of view of the hardware most related to the developed vision system, NAO has 2 identical video cameras that are located in the forehead and in the chin area, respectively. They provide a 640 × 480 resolution at 30 frames per second and the field of view is 58 degrees (diagonal). They can be used to identify objects in the visual field such as goals and balls, and the bottom camera can ease NAO's dribbles. The native output of the camera is YUV422 packed and only one camera can be used at a time, due to the fact that the two cameras share the same bus and both drivers will be loaded into the videodev0 kernel module.

Figure 2.6: A RGB conversion of the raw image sent by the camera.
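As an illustration of what handling the packed YUV422 (YUYV) output involves, the sketch below converts one macropixel (two pixels sharing a U and a V sample) to RGB using the common BT.601 integer approximation. This is only a hedged example of the format, not the conversion code actually used on the robot.

```python
def clamp(value):
    """Clip an intensity to the valid 8-bit range."""
    return max(0, min(255, int(value)))

def yuyv_pair_to_rgb(y0, u, y1, v):
    """Convert one packed YUV422 (YUYV) macropixel - two pixels sharing
    the same U and V samples - to two RGB triplets (BT.601 integer form)."""
    def to_rgb(y, u, v):
        c, d, e = y - 16, u - 128, v - 128
        r = clamp((298 * c + 409 * e + 128) >> 8)
        g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)
        b = clamp((298 * c + 516 * d + 128) >> 8)
        return (r, g, b)
    return to_rgb(y0, u, v), to_rgb(y1, u, v)
```

A neutral gray macropixel (U = V = 128) maps to equal R, G and B values, which is a quick sanity check for the chroma offsets.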

2.2 SPL Teams

This section focuses on the most important projects in the RoboCup SPL, with emphasis on their vision systems. The information presented in this section is according to the Team Description Papers of the teams that attended RoboCup in 2010.

2.2.1 B-Human

B-Human [20] is a collegiate project at the Department of Computer Science of the University of Bremen and the DFKI Research Area Safe and Secure Cognitive Systems. The goal of the project is to develop suitable software in order to participate in several RoboCup events. They participated during several years in the humanoid league until they joined the Standard


limitations of the robot's architecture. Each process represents a module of their software. The modules are: Cognition, Motion and Debug. The module related to vision is the Cognition module, which runs at a frequency of 30 Hz since the camera provides 30 frames per second. This module receives camera images from Video for Linux [21] and also sensor data from the Motion module which will be used for the localization of the robot. Their vision system is constructed based on the OpenCV image processing library.

The process Cognition is structured into three functional units: perception, modeling and behavior control [22]. The perception system is based on vertical scan lines, which means that the actual amount of scanned pixels is much smaller than the image size. The modeling unit is responsible for providing an estimation of the world state, which includes the robot's position, the ball's position and velocity and the presence of obstacles. The first step of the perception unit is to provide a contour of the body of the robot. The robot, being white, might be easily confused with the white lines of the field. By using forward kinematics the robot knows where its body is visible in the camera image and excludes these areas from image processing.

After this, the image is processed in three steps: segmentation and region building, region classification and feature extraction. First the field borders are detected by running scan lines that start from the horizon downwards until a green segment of a minimum length is found. Next to the usual scan lines there are also used some scan lines that only recognize orange regions. For the goal detection two special scans are done to detect vertical and horizontal yellow or blue segments above the horizon. Region classification is used for establishing if the segmented regions are part of a line, goal or ball. For a white region to be considered a white line it has to consist of a certain number of segments, to have a certain size, and to have a certain amount of green above and below if it is horizontally oriented or a certain amount of green on its left and right side if it is vertically oriented.

For detecting goals, since the foot of a post must be below the horizon and the head of the post must be above, it is sufficient to scan the projection of the horizon in the image for blue or yellow segments to detect points of interest.

An orange region, in order to be considered a ball, cannot be above the horizon or at a distance greater than the length of the field diagonal. The ball is validated based on the distance of the Cb and Cr components of the surrounding pixels. The center and radius of the ball are calculated, as well as the relative position to the robot.

2.2.2 TT-UT Austin Villa

TT-UT Austin Villa [23] is a joint team of Texas Tech University and the University of Texas at Austin. TT-UT Austin Villa won the 2009 US Open and the 2010 US Open in the SPL. The team also finished third in the 2010 RoboCup competition and fourth in the 2009


The vision module is based on Reinforcement Learning algorithms - the robot should be able to improve its own behaviour without the need for detailed step-by-step programming. Their algorithm, Reinforcement Learning with Decision Trees, uses decision trees to learn the model by generalizing the relative effect of actions across states. The agent explores the environment until it believes it has a reasonable policy. The robot is enabled to learn the colors on the robot soccer fields by modeling colors as 3D Gaussians, using a pre-defined motion sequence.

2.2.3 MRL

Mechatronics Research Laboratory [24] was established in 2003 as an independent research centre under the supervision of Qazvin Islamic Azad University. One of its activities is participating at the RoboCup international competition.

Their approach for the vision system is mostly based on pattern recognition. The ball recognition algorithm is based on detecting a circle and filtering undesired noise. First the image is segmented and scanned to find orange spots. After performing a search of 8 × 8 pixels around the detected pixel, the image is scanned in four different directions starting at the detected pixel. The recognized object is most likely to be a square or a circle when the two measured dimensions are equal. The final ball settlement position is estimated by accurate ball kinetics formulation. The distance between the ball and the robot is calculated by a comparison between the perceived ball radius and the original radius. Then the absolute distance can be achieved by filtering the elevation between the camera and the robot's feet. The easiest way to find the direction of the ball is comparing the current and latest ball positions.
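Under a pinhole camera model, the radius-comparison distance estimate described above reduces to a simple proportionality, and "filtering the elevation" amounts to projecting the camera-to-ball distance onto the floor. A minimal sketch follows; the focal length value is an assumption for illustration (only the 32.5 mm ball radius comes from the SPL rules quoted earlier), not MRL's calibrated parameters.

```python
import math

def ball_distance(perceived_radius_px, real_radius_mm=32.5,
                  focal_length_px=385.0):
    """Estimate the camera-to-ball distance from the perceived ball radius
    with the pinhole model: distance = f * R / r. The focal length here is
    an illustrative assumption, not NAO's calibrated value."""
    return focal_length_px * real_radius_mm / perceived_radius_px

def ground_distance(camera_distance_mm, camera_height_mm):
    """Project the camera-to-ball distance onto the floor, i.e. 'filter'
    the elevation between the camera and the robot's feet."""
    return math.sqrt(max(camera_distance_mm ** 2 - camera_height_mm ** 2, 0.0))
```
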

Regarding goal perception, scan line segmentation combined with fuzzy logic detection has been used. The obstacle detection can be done by a vision procedure or by using the ultrasonic retrieved data. The ultrasonic data is processed with a sensor fusion process to detect an obstacle in front of the robots, then vision is admitted to detect those obstacles. In the final step the robots are modeled in the world state and their location and orientation in each period are updated successively.

2.2.4 Austrian Kangaroos

The team [25] started in 2010 and is supported by the Automation and Control Institute (ACIN) and the Institute of Computer Languages Compilers and Languages Group (COMPLANG) from the Vienna University of Technology, as well as by the Institute of Computer Science from the University of Applied Sciences Technikum Wien.

They developed their vision system based on the CMVision [26] color segmentation software, which provides a base to segment images in regions of interest, to whom were assigned


internal sensor readings into account. Thus, the robots are able to estimate the distance to regions of interest and to verify the accuracy of observations that are based on object sizes in the image plane. To increase the robots' world knowledge, a global model was implemented and integrated into the robot's world model via the multi-hypothesis approach. Every robot shares his knowledge with the full team.

2.2.5 CMurfs

This team [27] was created in 2010 by researchers at the Carnegie Mellon University with the purpose of joining the RoboCup SPL.

Their vision system is divided into two stages. The first one, low level vision, uses the CMVision library [26] to perform color segmentation on the image. CMVision uses a lookup table to map from YUV pixel intensities to symbolic colors, such as red, blue or orange. The library then builds up lists of the colored regions in the resulting image. These lists of regions, which specify the bounding box and centroid of each region, are then used in the second stage of vision, high level vision, for object detection. The Vision component processes the camera images and reports VisionFeatures, such as balls and goal posts, with a heuristic confidence value between 0 and 1, representing how confident they are that the object is truly present in the image. The VisionFeatures detected by the Vision component are used by the localization component to generate a pose of the robot.

2.2.6 Cerberus

Cerberus [28] was the first international team in the Standard Legged Robot League. It started as a joint effort of Bogazici University, Turkey and Technical University of Sofia, Plovdiv Branch, Bulgaria. Currently Bogazici University is maintaining the team.

For their vision system they use a Generalized Regression Neural Network for mapping the real color space to the pseudo-color space composed of a smaller set of pseudo colors, namely white, green, yellow, blue, robot-blue, orange, red and ignore. A lookup table is constructed for all possible inputs. Scan lines are used to process the image in a sparse manner, thus speeding up the entire process.
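The lookup-table idea recurs across several of these teams: every possible input color, reduced to a few bits per channel, indexes directly into a table of symbolic labels, so classifying a pixel costs one memory access. The following is a hedged sketch of that general technique; the labels, bit depth and training samples are illustrative and not Cerberus' actual table.

```python
# Sketch of look-up-table (LUT) color classification. Every possible
# (reduced-precision) YUV triplet indexes directly into a label table.
# Labels and calibration samples below are illustrative only.

IGNORE, GREEN, ORANGE, WHITE = 0, 1, 2, 3

def _index(y, u, v, bits):
    """Pack a YUV triplet, keeping only `bits` per channel, into a table index."""
    shift = 8 - bits
    return ((y >> shift) << (2 * bits)) | ((u >> shift) << bits) | (v >> shift)

def build_lut(samples, bits=4):
    """samples: iterable of ((y, u, v), label) calibration pairs."""
    lut = bytearray(1 << (3 * bits))   # all entries start as IGNORE (0)
    for (y, u, v), label in samples:
        lut[_index(y, u, v, bits)] = label
    return lut

def classify(lut, y, u, v, bits=4):
    """Classify one pixel with a single table access."""
    return lut[_index(y, u, v, bits)]
```

Reducing the precision (here to 4 bits per channel) keeps the table small and makes nearby colors share a label, which is exactly why a LUT generalizes slightly beyond the calibrated samples.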

The process starts with the calculation of the horizon based on the pose of the robot's camera with respect to the contact point of the robot with the ground, that is the base foot of the robot. After that, scan lines that are 5 pixels apart from each other and perpendicular to the horizon line are constructed, such that they originate on the horizon line and terminate at the bottom of the image. Then, a scan through these scan lines is started to find where the green field starts, which is done by checking for a certain number of consecutive green pixels along the line. In order not to lose information about those important objects, a convex-hull


field borders are constructed, the shorter scan lines are extended back to these borders, thus being possible to use them to detect the goal posts and balls that are close to the borders. After this, each scan line is traced to find colored segments on them. If two consecutive segments touch each other, they are merged into a single region. For white segments, there are some other conditions such as change in direction and change in length ratio, which guarantee that all field lines are merged into smaller and more distinctive regions.

2.2.7 Borregos

Borregos [29] is a Mexican team that has been participating in the RoboCup competitions since 2004, starting in the 2D Simulation League and moving forward to the 3D Simulation League with humanoids in 2007 and finally joining also the Standard Platform League in 2010.

The vision process of this team starts by obtaining a raw image from the camera device, then a RGB matrix is extracted from the input image and the color classification process is performed. The output of the color classification is an object map, which is a matrix of the same size as the RGB matrix but instead of the RGB values it contains object labels. The pattern recognition process takes the object map and tries to find flags and then construct a vector for every recognized flag. Each vector contains distance, vertical angle and horizontal angle to flags. Color classification is the process that maps pixel values of an image to a color label that corresponds to a region of color space pre-defined in a look-up table (LUT). The raw values of ball distance, direction and elevation are calculated based on the supposition that the ball is always on the floor, which is generally true. This is done by calculating the centroid of all pixels corresponding to the ball in the image and dividing them over the image width, then multiplying the resulting calculation by the field of view of the camera.
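The centroid-to-angle computation described above can be sketched in a few lines. The resolution and field-of-view values below are assumptions chosen for illustration, not Borregos' actual camera parameters.

```python
def horizontal_angle(ball_pixels, image_width=320, fov_horizontal_deg=48.0):
    """Approximate the horizontal angle to the ball: average the
    x-coordinates of the ball pixels, normalize by the image width and
    scale by the camera's horizontal field of view. The width and FOV
    defaults are illustrative assumptions."""
    cx = sum(x for x, _ in ball_pixels) / len(ball_pixels)
    # map the centroid from [0, width] to [-fov/2, +fov/2]
    return (cx / image_width - 0.5) * fov_horizontal_deg
```

A ball centered in the image yields an angle of zero; one three quarters of the way across yields a quarter of the field of view.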

2.2.8 Robo Eireann

The team [30] consists of undergraduates, graduates and academics from The Hamilton Institute, Computer Science and Electronic Engineering at NUI Maynooth.

Their vision uses OpenCV, but with problems in obtaining the speed needed for high levels of performance. Their approach has proven helpful for generating colour-segmentation algorithms, based on optimization working in the HSI space for captured images. The system seeks the optimal HSI space bounds for each color to minimize a compromise of false alarms and misses in color segmentation. This is then used to generate a LUT in the robot color space for color segmentation.

2.2.9 TJArk

The team [31] was established in 2004 as a part of the Lab of Robot and Intelligent Control


to form segments of the same color, then using run length encoding algorithms to merge the segments into blobs which can then be used in the vision recognition. For goal detection, they scan the image horizontally, then the horizontal runs will be connected to blobs based on their positions. Some extra scanning is made to decide the goal post blob. When there is only one eligible post left, they use the crossbar and the corner in the forbidden area to decide whether the post is on the right or left. In order to recognize the lines and kick-off circle they adopted the approach of grouping points of the same color into blobs and then calculating the characteristics of these blobs. Inspired by the resampling method, they use the run-length encoding algorithm to detect the target region, then fit the sample to the Gaussian model or the Histogram model. If the light condition changes, they choose the MAP method to adapt the model to accommodate the current light condition.

2.3 Critical comments

The first step in developing the vision system that is presented in this document was the study of the work that has already been developed in the area of robotic vision, with emphasis on the work of the teams currently participating in the RoboCup Standard Platform League. The information about the current level of progress of each of the teams was presented in the previous subsections (Subsections 2.2.1 to 2.2.9). An overview of the work developed so far in robotic vision was needed in order to better understand the context, the challenges and the constraints that robotic vision implies. Moreover, having access to different solution proposals that are being implemented by a wide range of researchers, and consequently to the advantages and possible flaws that each proposal brings, the choice of a personal approach becomes easier. All of the vision systems presented so far have certain common features and some of them can also be found in the architecture of the vision system described in this paper. They will be detailed as follows. Nevertheless, none of the teams presented have approached in their Team Description Papers the issues regarding image acquisition and calibration of the camera intrinsic parameters.

The use of an image processing library is common to all projects involving a vision system and humanoid vision systems are not an exception. For the teams in the SPL, the OpenCV and CMVision libraries are two of the most common choices. They provide a wide range of functions for manipulating images. For the project that is being described in this thesis, the library chosen is OpenCV [32].

The environment in the SPL is still color coded, which means that a good color classification is a very important step in attaining good results. For this reason many teams use a LUT for pre-defining labels for the regions that have one of the colors of interest. Then the values of the pixels in the image are mapped to a corresponding color label, as it will be described in


amount of time due to the restraints of the video device, time management when processing an image has to be performed. For example, using a device that works at a frame rate of 30 frames per second implies that the total time that can be spent for acquiring and processing one frame is 33 ms. The main approach for obtaining a small processing time is based on the subsampling of the information acquired from the camera of the robot. In this way, either the information about the color or the information about the luminance can be subsampled while the acquisition process can still deliver good representations of the surrounding world. In addition to this, in order to reduce the time spent scanning an image in search of one of the colors of interest, several teams choose the approach of using either vertical or horizontal line scans, which means that the number of scanned pixels is much smaller than the size of the image. For the project presented, vertical or horizontal scan lines are used for detecting transitions from green to one of the colors of interest. Also, only half of the columns and half of the rows of the image are scanned in order to obtain a better processing time. The details of this process are described in Section 4.6.
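The subsampled scan-line idea can be sketched as follows: walk only every second column of the label image and record green-to-target transitions as object candidates. This is an illustrative simplification of the general technique, not the thesis' actual implementation.

```python
def find_transitions(index_image, green, target):
    """Scan every second column of a label image (rows of color labels)
    from top to bottom and record green-to-target transitions as (x, y)
    candidates. A sketch of the subsampled scan-line idea only."""
    hits = []
    height, width = len(index_image), len(index_image[0])
    for x in range(0, width, 2):      # only half of the columns are scanned
        prev = None
        for y in range(height):
            label = index_image[y][x]
            if prev == green and label == target:
                hits.append((x, y))
            prev = label
    return hits
```

Scanning half of the columns halves the per-frame cost of this stage while objects wider than two pixels are still hit by at least one scan line.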

After having the color of interest segmented, the common approach for the majority of teams is to merge all neighbour pixels of the same color into blobs [33]. The idea of neighbourhood might be different from team to team but the concept is the same. For this purpose, run-length encoding is one of the most common choices. Run-length encoding is a very simple form of data compression in which sequences of the same value, that occur a large number of times, are stored as a single data value and count, rather than as the original run.
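For one scan line of color labels, the idea can be sketched in a few lines (the (label, start, length) run representation is illustrative):

```python
def run_length_encode(row):
    """Run-length encode one scan line of color labels: consecutive equal
    labels are stored as a single (label, start, length) run."""
    runs = []
    start = 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((row[start], start, i - start))
            start = i
    return runs
```

A line of mostly field-green pixels thus collapses into a handful of runs, which is what makes the subsequent blob merging cheap.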

In the case of the proposed system architecture, also based on run-length encoding, pixels of the same color are grouped into blobs if they belong to parallel vertical or horizontal scan lines that are close to each other. Then the blobs pass a first validation test towards being labeled as objects of interest if they are preceded and/or followed by a certain number of green pixels. Also common for this project, as well as for the other projects mentioned before, is computing several measurements for each valid blob, such as the center of mass and the area of the bounding box, which prove to be very helpful in the process of validating the objects of interest. A more detailed description of these steps of the implementation of the vision system is presented in Section 4.8.

The concepts discussed might be common for a large number of robotic vision systems but every implementation has its particularities which differentiate it from the others. The particularities of the project described in this document will be described in the following


Vision system architecture

This thesis presents a modular vision system developed for humanoid robots, but which can be used, with small changes, in other robotic platforms that have a digital camera as the main sensor.

In this chapter the architecture of the vision system will be described, giving more details about some support modules and applications that are not running on the robot. The main vision process running on the robot will be presented in Chapter 4. The modular vision system is presented in Fig. 3.1.

Apart from the vision system process running in real-time on the NAO robot, several other modules and applications were developed for support and debugging purposes. These modules make possible the communication of the robot with a local host as well as the sharing of the information provided by the vision process with the rest of the processes running on the robot. The two applications developed can be used either for color classification, which is the first step of the object detection process, or for image visualization and debugging, considering that the NAO physical architecture does not support any graphical interface. Moreover, the Linux operating system that comes with the robot does not have a graphical server, such as XServer. These two applications (NaoCalib - Section 3.2 and NaoViewer - Section 3.3) run on a computer different from the one of the robot and receive data through sockets.

3.1 Communications

Limitations of NAO's hardware architecture (see Section 2.1) make impossible the visualization of the results of the vision algorithms on the robot. Moreover, when robots are operating the human user cannot look directly at the results, in terms of what the robot sees. This is valid for most of the mobile robots. Interfaces for the display and analysis of the images cannot be supported since they would need to access and use a great amount of the resources available on the robot. Also the operating system running on the NAO robot does



not have graphical support. As in image processing the use of tools that allow at least image displaying is imperative, a solution for implementing them was needed. The chosen solution is based on the development of tools that run on an external computer and communicate with the robot by means of a network socket, using a server-client model.

A socket represents an endpoint of a bidirectional inter-process communication flow across a computer network. The socket is responsible for delivering incoming data packets to the appropriate application process, based on a combination of local IP address and port number. Each socket is mapped by the operating system to a communicating application process. The combination of an IP address and the port into a single identity is called the socket address. The network socket can also be seen as an application programming interface (API) for the TCP/IP or UDP protocol stacks.

The client and server terms refer to two processes which will be communicating with each other. One of the two processes, the client, connects to the other process, the server, typically to make a request for information. The client needs to know of the existence and the address of the server, but the server does not need to know the address of (or even the existence of) the client prior to the connection being established. When a connection is established, both sides can send and receive information. The system calls for establishing a connection are somewhat different for the client and the server, but both involve the basic construct of a socket.


Figure 3.2: Typical client server approach.

The steps involved in establishing a socket on the server side are as follows:

- Create a socket with the socket() system call.
- Bind the socket to an address using the bind() system call. For a server socket on the Internet, an address consists of a port number on the host machine.
- Listen for connections with the listen() system call.
- Accept a connection with the accept() system call. This call typically blocks until a client connects with the server.
- Send and receive data.

The steps involved in establishing a socket on the client side are as follows:

- Create a socket with the socket() system call.
- Connect the socket to the address of the server using the connect() system call.
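The two call sequences above can be condensed into a small runnable sketch on the loopback interface. Port 0 lets the operating system pick a free port for the example; the actual system uses port 5000 and transfers image buffers instead of this placeholder payload.

```python
import socket
import threading

# Server side: socket() -> bind() -> listen() -> accept() -> send data.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # socket()
srv.bind(("127.0.0.1", 0))                                # bind()
srv.listen(1)                                             # listen()
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()                                # accept() blocks
    conn.sendall(b"frame-bytes")                          # send data
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: socket() -> connect() -> receive data.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # socket()
cli.connect(("127.0.0.1", port))                          # connect()
data = cli.recv(64)                                       # receive data
cli.close()
t.join()
srv.close()
```
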


the required information for the client applications that have been developed and that will be presented in detail in the following sections. When the main vision process is run with the option -server, the first step is to create a socket whose address will be the combination of the IP of the host and the predefined port 5000. When the client connects to the server, the server will start sending the information required by the client. The client can request and receive either the YUV422 buffer with the raw data acquired by the camera or the buffer of the processed index image. The first is converted on the client side and displayed as an RGB image with a resolution of 640 × 480 pixels, which is the maximum resolution of the NAO camera. The latter is displayed with the resolution used in the processing algorithms (for the NAO robot a sub-sampled resolution of 320 × 240 pixels is used). The index image represents an 8-bit image of labels for all the colors of interest and it will be further explained in Section 4.5. The client receives frames at a slower rate than the frame rate because of network delays and the limitations of the bandwidth. Typically, 1 s is necessary to receive a full frame from the robot.

3.2 NaoCalib and the configuration file

In the RoboCup SPL, as well as in other robotic applications, image analysis is simplified due to the color coding of the objects. In SPL the robots play soccer with an orange ball on a green field with white lines and yellow and blue goals. The color information of a pixel is a strong hint for object validation. Because of this, a good classification of the colors is necessary.

Along with the calibration of the parameters of the camera (presented in Section 4.4), a calibration of the color range associated to each color class has to be performed whenever the environment or the illumination conditions change. These two processes are co-dependent and crucial for image segmentation and object detection.

NaoCalib is an application created after a model used by CAMBADA [34, 35], the RoboCup Middle-Size League team of the University of Aveiro. It is used for the classification of the colors of interest and it allows the creation of a configuration file that contains the Hue (H), Saturation (S) and Value (V) minimum and maximum values of the colors of interest. The configuration file (Fig. 3.3) is a binary file that, apart from the H, S and V maximum and minimum values, also contains the values of the intrinsic parameters of the camera.

In other applications, the configuration file can contain more information that could be relevant for the vision process. For example, in the situation when the camera of the robot has a fixed position relative to the ground, the configuration file could contain information about the mapping between pixels and real distances. Moreover, information about the regions of the images that do not have to be processed can be added.


the camera is filled when the calibration module (Section 4.4) is run. The configuration file is created on the client side and then exported to the robot and loaded when the vision process starts.

Figure 3.3: Structure of the binary configuration file used in the proposed vision system.

The calibration of the colors is performed in the HSV color space because the results are more accurate than in the case of the RGB color space [36].
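What the per-class calibration amounts to at classification time is a simple range test in HSV. The sketch below is a hedged illustration of that test; the range representation is simplified (the actual configuration file stores binary H, S and V minima and maxima per color class).

```python
import colorsys

def in_color_class(r, g, b, h_range, s_range, v_range):
    """Check whether an RGB pixel falls inside calibrated HSV ranges.
    Ranges are (min, max) tuples, H in degrees [0, 360), S and V in [0, 1].
    This representation is illustrative, not the actual file format."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h *= 360.0
    return (h_range[0] <= h <= h_range[1]
            and s_range[0] <= s <= s_range[1]
            and v_range[0] <= v <= v_range[1])
```

For example, a saturated orange pixel falls inside a hue window around 30 degrees, while a field-green pixel (hue near 120 degrees) does not.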

Having the vision process running as a server, according to how it has been described in the previous section, NaoCalib can act as a client that receives from the NAO robot the image buffer and displays it in the RGB format with a default resolution of 640 × 480 pixels. These frames are used for calibrating the color range associated to each color class.

The interface is based on the histograms of the three color components (Hue, Saturation and Value) and it allows the selection of the color range for each color class with the help of sliders. For each of the colors of interest, a set of pixels corresponding to the color class that is being calibrated can be selected with the help of the mouse. The color classes correspond to the colors of interest, which are: white, green, orange, yellow, blue, sky-blue, magenta and black. Based on this and with the help of the H, S and V histograms the user can manipulate the sliders for selecting the maximum and minimum values of the three components for each color class. These values are then updated in the configuration file, copied to the robot and loaded when the vision module is next started.

(a) (b)

Figure 3.4: On the left, an original image acquired by the NAO camera. On the right, the same image with the colors of interest classified by means of the NaoCalib application.

The interaction between the user and this application is smoothed by the use of different


Figure 3.5: An illustration of the features of the NaoCalib application that can be accessed by pressing several keys of the PC keyboard.

3.3 NaoViewer

Another client application that has been developed and is being used for debugging purposes is NaoViewer, a client tool with several options useful for monitoring and testing the implemented algorithms.

As a first option, NaoViewer can be run with the basic purpose of receiving and displaying frames acquired by the camera of the robot. The client receives the YUV422 raw buffer acquired by the NAO camera, which is converted into a YUV422 image as an intermediary step and then converted to the RGB color space with the help of the OpenCV functions. The RGB image is displayed with a default resolution of 640 × 480 pixels.
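The thesis performs this conversion with OpenCV; purely to illustrate the underlying arithmetic, a dependency-free sketch is given below. The BT.601 coefficients are an assumption about the exact conversion used:

```python
# Illustrative YUV422 (packed YUYV) to RGB conversion using the
# ITU-R BT.601 coefficients. This is a sketch of the arithmetic,
# not the OpenCV-based conversion used in the actual system.

def yuv422_to_rgb(buf, width, height):
    """Convert a packed YUYV byte buffer to a flat list of (r, g, b)
    tuples. Two horizontally adjacent pixels share one U/V pair."""
    def clamp(x):
        return max(0, min(255, int(x)))
    rgb = []
    for i in range(0, width * height * 2, 4):
        y0, u, y1, v = buf[i], buf[i + 1], buf[i + 2], buf[i + 3]
        for y in (y0, y1):
            r = clamp(y + 1.402 * (v - 128))
            g = clamp(y - 0.344 * (u - 128) - 0.714 * (v - 128))
            b = clamp(y + 1.772 * (u - 128))
            rgb.append((r, g, b))
    return rgb
```

Note that because chrominance is shared between pixel pairs, a 640 × 480 YUV422 buffer occupies only two bytes per pixel, which halves the bandwidth needed between robot and client compared to raw RGB.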

The second option of this application is to display the 8-bit indexed image as well as its painted RGB version, the former also having the option of containing markers for the detected objects of interest. In this situation, the images are displayed at the resolution used in the processing algorithms (in the case of the NAO robot, the resolution was sub-sampled to 320 × 240 pixels, see Section 4.5). The index image, as previously mentioned, is an 8-bit image in which all the colors of interest are labeled according to the look-up table. The painted image is a 3-channel RGB image of the same resolution as the index image. The index image is scanned and, for each pixel labeled as having one of the colors of interest, the corresponding pixel in the RGB image is forced to the respective pure color of interest. Pixels that do not have any of the colors of interest are painted gray. Figures 3.6 and 3.7 illustrate these features of the application.

Figure 3.6: On the left, an original image acquired by the NAO camera and displayed by the NaoViewer application. On the right, the same image with the colors of interest classified by means of the NaoCalib application.

Figure 3.7: On the left, the index image with all the classified colors labeled. On the right, the RGB image obtained by painting the index image according to the labels.
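The painting step can be sketched as a simple table lookup. The label numbers and pure-color values below are hypothetical placeholders for the real look-up table:

```python
# Sketch of "painting" an 8-bit index image into an RGB image.
# Labels and pure colors are illustrative; in the real system they
# come from the calibrated look-up table.
PURE_COLORS = {
    1: (255, 255, 255),  # white
    2: (0, 255, 0),      # green
    3: (255, 128, 0),    # orange
}
GRAY = (128, 128, 128)

def paint_index_image(index_pixels):
    """Map each labeled pixel to its pure color; any pixel whose label
    is not a color of interest becomes gray."""
    return [PURE_COLORS.get(label, GRAY) for label in index_pixels]
```

Forcing pure colors in this way makes classification errors stand out immediately when the painted image is compared side by side with the original frame.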

Furthermore, the NaoViewer application allows the recording of a video by saving all the frames received. This is also very helpful in the task of understanding what the robots see and for offline debugging. Complementary to this, a basic application for reading and displaying the acquired video has been implemented. Moreover, the main vision process was adapted to send the raw data previously saved.
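One possible way to persist raw frames for offline replay is sketched below. The header layout (a `RAWV` tag plus per-frame sizes) is an assumption made for illustration, not the format actually used by NaoViewer:

```python
# Illustrative raw-frame recorder/replayer. The file layout used here
# (magic tag, width/height, length-prefixed frames) is a hypothetical
# example, not the thesis' actual format.
import io
import struct

FRAME_MAGIC = b"RAWV"  # hypothetical header tag

def save_frames(stream, frames, width, height):
    """Write raw frame buffers with a minimal header for offline replay."""
    stream.write(FRAME_MAGIC + struct.pack("<HH", width, height))
    for frame in frames:
        stream.write(struct.pack("<I", len(frame)))
        stream.write(frame)

def load_frames(stream):
    """Read back the header and all length-prefixed frame buffers."""
    if stream.read(4) != FRAME_MAGIC:
        raise ValueError("not a raw frame recording")
    width, height = struct.unpack("<HH", stream.read(4))
    frames = []
    while True:
        header = stream.read(4)
        if len(header) < 4:
            break
        (size,) = struct.unpack("<I", header)
        frames.append(stream.read(size))
    return width, height, frames
```

Storing the unprocessed camera buffers, rather than an encoded video, lets the vision process replay them later exactly as if they were live frames, which is what makes offline debugging of the detection algorithms possible.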

3.4 Real-time database

Cooperation is one of the key words that should describe a team, and this also applies in the case of a robotic soccer playing team. A common approach for achieving cooperative sensing is by means of a database where each agent publishes the information that is generated
