Onboard image processing in drones


Universidade de Aveiro
Departamento de Electrónica, Telecomunicações e Informática
2016

João Lucas Neves Cozinheiro

Processamento de imagem em Drones



Onboard image processing in Drones

Dissertação apresentada à Universidade de Aveiro para cumprimento dos requisitos necessários à obtenção do grau de Mestre em Engenharia Eletrónica e Telecomunicações, realizada sob a orientação científica do Professor Doutor António José Ribeiro Neves, Professor Auxiliar do Departamento de Eletrónica, Telecomunicações e Informática da Universidade de Aveiro.


o júri / the jury

presidente / president: Professor Doutor Manuel Bernardo Salvador Cunha
Professor Auxiliar do Departamento de Eletrónica, Telecomunicações e Informática da Universidade de Aveiro

vogais / examiners committee: Professor Doutor Luís Filipe Pinto de Almeida Teixeira
Professor Auxiliar do Departamento de Engenharia Informática da Faculdade de Engenharia da Universidade do Porto (arguente)

Professor Doutor António José Ribeiro Neves
Professor Auxiliar do Departamento de Eletrónica, Telecomunicações e Informática da Universidade de Aveiro


agradecimentos / acknowledgements

Um agradecimento especial aos meus pais e irmão por todo o apoio prestado ao longo deste meu percurso académico, como também a toda a minha família.

A todos os meus colegas e amigos, que conheci ao longo destes 5 anos na Universidade de Aveiro, um grande obrigado.

Por fim, um grande obrigado ao meu orientador, Professor Doutor António José Ribeiro Neves, por toda a ajuda, incentivo, mas principalmente pela


keywords Drone, single-board, computer vision, processing

abstract Sight is one of the most important human senses. From birth, humans collect a great amount of visual data and, over time, build object patterns that allow them to distinguish things. Nevertheless, the same task is not so simple for computers.

Despite the difficulty of processing images with computers, robots nowadays use digital cameras as their main sensors. More recently, one type of robot has been gathering attention: drones, also known as unmanned aerial vehicles. In order to process digital images, drones need a computational unit.

This thesis studies the efficiency of several single-board computers when used to process digital images. Experimental results show that it is possible to attach a single-board computer to a drone, allowing real-time image processing.

The case study chosen for this thesis is the use of a drone to monitor parking lots. The drone has the mission of flying above the parking lot and taking images. The single-board computer then processes the images, trying to detect the cars in them and counting the total number of cars in the parking lot.


palavras-chave Drone, microcomputador, visão artificial, processamento

resumo A visão é um dos mais importantes sentidos humanos. Desde que nasce, um humano tem a capacidade de guardar um grande número de imagens na sua memória, criando padrões ao longo do tempo que o permitem diferenciar coisas. No entanto, o mesmo processo não é assim tão simples quando comparado a sistemas computacionais.

Apesar da complexidade do processamento de imagens através de computadores, hoje em dia, as máquinas usam câmaras digitais como seus principais sensores. Mais recentemente, um tipo de robot tem vindo a destacar-se: os drones, também conhecidos por veículos aéreos não tripulados. De modo a processar imagens, os drones têm consigo uma unidade computacional destinada a esse fim.

Esta tese estuda a eficiência de vários microcomputadores no uso de processamento de imagem. Os resultados experimentais permitem ver que é possível integrar um destes sistemas num drone permitindo um processamento de imagem em tempo real.

O caso de estudo escolhido para esta tese destina-se a monitorizar parques de estacionamento usando um drone. O drone tem a missão de voar sobre o parque e recolher algumas imagens. De seguida, estas irão ser processadas pelo microcomputador a bordo, detectando e contando quantos carros existem num parque de estacionamento num dado momento.


Contents

Contents
List of Figures
List of Tables

1 Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Organization of the Dissertation

2 Drone
  2.1 Frame
  2.2 Motors
  2.3 Electronic Speed Control (ESC)
  2.4 Flight Controller
  2.5 Radio Transmitter
  2.6 Propellers
  2.7 Battery and charging
  2.8 Drone systems
    2.8.1 Cheerson CX-20
    2.8.2 Parrot Bebop 2

3 Single-board
  3.1 x86 and ARM architectures
  3.2 Board models
    3.2.1 Raspberry Pi
    3.2.2 IGEPv2
    3.2.3 EPIA-P910

4 Software
  4.1 Operating Systems
    4.1.1 Raspbian
    4.1.2 Xubuntu
    4.1.3 Debian
  4.2 OpenCV
    4.2.1 Image representation and scanning methods
    4.2.2 Low level image processing
  4.3 Car Detection

5 Results
  5.1 Parking image database
  5.2 Power consumption measurements
  5.3 OpenCV scan image
  5.4 Algorithm results

6 Conclusions
  6.1 Future Work

Appendices


List of Figures

1.1 Parking car detection
1.2 Thermal camera example
2.1 Drone schematic
2.2 DJI FlameWheel - F450
2.3 Brushed and brushless motors
2.4 Brushed motor
2.5 Brushless motor - Pairs of electromagnetic poles
2.6 Brushless motor - Permanent magnets
2.7 Drone topologies
2.8 ESC connection
2.9 Hobbyking KK2 flight controller
2.10 TTRobotics Commander Transmitter
2.11 Propellers movements - rotational directions
2.12 Lithium Polymer (LiPo) battery
2.13 Cheerson CX-20
2.14 GoPro Hero
2.15 2-axis gimbal
2.16 Parrot Bebop 2.0
2.17 FreeFlight 3
3.1 Raspberry Pi Zero
3.2 x86 versus ARM
3.3 Raspberry Pi 2 Model B
3.4 Raspberry Pi blocks diagram
3.5 IGEPv2 board
3.6 VIA EPIA-P910 board
4.1 Operating Systems
4.2 Operating System layer
4.3 Raspberry Pi - Configuration menu
4.4 Raspberry Pi - Boot options
4.5 Debian logotype
4.6 OpenCV logotype
4.7 Pixel example
4.8 Gray levels
4.9 RGB color space
4.10 Blur filter
4.11 Canny edge detector
4.12 Sobel operator
4.13 Car detection - Vertical approach
4.14 Visual output system - Parking car detection (Library's parking)
4.15 Car detection - No vertical approach (CIAQ's parking)
4.16 Visual output system - Parking car detection (CIAQ's parking)
4.17 Car detection - No vertical approach (Library's parking)
4.18 Visual output system - Parking car detection (Library's parking)
5.1 University of Aveiro - Parking Map
5.2 Parking 1 - Library's parking
5.3 Parking 2 - CIAQ's parking


List of Tables

2.1 Pros and cons of brushed and brushless motors
3.1 Pros and cons of x86 and ARM architectures
3.2 Boards comparison
5.1 Boards power consumption
5.2 Scan image - Timing measurements (BGR color)
5.3 Scan image - Timing measurements (Grayscale)
5.4 PC Ubuntu - 1920x1080 - Block Steps: 3
5.5 Raspberry Pi - 1920x1080 - Block Steps: 3
5.6 EPIA-P910 - 1920x1080 - Block Steps: 3
5.7 IGEPv2 - 1920x1080 - Block Steps: 3
5.8 PC - 1920x1080 (2nd attempt)
5.9 Raspberry 2 - 1920x1080 (2nd attempt)
5.10 EPIA-P910 - 1920x1080 (2nd attempt)
5.11 IGEPv2 - 1920x1080 (2nd attempt)

1 PC Ubuntu - 1920x1080 - Block Steps: 2
2 PC Ubuntu - 1920x1080 - Block Steps: 4
3 PC Ubuntu - 960x540 - Block Steps: 3
4 Raspberry Pi 2 - 960x540 - Block Steps: 3
5 EPIA-P910 - 960x540 - Block Steps: 3


Chapter 1

Introduction

During the last decade, there have been attempts to create autonomous solutions using drones in industry, agriculture, health services, among other areas. This interest is due to the fact that drones are able to reach certain areas that humans or other types of robots have difficulty accessing.

Drones, technically known as unmanned aerial vehicles, have existed for several decades. However, only in recent years have they become popular, after appearing on the market with small dimensions and affordable prices, available to anyone.

The main goal of this thesis is the study of single-board computers that could be attached to a drone in order to be used as the main processing unit for computer vision algorithms, artificial intelligence and control. With this, the drone can carry out autonomous tasks. As a case study, car detection in parking lots was considered (see for example Fig. 1.1).


Two drones were used to acquire images from the parking lots of the University of Aveiro (UA), and the computer vision algorithms for car detection were tested on three different single-board computers. The core of the algorithms for car detection in low altitude images is based on the work presented in [30]. These algorithms are developed in C++ and based on the OpenCV library.

1.1 Motivation

In industry, computer vision is an area in constant growth, in which the use of specialized cameras and dedicated software makes it possible to solve quality control problems during manufacturing, product checking and measurement, among other applications. The recent addition of camera systems to drones creates a new challenge for the computer vision community, with great capacity for technological development in the near future.

This thesis was proposed by the VisionMaker company, which specializes in the development of computer vision systems for industrial applications and has its head office at the University of Aveiro.

As stated before, the case study for our system is car detection in parking lots. Several solutions have been proposed in the literature on this topic (see for example Fig. 1.2).

Figure 1.2: Segmentation example of different components in a city image using thermal cameras [36].


Most of these approaches use high or medium altitude images, that is, more than 30 meters [33][35]. However, in this thesis it was decided to work with low altitude images acquired by the drones, between 10 and 30 meters. This complies with the recent Portuguese law regarding drones.

1.2 Objectives

This thesis intends to provide a drone with visual intelligence. Several single-board computers will be tested for mounting on a drone. This board will be able to run computer vision algorithms that allow classifying elements in the collected images. The main objectives of this dissertation are:

• Bibliographical research: analyzing the current state of the art of microcomputers and single-board computers able to run a Linux environment;

• Software: installation of Linux distributions and the OpenCV library on the different boards;

• Testing: testing and analyzing the performance of the single-board computers when executing the algorithms.

1.3 Organization of the Dissertation

This thesis is divided into 6 chapters, including this introductory chapter:

• Chapter 2 - Drone: an explanation of the different parts of a typical drone;

• Chapter 3 - Single-board: as in the second chapter, an explanation of what a single-board computer is and how it is composed;

• Chapter 4 - Software: a presentation of the different operating systems that were used on the different boards, as well as the computer vision library used to process the aerial parking images;

• Chapters 5 and 6 - Results and conclusions: finally, a presentation of the results obtained with the different boards, as well as the conclusions of this work and a possible direction for future work.


Chapter 2

Drone

A drone, as it is usually known, is an unmanned aerial vehicle that can be controlled fully or partially autonomously (Fig. 2.1). There are systems that are controlled by a human operator with a remote control, and others that operate fully autonomously, following a route previously planned and stored on the drone's board.


The typical drone structure is composed of different parts. Working together, they ensure the stability of the drone, allowing it to accomplish its movements and tasks. The main parts are:

• Frame;
• Motors;
• Electronic Speed Control (ESC);
• Flight Control Board;
• Radio transmitter and receiver;
• Propellers;
• Battery and charger.

2.1 Frame

Any vehicle needs a structure that supports all its components, and the drone is no exception. This structure is called the frame and it houses all the components. There are some details to consider when choosing a frame: weight, size and material type. The frame is the biggest component of the drone, so much of the flight stability depends on it. One of the most used frames, comparing different brands and companies, is the DJI FlameWheel F450 (Fig. 2.2). It is manufactured by the DJI company and stands out for being strong (hard plastic), light (282 g) and for having a built-in power distribution board (PDB) [32].


2.2 Motors

The motors are responsible for creating the drone's movement; their purpose is to spin the propellers. Motors are differentiated by their KV rating, which relates how fast a motor rotates to the applied voltage. If the KV rating of a particular motor is 650 rpm/V then, for example at 11.1 V, the motor will rotate at 11.1 × 650 = 7215 rpm. The typical value found in drone kits is 1000 KV.

There are two motor types: brushed motor and brushless motor (Fig. 2.3).

Figure 2.3: Brushed and brushless motors [37].

In brushed DC motors, the rotor includes two coils wound around ferromagnetic armatures that, when traversed by current, generate a magnetic field which, in conjunction with the field of the permanent magnets, produces a rotational force. Switching the direction of the current flow, to ensure that the torque remains in the same direction, is performed mechanically by brushes, hence the name brushed motor (see for example Fig. 2.4).


A brushless DC motor uses permanent magnets (Fig. 2.5 and 2.6), mounted on the rotor, and a fixed armature on which a group of winding pairs of electromagnetic poles is mounted.

Figure 2.5: Brushless motor - Pairs of electromagnetic poles [31].

Figure 2.6: Brushless motor - Permanent magnets [31].

These poles are excited by an externally applied voltage. This configuration eliminates the need for a mechanical switching system (brushes). However, it requires an electronic control system which ensures the correct sequential switching of the poles, in order to generate the necessary torque that produces the correct rotation of the rotor.

Nowadays, the most used motor type is the brushless model. It is much more efficient and has a large range of rotation speeds. Table 2.1 presents a brief comparison:

Brushed DC motor — Pros: cheap; simple to control; large power range; very simple to use. Cons: mechanical wear-out; low efficiency; high noise volume while working.

Brushless DC motor — Pros: high efficiency; large rotational speed range; easy maintenance. Cons: price; complex control.

Table 2.1: Pros and cons of brushed and brushless motors [31].

Finally, the number of motors to use depends on the available hardware; in other words, it depends on how many motors are supported by the frame and the electronic speed controllers (ESC).


Besides the common quad-copter, there are configurations with six motors and even eight motors, called hexa-copter and octo-copter respectively (see for example Fig. 2.7). Although the quad-copter topology is the most used due to its price-quality ratio, it has a serious problem: if one motor fails during flight, the drone is not able to recover and falls. On hexa- and octo-copters the same does not occur; they are able to recover the flight, allowing a safe landing.

Figure 2.7: On the left, quad-copter topology. In the middle, hexa-copter topology. On the right, octo-copter topology [18].


2.3 Electronic Speed Control (ESC)

The electronic speed controller (ESC) is responsible for setting how fast each motor spins at any time. Each motor must be connected to an ESC; that is, a quad-copter needs four ESCs, a hexa-copter six ESCs, and so on. The ESCs are connected to the power distribution board (see for example Fig. 2.8).

Figure 2.8: ESC connection - How to connect it on the drone system [10].

2.4 Flight Controller

The flight controller is the main component that controls all drone activities. It contains sensors, such as gyroscopes and accelerometers, that determine motor velocity, drone inclination and, in some cases, even position. All drone components are connected to and controlled by it.

Fig. 2.9 shows a typical flight controller:


2.5 Radio Transmitter

The radio transmitter allows controlling the drone remotely. Each action is associated with one channel, so the number of channels determines how many individual actions can be controlled. For example, the standard control model defines channel 1 for throttle, channel 2 for rotating right and left, channel 3 for pitch (forward and backward) and channel 4 for roll (leaning left and right). Fig. 2.10 shows a radio transmitter example.

Figure 2.10: TTRobotics Commander Transmitter - 2.4GHz, 9 channels. 2 sticks for the basic movements and switches for changing flight modes (manual, GPS, automatically landing). Mode 1 and 2 selection (Mode 1 - throttle right hand, mode 2 - throttle left hand) [17].


However, as the previous figure shows, if the ESC and radio receiver allow using more channels than the ones needed for the flight movements, it is possible to access other controls, for example auxiliary channels that command extra functions.

Finally, there are also some drones controlled over Wi-Fi. For example, the Parrot models use this type of communication: using an iOS or Android device, it is possible to control them.

2.6 Propellers

The propellers, working together with the motors, allow the drone to take off. When the full system is working, they are the most dangerous component and may cause injury to people nearby.

A quad-copter has four propellers, two that spin counter-clockwise and two that spin clockwise (Fig. 2.11). Usually they are made of hard plastic or carbon.

Figure 2.11: Propellers movements - rotational directions [38].

2.7 Battery and charging

The battery is currently one of the major problems of drones, as it is the main factor responsible for their short flight time.

Drones are usually powered with Lithium Polymer (LiPo) batteries. LiPo batteries (Fig. 2.12) are much better than the older Nickel-Cadmium (NiCd) batteries because they output power faster, store larger amounts of power, have a longer life and do not suffer from the memory effect.

Figure 2.12: Lithium Polymer (LiPo) battery [25].

LiPo batteries behave differently from NiCd or NiMH batteries when charging and discharging. They are composed of a few cells and are fully charged when each cell has a voltage of 4.2 volts and fully discharged when each cell has a voltage of 3.0 volts. It is important not to exceed either the high limit of 4.2 volts or the low limit of 3.0 volts; exceeding these limits can harm the battery [7].

Another important detail is the battery C rate, in other words, how fast a battery can discharge. Discharge current is generally rated in C's, where C expresses the discharge time as a fraction of an hour: discharging at 1 C empties the battery in 1 hour, at 2 C in half an hour. All RC batteries are rated in milliampere-hours. If a battery is rated at 2000 mAh and it is discharged at 2000 mA (or 2 amps, since 1 amp = 1000 mA), it will be completely discharged in one hour. The C rating of the battery is thus based on its capacity: a 2000 mAh cell discharged at 2 amps is being discharged at 1 C (2000 mA × 1), while a 2000 mAh cell discharged at 6 amps is being discharged at 3 C (2000 mA × 3) [26].


2.8 Drone systems

Two different drone models were used in this work. Initially, several vertical images were collected using a Cheerson CX-20 and a GoPro camera. In a second step, several non-vertical images were also collected using a Parrot Bebop through its integrated camera. This section presents a description of the main features of both drone models.

2.8.1 Cheerson CX-20

The Cheerson CX-20 (Fig. 2.13) is a low-cost drone model based on the famous DJI Phantom models. It is an open-source flight controller quad-copter. It has a GPS system that keeps it in a fixed flight position, or alternatively fixes the altitude value, moving only on the X and Y axes. It also includes an auto-return function that makes the drone return to the take-off position. Regarding flight time, it can fly for about 15 minutes and can carry about 350 g of payload. The total drone weight is about 980 g.

Figure 2.13: Cheerson CX-20 and its remote controller [5].

It does not have an integrated camera, so a GoPro Hero model (Fig. 2.14) was used to capture the aerial images.


Figure 2.14: GoPro Hero - One of the used cameras [11].

To do this, a 2-axis gimbal was needed (see for example Fig. 2.15). It is used to stabilize the camera system, removing camera vibration or shake. Gimbals have the ability to keep the camera level on all axes while the drone's movements move the camera. Finally, to control the gimbal position, an auxiliary channel of the drone's remote control system is used.


2.8.2 Parrot Bebop 2

The Parrot Bebop 2.0 is a programmable drone, with a few libraries available that allow controlling it using JavaScript. Its total weight is about 500 g and it can fly for about 25 minutes, much longer than the general average.

This Parrot model (see for example Fig. 2.16) has an integrated camera with a resolution of 14 megapixels, capable of shooting 1080p video.

Figure 2.16: Parrot Bebop 2.0 [36].

The built-in camera has a digital stabilization system on a 3-axis framework. It has a fisheye lens with a 170° field of view, so it can see almost everything in front of it. However, it does not record all of this view; rather, it uses it to stabilize the video by sensing the movement of the quad-copter and moving the area captured as video in response. Most camera drones use a gimbal to stabilize the camera, like the previous Cheerson CX-20, but the Parrot Bebop 2 uses stabilization software to obtain stabilized video from the wide-angle fisheye camera.

It is controlled in a different way from other drones, using an Android or iOS smartphone (see for example Fig. 2.17). Through a Wi-Fi connection, it can be controlled using the phone's gyroscope mode or using the commands presented on the phone screen.


Figure 2.17: FreeFlight 3 running on an Android device [36].

The operating range is 100 meters. However, if the connection fails, it returns to the take-off position.

Finally, as a great contribution to this work, its integrated GPS module allows planning a path in advance with specific parameters. It is possible to set the velocity, altitude, camera inclination, among others, allowing it to fly over the parking lots autonomously.


Chapter 3

Single-board

With the evolution of technology, single-board computers have become quite popular among both consumers and developers. There is a difference between traditional computers and single-board computers. Typically, desktop or laptop PCs have a motherboard on which it is possible to find a processor and its associated circuits, as well as slots for peripherals such as RAM, a hard disk or a LAN card. A single-board computer, in contrast, places everything on one single board: it has a processor and onboard RAM, ROM and flash storage, among others. The board then acts as a typical computer.

Single-board computers (for example Fig. 3.1) are not as powerful as common PCs but, in return, their processors are designed to consume less power.

Figure 3.1: One of the world's smallest single-board computers: the Raspberry Pi Zero and its components (65 mm x 30 mm x 5 mm) [23].


Typically, all these boards are powered at 5 V. How much current (mA) a board requires depends on what is hooked up to it. As an exception, some boards are powered at 12 V.

For this work it is important to differentiate the x86 and ARM architectures, as both types were used.

3.1 x86 and ARM architectures

A few decades have passed since the release of the first CPU [2]. The battle between manufacturers has intensified and, currently, a new competitor has emerged: ARM.

The main drivers of this technological evolution are the main focus of this work: single-board computers and small devices.

The x86 processor was created by Intel. Later, with the success obtained, a new architecture type was consolidated, now known as x86. Currently, this term is used to designate any processor compatible with its 32-bit instruction set.

ARM was released some years after x86, without the same immediate success. It was created by the ARM Holdings company, simplifying the existing 32-bit instructions (Fig. 3.2). However, important companies are currently investing in it; energy efficiency is perhaps the decisive factor.

Figure 3.2: x86 versus ARM [4].

On desktop computers, power consumption was never a crucial detail, while for laptops, smartphones and single-board computers, saving battery is fundamental. The great ARM advantage is that it was created with low power consumption as a requirement, using minimal resources to ensure it.


Since the two architectures are typically intended for different uses, x86 always invested more in performance. Currently, x86 models run above 3 GHz in some cases, while ARM usually runs around 1.2 GHz. As a result, x86 has always had overheating problems, a problem largely avoided by ARM.

Currently, although the ARM architecture has some interesting qualities, there is still considerable incompatibility with operating systems.

ARM processors follow a RISC (Reduced Instruction Set Computer) architecture, while x86 processors are CISC (Complex Instruction Set Computer). ARM is relatively simple and most instructions execute in one clock cycle. CISC instructions are mostly complex, taking multiple CPU cycles to execute each instruction. ARM processors follow an explicit load-and-store model, meaning any operation between two data objects in memory requires explicitly loading the data from memory into processor registers, performing the operation and explicitly storing the data back into memory. In x86, the load-and-store-into-register logic is built into the more complex instructions, thus allowing fewer instructions [27]. Table 3.1 compares the main features of both architectures.

x86 — BIOS; CISC (Complex Instruction Set Computer); high clock speed; great OS compatibility; high performance; large power range; overheating; high power consumption.

ARM — bootloader; RISC (Reduced Instruction Set Computer); lower power consumption; future technology; medium performance.

Table 3.1: Pros and cons of the x86 and ARM architectures.


3.2 Board models

Three different single-board models were used in this work: the Raspberry Pi 2, the IGEPv2 and the EPIA-P910. This section presents a comparison between them.

3.2.1 Raspberry Pi

The Raspberry Pi (Fig. 3.3) is a single-board computer developed by the Raspberry Pi Foundation, based in the United Kingdom.

Figure 3.3: Raspberry Pi 2 Model B [22].

Several generations of Raspberry Pi have been released by this foundation. The first generation, released in February 2012, was divided into two models: a basic model A and a higher-specification model B. Both models contain a 700 MHz single-core ARM1176JZF-S CPU, differing in memory: 256 MB in one and 512 MB in the other.

Fig. 3.4 shows the block diagram for the Raspberry Pi models A, B, A+ and B+.


Figure 3.4: Raspberry Pi blocks diagram [19].

3.2.2 IGEPv2

The IGEPv2 board (Fig. 3.5) is a low-power, fanless single-board computer based on the ARM Cortex-A8 processor. It runs at 1000 MHz and is powered at 5 V. Its main peripheral connections include a USB 2.0 host, an HDMI connector and a microSD slot where the operating system should be installed, among others.

Figure 3.5: IGEPv2 board [13].

3.2.3 EPIA-P910


The VIA EPIA-P910 board (Fig. 3.6) combines a VIA Eden X4 system processor with a highly compact Pico-ITX form factor to provide a versatile platform for building a diverse range of ultra-small form factor devices for healthcare, logistics, fleet management and a myriad of other vertical market applications. Its processor uses the x86 architecture.

Figure 3.6: VIA EPIA-P910 board [9].

Based on the Pico-ITX form factor measuring just 10 cm x 7.2 cm, the VIA EPIA-P910 comes with a rich set of I/O and connectivity features, including two USB 3.0 ports, one Gigabit Ethernet port, one Mini HDMI port and one VGA port. The functionality of the VIA EPIA-P910 can be enhanced using its onboard pin headers and a variety of expansion cards and modules.

Table 3.2 presents a comparison between the boards described above.

(43)

EPIA-P910 — CPU: 1.2 GHz VIA Eden X4; RAM: up to 8 GB 1333 MHz (4 GB for this work); Graphics: Chrome 640 DX11 3D/2D graphics acceleration; USB: 2 (3.0 ports); GPIO: yes, using an additional extra board; Ethernet: Gigabit; WiFi: no; Bluetooth: no; Video: Mini HDMI, VGA; Audio: no; Card slot: no; SATA 2: yes.

IGEPv2 DM3730 — CPU: Texas Instruments DM3730, 1000 MHz single-core ARM Cortex-A8; RAM: 512 MB; Graphics: PowerSGX GPU, 2D/3D graphics acceleration, OpenGL ES 1.0/2.0 and OpenVG support; USB: 2 ports; GPIO: 39 pins; Ethernet: 10/100; WiFi: 2.4 GHz 802.11 b/g/n; Bluetooth: 4.0; Video: HDMI; Audio: stereo in/out mini jack; Card slot: Micro SD; SATA 2: no; Others: image capture interface (12 bits), audio interface, 1x PWM port, 3x UART interfaces.

Raspberry Pi 2 Model B — CPU: Broadcom BCM2836, 900 MHz quad-core ARM Cortex-A7; RAM: 1 GB; Graphics: VideoCore IV 3D graphics core; USB: 4 ports; GPIO: 40 pins; Ethernet: 10/100; WiFi: no; Bluetooth: no; Video: HDMI, composite video; Audio: 3.5 mm jack; Card slot: Micro SD; SATA 2: no; Others: camera interface (CSI), display interface (DSI).

Raspberry Pi 3 Model B — CPU: Broadcom BCM2837, 1.2 GHz 64-bit quad-core ARMv8; RAM: 1 GB LPDDR2 (900 MHz); Graphics: VideoCore IV 3D graphics core; USB: 4 ports; GPIO: 40 pins; Ethernet: 10/100; WiFi: 2.4 GHz 802.11n; Bluetooth: 4.1 (BLE, low energy); Video: HDMI, composite video; Audio: 3.5 mm jack; Card slot: Micro SD; SATA 2: no; Others: camera interface (CSI) and display interface (DSI); processing cores running at 1.2 GHz with 32 kB Level 1 and 512 kB Level 2 cache memory.

Table 3.2: Boards comparison.


Chapter 4

Software

This chapter focuses on the software used on this work. It describes the operating systems installed in the single-boards and presents the image processing software used for testing their efficiency.

4.1 Operating Systems

An operating system (OS) is the most important software that runs on a computational machine. It is responsible for managing the computer's memory and processes, and all software and hardware running on it.

One of the operating system's main tasks is to control the computer's resources (the hardware and the software; see for example Fig. 4.2). The operating system allocates resources as necessary to ensure that each application receives the appropriate amount. In addition, operating systems usually provide an application interface to ease the interaction between the human and the machine.

Operating systems must accomplish the following tasks:

• Processor management: The operating system needs to allocate enough of the processor's time to each process and application so that they can run as efficiently as possible. This is particularly important for multitasking;

• Memory storage and management: The operating system needs to ensure that each process has enough memory to execute, while also ensuring that one process does not use the memory allocated to another process;

• Device management: The operating system manages the communication with the computer's input and output devices. These devices require drivers that translate the electrical signals sent from the operating system or application program to the hardware device;

• Application interface: Programmers use application program interfaces (API) to control the computer and operating system;

• User interface: It sits as a layer above the operating system and is the part of the application through which the user interacts with the application.

On desktop computers, the most common operating systems are Microsoft Windows, Apple Mac OS X and Linux distributions (see for example Fig. 4.1).

Figure 4.1: Some examples of operating systems [28].

In this work, all boards ran Linux distributions, which allow them to run software like a common computer. Among Linux-based operating systems, the chosen distribution varies according to the system requirements and the board manufacturer.


Applications and Services → Middleware → OS (processes, threads and communication) → Computer hardware

Figure 4.2: Operating System layer [28].

4.1.1 Raspbian

Raspbian is a free operating system that runs on the Raspberry Pi single-board computer. It is derived from Debian Linux and uses the LXDE desktop environment. Two versions are available: the older Wheezy and the newer Jessie; the latter was used in this work. Raspbian comes with a variety of useful software tools, and its user interface (UI) is similar to Windows, OS X and other Linux distributions.

To install it, a micro SD card is necessary (or an SD card, depending on the Raspberry Pi model), to which the Raspbian installation image is written. This operating system has a low level configuration menu that allows changing some parameters. For the tests, the only changed parameters were the SSH state, to control the Raspberry Pi through the network, and the boot options, to deactivate the graphics mode and save resources (see for example Fig. 4.3 and Fig. 4.4). To access this menu, run the Terminal and type:

≫ sudo raspi-config

Figure 4.3: Raspberry Pi - Configuration menu [21].

Figure 4.4: Raspberry Pi - Boot options [20].

4.1.2 Xubuntu

Xubuntu is a Linux distribution aimed at computers with few resources. Based on Ubuntu, Xubuntu ships with the Xfce desktop environment, while Ubuntu uses the Unity desktop environment.

Unity is a modern desktop environment, well suited to both desktop and mobile devices. Xfce is a traditional desktop environment; it is very lightweight and thus low on resources. Due to its lightweight nature, Xubuntu runs well on older hardware, while Unity does not. Xubuntu is therefore well suited for those who want the most out of their desktops, laptops and netbooks, with a modern look and enough features for efficient daily usage, while still working well on older hardware.

This operating system was used on the EPIA-P910 board. The Xubuntu 14.04.4 LTS version was used, since it is one of the most stable versions. To install it, it is necessary to write an installation image to the HDD. Afterwards, choose the HDD as the first boot device in the EPIA BIOS menu and run it.

For the tests, the graphical interface was disabled. To do this, run the following commands on the Terminal:

≫ sudo service lightdm stop (To disable)

≫ sudo service lightdm start (To reenable)

or

≫ sudo service gdm stop (To disable)

≫ sudo service gdm start (To reenable)

4.1.3 Debian

Debian is an open-source operating system developed with tools from the GNU project, supporting different architectures, including Intel and AMD systems (Fig. 4.5). It is maintained by an association of individuals who have made common cause to create a free operating system.

Figure 4.5: Debian logotype [6].

There are a number of distributions based on it, currently using either the Linux kernel or the FreeBSD kernel, for example the well-known Ubuntu, or Raspbian, referred to in the previous subsection. In this work, a specific Debian image, optimized for the IGEP board, was used.


4.2

OpenCV

OpenCV, also known as the Open Source Computer Vision Library, is an open source computer vision software library (Fig. 4.6). OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

The library has more than 2500 optimized algorithms, including a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects and produce 3D point clouds from stereo cameras, among others.

OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 7 million. The library is used extensively in companies and research groups [34].

Figure 4.6: OpenCV logotype [15].

OpenCV is used mostly for real-time vision applications. It has C++, C, Python, Java and MATLAB interfaces, but its core is written natively in C++. OpenCV supports Windows, Linux and Mac OS.

4.2.1 Image representation and scanning methods

When digital equipment is used to capture and store photographic images, they must first be converted into a set of numbers, in a process called digitization or scanning. Digital images are composed of pixels (short for picture elements). Each pixel represents the color, or the gray level for black and white photos, at a single point in the image (see for example Fig. 4.7). Pixels are arranged in a regular pattern of rows and columns; a digital image is therefore a rectangular array of pixels, sometimes called a bitmap.


Figure 4.7: Zoomed image where it is possible to see the difference in values between different pixels [16].

A black and white image is made up of pixels, each of which holds a single number corresponding to the gray level of the image at a particular location (Fig. 4.8). These gray levels are normally distributed on a scale with 256 different values for 8 bits. If more bits are used, a wider range of values is possible.

Figure 4.8: Gray levels [1].

A color image is made up of pixels, each of which holds three numbers corresponding to the red, green and blue levels of the image at a particular location. Red, green and blue (sometimes referred to as RGB) are the primary colors (Fig. 4.9). Any color can be created by mixing the correct amounts of red, green and blue light. Assuming 256 levels (8 bits) per primary, each color pixel can be stored in three bytes (24 bits) of memory, which corresponds to roughly 16.7 million different possible colors.


Figure 4.9: RGB color space [24].

Depending on the color system used, grayscale or RGB, the size of the image matrix depends on the number of channels. There are also other color spaces, for example HSV and YUV.

There are several different methods to scan the image matrix [12].

The pointer access method consists of acquiring a pointer to the start of each row and going through it until the end; in other words, when using RGB (three channels), it is necessary to go through three times more elements in each row. There is also the iterator method, considered a safer way as it takes over these tasks from the user: the begin and end iterators of the image matrix are requested and the begin iterator is incremented until it reaches the end, with the * operator used to acquire the value it points to. Finally, there is the on-the-fly address calculation method, using the at() function.
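To make the three access strategies concrete, the sketch below mirrors the row-major memory layout that cv::Mat uses, but is independent of OpenCV; the GrayImage type and the function names are illustrative, not part of any real API. With OpenCV itself, the same strategies map onto img.ptr<uchar>(row), cv::Mat_<uchar>::iterator and img.at<uchar>(r, c).

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Minimal stand-in for a single-channel image stored row-major,
// mirroring the memory layout cv::Mat uses. Illustrative only.
struct GrayImage {
    int rows, cols;
    std::vector<unsigned char> data;  // rows * cols bytes, row-major
    GrayImage(int r, int c) : rows(r), cols(c), data(r * c, 0) {}

    // at()-style access: safer, but pays a bounds check on every call.
    unsigned char& at(int r, int c) {
        if (r < 0 || r >= rows || c < 0 || c >= cols)
            throw std::out_of_range("pixel outside image");
        return data[static_cast<std::size_t>(r) * cols + c];
    }
};

// Pointer scan: walk the raw buffer, no per-pixel validation.
long sumPointer(const GrayImage& img) {
    long sum = 0;
    const unsigned char* p = img.data.data();
    for (int i = 0; i < img.rows * img.cols; ++i) sum += p[i];
    return sum;
}

// Iterator scan: increment the begin iterator until it reaches end,
// dereferencing with * to read each value.
long sumIterator(const GrayImage& img) {
    long sum = 0;
    for (auto it = img.data.begin(); it != img.data.end(); ++it) sum += *it;
    return sum;
}
```

Both scans visit the same pixels; the pointer version is the fastest simply because it skips all per-pixel checks, which matches the timing differences reported later in Tables 5.2 and 5.3.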

4.2.2 Low level image processing

As referred before, OpenCV provides several language interfaces; in this work, only the C++ interface was used. Multiple implemented functions are available, each one targeting a specific image processing area. Nevertheless, some functions serve as a base for image processing, that is, they help to segment the image for further processing, for example blur, Canny and Sobel.

When a blur filter is applied (Fig. 4.10), the main objective is to smooth the image, that is, the filter reduces noise. Noise reduction is a typical image pre-processing step which improves the final result. Smoothing is done by sliding a window (kernel or filter) across the whole image and computing for each pixel a value based on the kernel values and the values of the overlapping pixels of the original image. This process is mathematically called convolution. There are several types of blurring methods.

Figure 4.10: Blur filter: original and blurred image.
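The sliding-window averaging described above can be sketched, independently of OpenCV, as a 3x3 box filter evaluated at one interior pixel; the function name is illustrative and border handling is left out for brevity.

```cpp
#include <cassert>
#include <vector>

// img is a row-major grayscale image with 'cols' columns. Returns the
// blurred value at interior pixel (r, c): the average of the 3x3
// neighbourhood, i.e. a convolution with a uniform kernel of weight 1/9.
int boxBlur3x3(const std::vector<int>& img, int cols, int r, int c) {
    int sum = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc)
            sum += img[(r + dr) * cols + (c + dc)];
    return sum / 9;  // integer average over the window
}
```

In OpenCV, the whole-image equivalent of this uniform kernel is cv::blur(src, dst, cv::Size(3, 3)).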

Edge detection is a crucial step in object recognition. It is the process of finding sharp discontinuities in an image.

A classical method of edge detection involves the use of operators, two-dimensional filters. An edge in an image occurs where the gradient is greatest, and an operator works by identifying these large gradients to find the edges. There is a vast number of operators designed to detect certain types of edges, and they can be configured to search for vertical, horizontal or diagonal edges. One major problem arises when noise is present in images: it is not enough to simply reduce the noise, because the image will become either distorted or blurred.

The Canny edge detector, or simply Canny as it is usually known, is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It takes a grayscale image as input and produces as output an image showing the positions of the tracked intensity discontinuities (see for example Fig. 4.11).


Figure 4.11: Canny edge detector: original and Canny image.

The effect of the Canny operator is determined by three parameters: the width of the Gaussian kernel used in the smoothing phase and the upper and lower thresholds used by the tracker. Increasing the width of the Gaussian kernel reduces the detector’s sensitivity to noise, at the expense of losing some of the finer detail in the image.

Usually, the upper tracking threshold can be set quite high and the lower threshold quite low for good results. Setting the lower threshold too high causes noisy edges to break up, while setting the upper threshold too low increases the number of spurious and undesirable edge fragments appearing in the output.
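The interaction between the two thresholds can be illustrated with a one-dimensional sketch of Canny's final hysteresis stage. This is a simplification (the real Canny works in 2-D after non-maximum suppression, and weak edges can chain together), and all names are illustrative.

```cpp
#include <cassert>
#include <vector>

// Gradient magnitudes above 'upper' are marked as definite edges; values
// between 'lower' and 'upper' are kept only when directly adjacent to an
// already-accepted edge. Everything below 'lower' is discarded.
std::vector<bool> hysteresis1D(const std::vector<int>& grad,
                               int lower, int upper) {
    const int n = static_cast<int>(grad.size());
    std::vector<bool> edge(n, false);
    for (int i = 0; i < n; ++i)
        if (grad[i] > upper) edge[i] = true;              // strong edge
    for (int i = 0; i < n; ++i)
        if (!edge[i] && grad[i] > lower)                  // weak candidate
            if ((i > 0 && edge[i - 1]) || (i + 1 < n && edge[i + 1]))
                edge[i] = true;
    return edge;
}
```

In OpenCV, the two thresholds are the third and fourth arguments of cv::Canny(gray, edges, lower, upper).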

At last, the Sobel operator (Fig. 4.12) is used particularly as an alternative method for edge detection. The Sobel method provides an approximation to the gradient magnitude, and it can detect edges and their orientations.

Figure 4.12: Sobel operator: original and Sobel image.

Comparing the Sobel and Canny methods, Sobel is simple to implement and detects edges and their orientation; however, it is very sensitive to noise. Canny has the advantage of smoothing the image to remove the noise, resulting in good localization and response; however, it requires more resources, consuming more time.
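The two Sobel kernels and their response on a vertical step edge can be shown with a few lines of self-contained code; the function name is illustrative and, unlike cv::Sobel, it evaluates a single interior pixel.

```cpp
#include <cassert>
#include <vector>

// Sobel kernels: GX responds to horizontal intensity changes (vertical
// edges), GY to vertical changes (horizontal edges).
static const int GX[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
static const int GY[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

// Gradient approximation at interior pixel (r, c) of a row-major
// grayscale image with 'cols' columns.
void sobelAt(const std::vector<int>& img, int cols, int r, int c,
             int& gx, int& gy) {
    gx = gy = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            const int v = img[(r + dr) * cols + (c + dc)];
            gx += GX[dr + 1][dc + 1] * v;
            gy += GY[dr + 1][dc + 1] * v;
        }
}
```

On a vertical black-to-white step, the horizontal response gx is large while gy stays zero, which is how the operator encodes the edge orientation.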

4.3

Car Detection

The algorithms used for the tests were developed in two different situations. During the first weeks of the project, there was only access to the Cheerson CX-20. The difficulties caused by the lack of planned flights led to fewer captured images. Having only frames captured in the Library's parking, there was only an algorithm to detect vehicles in vertical footage, without alignment and altitude constraints.

This first attempt is divided into different steps. At the beginning, the single-board computer receives an input image, which is opened and converted into grayscale and the HSV colorspace. Then, a blur operation is applied to smooth the image and reduce noise. The next task is finding edges; to do this, the Canny edge detector is used, computing gradients in an optimized way. As there are parts of the image where vehicles are not supposed to appear, these zones are removed using the Remove Colours and Remove Borders methods (Fig. 4.13 and 4.14). Finally, the regions of interest are analyzed block by block to find zones with low edge density. A region growing technique is applied to constrain areas and decide whether they represent vehicles, by measuring their area. Each time a car is detected, the counter is updated.
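The block analysis step can be sketched as counting edge pixels inside each block of the edge map and flagging blocks whose density is low (car bodies are mostly edge-free inside their outline). The function and the maxEdges threshold below are illustrative; the actual thresholds, like the [15000 - 80000] region size range, were tuned experimentally in this work.

```cpp
#include <cassert>
#include <vector>

// 'edges' is a row-major binary edge map (1 = edge pixel) with 'cols'
// columns. Counts the edge pixels inside the blockSize x blockSize window
// whose top-left corner is (top, left) and reports whether the block is a
// low-density candidate for the region growing stage.
bool isLowDensityBlock(const std::vector<int>& edges, int cols,
                       int top, int left, int blockSize, int maxEdges) {
    int count = 0;
    for (int r = top; r < top + blockSize; ++r)
        for (int c = left; c < left + blockSize; ++c)
            count += edges[r * cols + c];
    return count <= maxEdges;
}
```

Low-density blocks are then merged by region growing and accepted as cars only when the resulting area falls inside the expected size range.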


Input image frame → Open time (RGB) → Gray → HSV → Blur → Canny → Remove borders → Remove colours → Blocks → Region growing → Car count based on region size → Output: cars found

Figure 4.13: Car detection - Vertical approach (block diagram).


The second attempt is divided into fewer parts, but with more complexity.

The first three steps were kept unchanged, as they are crucial for the rest of the detection. As the Bebop 2 drone is capable of repeating the same path and ensuring it is aligned with the center of the road (using planned paths), it is possible to improve the region selection, detecting the road in the middle of the image and removing it. Borders are also discarded using a constraint that is often verified in parking lots: the road's width is very close to the parking length (Fig. 4.15 and 4.16). For each space, if the length exceeds the road's width, the image is truncated.

Car analysis was also renewed, and the edge density evaluation is now done row by row, avoiding some inefficient processes used in the region growing technique.

Input image frame → Open time (RGB) → Gray → HSV → Find road → Car analysis → Output: cars found

Figure 4.15: Car detection - second approach (block diagram).

Figure 4.16: Visual output system - Parking car detection (CIAQ’s parking).

For the Library's parking, all the previous algorithms are kept. The only change is an additional Canny step between the Find road and Car analysis steps. This is needed because for the other parkings Canny is required for road detection, while in this case Canny is only necessary for car detection (Fig. 4.17 and 4.18).

Input image frame → Open time (RGB) → Gray → HSV → Find road → Canny → Car analysis → Output: cars found

Figure 4.17: Car detection - Library's parking approach (block diagram).

Chapter 5

Results

This chapter presents the different types of results achieved during this work. The first section gives a short description of the image database used for all the tests. Next, some results related to the single-board computers' power consumption are shown and, at last, the analysis of the measured times of the OpenCV algorithms is presented.

5.1

Parkings image database

For this thesis, mainly three parkings were considered. All of them present different details that characterize them for image processing, as can be seen in Fig. 5.1:

• Parking 1 - Library's parking: presents an irregular floor, known in Portuguese as Calçada Portuguesa, and few parking place lines (Fig. 5.2);

• Parking 2 - CIAQ's parking: presents a regular floor composed of light tar and some vegetation that makes the image segmentation difficult (Fig. 5.3);

• Parking 3 - Rectory's parking: also presents a light tar floor.

Figure 5.1: University of Aveiro - Parking Map [14].


Figure 5.3: Parking 2 - CIAQ’s parking.

5.2

Power consumption measurements

The power consumption of all the single-board computers was measured. This measurement is very important because, as seen throughout this dissertation, one of the main problems of drones nowadays is battery life. It is therefore important to know which board is the best according to this parameter.

Current and voltage values were measured. The graph in Fig. 5.4 shows the different current values while processing and while in stand-by:


(Raspberry Pi 2 Model B: 0.46 A stand-by / 0.52 A processing; IGEPv2 DM3730: 0.56 A / 0.69 A; EPIA-P910: 1.32 A / 1.68 A)

Figure 5.4: Boards current measurements.

For the voltages, the values measured at the power source were:

• Raspberry Pi 2 Model B: 5,03V (stand by / processing); • IGEPv2 DM3730: 5,19V (stand by) / 5,15V (processing); • EPIA-P910: 12,05V (stand by / processing).

From these, Table 5.1 was generated, presenting the stand-by (Terminal mode, waiting for user input) and processing power consumptions:

| Power consumption | Raspberry Pi 2 Model B | IGEPv2 DM3730 | EPIA-P910 |
|---|---|---|---|
| Stand by | 2,3138 W | 2,9064 W | 15,906 W |
| Processing | 2,6156 W | 3,5535 W | 20,244 W |

Table 5.1: Boards power consumption.
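The wattages in Table 5.1 follow directly from the measured voltages and currents, since P = V × I; a minimal check:

```cpp
#include <cassert>
#include <cmath>

// Electrical power drawn by a board, from the measured supply voltage
// and current: P = V * I (watts).
double powerWatts(double volts, double amps) { return volts * amps; }
```

For example, 5.03 V × 0.46 A gives the 2.3138 W stand-by figure of the Raspberry Pi 2 Model B.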

5.3

OpenCV scan image

Before the performance analysis of the single-board computers regarding the processing of images for car detection, an experiment on simple pixel access in images was performed; the experimental results are presented in Tables 5.2 and 5.3. The details about the scanning methods used for accessing all the pixels are presented in Subsection 4.2.1.

An obvious result can be observed: the processing times for color images are three times higher than for the grayscale version. This is due to the fact that there is three times more information to process.

Regarding the three scanning methods explored, the most efficient one is accessing the pixels directly through the pointer to the image data. This is due to the fact that there are no processing overheads related to function calls. However, regarding programming issues, the use of iterators or of the at method is easier and safer, since there are internal validations, for example on the pixel positions inside the image. Moreover, it is necessary to validate that the image data is continuous in the computer memory before using pointer access.

With this simple experiment, we conclude that the best single-board computer is the EPIA-P910 when looking at the processing performance. This is an expected result, since its processor is the best among the three. However, this board is also the one with the highest consumption, as presented in the previous section.


BGR color images, scanning times (ms):

| Method | Resolution | PC - Ubuntu 14.04.3 LTS | Raspberry Pi 2 Model B | IGEPv2 DM3730 | EPIA-P910 |
|---|---|---|---|---|---|
| Pointer access | 2880x1800 | 38,1325 | 570,415 | 652,456 | 265,613 |
| Pointer access | 2160x1350 | 20,6358 | 322,152 | 365,419 | 150,125 |
| Pointer access | 1440x900 | 9,84476 | 141,681 | 163,589 | 69,0636 |
| Pointer access | 720x450 | 2,44719 | 35,5517 | 41,1878 | 17,7531 |
| Pointer access | 360x225 | 0,632086 | 8,36241 | 9,1035 | 3,84881 |
| Iterator method | 2880x1800 | 158,532 | 2119,29 | 2233,27 | 786,105 |
| Iterator method | 2160x1350 | 80,3547 | 1192,18 | 1257 | 444,259 |
| Iterator method | 1440x900 | 36,1698 | 525,477 | 558,768 | 199,845 |
| Iterator method | 720x450 | 8,70073 | 131,546 | 140,256 | 50,4966 |
| Iterator method | 360x225 | 2,20703 | 32,3575 | 34,2429 | 12,0491 |
| On-the-fly at() | 2880x1800 | 153,323 | 2334,68 | 2515,62 | 978,114 |
| On-the-fly at() | 2160x1350 | 89,1168 | 1313,05 | 1415,06 | 549,82 |
| On-the-fly at() | 1440x900 | 37,9868 | 579,066 | 627,929 | 245,641 |
| On-the-fly at() | 720x450 | 9,4726 | 144,931 | 157,575 | 62,3179 |
| On-the-fly at() | 360x225 | 2,34163 | 35,6452 | 38,3214 | 14,9592 |

Power (stand by): Raspberry Pi 2 Model B 5,03 V / 0,46 A / 2,3138 W; EPIA-P910 12,05 V / 1,32 A / 15,906 W
Power (processing): Raspberry Pi 2 Model B 5,03 V / 0,52 A / 2,6156 W; EPIA-P910 12,05 V / 1,68 A / 20,244 W

Table 5.2: Pixel scanning times for BGR color images.


Grayscale images, scanning times (ms):

| Method | Resolution | PC - Ubuntu 14.04.3 LTS | Raspberry Pi 2 Model B | IGEPv2 DM3730 | EPIA-P910 |
|---|---|---|---|---|---|
| Pointer access | 2880x1800 | 13,2915 | 188,685 | 218,058 | 89,1916 |
| Pointer access | 2160x1350 | 7,13441 | 106,368 | 123,184 | 51,6209 |
| Pointer access | 1440x900 | 3,11628 | 47,3593 | 54,7592 | 23,6376 |
| Pointer access | 720x450 | 0,786463 | 11,3377 | 12,8039 | 5,14567 |
| Pointer access | 360x225 | 0,189322 | 2,74069 | 2,70154 | 1,27818 |
| Iterator method | 2880x1800 | 62,6718 | 881,55 | 967,058 | 377,153 |
| Iterator method | 2160x1350 | 31,9733 | 496,105 | 544,171 | 213,9 |
| Iterator method | 1440x900 | 14,2148 | 220,604 | 241,643 | 95,7223 |
| Iterator method | 720x450 | 3,46991 | 54,9586 | 59,0642 | 23,1021 |
| Iterator method | 360x225 | 0,942688 | 13,634 | 14,2832 | 5,79741 |
| On-the-fly at() | 2880x1800 | 38,7273 | 559,644 | 657,067 | 267,01 |
| On-the-fly at() | 2160x1350 | 22,9128 | 314,896 | 368,65 | 153,483 |
| On-the-fly at() | 1440x900 | 9,66941 | 139,998 | 164,465 | 68,2604 |
| On-the-fly at() | 720x450 | 2,28287 | 34,4804 | 39,7442 | 16,3213 |
| On-the-fly at() | 360x225 | 0,60191 | 8,54232 | 10,0145 | 4,08152 |

Table 5.3: Pixel scanning times for grayscale images.


5.4

Algorithms results

This section presents detailed experimental results regarding the processing times of the algorithms developed for car detection and described in Chapter 4. Tables 5.4, 5.5, 5.6 and 5.7 present results regarding the first attempt, while Tables 5.8, 5.9, 5.10 and 5.11 present results of the second attempt.

Based on the results presented in the referred tables, it is possible to verify that there was a significant processing time improvement between the first and second algorithm attempts.

Regarding the open time, grayscale and HSV steps, these methods do not have many details to change, and their times are usually constant.

The Canny method was used in both attempts and its times are also constant. The biggest improvement was made with the optimization of the border detection, which is done in a different way for each parking type. As the regions of interest are smaller with the Bebop 2's algorithm, the edge density evaluation is performed faster, as shown in the following tables.

There was a significant improvement in the overall times measured with the second algorithm. While with the first attempt it was concluded that the measured times were not suitable for onboard processing, because it was impossible to wait 7 seconds before capturing the next frame, it is now plausible to create a real-time onboard system for a low-speed flight, between 1 and 2 m/s, so that the board is able to process all images without delay.

The results obtained allow to conclude that the best single-board computer is the EPIA-P910 when looking at the processing performance. As stated before, this board is also the one with the highest consumption, as presented in the previous section.


| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 45 | 49 | 46 | 47 | 46 | 48 | 45 | 44 |
| Gray | 12 | 9 | 9 | 9 | 9 | 9 | 9 | 9 |
| HSV | 42 | 42 | 41 | 42 | 42 | 42 | 42 | 41 |
| Blur | 14 | 13 | 13 | 13 | 13 | 13 | 13 | 13 |
| Canny | 63 | 64 | 66 | 66 | 71 | 65 | 64 | 60 |
| Remove borders | 216 | 254 | 193 | 229 | 267 | 464 | 457 | 321 |
| Remove colours | 32 | 32 | 32 | 32 | 33 | 35 | 33 | 34 |
| Blocks | 156 | 144 | 147 | 162 | 154 | 147 | 144 | 146 |
| Low density regions | 61 | 50 | 45 | 46 | 43 | 44 | 79 | 87 |
| Cars found | 5 | 5 | 4 | 3 | 3 | 2 | 6 | 6 |
| Cars on image | 6 | 5 | 4 | 3 | 3 | 2 | 7 | 6 |
| Car count based on region size | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| % | 83% | 100% | 100% | 100% | 100% | 100% | 86% | 100% |
| TOTAL (ms) | 647 | 663 | 598 | 652 | 683 | 873 | 892 | 761 |

% AVG: 96%. Block steps: 3; frame resolution: 1920x1080; car region size: [15000 - 80000].

Table 5.4: 1920x1080 (1st attempt).


| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 189 | 198 | 190 | 194 | 195 | 189 | 187 | 188 |
| Gray | 40 | 37 | 37 | 37 | 37 | 37 | 37 | 37 |
| HSV | 160 | 144 | 146 | 146 | 146 | 146 | 144 | 146 |
| Blur | 192 | 196 | 188 | 188 | 188 | 193 | 188 | 188 |
| Canny | 353 | 352 | 361 | 363 | 375 | 354 | 341 | 338 |
| Remove borders | 2341 | 2706 | 2070 | 2249 | 2680 | 3595 | 3918 | 2699 |
| Remove colours | 485 | 485 | 489 | 486 | 486 | 503 | 504 | 507 |
| Blocks | 1354 | 1353 | 1350 | 1348 | 1349 | 1350 | 1360 | 1364 |
| Low density regions | 781 | 675 | 604 | 621 | 518 | 608 | 1004 | 1096 |
| Cars found | 5 | 5 | 4 | 3 | 3 | 2 | 6 | 6 |
| Cars on image | 6 | 5 | 4 | 3 | 3 | 2 | 7 | 6 |
| Car count based on region size | 22 | 23 | 23 | 22 | 23 | 22 | 22 | 22 |
| % | 83% | 100% | 100% | 100% | 100% | 100% | 86% | 100% |
| TOTAL (ms) | 5976 | 6225 | 5510 | 5701 | 6043 | 7042 | 7750 | 6629 |

% AVG: 96%. Block steps: 3; frame resolution: 1920x1080; car region size: [15000 - 80000].

Table 5.5: 1920x1080 (1st attempt).


| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 121 | 144 | 121 | 126 | 126 | 120 | 119 | 121 |
| Gray | 35 | 21 | 22 | 21 | 22 | 21 | 22 | 21 |
| HSV | 139 | 129 | 130 | 128 | 130 | 130 | 128 | 129 |
| Blur | 56 | 52 | 51 | 51 | 51 | 51 | 51 | 51 |
| Canny | 260 | 258 | 264 | 267 | 279 | 261 | 250 | 248 |
| Remove borders | 1063 | 1223 | 930 | 984 | 1119 | 1667 | 1758 | 1192 |
| Remove colours | 202 | 202 | 204 | 203 | 203 | 210 | 211 | 212 |
| Blocks | 741 | 740 | 751 | 760 | 765 | 751 | 745 | 741 |
| Low density regions | 446 | 394 | 357 | 367 | 310 | 359 | 571 | 618 |
| Cars found | 5 | 5 | 4 | 3 | 3 | 2 | 6 | 6 |
| Cars on image | 6 | 5 | 4 | 3 | 3 | 2 | 7 | 6 |
| Car count based on region size | 42 | 39 | 27 | 24 | 23 | 24 | 24 | 24 |
| % | 83% | 100% | 100% | 100% | 100% | 100% | 86% | 100% |
| TOTAL (ms) | 3486 | 3560 | 3099 | 3079 | 3162 | 3729 | 4015 | 3492 |

% AVG: 96%. Block steps: 3; frame resolution: 1920x1080; car region size: [15000 - 80000].

Table 5.6: 1920x1080 (1st attempt).


| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 188 | 197 | 188 | 193 | 193 | 188 | 183 | 186 |
| Gray | 63 | 53 | 53 | 52 | 52 | 52 | 52 | 53 |
| HSV | 243 | 223 | 219 | 217 | 217 | 217 | 219 | 217 |
| Blur | 736 | 722 | 721 | 722 | 722 | 720 | 720 | 720 |
| Canny | 485 | 481 | 511 | 497 | 515 | 486 | 468 | 465 |
| Remove borders | 5274 | 5741 | 5536 | 6582 | 7829 | 9557 | 10151 | 7991 |
| Remove colours | 502 | 502 | 507 | 504 | 503 | 522 | 523 | 527 |
| Blocks | 1393 | 1394 | 1394 | 1393 | 1395 | 1393 | 1402 | 1405 |
| Low density regions | 1181 | 1027 | 927 | 988 | 816 | 960 | 1508 | 1627 |
| Cars found | 5 | 5 | 4 | 3 | 3 | 2 | 6 | 6 |
| Cars on image | 6 | 5 | 4 | 3 | 3 | 2 | 7 | 6 |
| Car count based on region size | 52 | 54 | 52 | 52 | 52 | 52 | 52 | 52 |
| % | 83% | 100% | 100% | 100% | 100% | 100% | 86% | 100% |
| TOTAL (ms) | 10204 | 10470 | 10151 | 1124 | 12336 | 14189 | 15320 | 13283 |

% AVG: 96%. Block steps: 3; frame resolution: 1920x1080; car region size: [15000 - 80000].

Table 5.7: 1920x1080 (1st attempt).


Biblioteca:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 48 | 57 | 52 | 58 | 53 | 56 | 54 | 56 |
| Gray | 13 | 10 | 10 | 10 | 12 | 10 | 11 | 10 |
| HSV | 48 | 44 | 44 | 43 | 44 | 42 | 43 | 43 |
| Find road | 89 | 92 | 96 | 90 | 88 | 89 | 87 | 87 |
| Canny | 76 | 74 | 76 | 76 | 75 | 74 | 75 | 73 |
| Car analysis | 19 | 19 | 19 | 18 | 19 | 19 | 22 | 21 |
| Cars found | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 |
| Cars on image | 1 | 1 | 1 | 2 | 1 | 1 | 2 | 2 |
| % | 100% | 100% | 100% | 50% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 302 | 302 | 304 | 300 | 297 | 297 | 299 | 301 |

% AVG: 94%.

CIAQ:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 44 | 54 | 52 | 47 | 54 | 49 | 51 | 50 |
| Gray | 10 | 10 | 10 | 9 | 9 | 13 | 12 | 9 |
| HSV | 44 | 47 | 43 | 42 | 42 | 43 | 43 | 43 |
| Find road | 104 | 127 | 116 | 106 | 105 | 103 | 104 | 102 |
| Car analysis | 19 | 18 | 20 | 19 | 17 | 18 | 17 | 17 |
| Cars found | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| Cars on image | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| % | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 228 | 264 | 249 | 231 | 235 | 232 | 234 | 228 |

% AVG: 100%.

Table 5.8: PC - 1920x1080 (2nd attempt).


Biblioteca:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 192 | 199 | 193 | 202 | 192 | 197 | 193 | 196 |
| Gray | 37 | 34 | 33 | 34 | 33 | 33 | 34 | 34 |
| HSV | 144 | 132 | 130 | 131 | 129 | 130 | 132 | 133 |
| Find road | 1022 | 1027 | 1002 | 1012 | 1001 | 1003 | 1007 | 1000 |
| Canny | 384 | 333 | 329 | 335 | 332 | 327 | 331 | 329 |
| Car analysis | 388 | 392 | 382 | 373 | 378 | 386 | 438 | 454 |
| Cars found | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 |
| Cars on image | 1 | 1 | 1 | 2 | 1 | 1 | 2 | 2 |
| % | 100% | 100% | 100% | 50% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 2285 | 2234 | 2186 | 2205 | 2182 | 2193 | 2254 | 2265 |

% AVG: 94%.

CIAQ:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 179 | 176 | 183 | 177 | 182 | 177 | 183 | 175 |
| Gray | 36 | 33 | 33 | 33 | 33 | 33 | 33 | 33 |
| HSV | 141 | 132 | 129 | 131 | 131 | 129 | 130 | 131 |
| Find road | 806 | 795 | 800 | 795 | 797 | 786 | 788 | 790 |
| Car analysis | 382 | 383 | 409 | 389 | 364 | 387 | 368 | 362 |
| Cars found | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| Cars on image | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| % | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 1661 | 1636 | 1670 | 1643 | 1624 | 1628 | 1620 | 1609 |

% AVG: 100%.

Table 5.9: 1920x1080 (2nd attempt).


Biblioteca:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 141 | 144 | 142 | 146 | 140 | 144 | 137 | 145 |
| Gray | 36 | 22 | 23 | 23 | 22 | 22 | 22 | 22 |
| HSV | 138 | 128 | 130 | 127 | 128 | 130 | 128 | 129 |
| Find road | 431 | 430 | 422 | 428 | 421 | 427 | 422 | 420 |
| Canny | 284 | 278 | 272 | 281 | 284 | 280 | 277 | 271 |
| Car analysis | 226 | 225 | 223 | 218 | 222 | 225 | 245 | 237 |
| Cars found | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 |
| Cars on image | 1 | 1 | 1 | 2 | 1 | 1 | 2 | 2 |
| % | 100% | 100% | 100% | 50% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 1309 | 1280 | 1265 | 1276 | 1269 | 1280 | 1284 | 1276 |

% AVG: 94%.

CIAQ:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 128 | 125 | 131 | 127 | 130 | 127 | 129 | 127 |
| Gray | 36 | 22 | 22 | 22 | 22 | 22 | 22 | 22 |
| HSV | 135 | 129 | 131 | 133 | 129 | 127 | 134 | 128 |
| Find road | 494 | 478 | 498 | 481 | 491 | 483 | 490 | 481 |
| Car analysis | 226 | 223 | 234 | 227 | 216 | 224 | 218 | 215 |
| Cars found | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| Cars on image | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| % | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 1073 | 1030 | 1069 | 1042 | 1041 | 1035 | 1044 | 1025 |

% AVG: 100%.

Table 5.10: 1920x1080 (2nd attempt).


Biblioteca:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 239 | 248 | 252 | 259 | 253 | 259 | 249 | 257 |
| Gray | 62 | 53 | 53 | 53 | 53 | 53 | 53 | 53 |
| HSV | 242 | 218 | 218 | 218 | 219 | 218 | 218 | 218 |
| Find road | 2162 | 2407 | 2330 | 2327 | 2299 | 2331 | 2332 | 2306 |
| Canny | 526 | 509 | 512 | 512 | 517 | 508 | 508 | 505 |
| Car analysis | 407 | 409 | 402 | 389 | 398 | 408 | 463 | 441 |
| Cars found | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 |
| Cars on image | 1 | 1 | 1 | 2 | 1 | 1 | 2 | 2 |
| % | 100% | 100% | 100% | 50% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 3785 | 3990 | 3912 | 3905 | 3884 | 3924 | 3990 | 3946 |

% AVG: 94%.

CIAQ:

| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 223 | 219 | 231 | 222 | 231 | 223 | 231 | 223 |
| Gray | 62 | 53 | 53 | 53 | 53 | 53 | 53 | 53 |
| HSV | 242 | 219 | 219 | 218 | 218 | 218 | 218 | 218 |
| Find road | 1109 | 1086 | 1105 | 1088 | 1093 | 1083 | 1088 | 1083 |
| Car analysis | 408 | 402 | 432 | 414 | 384 | 406 | 390 | 381 |
| Cars found | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| Cars on image | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 |
| % | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| TOTAL (ms) | 2189 | 2124 | 2186 | 2140 | 2125 | 2129 | 2128 | 2117 |

% AVG: 100%.

Table 5.11: 1920x1080 (2nd attempt).


Chapter 6

Conclusions

In order to have an autonomous drone, one of the capabilities it should have is an autonomous perception of the world surrounding it. On one hand, the onboard computer should be able to process images in real time; on the other hand, its consumption should be as low as possible.

In order to have onboard image processing, a drone should include a single-board computer. The main objectives defined were, in part, achieved. It was possible to test all the different single-board computers and to observe the differences between their performances.

The proposed use of Raspberry boards is clearly the best solution. A large online community is available, offering several image processing solutions, as well as an operating system developed specifically for the board, with regular updates.

Though the EPIA-P910 model had a better timing performance, we cannot forget that it is an x86 board model: its size and power consumption would not allow a viable solution. So, the Raspberry board is a good solution, not only for its timing performance, but also for its small size, which allows coupling it on board a drone.

6.1

Future Work

As concluded previously, the Raspberry board seems to be the most viable solution. Halfway through this work, the Raspberry Pi Foundation released the third model of its board; however, in this work only the Raspberry Pi 2 Model B was tested. So, as future work, taking into account the new features of this model, referred to previously in this work, it will be possible to create an autonomous system, through the wireless connection embedded in the Raspberry board, as well as through the library available using Node.JS to control the Bebop drone model.


| Step (ms) | Frame 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Open Time (RGB) | 44 | 50 | 46 | 46 | 47 | 45 | 45 | 47 |
| Gray | 10 | 10 | 9 | 9 | 9 | 9 | 9 | 9 |
| HSV | 44 | 44 | 41 | 41 | 44 | 41 | 41 | 47 |
| Blur | 14 | 13 | 13 | 13 | 14 | 13 | 3 | 14 |
| Canny | 63 | 63 | 66 | 66 | 70 | 64 | 64 | 65 |
| Remove borders | 220 | 234 | 194 | 230 | 265 | 469 | 501 | 324 |
| Remove colours | 32 | 32 | 32 | 32 | 32 | 33 | 35 | 34 |
| Blocks | 63 | 62 | 64 | 64 | 66 | 63 | 70 | 62 |
| Low density regions | 56 | 45 | 41 | 43 | 38 | 40 | 79 | 80 |
| Cars found | 5 | 4 | 3 | 3 | 3 | 2 | 6 | 6 |
| Cars on image | 6 | 5 | 4 | 3 | 3 | 2 | 7 | 6 |
| Car count based on region size | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| % | 83% | 80% | 75% | 100% | 100% | 100% | 86% | 100% |
| TOTAL (ms) | 552 | 559 | 513 | 552 | 590 | 783 | 862 | 688 |

% AVG: 91%. Block steps: 2; frame resolution: 1920x1080; car region size: [15000 - 80000].
