FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO

System for manipulation of data gathered during the 3D tracking process of foam blocks

Paulo Miguel Curto Marú

Mestrado Integrado em Engenharia Electrotécnica e de Computadores

Supervisor: Marco Silva

FEUP Supervisor: Aníbal Matos


Resumo

Ao longo da história, o homem sentiu a necessidade de continuar a evoluir de modo a maximizar o rendimento dos recursos. Do caçador-coletor até ao homem civilizado dos nossos dias, houve um grande salto que se deve apenas à necessidade. Os avanços em ciência e tecnologia permitiram uma sociedade cada vez mais eficiente. Em particular, o setor industrial que tem foco na produção e pretende minimizar o desperdício, automatizar a produção e, portanto, aumentar o lucro.

A análise e manipulação de dados é uma ferramenta eficaz para resolver muitos problemas relacionados com a indústria, mas também pode melhorar um sistema que não tinha soluções anteriormente, principalmente devido à falta de tecnologia específica.

Esta dissertação aborda esse problema ao analisar um conjunto de dados totalmente novo que está a ser adquirido através de uma nova ferramenta que é capaz de retornar coordenadas tridimensionais de pontos detectados por uma combinação de laser e câmara. Esta implementação de uma nova tecnologia está a permitir informações mais complexas e detalhadas, garantindo fiabilidade em cada etapa do processo.

O tema de estudo será focado essencialmente na representação 3D. É um dos assuntos mais complexos desta dissertação, dado que foram feitos poucos esforços para melhorar esta tecnologia a nível de programação. A otimização também é um foco principal, uma vez que garante que o processo seja melhorado e, portanto, cumpre o objetivo principal de qualquer empresa industrial.


Abstract

Throughout history, man has felt the need to keep evolving in a way that maximises the return on resources. From the hunter-gatherer up to the civilised man of our days, there was a huge leap that is due to need itself. The advances in science and technology allowed for an increasingly efficient society. In particular, the industrial sector focuses on production, which means minimising waste, automating production and therefore increasing profit.

Data analysis and manipulation is a powerful tool for solving many industry-related issues, but it can also enhance a system that previously had no solution, mainly due to the lack of a specific technology.

This dissertation addresses this problem by analysing a whole new data set that is being acquired through a new tool able to return three-dimensional coordinates of points detected by a combination of laser and camera. This implementation of a new technology is allowing more complex and detailed information, granting reliability in each step of the process.

The subject of study will essentially focus on 3D representation. It is one of the most intricate subjects of this dissertation, given that little effort has been made to improve this technology at the programming level. Optimisation is also a main focus, since it guarantees that the process is improved, thereby fulfilling the main goal of any industrial company.


Acknowledgements

First of all I thank FEUP for providing the conditions for me to grow as a person and as an engineer, allowing me to reach my goals in life.

I thank professor Aníbal Matos for supervising this project, clarifying all my questions and keeping me focused.

I thank Flexipol for receiving and including me in the company, allowing me to gain experience in the industry. A special thanks goes to Mauro Pereira, Marco Silva and Fernanda Carvalho.

To all my closest friends, thank you for the fun moments; they were important in keeping me in the mindset needed to continue and complete this period of my life. A special thanks to Tiago Seabra, with whom I learnt a lot during the three years of our journey together, especially the last year, as we shared all the projects and grew as engineers.

Rita Sousa, you were my biggest support during these five years. You kept pushing me and making sure that I stayed on the right path. You are the reason for my success. For all that, I thank you!

Finally, I thank my family, who provided all the conditions that I ever needed, gave me my education and always pushed me into the intellectual world since they realised my potential.

Paulo Marú


“Strive not to be a success, but rather to be of value.”

Albert Einstein


Contents

1 Introduction
  1.1 Context
  1.2 Objective
  1.3 Structure

2 Literature Review
  2.1 Acquisition software and hardware
  2.2 3D representation
    2.2.1 Delaunay triangulation
    2.2.2 Poisson
  2.3 Point Cloud Library
    2.3.1 A History of PCL
    2.3.2 PCL structure
  2.4 Software
    2.4.1 PCL requirements
    2.4.2 Microsoft Visual Studio
    2.4.3 Robot Operating System
    2.4.4 LabVIEW
    2.4.5 Matlab
  2.5 Databases
    2.5.1 Relational model
    2.5.2 Object-oriented model
  2.6 Discussion

3 Acquired data analysis
  3.1 Retrieving acquired data
  3.2 Data analysis from block measures
    3.2.1 Data organisation for 3D representation
    3.2.2 Data analysis for the optimisation method
  3.3 Discussion

4 Software developed: SiMBlo
  4.1 Software structure
  4.2 Implementation of 3D visualisation of a block
  4.3 Implementation of the optimisation algorithm

5 Results
  5.1 Results
  5.2 Validation process

6 Conclusion
  6.1 Conclusions
  6.2 Future Work

References


List of Figures

2.1 Diagram of a bun measurement
2.2 Example of a Delaunay triangulation
2.3 Illustration of Poisson reconstruction
2.4 Most relevant modules in PCL
2.5 Hierarchical model diagram
2.6 Relational model concept
2.7 Hypothetical relational database tables
2.8 Example of the object-oriented model
3.1 Section of the text file which results from the acquisition software
3.2 Distance between consecutive profiles
3.3 Number of points per profile
3.4 Example of a profile reading error
4.1 Example of a raw point cloud from a sample block
4.2 Point cloud example after adding the missing faces of the block
4.3 Surface reconstruction using PCL's Poisson algorithm
4.4 Scanned profile with areas of allowance
5.1 Full length point cloud representation
5.2 Poisson method in an unprocessed point cloud
5.3 Block representation after voxel grid filtering


List of Tables

2.1 Example of SQL queries


Abbreviations

2D    Two-Dimensional
3D    Three-Dimensional
DB    Database
PCL   Point Cloud Library
MLS   Moving least squares
MSVC  Microsoft Visual C/C++
MSVS  Microsoft Visual Studio
IDE   Integrated development environment
API   Application programming interface
ROS   Robot Operating System
DBMS  Database management system
SQL   Structured Query Language
ANSI  American National Standards Institute
ISO   International Organization for Standardization
JSON  JavaScript Object Notation


Chapter 1

Introduction

This chapter gives a description of the project. Section 1.1 presents a global analysis of the problem and its context. The objectives are addressed in section 1.2. Finally, the document structure is described in section 1.3.

1.1 Context

This dissertation was carried out in a company environment at Flexipol - Espumas Sintéticas, S.A. This company works essentially in the production of synthetic foam for different purposes. The process has multiple stages. Initially, a foam block 60 metres long is produced. Since it is a chemical process in which great material expansion occurs, most blocks end up with an undesired shape.

Therefore, the following step consists of cutting the block to the ideal size and shape. However, this produces unwanted waste.

To try and correct this issue or even improve the chemical process of foam production, this company has bought and put to use an acquisition tool called Bun Scanner. It reads three-dimensional points on the surface of the block and returns them through a specific file type.

Nevertheless, despite being in use, this system is not making the process any more efficient than it was before. To do so, we must use and manipulate the acquired data, creating an algorithm to optimise the process.

This algorithm consists of a 3D visual representation of the block, to make it easier to spot deformities, and, based on those, the definition of the points at which the block must be cut. Based on these changes alone and the statistics they can provide, a considerable improvement in foam production is expected, as well as a reduction of the company's losses.

1.2 Objective

Bearing in mind that, as an industrial company, the ultimate purpose is always to increase income, the objective is to generate more profitable blocks. This can be achieved either by maximising the volume of the block after its cut or by improving production in the early stages, producing a block free of flaws and malformations.

The second option, although it looks like the better approach in the long term, has some issues. To begin with, being a chemical process, there will always be multiple variables which are hard to control, and this causes inconsistency. For instance, a slight difference in temperature can result in completely different blocks. This would therefore be an improvement that would take a long period of time and, although it can be pursued within the company, the objective of this dissertation is to give a short-term response.

This leads to the first option, where the goal is to implement a system that manipulates and processes the acquired data and returns specific instructions on where the block must be cut. This step has the purpose of maximising the volume of the block after cutting. Besides, the system should allow the 3D visualisation of the small part of the block considered to have the most pronounced deformity.

This system should help in the cutting stage: by maximising the volume we are reducing waste and therefore reducing the losses.

In order to reach the desired objective, the following targets were set.

• Transfer the acquired points to a database;

• Read the points from the DB;

• Determine the optimum cutting points, which will be given by the points with the largest flaws;

• Create a visual representation in 3D of a section of the block that contains the points described above.

1.3 Structure

The rest of this document is organised as follows. The literature review is presented in chapter 2, where the tools and requirements for this project are explained, together with a brief discussion of whether they are a good choice in the context of this dissertation.

Chapter 3 presents a detailed analysis of the acquired data and its quality, specifying the problems resulting from that analysis and presenting solutions.

The implementation of this project is presented in chapter 4. The structure of the project is described and the methods and algorithms implemented are presented in detail.

Chapter 5 presents the results obtained and the validation process. Chapter 6 presents the final conclusions and future directions for the project.


Chapter 2

Literature Review

This chapter presents the subjects related to the study of the 3D representation of objects obtained by laser and camera scanning.

First, a section introduces the hardware used in the acquisition, detailing its working method and its uses. Issues related to different types of databases are also discussed, as well as optimisation methods. Finally, the tools and software that were considered and can be used to approach the problem are presented.

2.1 Acquisition software and hardware

The software and hardware used for the acquisition are from Range Metrics. The system is called Bunscanner and its main purpose is to serve the foam industry by offering a reliable capture of the block.

Figure 2.1 presents a diagram of an actual measurement. The Bunscanner has two lasers and two cameras. The cameras are programmed to search for the laser and, through an intricate method of calibration between both lasers and cameras, the measurement is achieved. The foam conveyor is considered the zero of the Y axis, and the pole of one of the cameras is considered the zero of the X axis.

With the calibration in place, the acquisition is done in "slices", or profiles. Each profile contains up to 1000 points. The number of profiles varies with the length of the foam block being measured. In this project, the number of profiles was approximately 1800, since the blocks were 60 metres long.

This measurement method is capable of providing complete side and top profiles, since the laser sweeps the side of the block up to the opposite edge, on both sides. With all of the surface covered, the resulting shape information is highly accurate.

The software integrated in this solution for foam industries gives the operator a real-time view of the profiles being scanned. It also creates a file that contains all the data. This data is then filtered to eliminate noise and the reflection points that occur in every measurement.

Figure 2.1: Diagram of a bun measurement

However, to read this data, the software provides a way to convert it into an organised text file. This file is the starting point of this project, since it contains the data that is going to be processed. Apart from that, there is a setup menu for editing parameters, testing the cameras and calibration. This means the user can frequently verify and re-calibrate the hardware if needed.

2.2 3D representation

With the great technological advances taking place, it is possible, through a scanning process, to obtain a large number of points that define an object's surface. The resulting data of this process is called a point cloud and it contains all the gathered points. Point clouds are particularly interesting in areas such as 3D modelling and the creation of mechanical parts, but also have broad application in the visualisation, animation and customisation of objects.

Although it is possible to visualise the point cloud itself and even recognise what is being illustrated, most applications require a well-defined surface, which also contributes to easier recognition of the representation. For this, there are several surface reconstruction techniques, for instance alpha shapes [1], triangulation methods, marching cubes [2], MLS [3], grid projection [4] or the Poisson method. It is important to understand and study these algorithms in depth in order to make a wise choice when it comes to actually reconstructing a surface.

2.2.1 Delaunay triangulation

Most triangulation surface reconstructions are based on the Delaunay triangulation algorithm, developed by Boris Delaunay [5].

This algorithm creates triangles from a set of points. The idea is based on circumcircles: for a given point set, no point should lie inside the circumcircle of any triangle. To achieve this, the algorithm maximises the minimum angle of the triangles, creating triangles whose circumcircles contain no other point. If four points lie on the same circumcircle, the algorithm splits the quadrilateral into two triangles.
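As an illustration of the circumcircle criterion, the following generic sketch shows the standard in-circle determinant test for 2D points. It is a textbook predicate written for this explanation, not code from the project, and it ignores numerical robustness concerns.

// Generic in-circle test used by Delaunay algorithms: for a triangle (a, b, c)
// given in counter-clockwise order, returns true if point d lies strictly
// inside its circumcircle (i.e. the triangle violates the Delaunay condition).
struct Pt { double x, y; };

bool inCircumcircle(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    const double ax = a.x - d.x, ay = a.y - d.y;
    const double bx = b.x - d.x, by = b.y - d.y;
    const double cx = c.x - d.x, cy = c.y - d.y;
    // Determinant of the standard 3x3 in-circle matrix, expanded along its last column.
    const double det =
        (ax * ax + ay * ay) * (bx * cy - cx * by) -
        (bx * bx + by * by) * (ax * cy - cx * ay) +
        (cx * cx + cy * cy) * (ax * by - bx * ay);
    return det > 0.0;  // positive determinant: d is inside the circumcircle
}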

There are several algorithms for computing this triangulation. One example is the flip algorithm, which flips the edges of triangles that do not satisfy the Delaunay condition and keeps doing this until all triangles are Delaunay triangles. This algorithm has the disadvantage of not guaranteeing convergence.

Figure 2.2 presents an example of a Delaunay triangulation, in which all circumcircles are empty in their interiors.

Figure 2.2: Example of a Delaunay triangulation

2.2.2 Poisson

Reconstructing 3D surfaces from point samples is a well-studied problem in computer graphics. It allows the fitting of scanned data or the filling of surface holes [6]. This method was proposed by Michael Kazhdan in 2006 and its focus is to create a surface reconstruction from oriented points. It uses the Poisson equation to compute an indicator function that is one inside the surface and zero outside.
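In the formulation of [6], the oriented samples define a vector field \vec{V} that approximates the gradient of a smoothed indicator function \tilde{\chi} (one inside the surface, zero outside), so the reconstruction reduces to solving the Poisson problem

\Delta \tilde{\chi} = \nabla \cdot \vec{V}

after which the surface is extracted as an isosurface of \tilde{\chi}.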


By extracting an appropriate isosurface, we obtain the reconstructed surface. To get a smoother surface and a greater level of detail, this method uses a combination of global and local fitting. The final reconstruction is also meant to be watertight, that is, to contain no holes in the surface, provided the points are correctly oriented towards the outside of the surface. Figure 2.3 illustrates the basis of the algorithm.

2.3 Point Cloud Library

Algorithms for the manipulation and visualisation of point clouds, as well as the surface reconstruction methods mentioned above, are well documented and, with the emerging need to use them more frequently, the Point Cloud Library [7] was born.

It is a standalone, open-source project for 2D and 3D image and point cloud processing. At its core there are multiple modules, each containing several algorithms to cover users' needs. It is written in C++, one of the most used programming languages in the world [8].


2.3.1 A History of PCL

PCL became a first-class citizen project in March 2011, when Radu B. Rusu decided to create a system for 3D point cloud processing. The first algorithms were developed by independent research groups worldwide and were slowly turned into code by the PCL community.

It is still being developed, with frequent updates and an increasingly organised structure and documentation, making it an appealing resource for working with this type of data.

2.3.2 PCL structure

PCL's structure is divided into modules. Each module has multiple algorithms that perform tasks within the module's subject. Figure 2.4 presents some of the most relevant modules.

Figure 2.4: Most relevant modules in PCL

For the context in which this project is inserted, the modules that stand out the most are:

• Filters - allows for noise removal, down-sampling, outlier removal or even cropping. This module is very useful when dealing with unorganised point clouds. By down-sampling with some methods, the points that remain stay at similar distances from each other, producing a result similar to an organised point cloud.

• Surface - this is a very important module due to its ability to build a surface from a point cloud. Most algorithms used in this library are constantly being updated for optimisation, ensuring the best possible performance. Although it contains a large variety of algorithms, most are a specific application of one method amongst many others that could have been implemented, reflecting the personal choice of the developer.

• Visualisation - contains the necessary methods to visualise a point cloud. In addition, it allows the user to personalise the point cloud, by colouring it or changing the points' opacity and size. It can also be used to create basic 3D shapes like cubes or spheres.

The remaining modules also play a critical role, as they are used by the modules mentioned above. For instance, the Poisson algorithm requires the kdtree module for searching and partitioning. Other modules have no impact on this project and are therefore not considered.

2.4 Software

The choice of software is an important step of this dissertation. In order to create a program able to correctly manipulate the acquired data, the software must be capable of handling that specific type of data and of performing robustly. Given the strong focus on 3D representation, the software should also be appropriate for this goal.

2.4.1 PCL requirements

If the aim is to use PCL, we must take its requirements into consideration. First of all, PCL has several dependencies, meaning that this software must be present before PCL can be installed. To make it friendly and easy for new users, an all-in-one installer is provided, containing the software needed for PCL to be used in a project.

PCL can also be installed by downloading the dependencies separately, keeping in mind that this method requires a greater understanding of the software versions and of which ones to download.

However, compiling PCL and adding it to a project demands another toolkit: CMake.

2.4.1.1 CMake

CMake is an open-source system that uses simple configuration files, called CMakeLists.txt, to generate standard build files, for instance a project/solution for MSVC [9]. CMake can generate a native build environment that will compile source code, create libraries, generate wrappers and build executables in arbitrary combinations.

CMake supports in-place and out-of-place builds, and can therefore support multiple builds from a single source tree. It also supports static and dynamic library builds. Another feature of CMake is that it generates a cache file that is designed to be used with a graphical editor. For example, when CMake runs, it locates files, libraries, executables and may encounter optional build directives.


2.4.2 Microsoft Visual Studio

MSVS is an IDE from Microsoft. Having been developed by Microsoft, it is the ideal tool to develop programs that will be used on the Windows operating system; for example, Visual Studio uses Microsoft software such as the Windows API and Windows Forms.

MSVS comes equipped with a code editor, including a code completion component, and a code refactoring tool. It includes an integrated debugger that works both at machine level and at source level.

Visual Studio supports 36 different programming languages and allows the code editor and debugger to support nearly any programming language, given that a language-specific service exists. Some examples of the supported languages are C, C++, VB.NET, C#, F# and TypeScript. Support for other languages such as Python, Ruby, Node.js, and M among others is available via language services installed separately.

Although the Professional and Enterprise versions come at a monetary cost, Microsoft provides a free version, the Community edition. Even though this version cannot be used in some scenarios, it is still a viable tool. Taking into account that it is possible to create a project with CMake for MSVS including PCL, it becomes clear that this is the right solution for a Windows user.

2.4.3 Robot Operating System

ROS is a framework for writing robot software [10]. It contains a set of tools and libraries aiming to simplify the complex task of creating robust behaviour across a variety of robotic platforms.

Despite being robotics-oriented software, PCL can be included in a ROS project using CMake to generate the needed files. This makes ROS a good option for a Linux user, given that it only runs on this operating system. The ROS website also provides numerous tutorials for new users, making it an even more appealing option.

Alongside this, there is very well structured documentation, making it easy to explore the software's features.

2.4.4 LabVIEW

LabVIEW is engineering software that simplifies the development of new systems due to its graphical programming. This language, known as G, avoids long lines of code and is very similar to our thinking process, making programming more intuitive and faster to work with, and overall increasing productivity.

The disadvantage of LabVIEW is that it is a paid product. This represents an obstacle, since there are no licences for this software in the dissertation environment.

Nevertheless, if the possibility of buying a licence existed, this software contains plenty of modules capable of presenting 3D objects and processing point clouds, which would make it very useful for this project.


2.4.5 Matlab

Matlab stands for matrix laboratory and was intended primarily for numerical computing. The focus of Matlab's developers, MathWorks, was to implement an easy-to-use language capable of matrix manipulation, function plotting and the implementation of algorithms. It has a large built-in set of documentation with examples and tutorials, detailing how to use each function.

Although the main goal is to provide a good numerical tool, Matlab also comes with a visualisation toolkit. It contains multiple methods for image processing and even surface reconstruction algorithms, such as Delaunay triangulation.

Matlab is a powerful tool and can be considered for this project, since we have previous experience using it and it can fulfil the dissertation's purposes.

2.5 Databases

A database is any organised collection of data. Its purpose is to manage large amounts of data in a structure that allows simple searches for elements. For companies, these data collections are vital, since they are the main component of the information system. This structure has stayed the same for many years, proving it is a powerful and reliable tool.

The software used to manage such databases is called a DBMS. Some examples of DBMS are Oracle, Microsoft Access or Microsoft SQL Server. A DBMS is a collection of programs that facilitate the access, retrieval and security of databases; basically, it is the tool through which the user operates on the database.

There are different models of databases, all with the same objective of gathering information in an organised manner. The first models were navigational databases, which needed a reference to find an object. Since these were used at a time when databases were stored on tapes, this model allowed reaching the wanted element without having to read every single one. Examples of such models are the hierarchical and the network models. Figure 2.5 represents the hierarchical model diagram.

Figure 2.5: Hierarchical model diagram

Currently, the most used model is the relational one. Its predecessor is the flat model, which consists of two-dimensional data arrays. In this model it is considered that all elements in a certain column have similar characteristics, and all elements in a certain row are assumed to be related to one another.

Figure 2.6: Relational model concept

2.5.1 Relational model

The relational model was first introduced by E. F. Codd [11]. This model is an approach to managing data using a structure consistent with first-order logic, where all data is represented in n-tuples, i.e. finite ordered lists of elements, and these are grouped in relations.

The relational model's main idea was to describe a database as a collection of predicates over a finite set of variables, describing constraints on the possible values. Figure 2.6 represents the relational model concept as described by E. F. Codd.

A database created based on this model is called a relational database. Given that a database needs to deal with issues that were not taken into account by the original relational model, it was not possible to implement a relational database exactly as conceived. For instance, computers organise data by types: there are integers, chars, booleans and others. The relational model does not specify which data types can or cannot be supported.

Despite these ambiguities, relational databases are built around the table format. What was considered a relation translates into a table, the n-tuples are the rows of the table and an attribute is a column of the table. To get to a specific value, both the row and the column must be specified. Since the body of the table is defined by its attributes, there are also update operators for inserting or removing rows. In figure 2.6 we can understand this transition from the relational model to the relational database. It is worth mentioning that, in both cases, the information is not supposed to be ordered.

At this point, relational databases were just theory, because they lacked a language that could cope with the model. SQL came to fill that gap. It was developed by IBM with the objective of validating the relational model. Due to its capability to fulfil this objective, the language started to be used by other vendors, which led to its standardisation by ANSI and ISO.

Table 'A' has two columns, C1 and C2, and two rows: (a, b) and (c, d).

Query                              Result
SELECT * FROM A                    C1, C2: (a, b), (c, d)
SELECT C1 FROM A                   C1: (a), (c)
SELECT * FROM A WHERE C1 = a       C1, C2: (a, b)

Table 2.1: Example of SQL queries

Nowadays, SQL is the most used language for database operations and has multiple DBMS built around it. Table 2.1 presents examples of queries in this language and their results, keeping in mind that SQL queries are built with the following keywords:

• SELECT - retrieves data from one or more tables, making sure the DB is not altered. SELECT followed by an asterisk returns all the columns of the query;

• FROM - specifies the table from which to retrieve the data;

• WHERE - specifies one or more conditions to retrieve the data;

• GROUP BY - groups the retrieved data;

• ORDER BY - orders the retrieved data;

• DISTINCT - guarantees that the retrieved data is not repeated.

When making restrictions or conditions, there are also the logical operators AND, OR and NOT, and the relational operators, for instance <, > or =. For inserting, updating or deleting rows in a table there are also the commands INSERT, UPDATE and DELETE, which are used instead of SELECT.

There are several DBMS that handle SQL; some were mentioned above, like Microsoft SQL Server or Microsoft Access, but there are many others, for instance PostgreSQL or MySQL.

In addition to all the above, the tables created for such a database model generally have some special characteristics. The most important is the primary key, which guarantees that no two rows share the same value in that column. The foreign key establishes relations between two or more tables. Figure 2.7 shows an example of the relation between two tables, where the primary key is also the foreign key.

Figure 2.7: Hypothetical relational database tables

2.5.2 Object-oriented model

With the object-oriented programming concept gaining popularity, that paradigm was also applied to databases. In this model, each object contains a defined set of parameters. This is a major advantage, since it is possible to create multiple objects of a certain type and the parameters will already be present in each new object. Another advantage is that this model is built to grow horizontally, that is, we can add a new parameter to an object type and all objects of that type will be updated to include it. The model still ensures relations between different objects, given that we can create a parameter and pass its value to another object. In figure 2.8 we can observe an example of this model for two objects that have the Activity Code as the relation parameter.

The databases using this model are designed to work with object-oriented programming languages such as Python, Java, C# or C++. This is a great advantage given the growing popularity of some of these languages, for instance Python.

Most of these databases offer a query language, allowing objects to be found using a declarative programming approach. Accessing data using this model is faster because an object can be retrieved directly, without a search.

Finally, this model is ideal for objects with large numbers of parameters, since it keeps its efficiency while dealing with massive data sizes per object.

Figure 2.8: Example of the object-oriented model

2.6 Discussion

For the 3D representation, the choice of using PCL is clearly the best option. It is a well developed library and contains all the methods that are going to be necessary to implement this part of the algorithm.

The surface reconstruction algorithm chosen is the Poisson method. In PCL it is one of the most developed and frequently updated methods, and the fact that it produces a watertight surface also contributed to this choice.

For the software, we could use either ROS or MSVS, depending on the operating system available. Since Windows was the operating system provided by the company, MSVS had to be the choice. This, together with the usage of PCL, required the usage of CMake, without which it would not have been possible to build the desired files so easily.

When deciding on a database model, the initial choice was to use the relational model along with MySQL as the DBMS, since this was a database explored before and faster progress could have been made with it. However, the standard within the company is to use an object-oriented database based on JSON documents, MongoDB.


Chapter 3

Acquired data analysis

Before the start of the project, it is important to study the best way to collect the desired data. To achieve the objective of this dissertation, it is also vital to understand how our data behaves. Knowing this, we can perceive whether there are limitations or the data is within what is expected.

This chapter addresses the methodologies used for gathering and retrieving the data acquired through the Bunscanner in section 3.1. Section 3.2 provides a detailed analysis of the data output and of how it will affect the project.

3.1 Retrieving acquired data

As mentioned in section 2.1, the acquired data is only readable as a text file. This file has a well-defined structure, making it possible to create an algorithm to retrieve its data for further processing.

Initially, the algorithm focused only on retrieving points on all three axes. To achieve this, the text file structure was analysed and it is clear, by observing part of a file in figure 3.1, that each profile is introduced by the word Profile and the lines below contain the actual values to be retrieved. The algorithm to read and retrieve the desired data, that is, the X, Y and Z coordinates, was as follows.

1. Read the file line by line until the word Profile is reached.

2. After finding this word, read two lines below and retrieve that data into an array. This retrieves the Z coordinate of each profile.

3. Read nine lines below the word Profile and retrieve all the data into another array, until the word Profile is reached once more and the cycle restarts. This step retrieves both the X and Y coordinates.

Simple array manipulation then allowed the X and Y coordinates to be separated into their own arrays; this way, all points were separated by axis and each axis had its own array.
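A minimal sketch of this parsing routine in C++ is given below. The file layout (the keyword Profile, the Z value two lines below it, the X/Y values starting nine lines below) follows the description above; the assumption that X and Y come interleaved in a single stream is ours, for illustration only.

// Sketch of the text-file parser described above. Offsets and the X/Y
// interleaving are assumptions based on the description, not the actual format.
#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Profile {
    double z = 0.0;            // Z coordinate of the whole profile (slice position)
    std::vector<double> x, y;  // point coordinates within the profile
};

std::vector<Profile> parseScanFile(const std::string& path) {
    std::ifstream in(path);
    std::vector<std::string> lines;
    for (std::string line; std::getline(in, line);) lines.push_back(line);

    std::vector<Profile> profiles;
    for (std::size_t i = 0; i < lines.size(); ++i) {
        if (lines[i].find("Profile") == std::string::npos) continue;

        Profile p;
        if (i + 2 < lines.size()) p.z = std::stod(lines[i + 2]);  // Z two lines below

        std::vector<double> raw;  // X and Y values, assumed interleaved
        for (std::size_t j = i + 9;
             j < lines.size() && lines[j].find("Profile") == std::string::npos; ++j) {
            std::istringstream ss(lines[j]);
            for (double v; ss >> v;) raw.push_back(v);
        }
        for (std::size_t k = 0; k + 1 < raw.size(); k += 2) {  // split into separate arrays
            p.x.push_back(raw[k]);
            p.y.push_back(raw[k + 1]);
        }
        profiles.push_back(std::move(p));
    }
    return profiles;
}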


Figure 3.1: Section of the text file which results from the acquisition software

Although this process had no problems retrieving data, it was not practical, since the files had to be specified in the software and could only be handled one by one. To solve this, a member of the company provided software that automatically retrieves the data into a database.

The database was structured in two collections: one keeping the general information of each foam block, and the other containing all the profiles of the blocks. In order not to repeat processing of the same block, a Boolean variable was added to each block's document; it is false by default and becomes true after that block has been processed.

With this change, the algorithm for data retrieval also changed. The major change is querying the database for the profiles of the unprocessed blocks; the rest of the algorithm remains similar.
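As an illustration only, a query of this kind could look as follows with the MongoDB C++ driver (mongocxx); the database, collection and field names (foam, blocks, processed) are hypothetical and not the company's actual schema.

// Sketch: fetch the documents of blocks that have not been processed yet.
// Database, collection and field names are hypothetical.
#include <iostream>
#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <bsoncxx/json.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main() {
    mongocxx::instance instance{};  // driver bootstrap (once per process)
    mongocxx::client client{mongocxx::uri{"mongodb://localhost:27017"}};
    auto blocks = client["foam"]["blocks"];  // hypothetical database and collection

    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    // Query only the blocks whose Boolean flag is still false.
    auto cursor = blocks.find(make_document(kvp("processed", false)));
    for (auto&& doc : cursor) {
        std::cout << bsoncxx::to_json(doc) << "\n";
        // ...retrieve the profiles of this block, process it, and then flip
        // the "processed" flag to true with an update on the same document.
    }
}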

3.2 Data analysis from block measures

Having a data collection method, the next obvious step was to observe its behaviour. It is important to understand how the data is being acquired in order to realise whether it is ready for the next stage of this project, which is to process it for a 3D visual representation and for the optimisation of the volume after cutting.

For the 3D representation, our main focus will be the organisation of the data, that is, all points should be at approximately the same distance from their neighbouring points in all three axes. This is important because the surface reconstruction algorithms were created for organised point clouds, as stated in section 2.3.

When considering the data for the optimisation, we must make sure that none of the profiles contains misreadings or points completely outside of what is considered normal for a profile. The software should filter all these unwanted points, as stated in section 2.1, but it is still necessary to verify that it is performing as intended.

Another issue that must be taken into account is the actual position of the foam block when the reading takes place. Since all the points are measured in relation to a fixed reference, the position of the block on the conveyor can cause problems: there is no fixed position for the block, which means that its left or right side can vary in position relative to the fixed reference. This specific problem will be discussed in greater detail, and a solution presented, in chapter 4.

3.2.1 Data organisation for 3D representation

When approaching this subject, what we should expect from the data is, at least, the same number of points per profile, approximately the same distance between consecutive profiles, and the same distance between a point and its neighbours.

The first analysis focused on the distance between consecutive profiles. To calculate this distance, we simply subtract the Z coordinates of consecutive profiles. Equation 3.1 gives the distance between two profiles, where i is the profile number, d is the distance and Z is the coordinate on the Z axis.

d_{P_i} = Z_{P_{i+1}} - Z_{P_i}    (3.1)

After exporting this information, a histogram was created to facilitate the analysis. In figure 3.2 we can observe one example of the distances between consecutive profiles from a randomly chosen block.

Figure 3.2: Distance between consecutive profiles

It is clear from figure 3.2 that there is a large variety of distances between profiles. This can be a source of problems when using an algorithm for surface reconstruction, mainly due to the largest distances, which go up to 340 mm.

To solve this issue, the ideal solution would be to use linear interpolation to make the profile distribution even throughout the entire length of the block. This would be done by applying linear interpolation between two points of consecutive profiles to find a new point, repeating this process for all points in the profiles, and for all profiles. Although this method looks ideal, for it to work the number of points per profile must be the same, otherwise there will be points that are not interpolated. So, we analysed the number of points per profile, and the results are presented in figure 3.3. From this figure, we understand that it is not easy to put this method into practice.

Figure 3.3: Number of points per profile

The next proposed solution was to rearrange the position of each profile so that the distance between consecutive ones is the same. The equation used in this method is presented below.

d_c = B_{len} / P_{total}    (3.2)

where d_c is the calculated distance, B_{len} is the block length and P_{total} is the total number of profiles.

Although this method is easy to implement, it causes an obvious issue: the block becomes distorted after rearranging the distance between profiles without changing the rest of the block. It was nonetheless the method used, since the distortion does not interfere with the shape and flaws of the block, which are the focus of the visualisation.
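A minimal sketch of this rearrangement, reusing the Profile structure sketched in section 3.1 and assuming the profiles are stored in order along the block:

// Place the profiles at equal distances along the Z axis, following
// equation 3.2: d_c = B_len / P_total.
#include <cstddef>
#include <vector>

void rearrangeProfiles(std::vector<Profile>& profiles, double blockLength) {
    if (profiles.empty()) return;
    const double dc = blockLength / static_cast<double>(profiles.size());
    for (std::size_t i = 0; i < profiles.size(); ++i)
        profiles[i].z = static_cast<double>(i) * dc;  // evenly spaced slice positions
}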

The most important issue lies in the actual gaps between readings. This means that, when there is a gap of 300 millimetres, we will miss any flaw that might exist in that zone of the block. Since this is caused by the company's hardware and other factors that we cannot control, these problems will not be solved here.

3.2.2 Data analysis for the optimisation method

For this method to result in a sound algorithm that tells its user the optimal points for cutting a certain block, we must make sure that there are no errors in any of the readings from the acquisition software.

For this analysis we used the software of the Bunscanner itself, which allows every profile to be seen in a 2D plot. After observing the profiles of just one block, it was clear that either the first or the last profiles could come with some error. Figure 3.4 shows an example of an error that can occur.

Figure 3.4: Example of a profile reading error

This happens when the laser light slightly touches the back or the front of the block, making these profiles deficient and faulty.

There are two possible solutions that could be implemented further along the project. The most obvious is not to consider the first and last profiles when implementing the algorithm for the optimisation of the block's volume. The other solution is to consider a certain allowance zone, outside of which no points are considered valid.

The option we took was to mark zones in the profiles and consider only the points contained in those zones. These allowances were defined based on a document provided by the company, which describes the maximum and minimum values of width and height for each type of foam.


3.3 Discussion

With knowledge of the flaws and imperfections in the acquisition stage of the project, we could more easily establish a path to eliminate the issues and provide the intended output of the project.

This was one of the most critical parts of this project. Without this study, many of the results would not have been as intended, and the reasons behind poor results would have remained unknown. The analysis before implementation granted the best outcome, with logical reasoning behind every decision.


Chapter 4

Software developed: SiMBlo

This chapter focuses on the software developed, which we called SiMBlo. The structure of the project will be detailed, and the methods used to implement the 3D visualisation of the block and the volume optimisation will be described.

This chapter builds on the methods mentioned in chapter 3 and provides greater detail in the explanation of the solutions that were used.

4.1 Software structure

The structure of this project is divided into four stages, which result in one application that can be used by any operator with a computer. The stages are the following.

1. Retrieve data from a database where the blocks are saved.

2. Implement the algorithm to find the optimal cutting values for the volume optimisation.
3. Call the 3D visualisation software, which returns a PNG file of the faulty zones of the block.
4. Save the new measures and the file in the database.

It is important to understand that the visualisation part of the software was done in a language different from the rest of the application. For this reason, it must be added to the main application as a DLL, through which that part of the software is called.

This structure allows the software to be developed in a way that can be upgraded with more features. It also simplifies the coding that is needed, since there is a clear direction to follow.

4.2 Implementation of 3D visualisation of a block

The implementation of the 3D visualisation of a block was done using PCL as the base for this part of the project.


The first objective here was to represent a point cloud and use it as a starting point for the rest of the application. The visualisation module of PCL was used for this purpose. Once the data was gathered in arrays, these were used to create a point cloud. Point clouds have no unit, so the coordinates are relative; since all points were of the same order of magnitude, this raised no issues. Having filled the point cloud, a viewer was created to visualise it. The viewer receives the point cloud as input and represents it in relation to a reference. It is possible to adjust settings such as camera position, point size and background colour, among others. The profiles have no bottom and no front or rear faces, as can be seen in the example in figure 4.1, which uses a sample from a segment of one block.

Figure 4.1: Example of a raw point cloud from a sample block
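A minimal sketch of this step with PCL is shown below, assuming the coordinates have already been gathered into three arrays; names and parameter values are illustrative.

// Build a pcl::PointCloud from the X/Y/Z arrays and show it in a basic viewer.
#include <cstddef>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/visualization/pcl_visualizer.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr makeCloud(const std::vector<float>& xs,
                                              const std::vector<float>& ys,
                                              const std::vector<float>& zs) {
    auto cloud = pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>);
    for (std::size_t i = 0; i < xs.size(); ++i)
        cloud->push_back(pcl::PointXYZ(xs[i], ys[i], zs[i]));
    return cloud;
}

void showCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    pcl::visualization::PCLVisualizer viewer("Block view");
    viewer.setBackgroundColor(0.0, 0.0, 0.0);              // background colour setting
    viewer.addPointCloud<pcl::PointXYZ>(cloud, "block");
    viewer.setPointCloudRenderingProperties(
        pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, "block");  // point size
    viewer.addCoordinateSystem(100.0);                     // reference frame
    while (!viewer.wasStopped()) viewer.spinOnce(100);
}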

Since the objective is to represent the entire block, these points had to be added manually, without changing the shape of the block.

For the bottom of the block, the points were added by following a simple algorithm. First, the minimum Y coordinate is found by cycling through all the points and saving the minimum Y. Then, cycling through the points again, for each point a new point is added with the exact same coordinates except for the Y coordinate, which takes the minimum value saved before.
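A short sketch of this bottom-face step, operating on the cloud built above (illustrative only):

// Close the bottom of the block: find the minimum Y and, for every point,
// add a twin point at the same X/Z but with the minimum Y.
#include <algorithm>
#include <cstddef>
#include <limits>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

void addBottomFace(pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    float minY = std::numeric_limits<float>::max();
    for (const auto& p : *cloud) minY = std::min(minY, p.y);  // first pass: minimum Y

    const std::size_t n = cloud->size();
    for (std::size_t i = 0; i < n; ++i) {                     // second pass: duplicate at minY
        pcl::PointXYZ q = (*cloud)[i];
        q.y = minY;
        cloud->push_back(q);
    }
}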

For the extremes of the block, the implementation was similar to that of the bottom part. Since we have the Z coordinates of the first and last profiles saved in the Z array, we just need to create points across those planes. By cycling through the desired profile, for each X coordinate, points are created for all Y coordinates. This creates a rectangle that would go above the desired points in some parts of the block. To fix this, a restriction was applied to the Y axis so that it does not go above 85% of the height of the block. Figure 4.2 shows a visual representation of the point cloud for one example block.

Figure 4.2: Point cloud example after adding the missing faces of the block

Having completed the point cloud and made an accurate representation of a block, the goal was to create a surface for the entire block, assuring the best possible detail so that the basic shape of the block and its defects can be seen.

A surface reconstruction requires the use of normal vectors, given that the surface is based on the orientation of those vectors. In PCL there is a method for normal estimation that uses the k-d tree search method: it organises the points in a k-dimensional space, and points and their neighbours can then be searched for using that structure.

The previously chosen method was the Poisson algorithm included in PCL, which guarantees a watertight surface. The first step was to configure the settings for the Poisson algorithm as embedded in PCL. The parameters to be set were depth, which defines the detail level; iso divide, which helps reduce memory usage when running the program; solver divide, which also reduces memory usage but increases reconstruction time; samples per node, which can be used to handle noisy samples; and scale, which also helps in getting more detail. Finally, the input point cloud used as the base of the reconstruction must be given.

Although all settings and parameters followed what is suggested in the documentation, the reconstruction would always result in a bad shape: the surface could not join together and a large hole would appear. Since the surface reconstruction is based on the normal estimation, the decision was to create a viewer to verify the normal vectors and validate their orientation. In fact, the problem was in the orientation of the normal vectors, since they were pointing in opposite directions in a specific zone of the block.

The solution was to point the normal vectors manually towards a desired point, the objective being to have all the vectors pointing to the outside of the block. The idea was to use the centre of the block and then reverse the direction of the vectors by simply multiplying all normal vectors by -1. PCL has a method to find the centre of the block; this method was used and the solution was implemented. Figure 4.3 shows the result of the surface reconstruction based on the example point cloud presented in figure 4.2.

Figure 4.3: Surface reconstruction using PCL's Poisson algorithm
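A minimal sketch of this reconstruction step is shown below, combining the normal estimation (oriented outwards via the centroid and the -1 flip) with PCL's Poisson algorithm; the parameter values are placeholders, not the exact ones used in SiMBlo.

// Estimate normals, orient them outwards using the cloud centroid, and run
// the Poisson surface reconstruction. Parameter values are illustrative.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/centroid.h>
#include <pcl/common/io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/poisson.h>
#include <pcl/PolygonMesh.h>

pcl::PolygonMesh reconstruct(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    // Centre of the block, used as the normals' viewpoint.
    Eigen::Vector4f centroid;
    pcl::compute3DCentroid(*cloud, centroid);

    // Normal estimation with a k-d tree search structure.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(20);
    ne.setViewPoint(centroid[0], centroid[1], centroid[2]);  // normals point towards the centre...
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);
    for (auto& n : *normals) {                               // ...so flip them to point outwards
        n.normal_x *= -1.0f;
        n.normal_y *= -1.0f;
        n.normal_z *= -1.0f;
    }

    // Merge points and normals, then run Poisson with the parameters named above.
    pcl::PointCloud<pcl::PointNormal>::Ptr cloudWithNormals(new pcl::PointCloud<pcl::PointNormal>);
    pcl::concatenateFields(*cloud, *normals, *cloudWithNormals);

    pcl::Poisson<pcl::PointNormal> poisson;
    poisson.setDepth(9);             // level of detail
    poisson.setIsoDivide(8);         // memory reduction
    poisson.setSolverDivide(8);      // memory reduction, longer reconstruction time
    poisson.setSamplesPerNode(1.0);  // tolerance to noisy samples
    poisson.setScale(1.1);           // bounding-cube scale
    poisson.setInputCloud(cloudWithNormals);

    pcl::PolygonMesh mesh;
    poisson.reconstruct(mesh);
    return mesh;
}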

The final step is to retrieve a snapshot of the block. PCL also includes this feature, taking the snapshot and exporting it to an image file.

Even though this implementation fulfils the objective and takes approximately 15 seconds to run, it has to withstand a much larger data size when dealing with blocks of 60 metres. We should make sure that the time to run the application remains low (under 30 seconds) and that the achievable detail is similar to reality, as in the example shown above. After testing the point cloud alone with this block size, we realise that the time consumed increases to more than a minute. The level of detail also decreases dramatically, which was expected due to the proportions of the block and the relation between its length and its height/width.

The final decision was to represent only pieces of the block. The area determined to be the most faulty, with the biggest flaws, is the one processed by this algorithm. This changes the points received by this part of the application, but the rest of the process remains the same.

4.3 Implementation of the optimisation algorithm

For the optimisation algorithm, we must consider the problems mentioned in chapter 3. One of the most critical issues was the lack of a constant reference for the block's positioning on the conveyor. This leads to false information if we only look at the coordinate values retrieved from the scanner. For instance, if we have a point whose X coordinate is 305 mm and another at 310 mm, we cannot state which one is farther to the right (assuming the right side to be the positive side of X), because the block could simply be tilted towards the right of the conveyor; relative to the scanner's reference the point is farther, but in reality it can be the opposite.

The solution was to create a constant that could be defined based on the retrieved data. This constant is the centre of the profile and can be calculated using the retrieved points. The algorithm cycles through the points of the profile and calculates the average of the X coordinates of opposite points, for example the first and the last point, using the index values of the array in which the points are stored. After calculating the defined number of 200 averages, the average of all those resulting averages is calculated, yielding a single centre value. All points are then compared to this reference point, so we work with distances rather than coordinates.
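A sketch of this centring step, assuming each profile's points are stored in order from one side of the block to the other; the 200-pair count comes from the description above.

// Estimate the centre of a profile by averaging the X coordinate of opposite
// points (first with last, second with second-to-last, ...) over up to 200
// pairs, then averaging those averages into a single centre value.
#include <algorithm>
#include <cstddef>
#include <vector>

double profileCentre(const std::vector<double>& x, std::size_t pairs = 200) {
    if (x.size() < 2) return x.empty() ? 0.0 : x.front();
    const std::size_t n = std::min(pairs, x.size() / 2);
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += 0.5 * (x[i] + x[x.size() - 1 - i]);  // average of an opposite pair
    return sum / static_cast<double>(n);            // average of the pair averages
}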

For the areas of allowance, the distance to the centre point of the profile was also considered. For a better understanding of this algorithm, a diagram of these areas is shown in figure 4.4, which is an actual profile scanned by the Bunscanner.

Figure 4.4: Scanned profile with areas of allowance

The algorithm then had the following steps. Analysing profile by profile, the coordinates of the point that had the smallest distance to the centre and was contained within the area of allowance were saved, for both the right side and the left side of the block. After building two separate arrays of the smallest distances in each profile, we find the minimum distance in each array, which results in the points that are most pronounced towards the inside of the block, one distance for the left side and the other for the right side. These can occur in different profiles, that is, at different length values of the block. They are, however, the optimal cutting points for the vertical cuts, since they are the ones that go furthest inwards into the block. Alongside this process, the point that has the lowest Y coordinate value and is above the line highlighted in green in figure 4.4 is also saved in a different array. Again, from that array we take the minimum value, and that is the optimal cutting point for the horizontal cut.
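A simplified sketch of the vertical-cut selection described above is given below; the data layout (the Profile structure from section 3.1) and the allowance test (a band on the distance to the centre) are assumptions for illustration, and the horizontal cut would follow the same pattern on the Y coordinate.

// For each profile keep, on each side, the in-allowance point closest to the
// profile centre; the cut value per side is the most inward of those distances.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct CutResult { double left, right; };  // distances from the centre, per side

CutResult verticalCuts(const std::vector<Profile>& profiles,
                       double centre,
                       double minAllow, double maxAllow) {  // allowance band on |x - centre|
    CutResult cut{std::numeric_limits<double>::max(),
                  std::numeric_limits<double>::max()};
    for (const auto& p : profiles) {
        double left = std::numeric_limits<double>::max();
        double right = std::numeric_limits<double>::max();
        for (double xv : p.x) {
            const double d = std::fabs(xv - centre);
            if (d < minAllow || d > maxAllow) continue;  // outside the allowance area
            if (xv < centre) left = std::min(left, d);   // closest-to-centre on the left
            else             right = std::min(right, d); // and on the right
        }
        cut.left = std::min(cut.left, left);             // most inward point wins
        cut.right = std::min(cut.right, right);
    }
    return cut;
}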

After obtaining these values, we call the 3D visualisation algorithm, sending the obtained points as input; this algorithm then returns the image file as output.

All the obtained values and the image file are then sent to the database. A query is made to find the block with the same ID as the one that was processed, and a new object called measurements is inserted in that document, containing all the points found through the algorithm, the image file, the optimal width and the final volume of the block obtained using this method.


Chapter 5

Results

In this chapter, the results obtained at different stages of the project are presented and discussed. The idea behind the validation and the validation process itself are detailed, along with the issues that occurred at this stage of the project.

5.1 Results

The first important result obtained was the point cloud for the full-length block, which showed the severe lack of detail that could be achieved with this representation, alongside the very slow rendering of such a point cloud. This representation can be seen in figure 5.1. Looking at the point cloud, the lack of detail is clear: due to the dimensions of the block, we cannot perceive any flaw it may have. This was the main reason for changing the 3D view algorithm to show only a smaller portion of the block, since it was then possible to filter the point cloud to obtain fewer points and improve the rendering time.

After deciding that the block would be shown in smaller portions, it was necessary to solve the gaps between profiles that caused problems when applying the Poisson method of surface reconstruction, as can be seen in figure 5.2. The first attempt was to try some of the upsampling methods included in PCL. These methods resulted in more points and more rendering time, but none of them filled the gaps as intended. The next attempt was to filter the point cloud using a voxel grid method, which keeps a single representative point per voxel and should therefore organise the point cloud, reducing the number of points in the denser voxels; this way the gaps should no longer be an issue. When this method was implemented, we verified that a great number of points is lost and, with them, much of the detail they provided, as presented in figure 5.3.
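For reference, the voxel-grid filtering mentioned above is available in PCL's filters module and can be used along these lines; the leaf size is an arbitrary illustration, in the same units as the cloud.

// Down-sample a point cloud with a voxel grid: each occupied voxel is replaced
// by the centroid of the points it contains. The 10-unit leaf size is arbitrary.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr voxelFilter(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(10.0f, 10.0f, 10.0f);  // voxel edge length per axis
    auto filtered = pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>);
    grid.filter(*filtered);
    return filtered;
}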

The end result was obtained after applying the adjustment on the Z axis, placing all profiles at equal distances from their neighbouring profiles. The example is shown in the previous chapter, in figure 4.3. Multiple blocks were processed with this method, with results similar to this example.

The results obtained for the volume optimisation are not visible, since they are numbers that represent what should be the optimal cutting points to maximise the final volume of the block. These values need validation to prove that they are more efficient than the method currently being used in the cutting process. The algorithm will be validated if this project proves that it can produce blocks with a bigger volume than the ones currently being obtained, and that it works for all of the blocks.

Figure 5.1: Full length point cloud representation

Figure 5.2: Poisson method in an unprocessed point cloud

Figure 5.3: Block representation after voxel grid filtering

5.2 Validation process

The validation process should be done by comparing the width and height values that were used to cut the block with the ones resulting from SiMBlo, verifying whether SiMBlo's values were superior. This process should be done on multiple blocks so that it shows robustness and consistency. However, we could only do this comparison on one block: there was a malfunction in the Bunscanner that made further scans impossible and, since the workers do not keep track of the values used to cut a block for longer than a week, it was impossible to get more than one block for validation.

This validation, though, revealed that either our algorithm has some flaws or the Bunscanner was not well calibrated at the time of the scanning, given that the width at which the block was cut was larger than the one provided by SiMBlo, even though SiMBlo gave a larger height. What leads us to believe that this may also be a bad calibration of the Bunscanner is that the length provided by the scanner is smaller than the real length of the block (approximately 55 m scanned against 61.5 m measured).

The rest of the blocks processed by SiMBlo could not be validated because the records of the cutting values no longer existed at the time of the request.


Chapter 6

Conclusion

This chapter describes the conclusions drawn after completing this project. Future work is also mentioned, as a note on what can be improved within the company and in the project itself.

6.1 Conclusions

Most industries and companies use the latest technology available to improve the quality of their product. This project was inserted in this context, where one part of the production needed to be studied in order not only to increase the quality of the product and get more income out of it, but also to store data that is always available and can be consulted for statistical analysis, allowing an even greater understanding of the product being made by the company.

The subject of study of this project was essentially focused on the tools for 3D visual representation. We conclude that there is still progress to be made in this area, given that the only available tool that could implement this feature was PCL, which is used through programming and whose language is one of the most used in the world.

The optimisation had to be designed by analysing the process of making foam and the shape of the foam itself. Only then was it possible to develop an idea for the algorithm. It takes great knowledge of the process to spot issues that can cause problems in the actual implementation.

Finally, the objectives proposed in chapter 1 were all accomplished. All the outputs were verified by the person responsible for the project in which this dissertation is inserted. The visualisation was considered an achieved goal. The optimisation lacked validation but was also considered reliable and of great importance, since it was put to use almost immediately to determine what can be improved in the production stage.

Overall, the proposed solution offers a variety of advantages for understanding and acknowledging the production process. It provides new data that reveals where the deformities occur for each type of foam, when they occur and how they occur, since an image of the deformity is available. Alongside this, it has the potential to enhance the cutting stage by offering the optimal cutting points to obtain the best possible block.


6.2 Future Work

The main objective of this project was considered accomplished. Nonetheless, there are improvements to be made and work to further simplify the use of the application.

Most important is the validation of the optimisation algorithm. Only by doing this will we know for sure that the behaviour of the software is as expected.

The hardware of the Bunscanner should also be improved in the future. The position of the blocks on the conveyor must be fixed in order to get accurate and precise values without having to make additional calculations. Altering the conveyor is important as well, because it will improve the readings of the scanner and the profiles will be read at equal distances from each other.

The 3D visualisation can also be improved if a full-block representation can be implemented without losing detail and while maintaining low rendering times. This could be achieved through code optimisation or even by studying new methods to reach this goal.

Finally, it would be important to create a website displaying all the information about the blocks, with the possibility of viewing statistics generated on that website.


References

[1] Nataraj Akkiraju, Herbert Edelsbrunner, Michael Facello, Ping Fu, Ernst P. Mücke, and Carlos Varela. Alpha shapes: Definition and software, 1995.

[2] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH Comput. Graph., 21(4):163–169, August 1987.

[3] David Levin. The approximation power of moving least-squares. Math. Comput., 67(224):1517–1531, October 1998.

[4] Ruosi Li, Lu Liu, Ly Phan, Sasakthi Abeysinghe, Cindy Grimm, and Tao Ju. Polygonizing extremal surfaces with manifold guarantees. In Proceedings of the 14th ACM Symposium on Solid and Physical Modeling, SPM '10, pages 189–194, New York, NY, USA, 2010. ACM.

[5] B. Delaunay. Sur la sphère vide. Izv. Akad. Nauk SSSR, Otdelenie Matematicheskii i Estestvennyka Nauk, 7:793–800, 1934.

[6] Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. Poisson surface reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, SGP '06, pages 61–70, Aire-la-Ville, Switzerland, 2006. Eurographics Association.

[7] Radu Bogdan Rusu and Steve Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011.

[8] Top 20 Most Popular Programming Languages in 2017. http://www.business2community.com/tech-gadgets/top-20-popular-programming-languages-2017-01791470#4dFGZMFh7biZBKVV.97.

[9] CMake Overview. Available at https://cmake.org/overview/.

[10] About ROS. Available at http://www.ros.org/about-ros/.

[11] E. F. Codd. A relational model of data for large shared data banks. Commun. ACM, 13(6):377–387, June 1970.

