
Chapter

Incorporation of prior knowledge into perception system

[Figure: inputs on the left, outputs on the right. Each exteroceptive sensor i (i = 1, 2, 3) yields a local SourceGrid_i with mass mSoG_i, which is transformed into a global SourceGrid_i using the localisation system; vector maps provide the global GISGrid mGG. Prior knowledge combination fuses these grids into the SensorGrid mSG; temporal fusion with discounting of mPG then yields the PerceptionGrid mPG, which feeds decision making.]

Figure . – Overview of the perception system.

In order to alleviate the deficiencies of the available sensors, digital maps are considered as an additional source of information on a par with other sources, e.g. sensors. This fact is crucial for the method, as the map cannot be regarded as an infallible source of absolute truth that always gives a correct result. If this were true, one could simply condition (e.g. in the sense of Bayesian conditioning) the information obtained from sensors using the map data. Instead, we treat maps as just another source of information. In this manner, the handling of information is uniform and entirely contained within the data fusion process.

While designing our system, an important constraint was that a perception system must be easily adaptable to various configurations. This variety may be due to a failing sensor, to a system modification or update, or, as is common industrial practice, to the use of the same system across a series of different vehicle models. The addition of a sensor should improve the overall performance of the system, whereas a removal could lead to its degradation. The logic of data processing should nevertheless remain unaltered. In Figure ., we recall the architecture with which we achieved these goals.

The previous chapter dealt with processing that depends on the type of the sensor. In this chapter, we handle the invariant parts of our perception system, which may seamlessly adapt to different sensors or map providers given the corresponding sensor models. This part of our perception system is depicted in Figure .

One of the reasons for which we employ map data is to deduce meta-information about the environment and hence restrict the possible types of objects to be detected. Secondly, we propose to control different perception dynamics in the same scene thanks to the map. For instance, once a building is perceived by the perception module, this information should be stored for future use and not forgotten. In contrast, mobile objects with low remanence in the scene should be updated rapidly, i.e. forgotten or discarded almost as soon as they disappear from the field of view.
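To make this concrete, the lookup below sketches one way the map context could encode both the admissible object classes and their dynamics. The class names echo the legend of Figure ., but the identifiers, structure and remanence values are purely illustrative assumptions, not the thesis' actual parameters.

```python
# Hypothetical prior: for each map class, the perception classes that
# remain plausible in a cell, and an illustrative remanence (seconds)
# controlling how long unseen information survives in that cell.
MAP_PRIOR = {
    "road":         ({"free", "moving", "stopped"},         1.0),
    "intermediate": ({"free", "stopped", "infrastructure"}, 10.0),
    "building":     ({"infrastructure"},                    float("inf")),
    "unmapped":     ({"free", "moving", "stopped",
                      "infrastructure"},                    5.0),
}

def admissible(map_class: str) -> set:
    """Object classes that a cell with this map context may contain."""
    return MAP_PRIOR[map_class][0]

def remanence(map_class: str) -> float:
    """How long (s) information in such a cell persists when unseen."""
    return MAP_PRIOR[map_class][1]
```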

In this context, we will use the term remanence to denote the persistence of a given piece of information. It is related to the time that an object is supposed to spend in the environment represented by a single cell of the perception grid before being discarded if it is not seen again. As a consequence, this method makes it possible to manage occluded zones and objects. The notion of object remanence will be of great importance in our method, and so we devote the whole of Chapter  to methods for information discounting.
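To anticipate that chapter, here is a minimal sketch of classical evidential discounting, in which part of each mass is transferred to the whole frame (ignorance). The exponential link between the elapsed time, the remanence tau and the discount factor is our assumption for illustration, not necessarily the scheme developed later.

```python
import math

def discount(mass: dict, beta: float, frame: frozenset) -> dict:
    """Classical discounting: keep a fraction beta of every mass and
    transfer the remainder to the whole frame of discernment."""
    out = {A: beta * v for A, v in mass.items() if A != frame}
    out[frame] = 1.0 - beta + beta * mass.get(frame, 0.0)
    return out

def forgetting_factor(dt: float, tau: float) -> float:
    """Assumed exponential forgetting: tau is the cell's remanence."""
    return math.exp(-dt / tau) if math.isfinite(tau) else 1.0

frame = frozenset({"F", "O"})              # free / occupied
m = {frozenset({"O"}): 0.7, frame: 0.3}    # cell believed occupied
m_old = discount(m, forgetting_factor(dt=0.5, tau=1.0), frame)
# A building cell (tau = inf) keeps beta = 1 and is never forgotten.
```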




[Figure: the SourceGrids (SoGs) and the GISGrid (GG) enter the prior knowledge combination (spatial fusion) that produces the SensorGrid mSG; temporal fusion with discounting of mPG yields the PerceptionGrid mPG, which feeds decision making and the system outputs.]

Figure . – Part of the perception system where the information fusion takes place independently of exact sensor types.

. Spatial fusion: from SourceGrid to SensorGrid

The proposed method incorporates maps into sensor data in order to improve perception. The part responsible for this stage is schematically depicted in Figure . as spatial fusion. The maps are the source of prior information, which can be used to gain more insight into the vehicle environment.

The fusion of the information from the GISGrid with the sensor data stored in the SourceGrids (SoGs) (cf. Figure .) is performed on a cell-by-cell basis.

Grid transformations

At first, all grids are transformed into the same spatial reference frame and into a common frame of discernment $\Omega_{PG}$. If a SoG is in polar coordinates, it has to be converted into a Cartesian reference (Figure .). In our approach, bilinear interpolation has been used for this transform. Each local SourceGrid (SoG) can then be transformed into the global reference frame using the pose provided by the proprioceptive sensors. Figure . illustrates the general idea of this process.
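A minimal sketch of such a resampling, assuming a polar grid indexed by (range bin, angle bin) and a square Cartesian grid centred on the sensor; it treats a single mass channel, and the interface is ours, not the thesis':

```python
import numpy as np

def polar_to_cartesian(grid_polar, r_max, cell_size, n_cart):
    """Resample one mass channel of a polar SourceGrid (rows: range
    bins, columns: angle bins) onto a Cartesian grid with bilinear
    interpolation; the 2*pi angular seam is not wrapped in this sketch."""
    n_r, n_th = grid_polar.shape
    xs = (np.arange(n_cart) - n_cart / 2 + 0.5) * cell_size
    x, y = np.meshgrid(xs, xs, indexing="ij")   # Cartesian cell centres
    # Fractional polar coordinates of every Cartesian cell centre.
    r = np.hypot(x, y) / r_max * (n_r - 1)
    th = (np.arctan2(y, x) % (2 * np.pi)) / (2 * np.pi) * (n_th - 1)
    r0 = np.clip(np.floor(r).astype(int), 0, n_r - 2)
    t0 = np.clip(np.floor(th).astype(int), 0, n_th - 2)
    fr = np.clip(r - r0, 0.0, 1.0)
    ft = np.clip(th - t0, 0.0, 1.0)
    # Bilinear mix of the four surrounding polar cells.
    out = ((1 - fr) * (1 - ft) * grid_polar[r0, t0]
           + fr * (1 - ft) * grid_polar[r0 + 1, t0]
           + (1 - fr) * ft * grid_polar[r0, t0 + 1]
           + fr * ft * grid_polar[r0 + 1, t0 + 1])
    out[np.hypot(x, y) > r_max] = 0.0           # beyond sensor range
    return out
```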

Next, a transformation of the Cartesian SoG is applied in order to obtain a world-referenced grid. This transformation consists of one rotation and one translation. The rotation is done with bilinear interpolation, because one cell may be partially projected onto many cells. Bilinear interpolation averages values, so, in the transformed cell, the masses are set to the mean values of the neighbourhood of the source cell. Such a method can cause a phenomenon of edge smoothing, but a well-chosen grid size renders this effect negligible.
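Under the same assumptions, this rigid local-to-global step can be written as an inverse mapping: for every global cell, we look up the corresponding local coordinates and sample bilinearly (order=1 below). The function name and pose convention are ours:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def local_to_global(grid_local, pose, cell_size, n_global):
    """Rotate and translate one mass channel of a local Cartesian grid
    into the global frame; pose = (tx, ty, heading) in metres/radians
    as provided by the localisation system."""
    tx, ty, heading = pose
    c, s = np.cos(heading), np.sin(heading)
    gi, gj = np.meshgrid(np.arange(n_global), np.arange(n_global),
                         indexing="ij")
    gx, gy = gi * cell_size, gj * cell_size     # global cell centres
    # Inverse rigid transform: global -> local coordinates (in cells).
    lx = (c * (gx - tx) + s * (gy - ty)) / cell_size
    ly = (-s * (gx - tx) + c * (gy - ty)) / cell_size
    # order=1 is exactly the bilinear interpolation discussed above;
    # cells outside the field of view get 0 here, whereas an evidential
    # grid would assign them full ignorance (mass on the whole frame).
    return map_coordinates(grid_local, np.stack([lx, ly]),
                           order=1, cval=0.0)
```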

Incorporating map data

At this stage, the prior knowledge from maps is injected into the sensor data. If multiple SoGs exist, the conjunctive rule of combination (denoted $\cap$) is used to combine them before further processing (Equation .). Given that the masses in SoGs, indexed from 1 to $N_s$, are denoted $m_{S_i}$, their combination is

\[
m_{S_1} \cap m_{S_2} \cap \dots \cap m_{S_{N_s}} = \bigcap_{i=1}^{N_s} m_{S_i},
\]

where, for two mass functions and any $A \subseteq \Omega_{PG}$,

\[
(m_1 \cap m_2)(A) = \sum_{B \cap C = A} m_1(B)\, m_2(C).
\]
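A minimal sketch of this unnormalised conjunctive combination for one cell; representing focal elements as frozensets of class labels is our choice of encoding:

```python
from itertools import product

def conjunctive(m1: dict, m2: dict) -> dict:
    """Unnormalised conjunctive combination of two mass functions:
    (m1 cap m2)(A) = sum over B cap C = A of m1(B) * m2(C).
    Mass assigned to the empty set expresses conflict between sources."""
    out = {}
    for (B, v1), (C, v2) in product(m1.items(), m2.items()):
        A = B & C
        out[A] = out.get(A, 0.0) + v1 * v2
    return out

# Two SourceGrid cells over the frame {F, O} (free / occupied):
m_s1 = {frozenset({"F"}): 0.6, frozenset({"F", "O"}): 0.4}
m_s2 = {frozenset({"O"}): 0.5, frozenset({"F", "O"}): 0.5}
m_sg = conjunctive(m_s1, m_s2)   # conflict appears on frozenset()
```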



[Figure: a local polar SourceGrid is converted by the polar-to-Cartesian transformation into a local Cartesian SourceGrid, then rotated and translated into a global Cartesian SourceGrid using the GNSS pose and IMU data; the prior knowledge combination with the GISGrid finally produces the SensorGrid.]

Figure . – Transformation fromSourceGrid(SoG) toSensorGrid(SG).

[Figure: one-dimensional example over time; a SourceGrid row and a GISGrid row labelled T T R R R R R R T T are combined into a SensorGrid row containing compound cells such as MSU, SU and MS. Legend: D drivable, N non-drivable, R roads, T intermediate, U unmapped infrastructure, F free, O occupied, M moving, S stopped.]

Figure . – Example of incorporating map data intoSensorGrid(SG) on a D grid. T represents the intermediate space andR– road surface.

