
.. Implementation and results

... Results of perception on real data

The presented results demonstrate that the use of map data in perception systems for self-driving vehicles will thrive in the near future. The importance of handling both aspects mentioned above, namely the fusion of map and sensor data and the management of temporal information, has been demonstrated using real data. We have performed multiple tests on our test-bed vehicle.

... Real-time implementation on a test-bed vehicle

An important part of this research work was the implementation of the tested approaches, based on the C++ Pacpus framework, on an equipped car of the Heudiasyc laboratory. The vehicle, called Carmen, is presented in Figure..

The described methods have been implemented using a modular approach. Proceeding in this way permitted us to test various parts of the developed system by a simple update, addition or removal of the concerned components. The component-based architecture has been presented in Figure..

One of the advantages of such a structure is that multiple sensor models and discounting methods could be tested. Moreover, the fusion algorithm was made independent of the exact type of sensor in use. This decomposition also permits the system to be extended easily by adding supplementary modules, e.g. a trajectory planner or a new sensor.
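The decomposition described above can be sketched as follows. All names here (`SensorModel`, `ConstantModel`, `fuseAll`) are illustrative, not the actual Pacpus component interfaces, and the cell-wise averaging stands in for the evidential combination rule used in the thesis; only the data flow is the point: the fusion component depends solely on an abstract sensor interface.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Simplified cell: mass values for "free" and "occupied" (illustrative).
struct Cell { double m_free = 0.0, m_occ = 0.0; };
using Grid = std::vector<Cell>;

// Every sensor component outputs a grid in the same format, so the fusion
// component never needs to know which sensor produced it.
class SensorModel {
public:
    virtual ~SensorModel() = default;
    virtual Grid computeGrid() const = 0;
};

// Stand-in for a concrete lidar or camera sensor model.
class ConstantModel : public SensorModel {
public:
    ConstantModel(std::size_t n, Cell c) : grid_(n, c) {}
    Grid computeGrid() const override { return grid_; }
private:
    Grid grid_;
};

// Fusion by a running cell-wise average, only to show the component wiring;
// swapping in an evidential rule would not change this interface.
Grid fuseAll(const std::vector<std::unique_ptr<SensorModel>>& sensors) {
    if (sensors.empty()) return {};
    Grid out = sensors.front()->computeGrid();
    for (std::size_t s = 1; s < sensors.size(); ++s) {
        Grid g = sensors[s]->computeGrid();
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i].m_free = (out[i].m_free * s + g[i].m_free) / (s + 1);
            out[i].m_occ  = (out[i].m_occ  * s + g[i].m_occ)  / (s + 1);
        }
    }
    return out;
}
```

Adding a new sensor then amounts to registering one more `SensorModel` instance, without touching the fusion code.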

. Perspectives

.. Method validation

Reference data One of the perspectives for future work is the use of reference data to validate the results. This is somewhat problematic, as no reliable and precise data set for the perception of intelligent vehicles exists so far. One of the attempts, the KITTI dataset, which has become popular recently (Geiger, Lenz, et al.), could be a potential candidate for a reference data set. This database provides both camera and lidar data, properly synchronised with localisation information from a hybrid Global Positioning System (GPS)-Inertial Measurement Unit (IMU) system. Moreover, the data are annotated, e.g. objects are described by their class (car, van, truck, pedestrian, etc.), bounding box and 3-dimensional (3D) dimensions. Each object is annotated with its translation and rotation with respect to the reference frame. Unfortunately, the KITTI dataset has a major flaw preventing its use for the presented perception system. Namely, the object data are obtained using the Velodyne lidar sensor, which also provides the point cloud data. This means that the learning and test data come from the same sensor; using them for validation would therefore be biased and lead to overestimating the capabilities of the tested system.

Learning parameters In any case, a reference data set, even a biased one, would be a valuable resource to improve the described system. First of all, algorithm parameters could be learnt automatically or semi-automatically. As presented in Chapter, the remanence of different object classes can be inferred through a learning process when enough data is available. Another parameter that could be learnt from reference data is the most appropriate discounting scheme. That is to say, without any knowledge about the interdependencies between various object classes, one could find which type of discounting method should be used. This could help to decide between the optimistic and conservative approaches, or to find the proper parametrisation of the general discounting rule.
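As a minimal sketch of what such a learnt parametrisation controls, the classical evidential discounting operation on a simplified two-hypothesis frame {Free, Occupied} is shown below. The names, the reduced frame, and the exponential forgetting factor (whose time constant `tau` would play the role of a per-class remanence learnt from data) are all illustrative assumptions, not the exact formulation used in the thesis.

```cpp
#include <cmath>

// Simplified mass function on {Free, Occupied}: m(F), m(O), m(F ∪ O).
struct Mass { double free, occ, unknown; };

// Classical discounting: specific masses are scaled by (1 - alpha) and the
// removed belief is transferred to ignorance. alpha = 0 keeps full trust,
// alpha = 1 discards the source entirely.
Mass discount(const Mass& m, double alpha) {
    Mass d;
    d.free    = (1.0 - alpha) * m.free;
    d.occ     = (1.0 - alpha) * m.occ;
    d.unknown = 1.0 - d.free - d.occ;   // remainder goes to m(F ∪ O)
    return d;
}

// Illustrative temporal discount rate: information aged by dt is discounted
// according to a class-dependent remanence tau (a learnable parameter).
double forgetFactor(double dt, double tau) {
    return 1.0 - std::exp(-dt / tau);
}
```

Learning would then amount to choosing, per object class, the `tau` (and possibly the discounting scheme itself) that best reproduces the reference data.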

Algorithm validation Apart from defining the best parameters of the perception system, a reference data set could help to choose the most appropriate fusion rule. It would be of high value to determine whether the proposed fusion rule based on Yager’s operator behaves better than other rules, such as the conjunctive rule or the cautious rule. What is more, possessing a reference data set offers the possibility to validate the results quantitatively. The verification that we have performed, and which would be interesting to develop further, is the projection of the resulting perception grid into the camera image. This approach permits a visual verification of the algorithm.
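To make the comparison concrete, the following is a sketch of Yager’s combination on the same simplified {Free, Occupied} frame: it proceeds like the conjunctive rule, but the conflicting mass is transferred to ignorance instead of being renormalised. The reduced frame and names are illustrative; the thesis operates on a richer frame of discernment.

```cpp
// Simplified mass function on {Free, Occupied}: m(F), m(O), m(F ∪ O).
struct Mass { double free, occ, unknown; };

// Yager's rule on the two-hypothesis frame: conjunctive products for the
// agreeing focal elements, with the conflict K = m1(F)m2(O) + m1(O)m2(F)
// assigned to ignorance rather than used for normalisation.
Mass yager(const Mass& a, const Mass& b) {
    Mass r;
    r.free = a.free * b.free + a.free * b.unknown + a.unknown * b.free;
    r.occ  = a.occ  * b.occ  + a.occ  * b.unknown + a.unknown * b.occ;
    double conflict = a.occ * b.free + a.free * b.occ;
    r.unknown = a.unknown * b.unknown + conflict;  // key difference vs Dempster
    return r;
}
```

With a reference data set, one could run the same scenes through this rule, the conjunctive rule, and the cautious rule, and score each against the ground truth.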

.. Map-based localisation

Map data It is envisioned that the hypothesis of accurate maps will be removed. Considerable work on creating appropriate error models for this data source will be needed. Such an improvement will be a step towards the use of our approach in navigation systems for autonomous vehicles. Map information will also be used to predict object movements. Lastly, more work is to be done to fully explore and exploit 3D map information.

Localisation Work has already started on a map-based localisation module using map data and a lidar. Such an approach would possibly make a Global Navigation Satellite System (GNSS) module unnecessary, or limit its usage to obtaining an initial guess of the vehicle position. The basic idea is to find the correspondence between the buildings and other elements of infrastructure obtained from two sources: the first being a lidar-based perception method, and the second a map. The research on this subject has started and preliminary results have been described in (Mendes de Farias).

Object-level description An important stage would be to pass from the cell-level grid description to a higher-level one. For instance, it would be a huge benefit for the robustness of the perception system to describe detected obstacles at the object level. This is possible by segmenting the perception grid, extracting object information from it and tracking the resulting objects. While tracking, one can also estimate and predict the speed and the direction of each object. Furthermore, the predicted position of an object may be used to improve the next detection step by limiting the conflicting information. Of course, such additional processing needs computational resources, which is to be taken into account when building a real-time system.
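The cell-to-object step could start with a standard connected-component labelling of the occupied cells, sketched below under two assumptions: the perception grid has already been thresholded into a binary `occupied` layer, and 4-connectivity is an adequate neighbourhood. Each label would then seed one tracked object.

```cpp
#include <queue>
#include <vector>

// Label connected groups of occupied cells in a w × h grid (row-major).
// Cells sharing a non-zero label belong to one detected object candidate.
std::vector<int> labelObjects(const std::vector<int>& occupied, int w, int h) {
    std::vector<int> label(occupied.size(), 0);   // 0 = background
    int next = 0;
    for (int start = 0; start < w * h; ++start) {
        if (!occupied[start] || label[start]) continue;
        ++next;                                   // new object found
        std::queue<int> q;
        q.push(start);
        label[start] = next;
        while (!q.empty()) {                      // breadth-first flood fill
            int c = q.front(); q.pop();
            int x = c % w, y = c / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int k = 0; k < 4; ++k) {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h) continue;
                int n = ny[k] * w + nx[k];
                if (occupied[n] && !label[n]) { label[n] = next; q.push(n); }
            }
        }
    }
    return label;
}
```

Per-label centroids and bounding boxes, compared across frames, would give the speed and direction estimates mentioned above.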

.. Vehicle navigation

Trajectory planning A natural next step is to couple the perception module with a system for trajectory planning. An immediate candidate would be the so-called tentacle method. Tentacle methods, e.g. the one described by Hundelshausen et al., use input occupancy grids to define the driving corridor. The presented perception system would further improve such approaches. Firstly, describing the drivable and non-drivable free space explicitly would avoid problems in distinguishing between the two. Secondly, the object-level description is another enhancement that would permit predicting the movement of other road users and adapting the trajectory accordingly. One should imagine that detecting two objects, one going in the




(a) Without movement prediction. (b) With prediction of object movement.

Figure . – Comparison of a tentacle-based trajectory planning algorithm. Dark blue: ego-vehicle, blue: detected vehicles, red: estimated positions of vehicles at the next execution of trajectory planning algorithm. Note that this figure does not take into account the geometric model of the ego-vehicle, thus allowing situation where the middle tentacle engages in a very narrow corridor between vehicles.



Figure . – Illustration of using virtual fences to limit possible car trajectory. Fences in green visible in front of the vehicle. Source:http://www.extremetech.com/extreme/189486-how-googles- self-driving-cars-detect-and-avoid-obstacles.

same direction as the intelligent vehicle and the other going in the opposite direction, would need different handling. The former would be expected to advance, so that the candidate tentacles go further, up to the new, predicted position of the object. For the latter object, the candidate trajectories would be shorter, as the estimated position limits the possible manoeuvres. A schematic illustration of such behaviour can be seen in Figure.. Additionally, an important modification of the tentacle approach would be to use fully the rich information encoded in the perception grids. Namely, such an algorithm should take into account the certainty that the cells on a candidate trajectory are free, and use this quantity as an additional score for judging the tentacle’s suitability. Besides, a Ph.D. thesis on a similar subject starts at the University of Technology of Compiègne in Autumn.
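The certainty-based tentacle score could take a form such as the one below. This is a sketch, not the actual planner: `freeMass` is assumed to hold the mass assigned to "free" for each cell, a tentacle is assumed to be a list of traversed cell indices, and taking the minimum over the path is one cautious choice of aggregate (a product or a sum would be alternatives).

```cpp
#include <algorithm>
#include <vector>

// Score a tentacle by the certainty that its cells are free, instead of a
// binary free/occupied test: the least certain cell dominates the score.
double tentacleScore(const std::vector<double>& freeMass,
                     const std::vector<int>& tentacleCells) {
    double score = 1.0;
    for (int c : tentacleCells)
        score = std::min(score, freeMass[c]);
    return score;
}
```

The planner would then weigh this certainty score against the usual geometric criteria (length, curvature) when selecting the tentacle to execute.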

Emergency mode Another possibility for enhancing the trajectory planning system would be to add an emergency driving mode. One of the important differences between this mode and the normal cruise mode would be to permit the vehicle to drive on (normally) non-drivable space. This can be motivated by the behaviour of human drivers. For instance, in the case of an emergency vehicle approaching a car, the car’s driver would move the vehicle aside to let the emergency vehicle pass. If necessary, the driver would drive on pavements or grass, otherwise risking blocking the approaching vehicle, possibly with catastrophic consequences. Another situation where the necessity of going onto non-drivable space arises is obstacle avoidance. More precisely, the manoeuvre of critical obstacle avoidance can be decisive in the choice between injury and death. One can easily imagine a situation where the driver (human or autonomous) dodges onto the emergency lane or the grass in order to avoid a head-on collision with an oncoming vehicle. If we can ever dream of intelligent vehicles on our roads, such scenarios have to be taken into account.

Incorporating traffic rules Another idea would be to add map information about road lanes and traffic rules to the resulting grid through conditioning. For example, a turn restriction would modify the grid provided to the trajectory planning module, in such a way that the grid itself restricts the drivable free space. This technique is conceptually similar to existing methods in trajectory planning, where virtual barriers limit the possible movements of the vehicle, as illustrated in Figure..
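A crude grid-level version of such a virtual barrier is sketched below: cells covered by a fence derived from a map rule are forced to non-drivable before the grid is handed to the planner. The rectangular fence, the field names, and the hard overwrite of the sensor evidence are all simplifying assumptions; a proper treatment would apply evidential conditioning rather than overwrite masses.

```cpp
#include <vector>

// Simplified cell with drivable / forbidden masses (illustrative).
struct Cell { double m_drivable, m_forbidden; };

// Force an axis-aligned fenced rectangle [x0..x1] × [y0..y1] of a w-wide,
// row-major grid to non-drivable, encoding a traffic rule in the grid itself.
void applyFence(std::vector<Cell>& grid, int w,
                int x0, int y0, int x1, int y1) {
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            Cell& c = grid[y * w + x];
            c.m_drivable  = 0.0;   // the rule overrides sensor evidence here
            c.m_forbidden = 1.0;
        }
}
```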




.. Technicalities

Implementation improvements Grid-based approaches benefit hugely from parallel processing, as the computations performed on each cell are identical. An efficient implementation would take this fact into account and use techniques of programming on massively parallel processors, such as General-Purpose computing on Graphics Processing Units (GPGPU). We have already performed successful tests using the CUDA and OpenCL libraries for this purpose. These preliminary tests have shown that, by applying this technique, one could largely decrease the necessary computation times.
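The reason grids parallelise so well can be seen in the shape of the per-cell update: it reads and writes only its own cell, so every iteration is independent and maps directly to one GPU thread (in a CUDA or OpenCL kernel, the loop disappears and each thread executes just the body). The blending weights below are illustrative values, not the actual update rule of the system.

```cpp
#include <cstddef>
#include <vector>

struct Cell { double m_occ; };   // simplified cell state

// Purely local computation: no cell depends on any other cell.
inline void updateCell(Cell& c, double measurement) {
    c.m_occ = 0.9 * c.m_occ + 0.1 * measurement;  // illustrative blend
}

// Sequential CPU driver; on a GPGPU, each index i becomes one thread and
// the loop is replaced by the hardware's thread grid.
void updateGrid(std::vector<Cell>& grid, const std::vector<double>& meas) {
    for (std::size_t i = 0; i < grid.size(); ++i)
        updateCell(grid[i], meas[i]);
}
```

Because there are no inter-cell dependencies, no synchronisation is needed inside the update, which is what makes the reported speed-ups achievable.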



Part VI