Academic year: 2017

Data Fusion Using Different Activation Functions in Artificial Neural Networks for Vehicular Navigation

MALLESWARAN M
Department of Electronics and Communication Engineering, Anna University of Technology Tirunelveli,
Tirunelveli, Tamil Nadu - 627007, India
mallesh1971@yahoo.com

DR. VAIDEHI V
Department of Information Technology, MIT Campus, Anna University Chennai,
Chennai, Tamil Nadu, India

ANGEL DEBORAH S
Department of Electrical and Electronics Engineering, Anna University of Technology Tirunelveli,
Tirunelveli, Tamil Nadu - 627007, India
angeldebo@gmail.com

Abstract:

The Global Positioning System (GPS) and the Inertial Navigation System (INS) can be integrated to provide reliable navigation. GPS/INS data integration yields reliable navigation solutions by overcoming the shortcomings of each system, including signal blockage for GPS and the growth of position errors with time for INS. This paper aims to provide GPS/INS data integration utilizing Artificial Neural Network (ANN) architectures. These architectures are based on feed-forward neural networks, which generally include the Radial Basis Function (RBF) neural network and the Back Propagation neural network (BPN); both are systematic methods for training multi-layer artificial networks. The BPN-ANN and RBF-ANN modules are trained to predict the INS position error and provide accurate positioning of the moving vehicle. This paper also compares the performance of the GPS/INS data integration system using different activation functions, namely the Bipolar Sigmoidal Function (BPSF), Binary Sigmoidal Function (BISF), Hyperbolic Tangential Function (HTF) and Gaussian Function (GF) in the BPN-ANN, and the Gaussian function in the RBF-ANN.

Keywords: GPS, INS, ANN, RBF, BPN, BPSF-BPN, BISF-BPN, HTF-BPN, GF-BPN and GF-RBF.

1. Introduction

1.1. GPS/INS Integration


An INS provides position, velocity and attitude information. In general, an inertial measurement unit (IMU), which incorporates three-axis accelerometers and three-axis gyroscopes, can be used as a positioning and attitude monitoring device [2-3]. However, an INS cannot operate appropriately as a stand-alone navigation system because its accuracy deteriorates with time due to sensor errors that exhibit long-term growth.

Figure 1. GPS/INS Integration

Therefore, GPS/INS integration provides a navigation system with superior performance in comparison with either a GPS or an INS stand-alone system. For instance, GPS position components have approximately white-noise characteristics with bounded errors and can therefore be used to update the INS and improve its long-term accuracy. On the other hand, the INS provides positioning information during GPS outages, thus assisting GPS signal reacquisition after an outage and reducing the search domain required for detecting and correcting GPS values. The INS is also capable of providing positioning and attitude information at higher data rates than GPS.

1.2. Existing Problem

Kalman filtering has been applied for a number of years to provide an optimal GPS/INS integration module [4]. The major inadequacy of utilizing the KF for GPS/INS integration is the necessity of having a predefined accurate stochastic model for each of the sensor errors. Furthermore, prior information about the covariance values of both INS and GPS data, as well as the statistical properties of each sensor system, has to be known accurately. It is therefore usually difficult to set a proper stochastic model for each inertial sensor that works efficiently in all environments and reflects the long-term behaviour of the sensor errors. The difficulty of modelling INS errors raised the need for a model-less INS/GPS integration technique. One such model-less technique is the neural network.

2. Neural Networks

An artificial neural network (ANN) is a massively parallel distributed processor that can be used to model highly complex and non-linear stochastic problems. The ANN is formed of smaller units called neurons and is trained through a learning process, while interneuron connection strengths, known as synaptic weights, are used to store the knowledge [5-6]. The learning process comprises two stages: the first is training through certain learning algorithms; the second is prediction, when the ANN processes the input data to estimate the output based on the stored knowledge [7-9].

Figure 2. Training procedure of ANN module for modeling INS position errors


During prediction, the estimated INS position error is removed from the corresponding INS position component to obtain the corrected position component. Neural networks figure out how to perform their function on their own, determining it based only on sample inputs, and they have the ability to generalize, i.e., to produce reasonable outputs for inputs they have not been explicitly taught to deal with.
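The correction step described above is simple arithmetic; a minimal sketch, assuming a scalar position component (the function name is illustrative, not from the paper):

```python
# Prediction mode, as described in the text: the trained ANN's predicted INS
# position error is removed from the raw INS position component.
# Function and variable names are illustrative, not from the paper.
def correct_ins_position(ins_position, predicted_error):
    """Corrected position = raw INS position - ANN-predicted INS error."""
    return ins_position - predicted_error
```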

Figure 3. Prediction procedure of ANN module for modeling INS position errors

There are many different types of ANN, differing in their inherent structure and learning algorithms. The choice of the type of ANN depends on its suitability to a particular application. In this study, a feed-forward neural network architecture has been implemented; the feed-forward architecture includes the RBF-ANN and the BPN-ANN.

2.1. Radial Basis Function

The Radial Basis Function (RBF) neural network is a multilayer network and, among the vast variety of neural networks, a quite commonly used structure. In its most basic form, the design of an RBF-NN consists of three separate layers, as shown in figure 4.

Figure 4. Architecture of RBF

The input layer is the set of source nodes (sensory units). The second layer is a hidden layer of high dimension; in the hidden layer, the Euclidean distance (represented as ||...||) is calculated and the most commonly used Gaussian radial basis function is applied. The output layer gives the response of the network to the activation patterns applied to the input layer. The parameters w1, w2, w3, w4 and w5 are weights and w0 is the bias in the output layer; a linear activation function is used in the output layer. RBF networks offer the following advantages: they are universal approximators; they have a more compact topology than other neural networks; their learning speed is high because of their locally tuned neurons; and their hidden layer is easier to interpret than the hidden layer in a multilayer perceptron (MLP). The transformation from the input space to the hidden-unit space is nonlinear, while the transformation from the hidden space to the output space is linear.
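As a rough illustration of this structure, the sketch below implements a single-input RBF forward pass with Gaussian hidden units, weights w1..w5 and bias w0 on a linear output. The centers, spread and parameter values are hypothetical, not taken from the paper:

```python
import math

# Minimal sketch of the three-layer RBF network described above: one input,
# Gaussian hidden units, and a linear output layer (weighted sum plus bias).
def rbf_forward(x, centers, sigma, weights, bias):
    # Hidden layer: Euclidean distance to each center, fed through a Gaussian.
    hidden = [math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2)) for c in centers]
    # Output layer: linear activation.
    return bias + sum(w * h for w, h in zip(weights, hidden))

# Usage with hypothetical parameters (weight/bias magnitudes echo the ranges
# reported later in the experiment, but the centers and sigma are invented):
y = rbf_forward(0.5, centers=[0.0, 0.25, 0.5, 0.75, 1.0], sigma=0.3,
                weights=[4.1, 4.3, 4.4, 4.5, 4.6], bias=26.6)
```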

2.2. Back Propagation


Figure 5. Architecture of BPN neural networks

As in the RBF network, the transformation from the input space to the hidden space is nonlinear; the transformation from the hidden space to the output space, on the other hand, uses a linear activation function.
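The paper does not give explicit formulas for the activation functions it compares, so the sketch below uses their standard textbook definitions for BISF, BPSF, HTF and GF (these definitions are an assumption, not quoted from the paper):

```python
import math

# Standard definitions of the four activation functions compared in this paper.
def binary_sigmoid(x):      # BISF: output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):     # BPSF: output in (-1, 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def hyperbolic_tangent(x):  # HTF: tanh, output in (-1, 1)
    return math.tanh(x)

def gaussian(x):            # GF: bell-shaped, output in (0, 1]
    return math.exp(-x * x)
```

Note that in these standard forms the bipolar sigmoid is a rescaled tanh (bipolar_sigmoid(x) = tanh(x/2)), which is consistent with the similar BPSF/HTF results reported later.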

3. Experimental Results and Discussion

3.1. Experimental setup

In this experiment, the training mode and the prediction mode of the BPN-ANN and RBF-ANN were utilized consecutively. The BPN-ANN was trained using different activation functions, namely BPSF, BISF, HTF and GF, while the RBF-ANN was trained using GF. In the presence of a GPS signal, the proposed system relies on GPS position information to train the network. During the training stage, the BPN-ANN and RBF-ANN modules are trained to mimic the latest vehicle dynamics, determine the INS position error, and correct the corresponding INS position component. The data is processed as follows: first, the INS and GPS signals are taken as the input vector and target vector, respectively. The input vector is sent as the input signal to the input node, and the target vector is sent as one of the input signals to the output node during the training mode. In the BPN-ANN module, the weights are initialized to small values between -1 and 1 (or between -0.5 and 0.5), whereas in the RBF-ANN the weights are initialized between 4 and 5 and the bias is chosen close to the first input value. The learning rate in this experimental setup is 0.0001 for the BPN. The INS position is then used as the input to the BPN-ANN and RBF-ANN, and training continues until the output nearly equals the target GPS position. The training is carried out using different activation functions in the BPN-ANN, and the performance is analysed and compared with the RBF-ANN training results. The training procedure continues until a GPS signal blockage is detected. In that case, the proposed system switches to the prediction mode, in which the ANN modules predict the corresponding INS position error based on the knowledge stored during training. This predicted INS position error is then removed from the corresponding INS position component to obtain the corrected INS position.
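The training data flow described above can be sketched as follows. A single linear neuron stands in for the full BPN/RBF modules, so this is an illustrative reduction under stated assumptions, but the weight initialization range, the learning rate of 0.0001 and the stopping behaviour follow the text:

```python
import random

# Hedged sketch of the training procedure: initialize a small weight
# (BPN-style, in [-0.5, 0.5]), then iterate gradient-descent updates until
# the output for the INS input is close to the GPS target (the stopping
# condition), up to a maximum number of epochs.
def train_until_close(ins_inputs, gps_targets, lr=0.0001, tol=1e-3,
                      max_epochs=10000):
    w = random.uniform(-0.5, 0.5)   # small initial weight, as in the text
    b = 0.0
    for _ in range(max_epochs):
        mse = 0.0
        for x, t in zip(ins_inputs, gps_targets):
            y = w * x + b           # forward pass
            err = t - y
            w += lr * err * x       # gradient-descent weight update
            b += lr * err           # bias update
            mse += err ** 2
        if mse / len(ins_inputs) < tol:   # stopping condition reached
            break
    return w, b
```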

3.2. Simulation results

The GPS signal is generated from a C/A (Coarse/Acquisition code) generator. The GPS position value is used as the target vector and the INS position value as the input vector in MATLAB. The GPS latitude and longitude values are given in the trajectory path shown in figure 6.


By training the RBF-ANN using the Gaussian activation function, the outputs for the latitude and longitude components are obtained during the first epoch. Training is then continued by updating the weight and bias values until the actual output is close to the expected target value, and proceeds until the stopping condition is reached. The outputs of RBF-ANN training for the latitude and longitude components are given in figures 7 and 8, respectively.

Figure 7. Output of RBF-ANN training for latitude component

Figure 8. Output of RBF-ANN training for longitude component

Figure 9. Output of BPN-ANN training for latitude component using BPSF

Figure 10. Output of BPN-ANN training for latitude component using BISF

Figure 11. Output of BPN-ANN training for longitude component using BPSF


Figure 13. Output of BPN-ANN training for latitude component using HTF

Figure 14. Output of BPN-ANN training for latitude component using GF

Figure 15. Output of BPN-ANN training for longitude component using HTF

Figure 16. Output of BPN-ANN training for longitude component using GF

Similarly, by training the BPN-ANN module using the different activation functions BPSF, BISF, HTF and GF, the output is obtained during the first epoch. Training continues because the actual output is not yet close to the required target output; after each epoch the weight and bias values are updated. The training proceeds until the stopping condition is reached, which for the ANN modules may be the minimization of errors, a number of epochs, etc. Figures 9 and 10 show the output of BPN-ANN training for the latitude component using BPSF and BISF, and figures 11 and 12 show the corresponding output for the longitude component. Likewise, when HTF and GF are used for training the BPN-ANN, the output for the latitude component is given in figures 13 and 14, and the output for the longitude component using HTF and GF is shown in figures 15 and 16. From figures 7-16 it is found that in the initial period the corrected INS value is not very close to the GPS value, but it becomes closer as time increases. Moreover, from figures 10, 12, 14 and 16 it is found that the corrected INS value obtained using BISF and GF in the BPN-ANN is nearly equal to the target GPS value. After each epoch, the error between the original GPS position and the corrected INS position is calculated; the error value varies from epoch to epoch, and as the error reduces, the performance of the system increases.
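The per-epoch error reported here is the mean square error between the original GPS positions and the corrected INS positions; a minimal sketch (names illustrative):

```python
# Mean square error between the GPS positions and the corrected INS positions,
# computed after each training epoch as described in the text.
def mean_square_error(gps, corrected_ins):
    return sum((g - c) ** 2 for g, c in zip(gps, corrected_ins)) / len(gps)

# E.g., comparing two GPS samples against corrected INS values:
mse = mean_square_error([26.6, 30.9], [26.5, 30.7])  # ≈ 0.025
```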

3.3. Performance analysis


Figure 17. Performance measure curve for RBF-ANN

Figure 18. Performance measure curve for BPN-ANN using BPSF


Figure 20. Performance measure curve for BPN-ANN using HTF

Figure 21. Performance measure curve for BPN-ANN using GF

TABLE I. MEAN SQUARE ERROR AND CORRECTED INS VALUE OF RBF-ANN

Epoch no   MSE       CINS1     CINS2     CINS3     CINS4     CINS5
1          21.7720   26.6000   30.9000   35.3000   39.8000   44.4000
3          18.9020   26.7000   31.1000   35.6000   40.2000   44.9000
5          16.2520   26.8000   31.3000   35.9000   40.6000   45.4000
7          13.8220   26.9000   31.5000   36.2000   41.0000   45.9000
9          11.6120   27.0000   31.7000   36.5000   41.4000   46.4000
11          9.6220   27.1000   31.9000   36.8000   41.8000   46.9000
13          7.8520   27.2000   32.1000   37.1000   42.2000   47.4000
15          6.3020   27.3000   32.3000   37.4000   42.6000   47.9000
17          4.9720   27.4000   32.5000   37.7000   43.0000   48.4000
19          3.8620   27.5000   32.7000   38.0000   43.4000   48.9000
21          2.9720   27.6000   32.9000   38.3000   43.8000   49.4000
23          2.3020   27.7000   33.1000   38.6000   44.2000   49.9000


When the BISF and GF activation functions are used, the behaviour is similar to that of the RBF-ANN, but the time consumed for evaluation is greater than for the RBF-ANN and for the BPN-ANN using BPSF and HTF. However, the mean square error is greatly reduced in the BPN-ANN using BISF and GF when compared with the RBF-ANN and with the BPN-ANN using BPSF and HTF.

TABLE II. CORRECTED INS VALUE USING BPSF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
2          28.0101   26.8645   25.9669   22.5468   16.1978
4          28.2272   28.9418   32.6871   38.4864   45.9158
6          28.4221   30.5910   36.8042   44.3846   49.7462
8          28.5966   31.5966   38.7474   44.9641   49.9775

TABLE III. CORRECTED INS VALUE USING BISF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
20         29.1317   31.3365   33.6805   36.1497   38.7557
40         29.5213   33.0987   36.9259   40.9815   45.2547
60         29.7377   34.0401   38.5779   43.3048   48.1828
80         29.8568   34.5224   39.3582   44.3086   49.3322
100        29.9219   34.7641   39.7136   44.7220   49.7584
120        29.9575   34.8839   39.8729   44.8889   49.9131

TABLE IV. CORRECTED INS VALUE USING HTF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
2          28.0082   26.8434   25.8957   22.4141   15.9828
4          28.2255   28.9247   32.6364   38.3924   45.6875
6          28.4205   30.5777   36.7777   44.3711   49.7399
8          28.5952   31.8392   38.7362   44.9633   49.9770

The numerical values of the corrected INS and mean square error obtained during different epochs for the latitude component using the RBF-ANN are shown in table I. The corrected INS values obtained during different epochs of BPN-ANN training using the different activation functions for the latitude component are given in tables II, III, IV and V, respectively. The mean square error values obtained using the various activation functions in the BPN-ANN for the latitude component are given in tables VI and VII. The bias updates and weight updates in the RBF-ANN for the latitude component are given in tables VIII and IX. Some of the weight updates between the input layer and hidden layer of the BPN-ANN for the latitude component are shown in tables X and XI. Similarly, the values of the weights between the hidden layer and the output layer are given in tables XII and XIII.

TABLE V. CORRECTED INS VALUE USING GF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
20         29.1396   31.3535   33.6962   36.1710   38.7813
40         29.5257   33.1080   36.9343   40.9925   45.2674
60         29.7402   34.0449   38.5819   43.3098   48.1880
80         29.8581   34.5248   39.3600   44.3107   49.3342
100        29.9227   34.7653   39.7145   44.7229   49.7591
120        29.9579   34.8845   39.8733   44.8892   49.9134

TABLE VI. MEAN SQUARE ERROR VALUE USING BPSF AND HTF IN BPN-ANN

Epoch No   BPSF       HTF
1          147.3969   146.2281
2          382.7624   387.3440
4           30.4861    31.3078
6            6.5171     6.5796


TABLE VII. MEAN SQUARE ERROR VALUE USING BISF AND GF IN BPN-ANN

Epoch No   BISF       GF
1          133.3434   182.2500
20          51.7744    51.5170
40          10.3921    10.3321
60           1.8377     1.8261
80           0.3169     0.3148
100          0.0559     0.0555
120          0.0103     0.0102

TABLE VIII. BIAS UPDATES DURING EACH EPOCH OF RBF-ANN

Epoch No   Bias (W0)   Epoch No   Bias (W0)
1          26.6000     13         27.2000
3          26.7000     15         27.3000
5          26.8000     17         27.4000
7          26.9000     19         27.5000
9          27.0000     21         27.6000
11         27.1000     23         27.7000
12         27.1500     24         27.7500

TABLE IX. WEIGHT UPDATES IN OUTPUT LAYER DURING EACH EPOCH OF RBF-ANN TRAINING

Epoch No   W1       W2       W3       W4       W5
1          4.1000   4.3000   4.4000   4.5000   4.6000
3          4.2000   4.4000   4.5000   4.6000   4.7000
5          4.3000   4.5000   4.6000   4.7000   4.8000
7          4.4000   4.6000   4.7000   4.8000   4.9000
9          4.5000   4.7000   4.8000   4.9000   5.0000
11         4.6000   4.8000   4.9000   5.0000   5.1000
13         4.7000   4.9000   5.0000   5.1000   5.2000
15         4.8000   5.0000   5.1000   5.2000   5.3000
17         4.9000   5.1000   5.2000   5.3000   5.4000
19         5.0000   5.2000   5.3000   5.4000   5.5000
21         5.1000   5.3000   5.4000   5.5000   5.6000
23         5.2000   5.4000   5.5000   5.6000   5.7000
24         5.2500   5.4500   5.5500   5.6500   5.7500

TABLE X. WEIGHT UPDATES AT HIDDEN UNIT (Wij) USING BPSF IN BPN-ANN

Epoch No   W11       W12       W13       W14       W15
2          -1.5500   -1.5500   -1.5500   -1.5500    -1.5500
4          -1.5500   -1.5500   -1.5500   -1.5501    -1.5501
6          -1.5500   -1.5500   -1.5500   -1.5565    -1.5862
8          -1.5500   -1.5500   -1.5525   -2.8885   -15.3641


Based on the simulation results and performance analysis, the analysis charts shown in figures 22 and 23 are obtained. The values of the analysis charts are given in table XXVII.

TABLE XI. WEIGHT UPDATES AT HIDDEN UNIT (Wij) USING HTF IN BPN-ANN

Epoch No   W11       W12       W13       W14       W15
2          -1.5500   -1.5500   -1.5500   -1.5500    -1.5500
4          -1.5500   -1.5500   -1.5500   -1.5501    -1.5501
6          -1.5500   -1.5500   -1.5500   -1.5563    -1.5728
8          -1.5500   -1.5500   -1.5523   -2.8380   -10.2265

TABLE XII. WEIGHT UPDATES AT OUTPUT UNIT (Wjk) USING BPSF IN BPN-ANN

Epoch No   W11      W21       W31       W41       W51
2          0.5201    0.8972    0.9501    1.6011    2.5782
4          0.4115    0.0663   -1.0660   -2.1494   -3.1368
6          0.3141   -0.5934   -2.3011   -3.5372   -3.8734
8          0.2268   -1.0967   -2.8841   -3.6735   -3.9179

TABLE XIII. WEIGHT UPDATES AT OUTPUT UNIT (Wjk) USING HTF IN BPN-ANN

Epoch No   W11      W21       W31       W41       W51
2          0.5211    0.9072    0.9735    1.6341    2.6211
4          0.4124    0.0747   -1.0487   -2.1255   -3.0914
6          0.3149   -0.5865   -2.2911   -3.5323   -3.8707
8          0.2276   -1.0911   -2.8787   -3.6716   -3.9163

TABLE XIV. MEAN SQUARE ERROR AND CORRECTED INS VALUE OF RBF-ANN

Epoch no   MSE       CINS1     CINS2     CINS3     CINS4     CINS5
1          20.5367   57.5000   61.5000   57.0000   60.5012   74.9060
3          17.7279   57.6000   61.7000   57.3000   60.9012   75.4061
5          15.1391   57.7000   61.9000   57.6000   61.3012   75.9061
7          12.7704   57.8000   62.1000   57.9000   61.7013   76.4062
9          10.6217   57.9000   62.3000   58.2000   62.1013   76.9063
11          8.6930   58.0000   62.5000   58.5000   62.5013   77.4063
13          6.9844   58.1000   62.7000   58.8000   62.9014   77.9064
15          5.4958   58.2000   62.9000   59.1000   63.3014   78.4065
17          4.2272   58.3000   63.1000   59.4000   63.7014   78.9065
19          3.1787   58.4000   63.3000   59.7000   64.1015   79.4066
21          2.3502   58.5000   63.5000   60.0000   64.5015   79.9067
22          2.0185   58.5500   63.6000   60.1500   64.7015   80.1567

TABLE XV. CORRECTED INS VALUE USING BPSF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
2          56.2497   54.5829   54.9542   52.0761   39.3211
4          57.0489   59.6006   59.8637   67.1881   78.9967


TABLE XVI. CORRECTED INS VALUE USING BISF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
10         57.9316   60.0274   58.3328   60.8993   66.7383
20         58.8668   62.3553   59.5471   63.7951   73.5936
30         59.3839   63.6202   60.2165   65.3561   77.0738
40         59.6665   64.2875   60.5798   66.1674   78.6997
50         59.8199   64.6341   60.7753   66.5810   79.4295
60         59.9028   64.8126   60.8800   66.7899   79.7511
65         59.9287   64.8660   60.9124   66.8513   79.8358

TABLE XVII. CORRECTED INS VALUE USING HTF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
2          56.2459   54.5653   54.9820   52.1732   39.0773
4          57.0459   59.5904   59.8700   67.1852   79.2812
6          57.6832   62.3643   60.8211   67.0036   79.1610

TABLE XVIII. CORRECTED INS VALUE USING GF IN BPN-ANN

Epoch no   CINS1     CINS2     CINS3     CINS4     CINS5
10         57.9401   60.0456   58.3566   60.9039   66.7104
20         58.8715   62.3654   59.5604   63.7976   73.5786
30         59.3865   63.6256   60.2237   65.3574   77.0666
40         59.6679   64.2903   60.5837   66.1681   78.6965
50         59.8206   64.6355   60.7774   66.5814   79.4280
65         59.9290   64.8665   60.9132   66.8514   79.8353

TABLE XIX. MEAN SQUARE ERROR VALUE USING BPSF AND HTF IN BPN-ANN

Epoch No   BPSF       HTF
1          159.1576   159.2737
2          407.3262   410.7380
4            8.0391     7.9638
6            2.7357     2.6100

TABLE XX. MEAN SQUARE ERROR VALUE USING BISF AND GF IN BPN-ANN

Epoch No   BISF       GF
1          161.5676   153.4500
10          49.8424    49.9105
20          12.3407    12.3553
30           2.8325     2.8342
40           0.6359     0.6357
50           0.1436     0.1434
65           0.0160     0.0159

TABLE XXI. BIAS UPDATES DURING EACH EPOCH OF RBF-ANN

Epoch No   Bias (W0)   Epoch No   Bias (W0)
1          57.5000     13         58.1000
3          57.6000     15         58.2000
5          57.7000     17         58.3000
7          57.8000     19         58.4000
9          57.9000     21         58.5000


TABLE XXII. WEIGHT UPDATES AT HIDDEN UNIT (Wij) USING BPSF IN BPN-ANN

Epoch No   W11       W12       W13       W14       W15
1           0.1112    0.3285   -0.0344    0.4020    0.0630
2          -1.5500   -1.5500   -1.5500   -1.5500   -1.5499
4          -1.5500   -1.5500   -1.5500   -1.5500   -1.5472
6          -1.5500   -1.5500   -1.5500   -1.5520   -0.3657

TABLE XXIII. WEIGHT UPDATES AT HIDDEN UNIT (Wij) USING HTF IN BPN-ANN

Epoch No   W11       W12       W13       W14       W15
1           0.1112    0.3285   -0.0344    0.4020    0.0630
2          -1.5500   -1.5500   -1.5500   -1.5500   -1.5500
4          -1.5500   -1.5500   -1.5500   -1.5500   -1.5472
6          -1.5500   -1.5500   -1.5500   -1.5519   -0.3506

TABLE XXIV. WEIGHT UPDATES AT OUTPUT UNIT (Wjk) USING BPSF IN BPN-ANN

Epoch No   W11       W21       W31       W41       W51
1           0.2541    0.2892    0.3482    0.0505   -0.3162
2           0.4348    1.1848    0.5937    1.2531    3.7073
4           0.0352   -0.8223   -0.8792   -2.3027   -3.9226
6          -0.2832   -1.9299   -1.1661   -2.2593   -3.8903

TABLE XXV. WEIGHT UPDATES AT OUTPUT UNIT (Wjk) USING HTF IN BPN-ANN

Epoch No   W11       W21       W31       W41       W51
1           0.2541    0.2892    0.3482    0.0505   -0.3162
2           0.4383    1.1934    0.5847    1.2288    3.7562
4           0.0383   -0.8166   -0.8817   -2.3034   -3.9754
6          -0.2804   -1.9262   -1.1671   -2.2607   -3.9522

TABLE XXVI. WEIGHT UPDATES IN OUTPUT LAYER DURING EACH EPOCH OF RBF-ANN TRAINING

Epoch No   W1       W2       W3        W4       W5
1          3.5000   4.0000   -4.5000   3.5000   14.4000
3          3.6000   4.1000   -4.4000   3.6000   14.5000
5          3.7000   4.2000   -4.3000   3.7000   14.6000
7          3.8000   4.3000   -4.2000   3.8000   14.7000
9          3.9000   4.4000   -4.1000   3.9000   14.8000
11         4.0000   4.5000   -4.0000   4.0000   14.9000
13         4.1000   4.6000   -3.9000   4.1000   15.0000
15         4.2000   4.7000   -3.8000   4.2000   15.1000
17         4.3000   4.8000   -3.7000   4.3000   15.2000
19         4.4000   4.9000   -3.6000   4.4000   15.3000
21         4.5000   5.0000   -3.5000   4.5000   15.4000
22         4.5500   5.0500   -3.4500   4.5500   15.4500

TABLE XXVII. VALUES OF ANALYSIS

Criteria                    RBF      BPN-BPSF   BPN-BISF   BPN-HTF   BPN-GF
MSE (latitude)              2.0495   2.6925     0.0103     2.7126    0.0102
MSE (longitude)             2.0185   2.7357     0.0160     2.6100    0.0159
No. of epochs (latitude)    24       8          120        8         120
No. of epochs (longitude)   22       6          65         6         65


Figure 22. MSE analysis chart

Figure 23. Epochs analysis chart

Thus, feed-forward neural networks such as RBF and BPN were analysed based on the performance they exhibited during experimentation.

4. Conclusion


References

[1] J. Farrell, The Global Positioning System and Inertial Navigation, McGraw-Hill Professional, 1998.

[2] D.K. Mynbaev, "Errors of an inertial navigation unit caused by ring laser gyro errors," in Proceedings of the IEEE Position Location and Navigation Symposium, 1994, pp. 833-838.

[3] G. Dissanayake and S. Sukkarieh, "The aiding of a low-cost strapdown inertial measurement unit using vehicle model constraints for land vehicle applications," IEEE Trans. Robot. Automat., vol. 17, no. 5, pp. 731-747, 2001.

[4] H. Basil, M. Anathasayanam, and S. Puri, "Adaptive Kalman Filter Tuning in Integration of Low-Cost MEMS-INS/GPS," AIAA Guidance, Navigation and Control Conference, Providence, RI, Aug. 16-19, 2004.

[5] K.W. Chiang, A. Noureldin, and N. El-Sheimy, "Multi-sensors integration using neuron computing for land vehicle navigation," GPS Solutions, vol. 6, no. 3, pp. 591-600, 2003.

[6] N. El-Sheimy and W. Abdel-Hamid, "An adaptive neuro-fuzzy model to bridge GPS outages in MEMS-INS/GPS land vehicle navigation," ION GNSS 2004, Long Beach, CA, Sept. 21-24, 2004.

[7] A. Noureldin, A. Osman, and N. El-Sheimy, "A neuro-wavelet method for multi-sensor system integration for vehicular navigation," Meas. Sci. Technol., vol. 15, no. 2, pp. 404-412, Feb. 2004.

[8] R. Sharaf, A. Noureldin, A. Osman, and N. El-Sheimy, "Online INS/GPS Integration with a Radial Basis Function Neural Network," IEEE Aerospace and Electronic Systems Magazine, March 2005.

[9] R. Sharaf and A. Noureldin, "Sensor Integration for Satellite-Based Vehicular Navigation Using Neural Networks," IEEE Trans. Neural Networks, vol. 18, no. 2, March 2007.

 
