neural network

Top PDF documents for "neural network":

DYNAMIC APERIODIC NEURAL NETWORK FOR TIME SERIES PREDICTION

We ran all eight markets' stock index data through our myopic KAII neural network and obtained different results for different markets. Most of the predicti…

A Neural Network Approach to Time Series Forecasting

We present an improved algorithm, based on GRNN, for time series forecasting. GRNN is a neural network proposed by Donald F. Specht in 1991 [3]. This algorithm has a number of advantages over competing algorithms. GRNN is non-parametric: it makes no assumptions concerning the form of the underlying distributions. A major problem with the ARIMA and GARCH methodologies and the MLP algorithm is that they are global approximators, assuming that a single relationship fits all locations in an area. Unlike these algorithms, the GRNN is a local approximator, in which local models combine into heterogeneous forecasting models suited to local approximation. GRNN is also simpler than other existing algorithms: only one parameter needs to be specified (the smoothing factor σ, where 0 < σ ≤ 1), and our research suggests that performance is not very sensitive to σ. However, we face a dilemma when applying the GRNN to the time series forecasting task. If we provide only the most recent past value, the GRNN generates the smallest forecasting error but does not accurately forecast the direction of change. On the other hand, if we provide multiple past observations, the GRNN can forecast the direction of change correctly, but the forecasting error grows roughly in proportion to the number of input values. In order to overcome this problem, we propose a derivative of the GRNN, which we call the GRNN ensemble. With the MLP, ARIMA, and GARCH methodologies, trial-and-error methods must be applied to secure the best-fitting model. The advantage of the proposed algorithm is that it is very simple to implement: neither a trial-and-error process nor prior knowledge about the parameters is required.
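
A minimal sketch of the underlying mechanism may help: the standard GRNN estimate is a Gaussian-kernel weighted average of past targets, governed only by σ. The plain-numpy code below builds lagged inputs, computes that estimate, and averages forecasts over a few hypothetical lag counts to mimic the ensemble idea; the lag choices, σ value, and averaging rule are illustrative assumptions, not the paper's GRNN ensemble.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=0.5):
    """Standard GRNN / Nadaraya-Watson estimate with smoothing factor sigma."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)      # squared distances to stored patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))               # Gaussian kernel weights
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)    # weighted average of stored targets

def make_lagged(series, n_lags):
    """Build (X, y) pairs where X holds the n_lags most recent past values."""
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    y = np.array(series[n_lags:])
    return X, y

# Toy usage: average single-lag and multi-lag GRNN forecasts (ensemble-style).
series = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
forecasts = []
for n_lags in (1, 3, 5):                               # hypothetical lag choices
    X, y = make_lagged(series[:-1], n_lags)
    forecasts.append(grnn_predict(X, y, series[-1 - n_lags:-1], sigma=0.5))
print("ensemble forecast:", np.mean(forecasts))
```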

A Neural Network Model for Forecasting CO2 Emission

The neural network model chosen for estimating the pollutant emission into the atmosphere has an MLP feed-forward structure, with the Levenberg-Marquardt learning algorithm and a sigmoid activation function. The Levenberg-Marquardt training algorithm was adopted because the classic steepest-descent approach converges rather slowly to an absolute minimum due to its reliance on the gradient alone. The algorithm also uses information about the Hessian of the error function without computing it explicitly, and is therefore particularly fast when the number of inputs is not high. The main features of the chosen neural network model are:
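
To make the training choice concrete, here is a rough numpy sketch of Levenberg-Marquardt training for a one-hidden-layer sigmoid MLP: the Gauss-Newton term JᵀJ stands in for the Hessian, so the Hessian is never computed explicitly, and a damping factor mu blends the step between gradient descent and Gauss-Newton. The network size, damping schedule, and finite-difference Jacobian are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, X, n_in, n_hid):
    """One-hidden-layer sigmoid MLP with a linear output; weights live in a flat vector."""
    k = n_hid * (n_in + 1)
    W1 = w[:k].reshape(n_hid, n_in + 1)
    W2 = w[k:].reshape(1, n_hid + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = sigmoid(Xb @ W1.T)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    return (Hb @ W2.T).ravel()

def train_lm(X, y, n_hid=5, iters=100, mu=1e-2, eps=1e-6):
    n_in = X.shape[1]
    n_w = n_hid * (n_in + 1) + (n_hid + 1)
    w = 0.1 * np.random.randn(n_w)
    for _ in range(iters):
        pred = forward(w, X, n_in, n_hid)
        r = y - pred                                    # residuals
        J = np.empty((len(y), n_w))                     # finite-difference Jacobian of outputs
        for j in range(n_w):
            wp = w.copy(); wp[j] += eps
            J[:, j] = (forward(wp, X, n_in, n_hid) - pred) / eps
        # Levenberg-Marquardt step: (J^T J + mu I) dw = J^T r
        dw = np.linalg.solve(J.T @ J + mu * np.eye(n_w), J.T @ r)
        if np.sum((y - forward(w + dw, X, n_in, n_hid)) ** 2) < np.sum(r ** 2):
            w, mu = w + dw, mu * 0.7                    # accept step, relax damping
        else:
            mu *= 2.0                                   # reject step, damp harder
    return w
```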

AUTOMATED EDGE DETECTION USING CONVOLUTIONAL NEURAL NETWORK

Convolutional Neural Networks (CNNs) are variants of MLPs inspired by biology. From Hubel and Wiesel's early work on the cat's visual cortex, we know there exists a complex arrangement of cells within the visual cortex. These cells are sensitive to small sub-regions of the input space, called receptive fields, and are tiled in such a way as to cover the entire visual field. The corresponding filters are local in input space and are thus better suited to exploit the strong spatially local correlation present in natural images. Additionally, two basic cell types have been identified: simple cells (S) and complex cells (C). Simple cells respond maximally to specific edge-like stimulus patterns within their receptive field. Complex cells have larger receptive fields and are locally invariant to the exact position of the stimulus.
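
The locality described above is exactly what a small convolution kernel implements: each output value depends only on a small receptive field of the input, much like a simple cell responding to an edge-like pattern. The sketch below applies a Sobel-style kernel to a toy image in plain numpy; the kernel and image are placeholders for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: each output pixel sees only a small receptive field."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    flipped = kernel[::-1, ::-1]                  # convolution flips the kernel
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)     # responds to vertical edges
image = np.zeros((8, 8)); image[:, 4:] = 1.0      # toy image: dark left, bright right
edges = conv2d(image, sobel_x)
print(edges)                                      # strongest response along the edge
```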

3D Convolutional Neural Network for Liver Tumor Segmentation

As discussed above, large datasets were hard to come by. In 2009, ImageNet [8] was presented as a solution to this problem. The project aimed to give the scientific community a centralized dataset that could serve as a benchmark for new Computer Vision applications. It contained 3.2 million full-resolution images, subdivided into multiple trees, and could be used for a variety of problems, including image classification, object detection, and group clustering. An annual classification challenge accompanied the new dataset. Many different approaches were taken until, in 2012, a Convolutional Neural Network (CNN) called AlexNet [16] won by a significant margin.

Comparison of Statistical and Neural Network Techniques in Predicting Physical Properties of Various Mixtures of Diesel and Biodiesel

The combination of neural network architecture, training algorithm, weights, and biases that reached the minimum goal was selected as the desired neural network. To check its validity, the blend properties of fresh samples were predicted using the selected neural network and compared with the experimental measurements. The selected neural network was further generalized using early stopping.
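
As a hedged illustration of that generalization step (not the authors' setup), the sketch below uses scikit-learn's built-in early stopping, which holds out part of the training data as a validation set and halts training once the validation score stops improving; the data, network size, and tolerances are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data standing in for measured blend properties (hypothetical features).
X = np.random.rand(200, 3)                  # e.g. biodiesel fraction, temperature, density
y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * np.random.randn(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# early_stopping=True holds out validation_fraction of the training data and stops
# once the validation score has not improved for n_iter_no_change iterations.
net = MLPRegressor(hidden_layer_sizes=(10,), early_stopping=True,
                   validation_fraction=0.15, n_iter_no_change=20,
                   max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("held-out R^2:", net.score(X_test, y_test))
```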

Sentence Recognition Using Hopfield Neural Network

In their paper, Jones et al. [6] discuss how language modeling can be used to solve the problem of sentence recognition; they use a probabilistic grammar along with a Hidden Markov identifier to complete this task. In [7], an algorithm is proposed that describes a framework for classifier combination in grammar-guided sentence recognition. Hybrid techniques have also been used for this problem. In [8], a Hidden Markov Model (HMM) and a Neural Network (NN) model are combined: word recognition uses a tree-structured dictionary, while sentence recognition uses a word-predecessor-conditioned beam search algorithm to segment the input into words and recognize them. In [9], sentence recognition is achieved using template-based pattern recognition that represents words as a series of diphone-like segments. In [10], word co-occurrence probability is used for sentence recognition, and the results are compared with a method based on a context-free grammar. Binary neural networks have also been successfully used for pattern recognition: a binary Hamming neural network has been applied to recognize sentences and found to be sufficiently successful in this regard, and the proposed system exploits the greater speed of binary networks to provide a very efficient solution to the sentence recognition problem [11]. David and Rajsekaran discuss how Hopfield classifiers can be used as a tool in pattern classification [12]. A combination of a Hopfield neural network and a back-propagation approach has also been used to propose a method for vehicle license-plate character recognition, supported by a study of the relations among the learning rate, error precision, and number of hidden-layer nodes [13].

Facial Expression Classification Based on Multi Artificial Neural Network and Two Dimensional Principal Component Analysis

Multi Artificial Neural Network (MANN), applied to pattern or image classification with parameters (m, L), has m Sub-Neural Networks (SNNs) and a global frame (GF) consisting of L Component Neural Networks (CNNs). In particular, m is the number of feature vectors of the image and L is the number of classes.

Neural Network Based Model Refinement

Refining the neural network reduces the number of inputs necessary for estimation. If the network is not further used for estimation, the remaining variables are used as input for model generation, using a model generator from a certain model class. If a neural network is also used for estimation, a second network is built with the variables obtained from refinement as its inputs; it is trained, and validation is then performed by comparing the initial network's error with the second network's error to see whether the second network's performance is acceptable.
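
A minimal sketch of that comparison step, assuming a scikit-learn regressor, placeholder data, a hypothetical list of retained inputs, and an illustrative acceptance tolerance:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data with 8 candidate input variables (hypothetical).
X = np.random.rand(300, 8)
y = X[:, 0] + 2.0 * X[:, 2] + 0.1 * np.random.randn(300)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def validation_error(cols):
    """Train a network on the selected input columns and return its validation MSE."""
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(X_tr[:, cols], y_tr)
    return mean_squared_error(y_val, net.predict(X_val[:, cols]))

all_inputs = list(range(X.shape[1]))
refined_inputs = [0, 2]                      # hypothetical outcome of the refinement step

err_initial = validation_error(all_inputs)
err_refined = validation_error(refined_inputs)
# Accept the refined network only if its error stays acceptable,
# here within 10% of the initial network's error (illustrative tolerance).
print(err_initial, err_refined, err_refined <= 1.10 * err_initial)
```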

Artificial Neural Network: A Brief Overview

Unsupervised learning is a machine learning technique that sets the parameters of an artificial neural network based on given data and a cost function that is to be minimized. The cost function can be any function and is determined by the task formulation. Unsupervised learning is mostly used in applications that fall within the domain of estimation problems, such as statistical modeling, compression, filtering, blind source separation, and clustering [14]. In unsupervised learning we seek to determine how the data is organized. It differs from supervised learning and reinforcement learning in that the artificial neural network is given only unlabeled examples.

An Artificial Neural Network for Data Forecasting Purposes

Considering the fact that markets are generally influenced by different external factors, stock market prediction is one of the most difficult tasks of time series analysis. The research reported in this paper investigates the potential of artificial neural networks (ANN) for solving the forecasting task in the most general case, when the time series is non-stationary. We used a feed-forward neural architecture: the nonlinear autoregressive network with exogenous inputs (NARX). The network training function used to update the weight and bias parameters corresponds to the gradient descent with adaptive learning rate variant of the backpropagation algorithm. The results obtained with this technique are compared with those produced by ARIMA models, using the mean square error (MSE) to evaluate the performance of the two models. The comparative analysis leads to the conclusion that the proposed model can be successfully applied to forecast financial data.
Keywords: Neural Network, Nonlinear Autoregressive Network, Exogenous Inputs, Time Series, ARIMA Model
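
For concreteness, the sketch below builds the lagged design matrix behind a NARX-style feed-forward forecaster and trains it with scikit-learn's SGD solver, using its 'adaptive' learning-rate option as a stand-in for the adaptive-learning-rate backpropagation variant; the lag orders, network size, and synthetic series are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

def narx_design(y, x, ny=3, nx=2):
    """Rows of [y(t-1..t-ny), x(t-1..t-nx)] used to predict y(t)."""
    start = max(ny, nx)
    rows, targets = [], []
    for t in range(start, len(y)):
        rows.append(np.concatenate([y[t - ny:t], x[t - nx:t]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Placeholder non-stationary series with one exogenous input.
t = np.arange(500)
x = np.sin(0.05 * t)                                      # exogenous driver
y = 0.002 * t + 0.8 * x + 0.05 * np.random.randn(500)     # trend + response + noise

X, target = narx_design(y, x, ny=3, nx=2)
split = int(0.8 * len(X))

net = MLPRegressor(hidden_layer_sizes=(12,), solver='sgd',
                   learning_rate='adaptive', learning_rate_init=0.01,
                   max_iter=5000, random_state=0)
net.fit(X[:split], target[:split])
print("test MSE:", mean_squared_error(target[split:], net.predict(X[split:])))
```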

Computational Neural Network for Global Stock Indexes Prediction

Recently, researchers have focused on using artificial intelligence and data mining techniques to analyze historical data and recognize subtle relationships between variables in the financial market. The neural network methodology is good at pattern recognition, generalization, and predicting trends. It can tolerate imperfect data and does not require formulas or rules, and it can determine relationships between variables and detect relevant patterns in the data. In this paper, the neural network technique is used to build a neural network model for predicting the major stock market indexes in the United States, Europe, China, and Hong Kong.

Developing a Neural Network based Index for Sentiment Classification

Our proposed method uses four different SO indexes as input neurons, and we are interested in which of them is critical for classifying sentiment. Structure pruning of the NN might provide a possible solution: during the training of neural networks, some input nodes may be considered irrelevant and removed. This is the approach in [18], where the input attributes are pruned rather than the hidden neurons. Su et al. [16, 17] attempted to determine the important input nodes of a neural network based on the sum of absolute multiplication values of the weights between the layers. Only the multiplied weights with large absolute values are kept, and the rest are removed. The equation for calculating the sum of absolute multiplication values is defined as follows.
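
The equation itself is cut off in this excerpt. As an assumption about its general shape rather than the authors' definition, a common weight-product importance measure sums, over the hidden units, the absolute product of each input-to-hidden weight with the corresponding hidden-to-output weight; a small sketch:

```python
import numpy as np

def input_importance(W1, W2):
    """Sum over hidden units of |w_ih * w_ho| for each input node i.

    W1: (n_inputs, n_hidden) input-to-hidden weights
    W2: (n_hidden,)          hidden-to-output weights (single output neuron)
    """
    return np.sum(np.abs(W1 * W2[np.newaxis, :]), axis=1)

# Toy example with 4 input nodes (e.g. the four SO indexes) and 3 hidden units.
W1 = np.array([[ 0.90, -0.70,  0.40],
               [ 0.10,  0.05, -0.10],
               [-0.60,  0.80,  0.30],
               [ 0.02, -0.03,  0.01]])
W2 = np.array([1.2, -0.9, 0.5])

scores = input_importance(W1, W2)
keep = scores >= 0.2 * scores.max()   # prune inputs with small products (illustrative threshold)
print(scores, keep)
```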

Using Neural Network for DJIA Stock Selection

This paper shows that GGAP-RBF has much higher time complexity than MLP and ANFIS. Moreover, GGAP-RBF does not outperform MLP and ANFIS in recall rate. The paper also shows that there is a positive relationship between the predictions of the trained networks and equity appreciation, which may result in better earnings for investment. A systematic equity selection approach based on the ROC curve is proposed. As investors may want to focus on a limited number of equities, we can choose equities based on the strength of the predicted output values from the neural network. We demonstrate that the higher the predicted value, the higher the chance of a positive appreciation.

A Note on Hopfield Neural Network Stability

[21] J. Wang, X. H. Tang and X. J. Lin, “Recoverable Watermarking Algorithm for Text Authentication and Synonym Replacement Based on Hopfield Neural Network,” Pattern Recognition and Artificial Intelligence (in Chinese, English abstract), Vol. 28, No. 2, pp. 139–147, 2015.

Neural Network Based 3D Surface Reconstruction

In the left side of the symmetric neural network, the light source direction and the normal vector are separated from the input 2-D images; they are then combined inversely in the right side of the network to generate the reflectance map for diffuse reflection. The function of each layer is discussed in detail below.

BATCH GRADIENT METHOD FOR TRAINING OF PI-SIGMA NEURAL NETWORK WITH PENALTY

In this letter, we describe the convergence of a batch gradient method with a penalty term for a feed-forward neural network called the pi-sigma neural network, which employs product cells as output units to implicitly incorporate the capabilities of higher-order neural networks while using a minimal number of weights and processing units. As a rule, the penalty term is proportional to the norm of the weights. The monotonicity of the error function with the penalty term during the training iterations is first proved, and the weight sequence is shown to be uniformly bounded. The algorithm is applied to the 4-dimensional parity problem and the Gabor function problem to support our theoretical findings.
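
As a rough sketch of the setup described above (assuming a single sigmoid output and an L2 penalty with coefficient lam, which are illustrative choices rather than the letter's exact formulation), one batch gradient step for a pi-sigma network looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pi_sigma_forward(W, X):
    """Pi-sigma network: K linear summing units whose outputs are multiplied by a
    product cell, then passed through a sigmoid. W has shape (K, n_in + 1)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = Xb @ W.T                       # (N, K) summing-unit outputs
    P = np.prod(H, axis=1)             # product unit
    return sigmoid(P), H, Xb

def batch_gradient_step(W, X, t, lr=0.05, lam=1e-3):
    """One batch gradient step on squared error plus an L2 weight penalty."""
    y, H, Xb = pi_sigma_forward(W, X)
    delta = (y - t) * y * (1.0 - y)    # dE/dP for the sigmoid output
    grad = np.zeros_like(W)
    for k in range(W.shape[0]):
        others = np.prod(np.delete(H, k, axis=1), axis=1)   # product of the other units
        grad[k] = (delta * others) @ Xb
    grad += 2.0 * lam * W              # penalty term proportional to the weight norm
    return W - lr * grad

# Toy usage on the 4-dimensional parity problem (targets in {0, 1}).
X = np.array([[int(b) for b in format(i, '04b')] for i in range(16)], dtype=float)
t = X.sum(axis=1) % 2
W = 0.1 * np.random.randn(3, X.shape[1] + 1)     # K = 3 summing units (illustrative)
for _ in range(2000):
    W = batch_gradient_step(W, X, t)
```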

Research on Spatial Estimation of Soil Property Based on Improved RBF Neural Network

To find the optimal network parameters of a Radial Basis Function (RBF) Neural Network and improve its accuracy in spatially estimating soil properties, this study uses a genetic algorithm to optimize three network parameters of the RBF Neural Network: the number of hidden-layer nodes, the expansion speed (spread), and the root-mean-square error. Then, based on the optimized RBF Neural Network, spatial interpolation is conducted for arable soil properties under different sampling scales in the study area. The estimation result is superior to the unoptimized RBF Neural Network and to the geostatistical method in terms of fitting capacity and interpolation accuracy. Compared with the spatial estimates of the unoptimized RBF Neural Network, the forecast errors of the genetically optimized RBF Neural Network decrease greatly across the five schemes: the mean absolute error (MAE) is reduced by 0.4868 on average and the root-mean-square error (RMSE) by 1.492 on average. Therefore, the RBF Neural Network optimized by a genetic algorithm can capture the regional spatial variation of soil properties more accurately and provides technical support for arable land quality evaluation, accurate farmland management, and rational fertilizer application.
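
A compact sketch of the idea, assuming random RBF centers, least-squares output weights, validation RMSE as the fitness, and a toy genetic algorithm over two genes (number of hidden nodes and spread); all of these choices are illustrative rather than the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, spread):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def rbf_rmse(X_tr, y_tr, X_val, y_val, n_centers, spread):
    """Fit an RBF network (random centers, least-squares output weights)
    and return its validation RMSE, used here as the GA fitness."""
    idx = rng.choice(len(X_tr), size=min(int(n_centers), len(X_tr)), replace=False)
    centers = X_tr[idx]
    w, *_ = np.linalg.lstsq(rbf_design(X_tr, centers, spread), y_tr, rcond=None)
    pred = rbf_design(X_val, centers, spread) @ w
    return float(np.sqrt(np.mean((pred - y_val) ** 2)))

def genetic_search(X_tr, y_tr, X_val, y_val, pop_size=20, generations=15):
    """Tiny GA over two genes: number of hidden nodes and spread ('expansion speed')."""
    pop = [(rng.integers(5, 40), rng.uniform(0.1, 3.0)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: rbf_rmse(X_tr, y_tr, X_val, y_val, *g))
        parents = scored[:pop_size // 2]                  # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            child = [parents[a][0], parents[b][1]]        # crossover: swap genes
            if rng.random() < 0.3:                        # mutation
                child[0] = int(np.clip(child[0] + rng.integers(-5, 6), 5, 40))
                child[1] = float(np.clip(child[1] + rng.normal(0, 0.3), 0.05, 5.0))
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda g: rbf_rmse(X_tr, y_tr, X_val, y_val, *g))

# Toy usage on synthetic 2-D "soil property" samples (placeholder data).
X = rng.uniform(0, 10, size=(200, 2))                     # sample coordinates
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1]) + 0.1 * rng.normal(size=200)
print("best (n_hidden, spread):", genetic_search(X[:150], y[:150], X[150:], y[150:]))
```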

Accelerating the training of convolutional neural network

This issue can be mitigated by reducing the desired range of applications and only considering problems that deal with images. By applying some techniques from image processing, such as convolutions, which use a reduced number of parameters, and pooling, which reduces spatial dimensions, one of the first CNNs was created in 1999 [1]. A deeper architecture, proposed in 2012 [2], won the ImageNet competition, proving the relevance not only of this type of neural network but also of Deep Learning (DL). Ever since then, various types of CNNs have been state-of-the-art methods for any type of problem dealing with images, as attested in chapter 3.

DESIGN AND ANALOG VLSI IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK

In this paper we use the backpropagation algorithm [5]-[6] as the training algorithm for the proposed neural network. The back-propagation network (BPN) is the best-known example of a parametric method for training a supervised multi-layer perceptron neural network for classification. Like other supervised multi-layer feed-forward neural network (SMNN) models, BPN has the ability to learn weights and biases. It is a powerful method for control or classification that uses data to adjust the network weights and thresholds so as to minimize the error of its predictions on the training set. Learning in BPN employs gradient-based optimization in two basic steps: calculating the gradient of the error function, and updating the weights using that gradient.
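
A minimal numpy sketch of those two steps for a one-hidden-layer sigmoid network, computing the error gradient by backpropagation and then moving the weights against it; the XOR data, layer sizes, and learning rate are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, X, t, lr=0.5):
    """One batch of backpropagation for a 1-hidden-layer sigmoid network."""
    # Forward pass (a column of ones serves as the bias input at each layer)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = sigmoid(Xb @ W1)                        # hidden activations, (N, n_hidden)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    y = sigmoid(Hb @ W2)                        # outputs, (N, 1)

    # Step 1: gradient of E = 0.5 * sum((y - t)^2), propagated backwards
    delta_out = (y - t) * y * (1.0 - y)                   # error signal at the output
    delta_hid = (delta_out @ W2[:-1].T) * H * (1.0 - H)   # drop the bias row of W2
    grad_W2 = Hb.T @ delta_out
    grad_W1 = Xb.T @ delta_hid

    # Step 2: move the weights against the gradient to reduce training error
    return W1 - lr * grad_W1, W2 - lr * grad_W2

# Toy usage: learn XOR as a simple classification example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
W1 = np.random.randn(3, 4)     # (n_inputs + bias) x n_hidden
W2 = np.random.randn(5, 1)     # (n_hidden + bias) x 1
for _ in range(5000):
    W1, W2 = backprop_step(W1, W2, X, t)
```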