The honey bee is classified into 178 species across ten genus-group names [24]. Their social structure, the quality of the nectar they collect and their role in crop fertilization are among the properties that make these species so well known. Owing to differences in geographical location and weather patterns, honey bees vary in colour, shape and nature, but irrespective of those differences they perform some basic jobs on a daily basis. The most common are foraging (finding new food sources) and sharing information about those sources. A food source is selected on the basis of several parameters: the quality of its nectar, its distance from the colony and the quantity of nectar available. Honey bees are also categorized according to their assignment: a colony contains two main types of bee, 1) employed bees and 2) unemployed bees. An unemployed bee does not know the location of any food source; it either searches for food randomly or attends a food source using knowledge gained from the waggle dance. The scout bee is one kind of unemployed bee, which starts searching for a food source without any prior knowledge; scouts typically make up 5-30% of the total population. The other kind of unemployed bee is the onlooker bee, which starts to find a food source using the knowledge gained from the waggle dance. Employed bees are successful onlooker bees that possess knowledge of a food source; they share this information with the other bees and guide them about the richness and direction of the source.
In reality, the guidance of the employed bees takes the form of their waggle dance in a specific dance area, and this dance area is the main information-sharing centre for all employed honey bees. The onlooker bee is the best-informed bee, since all of the information about food sources is available in the dance area, and on the basis of that information it selects the best source. Information sharing also depends on the quantity of food, so the recruitment of honey bees depends on the quantity and richness of a food source; when the amount of nectar decreases, an employed bee becomes unemployed and abandons the source. The algorithmic process of the artificial bee colony simulates this real-life scenario of searching for food sources and maintains the various types of bees involved in searching for and collecting nectar. The population of the algorithm represents the honey bees involved in collecting nectar, and it can be expressed as follows:
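Although the population expression itself is not reproduced in this excerpt, the standard ABC initialization (each food source x_ij = lb_j + rand(0,1)·(ub_j − lb_j)) and the employed-bee neighbourhood search can be sketched as follows; the function names and the greedy-selection detail are illustrative, not taken from this paper:

```python
import random

def init_population(sn, dim, lb, ub):
    # Each of the sn food sources is a candidate solution; standard ABC
    # initializes every component uniformly inside its bounds:
    # x_ij = lb_j + rand(0,1) * (ub_j - lb_j).
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(sn)]

def employed_bee_step(pop, i, fitness):
    # An employed bee perturbs one dimension of its source using a randomly
    # chosen neighbour k: v_ij = x_ij + phi * (x_ij - x_kj), phi in [-1, 1].
    dim = len(pop[i])
    j = random.randrange(dim)
    k = random.choice([m for m in range(len(pop)) if m != i])
    phi = random.uniform(-1.0, 1.0)
    v = pop[i][:]
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    # Greedy selection: keep whichever of x_i and v has the better fitness.
    return v if fitness(v) > fitness(pop[i]) else pop[i]
```

Onlooker bees would then choose among the sources with probability proportional to fitness, and scouts would re-initialize abandoned sources.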

Many other evolutionary approaches to optimizing neural networks can be found in the research literature. Research already shows that artificial intelligence, both on its own and in conjunction with other optimization techniques, has solved many challenging tasks. This paper uses PSO as a learning algorithm to find the initial weights and biases for an FFNN, improving the classification rate on the training and testing samples of benchmark databases. Finding the exact or optimal number of layers and of neurons per layer remains an open issue; choosing a proper neural network with optimal parameters is still based on trial and error, and most often it becomes a tedious job. In such cases, PSO combined with an FFNN at least finds weight and bias values that help the FFNN move closer to convergence [11], even starting from a wrong guess. In most cases, backpropagation alone tends to under- or over-fit a problem.
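A minimal PSO sketch of the idea above, searching a flattened weight/bias vector, follows; the loss function, particle count, and coefficients are illustrative assumptions, not the paper's actual settings:

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Returns the best vector found; `loss` would be the FFNN training
    error as a function of the flattened weights and biases."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                # personal best positions
    pbest_f = [loss(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Classic velocity update: inertia + cognitive + social pull.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = loss(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f
```

The returned `gbest` would then seed the FFNN's weights before a gradient-based fine-tuning pass.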

Another higher-order feed-forward polynomial neural network, the pi-sigma neural network (PSN), was introduced in [2]; it is known to provide naturally stronger mapping abilities than the traditional feed-forward neural network. Neural networks consisting of PSN modules have been used effectively in pattern classification and approximation problems [1,7,10,13]. There are two ways of updating the weights during training. In the first, batch training, the weights are updated after all the training patterns have been presented to the network [9]. In the second, online training, the weights are updated immediately after each training sample is fed in (see [3]). A penalty term is often inserted into network training algorithms and has been widely used to improve generalization performance, i.e. the capacity of a neural network to give correct outputs for untrained data, and to control the magnitude of the weights of the network [5,6,12]. In online training the weights can become very large and over-fitting tends to occur; adding a penalty term to the error function [4,8,11,14] acts as a brute force that drives dispensable weights to zero and prevents the weights from growing too large during training. The objective of this letter is to prove strong and weak convergence results for the training algorithm and to show that the generated weight sequence is uniformly bounded.
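A sketch of the two ingredients discussed above, a pi-sigma forward pass and an online gradient step with a penalty term, might look as follows; the quadratic (L2) form of the penalty and the sigmoid output are assumptions for illustration:

```python
import math

def pi_sigma_forward(x, W):
    # Pi-sigma unit: each summing unit computes h_k = sum_j W[k][j] * x[j];
    # the product of the summing units is passed through a sigmoid.
    prod = 1.0
    for row in W:
        prod *= sum(w * xj for w, xj in zip(row, x))
    return 1.0 / (1.0 + math.exp(-prod))

def sgd_step_with_penalty(w, grad, eta=0.1, lam=0.01):
    # Online gradient step on the penalized error E(w) + (lam/2) * ||w||^2:
    # the extra lam * w term shrinks dispensable weights toward zero and
    # keeps the weight sequence bounded.
    return [wi - eta * (gi + lam * wi) for wi, gi in zip(w, grad)]
```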

The Multilayer Feed-Forward Neural Network (MFNN) is a successfully applied architecture for solving a wide range of supervised pattern recognition tasks. The most problematic part of an MFNN is the training phase, which consumes a very long time on very large training datasets. An enhanced linear adaptive skipping training algorithm for MFNNs, called Half of Threshold (HOT), is proposed in this research paper. The core idea of this study is to reduce the training time through random presentation of training input samples without affecting the network's accuracy. This random presentation is achieved by partitioning the training dataset into two distinct classes, classified and misclassified, based on comparing the calculated error measure with half of the threshold value. Only the input samples in the misclassified class are presented in the next epoch for training, whereas the correctly classified samples are skipped linearly, dynamically reducing the number of input samples presented in every epoch without affecting the network's accuracy. Decreasing the size of the training dataset linearly in this way reduces the total training time, thereby speeding up the training process. The HOT algorithm can be combined with any training algorithm used for supervised pattern classification, and its implementation is simple and easy. Simulation results show that the HOT training algorithm achieves faster training than the other standard training algorithms.
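The partition-and-skip idea can be sketched as follows; the exact skipping schedule of HOT is not given in this excerpt, so the linear growth of each sample's skipping span below is an assumed reading of "skipped linearly":

```python
def hot_epoch(samples, error_fn, threshold, wait, span):
    """One epoch of linear adaptive skipping (a sketch of the HOT idea).

    wait[i]: epochs left before sample i is presented again.
    span[i]: current skipping span; it grows by 1 each time sample i's
             error falls below half of the threshold, so consistently
             well-classified samples are presented ever more rarely."""
    half = threshold / 2.0
    presented = []
    for i, s in enumerate(samples):
        if wait[i] > 0:
            wait[i] -= 1              # still inside its skipping window
            continue
        presented.append(i)
        if error_fn(s) < half:        # "classified": skip linearly longer
            span[i] += 1
            wait[i] = span[i]
        else:                         # "misclassified": train again next epoch
            span[i] = 0
    return presented
```

Running this epoch after epoch shrinks the presented set whenever the network keeps classifying a sample correctly, which is where the training-time savings come from.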

In this paper, we present a new efficient method for accurate eye localization in color images. Our algorithm is based on robust feature filtering and explicit geometric clustering. This combination improves localization speed and robustness by relying on the geometric relationships between pixel clusters instead of other properties extracted from the image. Furthermore, its efficiency makes it well suited for implementation on low-performance devices such as cell phones and PDAs. Experiments were conducted with 1532 face images taken with a CCD camera under real-life varying illumination, pose and expression conditions. The proposed method achieved a localization rate of 94.125% under such circumstances.

Multilayer feed-forward artificial neural networks have the capacity to model nonlinear functions (e.g., Cybenko, 1988; Hornik et al., 1989). This property allows their application in control schemes where an internal model of the dynamic system is needed, as is the case, for example, in predictive control (Clarke et al., 1987a, 1987b). A commonly used way of representing the internal model of the system dynamics has been to design the neural network to learn a system approximation in the form of a discrete model with delayed inputs of the NARMA (Non-linear Auto-Regressive Moving Average) type (Leontaritis and Billings, 1985a, 1985b; Chen and Billings, 1989, 1990 and 1992; Narendra and Parthasarathy, 1990; Hunt et al., 1992; Mills et al., 1994; Liu et al., 1998; Norgaard et al., 2000). A neural net designed and trained in this way has the disadvantage of needing too many neurons in the input and hidden layers. In recent works, the use of a neural ordinary differential equation (ODE) numerical integrator as an approximate discrete model of motion, together with Kalman filtering for the calculation of control actions, was proposed and tested in the predictive control of dynamic systems (Rios Neto, 2001; Tasinaffo and Rios Neto, 2003). It was shown and illustrated with tests that artificial feed-forward neural networks can be trained to play the role of the dynamic system's derivative function within the structure of ODE numerical integrators, yielding internal models for nonlinear predictive control schemes. This approach has the advantage of reducing the dimension and complexity of the neural network, and thus of facilitating its training (Wang and Lin, 1998; Rios Neto, 2001).
It was also shown that the stochastic nature and good numerical performance of the Kalman filtering parameter estimation algorithm make it a good choice, not only for training the feed-forward neural network (Singhal et al., 1989; Chandran, 1994; Rios Neto, 1997) but also for estimating the predictive control actions (Rios Neto, 2000). Its use makes it possible to account for errors in the output patterns during supervised training of the artificial neural networks, and to give a stochastic meaning to the weight matrices present in the predictive control functional.
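The integrator-with-neural-derivative idea can be illustrated with a minimal explicit Euler step in which the derivative function is any callable; a trained network would be substituted for `f`, and both the Euler scheme and the signature `f(x, u)` are illustrative assumptions:

```python
def integrate(f, x0, u, h, steps):
    # Discrete internal model built from an ODE integrator: f(x, u) plays
    # the role of the system's derivative function (in the cited works a
    # trained feed-forward network), h is the step size.
    x = x0
    traj = [x0]
    for _ in range(steps):
        x = [xi + h * fi for xi, fi in zip(x, f(x, u))]
        traj.append(x)
    return traj
```

Because the network only has to learn the derivative function rather than the full delayed-input NARMA map, its input dimension, and hence its training burden, is smaller.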

Ventura et al. (2009) calculated the percentage of coincidence between the breeding values for weight at 205 days in Tabapuã cattle obtained from neural networks and the values predicted by BLUP. For the first hundred animals the percentage was 66%, and for subsequent rankings the matching value was even lower (26%). Guided by these results, the authors did not recommend the use of neural networks in genetic evaluations when new animals not contained in the training database are to be inserted in the future.

This paper presents the Ant Lion Optimization (ALO) technique to solve the optimal load dispatch problem. ALO is a novel nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. Five main steps of hunting prey are implemented: the random walk of ants, building traps, entrapment of ants in traps, catching prey, and re-building traps. Optimal load dispatch (OLD) is a method of determining the most efficient, low-cost and reliable operation of a power system by dispatching the available electricity generation resources to supply the load on the system. The primary objective of OLD is to minimize the total cost of generation while honouring the operational constraints of the available generation resources. The proposed technique is applied to 3-, 6- and 20-unit test systems. Numerical results show that the proposed method has good convergence properties and yields better-quality solutions than other algorithms reported in the recent literature.
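The first of the five steps, the random walk of ants, is commonly implemented in ALO as a cumulative sum of ±1 steps that is then min-max normalized into the variable bounds; a one-dimensional sketch:

```python
import random

def ant_random_walk(steps, lb, ub):
    # Random walk of an ant along one decision variable: a cumulative sum
    # of +/-1 steps, min-max normalized so the walk stays inside [lb, ub].
    walk = [0.0]
    for _ in range(steps):
        walk.append(walk[-1] + (1.0 if random.random() > 0.5 else -1.0))
    lo, hi = min(walk), max(walk)
    if hi == lo:                      # degenerate walk: park at the lower bound
        return [lb] * len(walk)
    return [lb + (w - lo) * (ub - lb) / (hi - lo) for w in walk]
```

In the full algorithm the bounds themselves shrink around the selected antlion's trap, which models the entrapment step.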

Quantization is defined as the process of reducing the number of bits required to store an image [14]; it is a lossy compression technique. In DCT-based coding, quantization is performed by dividing the coefficient matrix by a standard quantization matrix to suppress the AC coefficients. In this work, most of the high-frequency coefficients, which carry little information, are rounded to zero in the thresholding step. The remaining coefficients are floating-point numbers and therefore occupy more bytes. To reduce the bit rate, these coefficients are rounded to integer values. A constant Q is used for quantization, with quantization levels of 4, 8, 16 and 32. The equations for quantization and de-quantization are as follows:
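The specific equations are elided in this excerpt; a minimal sketch of a uniform quantizer with a constant step Q, which is the common form for such schemes, is:

```python
def quantize(coeffs, Q):
    # Uniform quantization with a constant step Q: floating-point DCT
    # coefficients are mapped to small integers, reducing bits per value.
    return [round(c / Q) for c in coeffs]

def dequantize(qcoeffs, Q):
    # De-quantization recovers an approximation c_hat = q * Q; the rounding
    # error |c - c_hat| <= Q/2 is what makes the scheme lossy.
    return [q * Q for q in qcoeffs]
```

Larger Q (e.g. 32 instead of 4) gives a higher compression ratio at the cost of a larger reconstruction error.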

Several approaches have been proposed to model the specific intelligent behaviours of honey bee colonies and have been applied to solving combinatorial problems. The behaviour of honey bees exhibits many characteristics, such as synergy and cooperation, so honey bee colonies have aroused great interest in the modelling of intelligent behaviour in recent years [9, 10], although most of these algorithms are based on the marriage mechanism of bees. This algorithm considered an artificial bee colony as a dynamical system that gathers information from an environment and regulates its behaviour accordingly. The authors applied this idea to robots modelled on the foraging behaviour of bee colonies. Usually, all of these robots are physically and functionally identical, so that any robot can be randomly substituted for another. The colony possesses significant fault tolerance: the failure of a single agent does not stop the performance of the whole system. The individual robots tend to have limited capabilities and limited knowledge of the environment; the colony, on the other hand, exhibits collective intelligence.

The Gradient Descent algorithm was very slow in converging to the required value of the performance index. The average time required to train the network using the Levenberg-Marquardt algorithm was the least, whereas the maximum time was required when training with the Scaled Conjugate Gradient algorithm. The training algorithm employing Bayesian Regularization continuously modifies its performance function and hence takes more time than the Levenberg-Marquardt algorithm, but this time is still far less than that of the Scaled Conjugate Gradient method. From Table 1 it can be established that the Levenberg-Marquardt algorithm is the fastest of all the training algorithms considered in this work for training a neural network to identify the multimachine power system. Since the training times of the different algorithms have been compared, the conclusions drawn from the offline training results may also be extended to online training. It can therefore be assumed that a similar trend in training time will be exhibited during online training of the proposed neural identifier for continuous updating of the offline-trained identifier.

This paper proposes a new architecture of spiking neural networks for wood defect classification. The proposed architecture consists of a feed-forward network of spiking neurons that is fully connected between the input and hidden layers, with multiple delayed synaptic terminals (m), and partially connected between the hidden and output layers, with each output neuron linked to different hidden neurons. An individual connection consists of a fixed number of m synaptic terminals, where each terminal serves as a sub-connection associated with a different delay and weight between the input and hidden layers. The weights of the synaptic connections between the hidden and output neurons are fixed at 1. Experiments were carried out with a number of network structures with different parameters and learning procedures. The networks finally adopted had 17 input neurons in the input layer, one dedicated to each mean value. There were 13 output neurons, one for each defect category, and thirteen hidden neurons (the number of hidden neurons here depends on the number of classes). Table 2 shows the configurations of the networks used; Figs. 4 and 5 show the structure of the networks and the multi-synapse connections, respectively.
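A single multi-terminal connection of the kind described above can be sketched as follows; the alpha-shaped spike-response kernel is an assumption, since the paper's exact kernel is not given in this excerpt:

```python
import math

def multi_synapse_input(spike_time, t, weights, delays, tau=1.0):
    # One connection with m delayed synaptic terminals: terminal k has its
    # own weight w_k and delay d_k and contributes w_k * eps(t - t_spike - d_k),
    # where eps is an alpha-shaped spike-response kernel that is zero before
    # the delayed spike arrives and peaks at tau afterwards.
    total = 0.0
    for w, d in zip(weights, delays):
        s = t - spike_time - d
        if s > 0:
            total += w * (s / tau) * math.exp(1.0 - s / tau)
    return total
```

Giving each terminal its own delay lets the network encode information in spike timing rather than only in connection weights.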

Experimental design methods [9] were developed originally by Fisher [10]. However, classical experimental design methods are too complex and not easy to use, and a large number of experiments has to be carried out as the number of process parameters increases. To address this, the Taguchi method uses a special design of orthogonal arrays to study the entire parameter space with only a small number of experiments. The experimental results are then transformed into a signal-to-noise (S/N) ratio, which measures the deviation of the performance characteristic from the desired value. There are usually three categories of performance characteristic in the analysis of the S/N ratio: lower-the-better, higher-the-better and nominal-the-better. Regardless of the category, a larger S/N ratio corresponds to a better performance characteristic; therefore, the optimal level of a process parameter is the level with the highest S/N ratio. Furthermore, a statistical analysis of variance (ANOVA) is performed to identify the process parameters that are statistically significant. The optimal combination of process parameters can then be predicted based on the above analysis.
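The three S/N categories correspond to the standard Taguchi formulas, which can be computed as follows (the nominal-the-best variant shown is one common form):

```python
import math

def sn_lower_the_better(ys):
    # S/N = -10 * log10( (1/n) * sum(y_i^2) ): smaller responses score higher.
    n = len(ys)
    return -10.0 * math.log10(sum(y * y for y in ys) / n)

def sn_higher_the_better(ys):
    # S/N = -10 * log10( (1/n) * sum(1 / y_i^2) ): larger responses score higher.
    n = len(ys)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / n)

def sn_nominal_the_best(ys):
    # One common form: S/N = 10 * log10( mean^2 / variance ): rewards
    # responses that are on target with low scatter.
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10.0 * math.log10(mean * mean / var)
```

Whichever formula applies, the parameter level with the highest S/N ratio across the orthogonal-array runs is selected as optimal.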

In recent decades, several studies have been published on the evaluation of structural behaviour using neural networks, namely for earthquake engineering applications. ANNs have been used to predict the linear [7] and nonlinear [8,9] dynamic responses of structures subjected to earthquakes, for damage assessment [10-13], namely using fragility curves [14,15], and for seismic reliability assessment [16,17]. A Monte Carlo simulation technique has also been adopted to generate data for training ANNs [18]. Traditional seismic vulnerability assessment methods, which use a mean capacity curve representative of a given structural typology to estimate seismic vulnerability and earthquake damage, have some problems in evaluating the structural performance of an individual building. The main reason is the dispersion of values around the mean curve: the real capacity curve of a given building will probably differ from the typological mean capacity curve. As a result, the average result from a typical typological capacity curve can overestimate or underestimate the real seismic damage, depending on the building under study.

The methodology proposed by Box and Jenkins in 1970 makes it possible to analyse the behaviour of time series on the basis of a joint, two-part study: on the one hand, an autoregressive component established in accordance with the previous statistical history of the variables considered and, on the other hand, a treatment of the random or stochastic factors, specified through the use of moving averages. Owing to their design and operative resolution, these models allow the incorporation of seasonal analyses and the isolation of the trend component, also making it possible to go deeper into the interrelations between these components, which are integrated into the evolution of the series under study (Parra & Domingo, 1987; Chu, 1998). The models introduced by Box and Jenkins describe only stationary series, in other words, series with constant mean and variance over time and with autocovariance dependent only on the extent of the lag between the variables, so one should begin by checking for, or inducing, the stationarity of the series (Pulido, 1989). These are the so-called ARIMA (Autoregressive Integrated Moving Average) models, which are well suited to short-term forecasting and to the case of series
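The "checking for, or inducing, the stationarity" step is usually performed by differencing, the "I" (integrated) part of ARIMA; a minimal sketch:

```python
def difference(series, d=1):
    # Apply first differencing d times, z_t = y_t - y_{t-1}, to remove
    # trend and move a non-stationary series toward constant mean before
    # the AR and MA components are fitted.
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series
```

For example, a series with a quadratic trend becomes constant after two rounds of differencing, which is why d rarely needs to exceed 2 in practice.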

An adaptive update lifting scheme based on an interactive artificial bee colony (IABC) algorithm is proposed in this paper. Wavelet-transform-based compression techniques are used for images and multimedia files: approximation and detail coefficients are extracted from the signal by filtering, and to increase the frequency resolution both sets of coefficients are re-decomposed up to some level. The artificial bee colony algorithm uses local search to evaluate different update coefficients and optimally chooses the best one, improving the quality of the compressed image. In IABC, the attraction between the employed bees and the onlooker bees is modelled using the concept of universal gravitation; by passing different values of the control parameter, the universal gravitation involved in IABC can couple a single onlooker bee with varying numbers of employed bees. As a result, the proposed work gives better PSNR than existing image compression schemes such as the plain wavelet transform and the Artificial Bee Colony algorithm.
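PSNR, the quality measure used above, can be computed as follows (a sketch assuming 8-bit pixel values flattened into lists):

```python
import math

def psnr(original, compressed, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE); higher values mean the compressed
    # image is closer to the original.
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```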

Dr. D. Najumnissa and Dr. T. R. Rangaswamy proposed in [6] an epileptic seizure detection technique using the wavelet transform and adaptive neuro-fuzzy logic. In this research, they tried to increase the diagnostic value of the EEG using wavelet transform coefficients and an Adaptive Neuro-Fuzzy Inference System (ANFIS). For the analysis of seizures in EEG signals, 50 subjects were used, of whom 20 were normal and 30 had seizures. The EEG signals were first decomposed into the time and frequency domains using the wavelet transform, and statistical features were then calculated to describe their distribution. An adaptive neuro-fuzzy system was used for the classification of epileptic seizures: the ANFIS systems were trained first to detect epileptic seizures and then to classify them. In addition, a BPN algorithm was used to study and compare the datasets. The average true positive rate and true negative rate were found to be 97% and 99%, respectively. The presented ANFIS classifier combined the adaptive capabilities of neural networks with the qualitative approach of fuzzy logic.

Abstract - This work investigates the use of artificial neural networks in modelling an industrial fermentation process for pleuromutilin produced by Pleurotus mutilus in fed-batch mode. Three feed-forward neural network models with a similar structure (five neurons in the input layer, one hidden layer and one neuron in the output layer) are constructed and optimized with the aim of predicting the evolution of three main bioprocess variables: biomass, substrate and product. The results show a good fit between the predicted and experimental values for each model (the root mean squared errors were 0.4624%, 0.1234 g/L and 0.0016 mg/g, respectively). Furthermore, a comparison of the optimized models with unstructured kinetic models in terms of simulation results shows that the neural network models gave more significant results. These results encourage further studies to integrate the mathematical formulae extracted from these models into an industrial control loop for the process.

Once the return function to be maximized has been defined, reinforcement learning uses several algorithms to find the policy that produces the maximum return. The naive brute-force algorithm first calculates the return for every possible policy and then chooses the policy with the largest return. The obvious weakness of this algorithm is the case of an extremely large or even infinite number of possible policies [1][14]. This weakness can be overcome by value-function approaches or by direct policy estimation. Value-function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of the expected returns for one policy, usually either the current or the optimal one. These methods converge to the correct estimates for a fixed policy and can also be used to find the optimal policy.
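A minimal value-function approach for a finite MDP is value iteration; the sketch below, with an assumed tabular representation, repeats the Bellman optimality update until the estimates converge:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[(s, a)] is a list of (probability, next_state) pairs and R[(s, a)]
    an immediate reward; returns the optimal state-value estimates."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality update: best one-step lookahead over actions.
            best = max(R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

The greedy policy with respect to the converged `V` is then optimal, which is how a value-function approach recovers the policy without enumerating all policies.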
