Artificial Intelligence Techniques – Some Attractive and Powerful Computational Tools for Geodesy and Geomatics I.D. Doukas

Aristotle University of Thessaloniki

Department of Civil Engineering, Division of Geotechnical Engineering, Laboratory of Geodesy and Geomatics, GR-541 24, Univ. Box #465, Hellas

Abstract: There is a variety of definitions of Artificial Intelligence (AI). In this paper, the following definition is selected advisedly: AI is the development of computer systems to solve difficult problems which cannot be solved by an exhaustive examination of all possible solutions, since these may be too many. From this point of view, as this definition guides, a brief review of some AI tools is attempted, with two targets: to present some of the tools (older "classics" and younger "exotics") and to explore their potential for penetration into the opportune fields of Geodesy and Geomatics.

1. Introduction-Some useful general terms

The terminology below can be found in many variations (slight or not) according to different scientific views, books, the Internet and other sources. In any case, the terminology selected here is fully consistent with the related bibliography of this paper. Furthermore, this bibliography is rich enough to allow the reader deeper explorations of the extremely wide scientific field of Artificial Intelligence.

The term intelligence is generally defined as the ability to learn effectively, to react adaptively, to make proper decisions, to communicate in language or images in a sophisticated way, and to understand.

Artificial intelligence (AI), the science and engineering of making intelligent machines, is a term coined by Prof. John McCarthy in 1956. AI is a general idiom which includes, among others, evolutionary algorithms, genetic programming, artificial neural networks, cellular automata and fuzzy systems (Rajabi et al., 2009), (Kasabov, 1998), (Kalogirou, 2007), (McCarthy, 2007). The main objectives of AI are to develop methods and systems for solving problems usually solved by the intellectual activity of humans (for example, image recognition, language and speech processing, planning, and prediction), thus enhancing computer information systems; and to develop models which simulate living organisms and the human brain in particular, thus improving our understanding of how the human brain works.

Knowledge (alternatively, the problem of dealing with knowledge) was for many years a research field for psychology and sociology. The evolution of AI transformed the knowledge problem into a problem of representation of knowledge in computers.

Knowledge Engineering (KE) is a branch of Artificial Intelligence (AI). It is an area that mainly concentrates on activities with knowledge (including knowledge acquisition, representation, validation, inference and explanation). It is a discipline devoted to integrating human knowledge in computer systems, which means building knowledge-based systems (Ding, 2001).

Heuristic (a word with Greek roots) means discovery. Heuristic methods are based on experience, rational ideas, and rules of thumb. Heuristics are based more on common sense than on mathematics. Heuristics are useful, for example, when the optimal solution needs an exhaustive search that is not realistic in terms of time. In principle, a heuristic does not guarantee the best solution, but a heuristic solution can provide a tremendous shortcut in cost and time (Kasabov, 1998).
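Since heuristics are described here only in words, a tiny illustration may help. The following sketch (not from the paper; the coordinates are made up) applies the classic nearest-neighbour rule to a miniature travelling-salesman instance: it does not guarantee the best tour, but it avoids the exhaustive search over all possible orderings.

```python
# A heuristic shortcut: greedy nearest-neighbour tour for a tiny TSP.
# Exhaustive search over all n! tours is unrealistic; this runs in O(n^2)
# time but does not guarantee the optimal tour.
import math

def nearest_neighbour_tour(points):
    """Always visit the closest not-yet-visited point next."""
    unvisited = list(range(1, len(points)))
    tour = [0]                                  # start (arbitrarily) at point 0
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

points = [(0, 0), (4, 1), (1, 3), (5, 4), (2, 6)]   # illustrative coordinates
print(nearest_neighbour_tour(points))               # -> [0, 2, 4, 3, 1]
```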

Soft Computing (SC) (Zadeh, 1994), (Kecman, 2001) (a term sometimes met as Softcomputing) is a concept introduced by professor Lotfi A. Zadeh in the early 1990s. SC is an evolving collection of methodologies for the representation of the ambiguity in human thinking. SC refers to a collection of computational techniques in computer science, artificial intelligence, machine learning and some engineering disciplines, which attempt to study, model, and analyze very complex phenomena: those for which more conventional methods have not yielded low-cost, analytic, and complete solutions. Earlier computational approaches could model and precisely analyze only relatively simple systems. More complex systems arising in biology, medicine, engineering, earth sciences, ecology, the humanities, management sciences, and similar fields often remained intractable to conventional mathematical and analytical methods. The core methodologies of SC are Fuzzy Logic (FL), Neuro-Computing (NC) (or neural modeling, brain theory, (Artificial) Neural Networks (ANN)), Probabilistic Reasoning (PR), Evolutionary Computation (EC) (especially the Genetic Algorithms (GA) and the Evolution Strategies (ES)), chaotic systems, belief networks and parts of learning theory. SC aims at exploiting the tolerance for imprecision and uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions.

Overall, there is a rather permanent confusion about the terms AI, SC, KE and their component branches. A few (among other) reasons are: their differences in age of appearance on the scientific stage, the involvement of other disciplines (such as computer science, mathematics, etc.) and the fact that there is not yet an elucidation and official classification of the sectors/branches of AI. For example, FL, ANN, etc. appear to belong both to SC and to AI. In any case, for convenience these overlaps will be resolved here by considering, for the rest of the paper, all the relevant tools as tools of AI/SC/KE.

Generally speaking, in order for AI to model human intelligence, there are two paradigms:

(1) The symbolic: It is based on symbol manipulation. Symbolic AI rule-based systems can be used when the problem knowledge is in the form of well-defined, rigid rules; no adaptation is possible, or at least it is difficult to implement (Kasabov, 1998). A symbolic system consists of two sets:

a) A set of elements (or symbols) which can be used to construct more complicated elements or structures. The symbols have semantic meanings; they represent concepts or objects.

b) A set of processes and rules which, when applied to symbols and structures, produce new structures.

(2) The subsymbolic: It is based on Neurocomputing (or Neuro Computing (NC)). NC is the study of brain function in terms of the information processing properties of the structures that make up the nervous system. It is an interdisciplinary science that links the diverse fields of neuroscience, cognitive science and psychology with electrical engineering, computer science, mathematics and physics.

Hard Computing (HC) (i.e. conventional («traditional») computing) requires a precisely stated analytical model and often a lot of computation time. HC is mainly based on formal logical systems, such as sentential logic and predicate logic, or relies heavily on computer-aided numerical analysis (finite element analysis is a representative example). HC has its foundations in binary logic, crisp systems, numerical analysis and crisp software. Needless to say, many analytical models are valid only for ideal cases, while real-world problems exist in a non-ideal environment.

Comparing SC and HC: SC resembles biological processes more closely than HC does, and the SC techniques often complement each other. The premises of HC are precision, certainty and rigor. The premises of SC are that real-world problems are pervasively imprecise and uncertain, and that precision and certainty carry a cost. For a particular given problem, HC strives for exactness and full truth, while SC exploits the given tolerance for imprecision, partial truth and uncertainty. Finally, inductive reasoning plays a larger role in SC than in HC (Du and Swamy, 2006), (Kirankumar and Jayaram, 2008), (Syed and Cannon, 2004).

2. Areas and tools of AI/SC/KE

The most important tools of AI/SC/KE are:

2.1. Artificial Neural Networks (ANN) (Du and Swamy, 2006), (Rajabi et al., 2009), (Rao, 1995), (Taylor and Smith, 2006):

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems (such as the brain) process information. A brain consists of a large number of cells, referred to as "neurons". A neuron receives impulses from other neurons through a number of "dendrites". Depending on the impulses received, a neuron may send a signal to other neurons through its single "axon", which connects to dendrites of other neurons. Like the brain, ANNs consist of elements, each of which receives a number of inputs and generates a single output, where the output is a relatively simple function of the inputs. Research on ANNs dates back to the 1940s; the discipline is well developed, with wide applications in almost all areas of science and engineering. The key element of this paradigm is the novel structure of the information processing system. The main purpose of this model is to simulate, in an idealized form, the basic processors of the brain and their abilities of signal processing and self-organization. From a more mathematical point of view, an ANN is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Information is passed between these units along interconnections. An incoming connection has two values associated with it, an input value and a weight. The output of the unit is a function of the summed values of the inputs, multiplied by the weights. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, an attribute shared by ANNs.
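As a minimal illustration of this computation (the weights below are arbitrary, not taken from any trained network), consider a single sigmoid unit and a tiny two-layer forward pass:

```python
# A minimal sketch of the unit computation described above: each unit
# outputs a simple function of the weighted sum of its inputs.
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs, passed through a sigmoid "activation"
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# a 2-input, 2-hidden-unit, 1-output network with illustrative weights
x = [0.5, -1.2]
h1 = neuron(x, [0.7, -0.3], 0.1)
h2 = neuron(x, [-0.4, 0.9], 0.0)
y = neuron([h1, h2], [1.5, -2.0], 0.2)   # single output, as for prediction
print(y)
```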

When ANNs are used to predict numeric values, they typically have just one output. This is because single-output nets are more reliable than multiple-output nets, and almost any prediction problem can be addressed using single-output nets. On the other hand, ANNs have multiple outputs when used for classification/category prediction.

Statistical methods can be used when statistically representable data are available and the underlying type of the goal function is known. ANNs are applicable when the problem knowledge includes data, without any knowledge of what the type of the goal function might be; they can be used to learn heuristic rules after training with data. ANNs can also be used to implement existing fuzzy or symbolic rules, providing a flexible approximate reasoning mechanism (Kasabov, 1998).

ANNs provide an alternative to more traditional statistical methods. ANNs are used for function approximation (like Linear Regression) as well as for classification (like Discriminant Analysis and Logistic Regression).

Main advantages of ANNs:

• Strong learning and generalization capabilities. After learning the unknown relations from given data set(s), an ANN can then predict, by generalization, outputs for new samples that were not included in the learning sample set.

• Dealing with a given problem (linear or not), they absorb information (knowledge) in a direct manner through their training phase. They can handle very large and/or complex systems, with high-dimensional situations.

• They are capable of parallel and distributed processing, associative memory, vector quantization and optimization problems, i.e. wide fields of application far beyond the usual problem of function approximation.

• They are capable of dealing with data (either numerical or analogue) when there are no alternative means of using these data (e.g. due to the form of the data, due to the high dimensionality of the data, etc.).

• An ANN is a «black-box» that directly learns the internal relations of an unknown system. Consequently, the ANN method is model-free. This «black-box» character means that the user needs no high-level mathematical knowledge or experience.

• The acquired information/knowledge is finally stored, in a compact form, inside the trained network. Furthermore, with the knowledge located there, the user can access and manage it easily.

• They can be tolerant to transient bad data, adapt to accommodate true system change and, depending upon the complexity of the architecture, usually have redundancy of internal knowledge because of the weights and internal connections. Even with noisy data (a most usual situation), ANN solutions can be robust. This characteristic is one of the most important credentials of ANNs.

• In the generalization mode, over a set of data not used in the training mode, they can deliver high accuracy.

• An ANN is a model of computations that can be implemented in various types of computer hardware.

Main disadvantages of ANNs:

• The information contained in the training data set has to (ideally) be spread evenly, covering all of the system's range.

• The design of ANNs lacks strong supportive theory.

• An acceptable solution is not always guaranteed. Additionally, even when the outcome is an acceptable solution, there is little opportunity to rationalize (explain) it.

• The outcome of the processing could be «memory», not «intelligence». This is the overtraining problem: although the ANN seems well trained, it is actually just remembering the solutions and especially the training data, and performs poorly on real data.

2.2. Fuzzy Systems (FS) (Du and Swamy, 2006), (Rajabi et al., 2009), (Syed and Cannon, 2004):

Fuzzy theory was first introduced in 1965 by professor Lotfi A. Zadeh (at the University of California, Berkeley, USA) to deal with conditions of uncertainty. The theory is able to describe many phenomena and variables and provides the ground for deduction, control and decision making under uncertainty. Fuzzy logic (FL) provides a means for treating uncertainty and computing with words. This is especially useful for mimicking human recognition, which skillfully copes with uncertainty. Fuzzy systems (FS) are conventionally created from explicit knowledge expressed in the form of fuzzy rules, which are designed based on experts' experience. A FS can explain its action by fuzzy rules. FS can also be used for function approximation.

The synergy of FL and ANNs generates neurofuzzy systems, which inherit the learning capability of ANNs and the knowledge-representation capability of FS.

Fuzzy logic (FL) has two different meanings.

• In a narrow sense, FL is a logical system, which is an extension of multivalued logic.

• In a wider sense, FL is almost synonymous with the theory of fuzzy sets, a theory which relates to classes of objects with "unsharp" boundaries in which membership is a matter of degree. Even in its narrower definition, FL differs both in concept and substance from traditional multivalued logical systems.

Classical sets follow Boolean logic (i.e. either an element belongs to a set or it does not), whereas fuzzy sets use the concept of degree of membership. The membership functions define the degree to which an input belongs to a fuzzy set. These membership functions are chosen empirically and optimized using sample input/output data. There are three points of view to define fuzzy membership: the semantic import model, the similarity relation model, and experimental analysis.
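As a minimal illustration (the fuzzy set "warm" and its break points are invented for this sketch), a triangular membership function maps an input to a degree of membership in [0, 1]:

```python
# A minimal sketch of a membership function: unlike Boolean sets, an input
# belongs to the fuzzy set "warm" to a degree in [0, 1]. The triangular
# shape and its break points (15, 25, 35 degrees C) are illustrative.
def mu_warm(t, a=15.0, b=25.0, c=35.0):
    """Triangular membership: 0 below a, peaks at b, 0 above c."""
    if t <= a or t >= c:
        return 0.0
    return (t - a) / (b - a) if t <= b else (c - t) / (c - b)

for t in (10, 20, 25, 30, 40):
    print(t, mu_warm(t))   # degrees of membership: 0.0, 0.5, 1.0, 0.5, 0.0
```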

In a general view, FS are applicable when the problem knowledge includes heuristic rules, but these rules are vague, ill-defined, approximate or possibly contradictory.

2.3. Evolutionary Computation, Evolutionary Algorithms, Evolutionary Strategies (Du and Swamy, 2006), (Goldberg, 1989), (Mai, 2010):

Evolutionary Computation is a computational method for obtaining the best possible solutions in a huge solution space, based on Darwin's survival-of-the-fittest principle. Evolutionary algorithms are a class of robust adaptation and global optimization techniques for many hard problems.

The Genetic Algorithm (GA) is the best known and most studied among evolutionary algorithms, while the Evolutionary Strategy is more efficient for numerical optimization. Genetic algorithms require neither data sets nor heuristic rules, but only a simple selection criterion to start with; they are very efficient when only a little is known to start with (Kasabov, 1998). EC has been applied for the optimization of the structure or parameters of ANNs, FS and neurofuzzy systems. The hybridization between ANN, FL, and EC provides a powerful combination for solving engineering problems.

The first GAs were developed in the early 1970s by John Holland (at the University of Michigan, USA). GAs are inspired by the mechanism of natural selection, where stronger individuals are likely to be the winners in a competing environment. GAs are defined as:


“... search algorithms based on the mechanics of natural selection and natural genetics. They combine survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In every generation, a new set of artificial creatures (strings) is created using bits and pieces of the fittest of the old; an occasional new part is tried for good measure. While randomized, GAs are no simple random walk. They efficiently exploit historical information to speculate on new search points with expected improved performance”.

A GA has three major components (a minimal sketch follows this list).

• The first component is related to the creation of an initial population of m randomly selected individuals. The problem is encoded into binary strings (rows of "1s" and "0s") to represent the chromosomes, and then the computer generates many of these "bit" strings to form a whole population of them. The initial population shapes the first generation.

• The second component takes the m individuals as input and outputs an evaluation for each of them based on an objective function known as the fitness function (this fitness function replaces the role of death in the biological world). This evaluation describes how close each one of these m individuals is to our demands.

• Finally, the third component is responsible for the formulation of the next generation. A new generation is formed based on the fittest individuals of the previous one. This procedure of evaluating generation N and producing generation N+1 (based on N) is iterated until a performance criterion is met. The creation of offspring based on the fittest individuals of the previous generation is known as breeding. The breeding procedure includes three basic genetic operations: Reproduction, Crossover and Mutation.
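The following sketch puts these three components together for a toy problem (maximizing the number of 1-bits in a string); the fitness function and all parameter values are assumptions of this illustration, not taken from the paper.

```python
# A minimal GA sketch: initial random population, fitness evaluation,
# and breeding via reproduction (selection), crossover and mutation.
import random

BITS, POP, GENS, P_MUT = 16, 20, 30, 0.02      # illustrative parameters

def fitness(chrom):                            # component 2: fitness function
    return sum(chrom)

def select(pop):                               # fitness-proportionate selection
    return random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=2)

def crossover(a, b):                           # single-point crossover
    cut = random.randint(1, BITS - 1)
    return a[:cut] + b[cut:]

def mutate(chrom):                             # flip each bit with small prob.
    return [bit ^ 1 if random.random() < P_MUT else bit for bit in chrom]

# component 1: initial random population of binary strings
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):                          # component 3: breed generations
    pop = [mutate(crossover(*select(pop))) for _ in range(POP)]
print(max(fitness(c) for c in pop), "of", BITS)
```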

Evolutionary Strategies (ES), introduced in the early 1970s (Mai, 2010), have seen many improvements within the last decades. Nowadays, this approach can be regarded as an alternative to standard optimization techniques in many scientific areas, especially in cases where gradient methods like the classical least-squares algorithm fail.

Compared to other optimization techniques, ES algorithms are relatively easy to realize, because the main idea behind them is very simple. They are universal, undemanding, close to reality, robust, and can be considered as a compromise between volume- and path-oriented search strategies. Once implemented, the same algorithm can be applied to a wide range of problems without any big changes. In many cases it is even sufficient just to set up the new performance index that is specific to the actual problem; one rarely needs any additional a priori insight into the mathematical/physical nature of the optimization task.

The only necessary condition for an ES to be applicable to a specific problem is the inherent existence of strong causality (not to be confused with weak causality).


On the other hand, there is no guarantee of actually finding the global optimum. In addition, the convergence speed of an ES algorithm might be lower than that of alternative methods that are tuned to a specific problem.
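To make the ES idea concrete, here is a minimal sketch (an assumption of this rewrite, not taken from Mai, 2010) of the simplest (1+1)-ES on a toy performance index:

```python
# A minimal (1+1)-Evolution Strategy sketch: mutate the current solution
# with Gaussian noise and keep the offspring only if the performance index
# improves. The test function (sphere) and step size are illustrative.
import random

def performance(x):                       # problem-specific performance index
    return sum(v * v for v in x)          # sphere function, minimum at 0

x = [5.0, -3.0]                           # initial parent
sigma = 0.5                               # mutation step size
for _ in range(2000):
    child = [v + random.gauss(0.0, sigma) for v in x]
    if performance(child) <= performance(x):   # survival of the fitter
        x = child
print(x, performance(x))
```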

Although there are several differences between GA and ES, the barriers between them are nowadays becoming hazy, since both techniques are improved by borrowing ideas from each other.

2.4. Harmony Search (HS) (Geem, 2010a), (Geem, 2010b):

In optimization problems, if the variables are discrete, they simply do not have derivatives (a good example is the ready-made cross-sectional area of structural members). In such a case, the HS (Harmony Search) algorithm offers an outlet, since it is based on a novel stochastic derivative. The HS algorithm is a phenomenon-mimicking algorithm, inspired by the improvisation process of (especially jazz) musicians. In the HS algorithm, each "musician" (i.e. a decision variable) "plays" (i.e. generates) a "note" (i.e. a value), and all together they search for the "best harmony" (i.e. the global optimum). Traditional optimization algorithms use gradient information in order to detect the direction toward the optimal solution. On the contrary, the above-mentioned stochastic derivative that characterizes the HS algorithm gives, for each value of a decision variable, a probability of being selected.

2.5. Swarm Intelligence (SI) (Garg et al., 2009), (Umarani and Selvi, 2010), (Bonabeau and Meyer, 2001), (Kirankumar and Jayaram, 2008):

The collective behavior that emerges from a group of social insects has been dubbed "Swarm Intelligence" (SI). Social insects work without supervision. In fact, their teamwork is largely self-organized, and coordination arises from the different interactions among individuals in the colony. Although these interactions might be primitive (one ant merely following the trail left by another, for instance), taken together they result in efficient solutions to difficult problems (such as finding the shortest route among myriad possible paths in Internet traffic).

SI is a design framework based on social insect behavior. Social insects such as ants, bees, and wasps are unique in the way these simple individuals cooperate to accomplish complex, difficult tasks. This cooperation is distributed among the entire population, without any centralized control. Each individual simply follows a small set of rules influenced by locally available information. This emergent behavior results in great achievements that no single member could complete by itself. Additional properties of swarm-intelligent systems include: robustness against individual misbehavior or loss, the flexibility to change quickly in a dynamic environment, and an inherent parallelism or distributed action.
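A minimal sketch of Particle Swarm Optimization, one widely known SI algorithm (cf. Umarani and Selvi, 2010), shows how simple local rules produce a collective search; the coefficients and the toy objective are illustrative assumptions.

```python
# A minimal Particle Swarm Optimization sketch: each particle follows
# simple local rules (inertia, pull toward its own best position and
# toward the swarm's best), with no centralized control of the search.
import random

def f(x):                                   # toy objective: minimize x^2
    return x * x

n, w, c1, c2 = 15, 0.7, 1.5, 1.5            # swarm size and coefficients
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
best = pos[:]                               # each particle's personal best
gbest = min(pos, key=f)                     # the swarm's global best
for _ in range(100):
    for i in range(n):
        vel[i] = (w * vel[i]
                  + c1 * random.random() * (best[i] - pos[i])
                  + c2 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i]
    gbest = min(best, key=f)
print(gbest)
```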

The main advantages of SI are: flexibility, robustness and self-organization (i.e. the group needs relatively little supervision or top-down control).


2.6. Cellular Automata (CA) (Rajabi et al., 2009), (Wolfram MathWorld, 2010), (Wolfram, 2000):

A Cellular Automaton (CA) system is a discrete dynamical system formed by a group of cells in a single- or multi-dimensional lattice. The state of each cell in this lattice depends on its own previous state and on the states of the neighboring cells. The states of the cells are updated by a set of local probabilistic or deterministic rules.

The state of a cell thus depends only on its own state and the states of its near neighbors at the previous time step. All cells of an automaton are updated simultaneously and in parallel; as a result, the state of the whole automaton advances in discrete time steps. The global state of the system, given by the complete set of cell states, emerges as the result of many local interactions. The locality of the interaction between a cell and its neighbors is a defining property of CA. CA are the simplest models of spatially distributed processes.
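A minimal sketch of a one-dimensional CA (the rule choice is purely illustrative) shows the parallel, local update just described:

```python
# A minimal one-dimensional cellular automaton: every cell is updated
# simultaneously from its own state and its two nearest neighbours at the
# previous time step. Rule 30 is used here purely as an illustration.
RULE = 30                                   # Wolfram's rule number

def step(cells):
    n = len(cells)
    # index the rule table by the 3-cell neighbourhood (left, self, right)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

cells = [0] * 31
cells[15] = 1                               # a single "on" cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)                     # all cells updated in parallel
```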

Since CA models are explicitly spatial, they can be used for urban planning simulation and other applications, such as simulating land-use change, freeway traffic and fire propagation. Furthermore, the CA method has been generalized by mixing in agent-like behavior or non-local search. Agent technologies involve developing agents for real-world databases or for purposeful search on the Internet. In an agent-based model, agents symbolizing humans or other subjects act in a simulated world that represents the real world.

It is possible to define agent-based systems as a group of agents which interact in a common space and can change both themselves and their environment.

CA were invented in the 1940s by the mathematicians John von Neumann and Stanislaw Ulam. Despite the simplicity of the rules governing the changes of state as the automaton moves from one generation to the next, the evolution of such a system is complex indeed.

2.7. Hybrid Systems (Kasabov, 1998), (Kalogirou, 2007), (Taylor and Smith, 2006):

The combination of two or more AI techniques results in a hybrid system. Neuro-fuzzy control is a representative example of this category of systems. On the one side, the merits of a FS are found in the representation of linguistic and structural knowledge by fuzzy sets.


Figure 1. Mapping the domain space into the solution space: selection of solution paths.

To perform fuzzy reasoning and/or logic in a qualitative manner is another strong point of FS.


Figure 2. The availability of data and/or expertise (theories) guides the selection of the method.

On the other side, the strongest points of ANNs are the representation of nonlinear mappings and their construction through training. Speaking about FS, their behavior is comprehensible and "digestible", thanks to their logical structure and their stepwise inference procedures. Speaking about ANNs, their behavior is simply recapitulated by the aforementioned «black-box».

The possibility of combining these two components into a new system, named neuro-fuzzy control, is a rather recent scientific development. Such a combination yields a new system armed with «weapons» from both the fuzzy and the neural sides.

The main goal of the design of an intelligent system is to represent as adequately as possible the existing problem-domain knowledge, in order to better approximate the goal function, which in most cases is not known a priori. In order for a solution to be achieved, there is a set of different methods to select from.


Figure 3. An indicative qualification of some intelligent systems.


Depending on the type of the problem and the available knowledge about the problem, different methods may be recommended (Figure 1). If it is a matter of data population and available knowledge (expertise), then Figure 2 illustrates a topology of «solution spaces». Finally, a qualitative comparison of some intelligent systems is illustrated in Figure 3 (Taylor and Smith, 2006).

3. A sample of AI/SC/KE applications in Geodesy and Geomatics

Geodesy and Geomatics do offer the "fertile land" for many AI/SC/KE applications. There is an increasing diffusion of such methods, and the IAG has already established IAG-WG 4.2.3 (dealing with the application of AI in Engineering Geodesy) (IAG-WG 4.2.3, 2010). Comparing problems of Geodesy / Geomatics / Engineering Geodesy with problems of AI/SC/KE reveals noteworthy similarities (Kutterer, 2010). Both disciplines use methods based on mathematical stochastics, and both use modeling (for variables and parameters). Finally, in simple "geodetic words", the issue of learning is equivalent to the procedure of model selection and parameter identification.

The relevant bibliography is in fact huge and increases rapidly. The space here allows for only some indicative bibliography regarding just the broad fields of Geodesy/Geomatics. For example, in the GIS field, Kirankumar and Jayaram (2008) offer an excellent review. The conclusion is that the modeling of the environment and site selection, the analysis of spatial data, the integration of SC components and decision support are some of the many essential topics where AI/SC/KE has a powerful influence (Kirankumar and Jayaram, 2008), (Rajabi et al., 2009), (Bartoněk, 2003). Turning to the GNSS area, there are plenty of applications dealing with GPS and navigation, covering a really wide range of cases, from atmospheric issues to geoid and space geodesy (Xu et al., 2002), (Doukas and Ioannidis, 1997), (Coulot et al., 2009), (Crowell, 1992), (Syed and Cannon, 2004), (Liu et al., 2007), (Zaletnyik et al., 2004).

Another excellent review, dealing with AI/SC/KE techniques as applied in Engineering Geodesy, is given by Kutterer (2010), while Adeli (2001) does the same for the science of Civil Engineering. There are plenty of common fields between Engineering Geodesy, Civil Engineering, Geodesy and Geomatics where geodetic methods (combined or not with AI/SC/KE tools) play key roles (for example ground deformation, landslides, geodetic control nets, etc.) (Haberler-Weber, 2005), (Eichhorn, 2007), (Carbone et al., 2008).

4. Conclusions

AI/SC/KE form an emerging field that consists of complementary elements of fuzzy logic, neural computing, evolutionary computation, machine learning and probabilistic reasoning. Due to their strong learning and cognitive abilities and good tolerance of uncertainty and imprecision, SC techniques have found wide applications. Generally speaking, SC techniques resemble human reasoning more closely than traditional techniques, which are largely based on conventional logical systems, such as sentential logic and predicate logic, or rely heavily on the mathematical capabilities of a computer.

Navigation, deformation analysis, GIS/GPS/Geomatics, deformation network adjustments, optimization of complex measurement procedures and decision support systems are already some of the geodetic fields with derived and certified benefits arising from the penetration and diffusion of AI/SC/KE.

In coming years, AI/SC/KE is expected to play an increasingly important role in the conception and design of systems whose MIQ (Machine IQ) is much higher than that of systems designed by conventional methods.

The younger "meta-modern" AI/SC/KE tools (i.e. Harmony Search and Swarm Intelligence), although they have not shown their impact on geodetic fields so far, have an 'exotic timbre' of originality that makes them attractively promising, and their role in the geodetic community is expected to be strongly expanded in the near future.

References

Adeli, H., 2001. Neural Networks in Civil Engineering: 1989-2000. Computer-Aided Civil and Infrastructure Engineering, 16, pp. 126-142.

Bartoněk, D., 2003. A Genetic Algorithm for Automatic Map Symbols Placement. Electronic Journal of Polish Agricultural Universities, Vol. 6 (1), Topic: Geodesy and Cartography, pp. 1-8.

Bonabeau, E. and Meyer, C., 2001. Swarm Intelligence: A Whole New Way to Think About Business, Harvard Business Review, May, pp. 104-114.

Carbone, D., Currenti, G. and Del Negro, C., 2008. Multiobjective Genetic Algorithm Inversion of Ground Deformation and Gravity Changes Spanning the 1981 Eruption of Etna Volcano. Journal of Geophysical Research, Vol. 113, 10 pp.

Coulot, D., Collilieux, X., Pollet, A., Berio, P., Gobinddass, M.L., Soudarin, L. and Willis, P., 2009. Genetically Modified Networks: A Genetic Algorithm Contribution to Space Geodesy. Application to the Transformation of SLR and DORIS EOP Time Series into ITRF2005. Geophysical Research Abstracts, Vol. 11, EGU2009-7988.

Crowell, L. B., 1992. Spacetime Geodesy by Neural Networks. Abstracts of the Lunar and Planetary Science Conference, Vol. 23, pp. 273-274.

Ding, L., 2001. Knowledge Engineering and Soft Computing – An Introduction. In: L. Ding (Editor), A New Paradigm of Knowledge Engineering by Soft Computing, Soft Computing Series, Vol. 5, World Scientific Publ. Co., pp. 1-14.

Doukas, I.D. and Ioannidis, I.Th., 1997. Prediction of Time-series Values by Using a Neural Network. In: "The Earth and the Universe", volume dedicated to Prof. L. Mavridis on the occasion of his completing 45 years of academic activities, Thessaloniki, pp. 401-411 (in Greek).


Du, K.L. and Swamy, M.N.S., 2006. Neural Networks in a Softcomputing Framework. Springer-Verlag, London, 610 pp.

Eichhorn, A., 2007. Tasks and Newest Trends in Geodetic Deformation Analysis: A Tutorial. 15th European Signal Processing Conference (EUSIPCO 2007), Poznan, Poland, September 3-7, pp. 1156-1160.

Garg, A., Gill, P., Rathi, P., Amardeep and Garg, K., 2009. An Insight into Swarm Intelligence. International Journal of Recent Trends in Engineering, Vol. 2, No. 8, pp. 42-44.

Geem, Z.W., 2010a. Harmony Search Algorithms for Structural Design Optimization. Springer-Verlag, Berlin Heidelberg, 228 pp.

Geem, Z.W., 2010b. State-of-the-Art in the Structure of Harmony Search Algorithm. In: Z.W. Geem (Editor), Recent Advances in Harmony Search Algorithm, Springer-Verlag, Berlin Heidelberg, pp. 1-10.

Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, 432 pp.

Gullu, M. and Yilmaz, I., 2010. Outlier Detection for Geodetic Nets Using ADALINE Learning Algorithm. Scientific Research and Essays, Vol. 5 (5), pp. 440-447.

Haberler-Weber, M., 2005. Analysis and Interpretation of Geodetic Landslide Monitoring Data Based on Fuzzy Systems. Natural Hazards and Earth System Sciences, 5, pp. 755–760.

IAG-WG 4.2.3, 2010. Application of Artificial Intelligence in Engineering Geodesy. http://info.tuwien.ac.at/ingeo/sc4/wg423/wg_423.html (accessed September 15, 2010).

Kalogirou, S.A., 2007. Introduction to Artificial Intelligence Technology. In: S.A. Kalogirou (Editor), Artificial Intelligence in Renewable Energy Systems, Nova Science Publishers, Inc., pp. 1-46.

Kasabov, N.K., 1998. Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering. The MIT Press, 581 pp.

Kecman, V., 2001. Learning and Soft Computing. The MIT Press, 576 pp.

Kirankumar, T.M. and Jayaram, M.A., 2008. Natural Computing in Spatial Information Systems. 2nd National Conference on Challenges & Opportunities in Information Technology (COIT-2008), RIMT-IET, Mandi Gobindgarh, March 29, pp. 262-266.

Kutterer, H., 2010. On the Role of Artificial Intelligence Techniques in Engineering Geodesy. 2nd Workshop on Application of Artificial Intelligence and Innovations in Engineering Geodesy (AIEG 2010), Braunschweig, Germany, June, pp. 7-9.

Liu, Z., Du, Z. and Zou, R., 2007. Application of the Improved Genetic Algorithms with Real Code on GPS Data Processing. 3rd International Conference on Natural Computation (ICNC 2007), Vol. 5, pp. 420-424.

Mai, E., 2010. Application of an Evolutionary Strategy in Satellite Geodesy. 2nd Workshop on Application of Artificial Intelligence and Innovations in Engineering Geodesy (AIEG 2010), Braunschweig, Germany, June, pp. 47-58.


McCarthy, J., 2007. What is Artificial Intelligence? http://www-formal.stanford.edu/jmc/whatisai/whatisai.html (accessed September 15, 2010).

Rajabi, M., Mansourian, A. and Borna, K., 2009. A Comparison Between Intelligent Algorithms for Solving Site-selection Problems in GIS. 7th FIG Regional Conference "Spatial Data Serving People: Land Governance and the Environment – Building the Capacity", Hanoi, Vietnam, 19-22 October, pp. 1-10.

Rao, V.B., 1995. C++ Neural Networks and Fuzzy Logic, 549 pp.

Syed, S. and Cannon, M.E., 2004. Fuzzy Logic Based-Map Matching Algorithm for Vehicle Navigation System in Urban Canyons. ION National Technical Meeting, San Diego, CA, January 26-28, pp. 982-994.

Taylor, B.J. and Smith, J.T., 2006. Validation of Neural Networks via Taxonomic Evaluation. In: B.J. Taylor (Editor), Methods and Procedures for the Verification and Validation of Artificial Neural Networks, Springer Science+Business Media, Inc., pp. 51-95.

Umarani, R. and Selvi, V., 2010. Particle Swarm Optimization: Evolution, Overview and Applications. International Journal of Engineering Science and Technology, Vol. 2 (7), pp. 2802-2806.

Wolfram MathWorld, 2010. Cellular Automaton. http://mathworld.wolfram.com/CellularAutomaton.html (accessed September 15, 2010).

Wolfram, S., 2000. A New Kind of Science. Wolfram Media, 1192 pp.

Wu, C.H., Chou, H.J. and Su, W.H., 2007. A Genetic Approach for Coordinate Transformation Test of GPS Positioning. IEEE Geoscience and Remote Sensing Letters, Vol. 4 (2), pp. 297-301.

Xu, J., Arslan, T., Wan, D. and Wang, Q., 2002. GPS Attitude Determination Using a Genetic Algorithm. Proceedings of Evolutionary Computation, CEC '02, 12-17 May, Honolulu, pp. 998-1002.

Zadeh, L.A., 1994. Fuzzy Logic, Neural Networks and Soft Computing. Communications of the ACM, Vol. 37, No. 3, pp. 77-84.

Zaletnyik, P., Völgyesi, L. and Paláncz, B., 2004. Approach of the Hungarian Geoid Surface with Sequence of Neural Networks. XXth ISPRS Congress, Youth Forum, 12-23 July, Istanbul, pp. 119-122.
