
7 Conclusion

7.1 Future Work

Developing this project was truly gratifying: aware of my limitations, I was nonetheless able to study the world of Artificial Neural Networks in far more than a superficial way, and to glimpse this facet of Artificial Intelligence research as it stands today. Learning the architectures and techniques behind Convolutional Neural Networks, and their vast fields of application, gave me answers and filled me with a desire to learn more.

It is under this stimulus that I have decided to continue my studies in the form of a Master's program in the area. I am confident that what I had the opportunity to learn in the UFF/CEDERJ Computer Science program prepares me for this new stage, and the present work inspires me as I compose the research project proposals to be submitted.

