Although it achieved results comparable or even superior to other recent work on facial expression recognition, the proposed method should also be tested on other image and/or video databases. Using other databases for training and testing is important not only to confirm the effectiveness of the proposed method, but also to analyze its performance on databases with different characteristics of illumination, camera position, facial expressions, partial occlusions, and so on.

Combining the motion feature extractor presented in this work with extractors of other features (texture and shape, for example) is another proposal for future work. The combination of feature extraction methods is already used in other kinds of applications, such as pedestrian detection [99]. Choosing the most appropriate features for expression recognition remains a challenging problem, since recognition accuracy depends mainly on the features used to represent the expressions. Intuitively, some features seem more suitable than others for representing expressions. However, there is still no consensus in the literature on which features are best combined, either in facial expression recognition or in other applications. The hypothesis to be tested is that combining extractors of different features can yield even better facial expression recognition rates.
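To make the idea concrete, the sketch below shows feature-level fusion in Python: a motion descriptor is concatenated with an LBP texture histogram, and the fused vector feeds an SVM. It is a minimal illustration rather than the method of this work: `extract_motion` is a hypothetical stand-in for the block-matching motion extractor proposed here, while the LBP histogram and the classifier use standard scikit-image/scikit-learn APIs.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lbp_histogram(face, points=8, radius=1):
    """Texture descriptor: normalized histogram of uniform LBP codes."""
    codes = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def fused_descriptor(prev_frame, curr_frame, extract_motion):
    """Concatenate a motion descriptor with a texture descriptor.

    `extract_motion` is a hypothetical callable standing in for the
    block-matching motion extractor; it must return a 1-D feature vector.
    """
    motion = extract_motion(prev_frame, curr_frame)
    texture = lbp_histogram(curr_frame)
    return np.concatenate([motion, texture])

# Usage sketch: X holds one fused descriptor per sequence, y the expression
# labels (e.g., the six basic emotions); scaling matters because the two
# descriptors live on different numeric ranges.
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```

In this scheme each descriptor can be swapped or extended (adding a shape descriptor, for instance) without changing the classifier, which makes comparing feature combinations straightforward.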

Finally, the use of the proposed system to recognize other kinds of movement can also be the subject of future studies. Human activities, for example, are also a very effective form of non-verbal communication. Recognizing these activities is the process of correctly identifying the actions performed by an individual. There are several applications in this area, such as video surveillance, human-computer interaction (HCI), statistical analysis in sports, and health care. In surveillance, it is used to monitor activities in smart homes and to detect abnormal activities and alert the competent authorities. Similarly, in HCI, this kind of recognition provides a more natural way of interacting with the computer than the conventional mouse and keyboard. In health-care systems, patients' activities can be monitored to promote a faster recovery. Given such a variety of applications, human activity recognition has become an important topic in the scientific community, with much research being carried out worldwide [100].

[1] Y. LeCun. (2016) NIPS 2016 deep learning symposium. [Online]. Available: https://drive.google.com/file/d/0BxKBnD5y2M8NREZod0tVdW5FLTQ/view
[2] M. J. Lyons, J. Budynek, and S. Akamatsu, "Automatic classification of single facial images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 12, pp. 1357–1362, 1999.
[3] T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on. IEEE, 2000, pp. 46–53.
[4] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops. IEEE, 2010, pp. 94–101.
[5] M. Pantic, M. Valstar, R. Rademaker, and L. Maat, "Web-based database for facial expression analysis," in Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on. IEEE, 2005, 5 pp.
[6] T. Sim, S. Baker, and M. Bsat, "The cmu pose, illumination, and expression (pie) database," in Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on. IEEE, 2002, pp. 46–51.
[7] H. Schneiderman and T. Kanade, "A statistical method for 3d object detection applied to faces and cars," in Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, vol. 1. IEEE, 2000, pp. 746–751.
[8] J. Ahlberg, "Candide-3 - an updated parameterised face," 2001.
[9] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
[10] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, "A convolutional neural network cascade for face detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5325–5334.
[11] H. Kobayashi and F. Hara, "Recognition of six basic facial expression and their strength by neural network," in Robot and Human Communication, 1992. Proceedings., IEEE International Workshop on. IEEE, 1992, pp. 381–386.
[12] Y.-I. Tian, T. Kanade, and J. F. Cohn, "Recognizing action units for facial expression analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 97–115, 2001.
[13] M. Pantic and L. J. Rothkrantz, "Facial action recognition for facial expression analysis from static face images," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 34, no. 3, pp. 1449–1461, 2004.
[14] A. Koutlas and D. I. Fotiadis, "An automatic region based methodology for facial expression recognition," in Systems, Man and Cybernetics, 2008. SMC 2008. IEEE International Conference on. IEEE, 2008, pp. 662–666.
[15] J. Ou, X.-B. Bai, Y. Pei, L. Ma, and W. Liu, "Automatic facial expression recognition using gabor filter and expression analysis," in Computer Modeling and Simulation, 2010. ICCMS'10. Second International Conference on, vol. 2. IEEE, 2010, pp. 215–218.
[16] A. Jamshidnezhad and M. J. Nordin, "A classifier model based on the features quantitative analysis for facial expression recognition," International Journal on Advanced Science, Engineering and Information Technology, vol. 1, no. 4, pp. 391–394, 2011.
[17] W. Zheng, "Multi-view facial expression recognition based on group sparse reduced-rank regression," IEEE Transactions on Affective Computing, vol. 5, no. 1, pp. 71–85, 2014.
[18] W. Zheng, Y. Zong, X. Zhou, and M. Xin, "Cross-domain color facial expression recognition using transductive transfer subspace learning."
[19] D.-T. Lin, "Facial expression classification using pca and hierarchical radial basis function network," Journal of Information Science and Engineering, vol. 22, no. 5, pp. 1033–1046, 2006.
[20] P. Yang, Q. Liu, and D. N. Metaxas, "Boosting coded dynamic features for facial action units and facial expression recognition," in 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007, pp. 1–6.
[21] C. Shan, S. Gong, and P. W. McOwan, "Facial expression recognition based on local binary patterns: A comprehensive study," Image and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009.
[22] L. H. Thai, N. D. T. Nguyen, and T. S. Hai, "A facial expression classification system integrating canny, principal component analysis and artificial neural network," arXiv preprint arXiv:1111.4052, 2011.
[23] J. A. R. Castillo, A. R. Rivera, and O. Chae, "Facial expression recognition based on local sign directional pattern," in 2012 19th IEEE International Conference on Image Processing. IEEE, 2012, pp. 2613–2616.
[24] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-d face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.
[25] F. Ahmed, P. P. Paul, M. Gavrilova, and R. Alhajj, "Weighted fusion of bit plane-specific local image descriptors for facial expression recognition," in 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2015, pp. 1852–1857.
[26] M. S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Recognizing facial expression: machine learning and application to spontaneous behavior," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2. IEEE, 2005, pp. 568–573.
[27] D. S. Bolme, B. A. Draper, and J. R. Beveridge, "Average of synthetic exact filters," in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 2105–2112.
[28] H.-S. Oh and H.-K. Lee, "Block-matching algorithm based on an adaptive reduction of the search area for motion estimation," Real-Time Imaging, vol. 6, no. 5, pp. 407–414, 2000.
[29] A. Konar and A. Chakraborty, Emotion Recognition: A Pattern Analysis Approach. John Wiley & Sons, 2014.
[30] N. N. Khatri, Z. H. Shah, and S. A. Patel, "Facial expression recognition: A survey," (IJCSIT) International Journal of Computer Science and Information Technologies, vol. 5, no. 1, pp. 149–152, 2014.
[31] K.-W. Wong, K.-M. Lam, and W.-C. Siu, "An efficient algorithm for human face detection and facial feature extraction under different conditions," Pattern Recognition, vol. 34, no. 10, pp. 1993–2004, 2001.
[32] K. Karpouzis, G. Votsis, G. Moschovitis, and S. Kollias, "Emotion recognition using feature extraction and 3-d models," Computational Intelligence and Applications. World Scientific and Engineering Society Press, pp. 342–347, 1999.
[33] V. Vasudevan, "Face recognition system with various expression and occlusion based on a novel block matching algorithm and PCA," International Journal of Computer Applications, vol. 38, no. 11, pp. 27–34, 2012.
[34] J. Jain and A. Jain, "Displacement measurement and its application in interframe image coding," IEEE Transactions on Communications, vol. 29, no. 12, pp. 1799–1808, 1981.
[35] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85–117, 2015.
[36] Y. Kim, H. Lee, and E. M. Provost, "Deep learning for robust feature generation in audiovisual emotion recognition," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013, pp. 3687–3691.
[37] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[38] P. Ekman and E. L. Rosenberg, What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford University Press, USA, 1997.
[39] Z. Zhang, "Microsoft kinect sensor and its effect," IEEE MultiMedia, vol. 19, no. 2, pp. 4–10, 2012.
[40] T. Schlömer, B. Poppinga, N. Henze, and S. Boll, "Gesture recognition with a wii controller," in Proceedings of the 2nd International Conference on Tangible and Embedded Interaction. ACM, 2008, pp. 11–14.
[41] R. Grace and S. Steward, "Drowsy driver monitor and warning system," in International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, vol. 8, 2001, pp. 201–208.
[42] S. Boucenna, P. Gaussier, P. Andry, and L. Hafemeister, "A robot learns the facial expressions recognition and face/non-face discrimination through an imitation game," International Journal of Social Robotics, vol. 6, no. 4, pp. 633–652, 2014.
[43] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: A literature survey," ACM Computing Surveys (CSUR), vol. 35, no. 4, pp. 399–458, 2003.
[44] G. McKeown, M. F. Valstar, R. Cowie, and M. Pantic, "The semaine corpus of emotionally coloured character interactions," in Multimedia and Expo (ICME), 2010 IEEE International Conference on. IEEE, 2010, pp. 1079–1084.
[45] R. A. Calvo and S. D'Mello, "Affect detection: An interdisciplinary review of models, methods, and their applications," IEEE Transactions on Affective Computing, vol. 1, no. 1, pp. 18–37, 2010.
[46] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: Audio, visual, and spontaneous expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39–58, 2009.
[47] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss, "The feret database and evaluation procedure for face-recognition algorithms," Image and Vision Computing, vol. 16, no. 5, pp. 295–306, 1998.
[48] N. Sebe, M. S. Lew, Y. Sun, I. Cohen, T. Gevers, and T. S. Huang, "Authentic facial expression analysis," Image and Vision Computing, vol. 25, no. 12, pp. 1856–1863, 2007.
[49] J. P. Maurya, A. A. Waoo, P. Patheja, and S. Sharma, "A survey on face recognition techniques," 2013.
[50] C. Tomasi and T. Kanade, Detection and Tracking of Point Features. School of Computer Science, Carnegie Mellon Univ. Pittsburgh, 1991.
[51] B. D. Lucas, T. Kanade et al., "An iterative image registration technique with an application to stereo vision," in IJCAI, vol. 81, no. 1, 1981, pp. 674–679.
[52] Y. Freund and R. E. Schapire, "A desicion-theoretic generalization of on-line learning and an application to boosting," in European Conference on Computational Learning Theory. Springer, 1995, pp. 23–37.
[53] P. Suri and E. A. Verma, "Robust face detection using circular multi block local binary pattern and integral haar features," (IJACSA) International Journal of Advanced Computer Science and Applications, Special Issue on Artificial Intelligence, June 2010.
[54] A. Rathi and B. N. Shah, "Facial expression recognition survey," (IRJET) International Research Journal of Engineering and Technology, vol. 3, no. 4, pp. 540–545, April 2016.
[55] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, "A 3d facial expression database for facial behavior research," in 7th International Conference on Automatic Face and Gesture Recognition (FGR06). IEEE, 2006, pp. 211–216.
[56] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[57] D. Chakrabarti and D. Dutta, "Facial expression recognition using eigenspaces," Procedia Technology, vol. 10, pp. 755–761, 2013.
[58] G. Murthy and R. Jadon, "Recognizing facial expressions using eigenspaces," in 2007 IEEE International Conference on Computational Intelligence and Multimedia Applications, vol. 3. IEEE, 2007, pp. 201–207.
[59] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the face recognition grand challenge," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1. IEEE, 2005, pp. 947–954.
[60] I. Cohen, N. Sebe, A. Garg, L. S. Chen, and T. S. Huang, "Facial expression recognition from video sequences: temporal and static modeling," Computer Vision and Image Understanding, vol. 91, no. 1, pp. 160–187, 2003.
[61] T. Mitchell, Machine Learning. McGraw Hill, 1997.
[62] L. R. Rabiner, "A tutorial on hidden markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[63] Y. Saatci and C. Town, "Cascaded classification of gender and facial expression using active appearance models," in 7th International Conference on Automatic Face and Gesture Recognition (FGR06). IEEE, 2006, pp. 393–398.
[64] C. J. Wen and Y. Z. Zhan, "Hmm+knn classifier for facial expression recognition," in 2008 3rd IEEE Conference on Industrial Electronics and Applications. IEEE, 2008, pp. 260–263.
[65] H. Meng, B. Romera-Paredes, and N. Bianchi-Berthouze, "Emotion recognition by two view svm_2k classifier on dynamic facial expression features," in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on. IEEE, 2011, pp. 854–859.
[66] SSPNET. (2011) FG 2011 facial expression recognition and analysis challenge (FERA2011). [Online]. Available: http://sspnet.eu/fera2011/
[67] I. Song, H.-J. Kim, and P. B. Jeon, "Deep learning for real-time robust facial expression recognition on a smartphone," in 2014 IEEE International Conference on Consumer Electronics (ICCE). IEEE, 2014, pp. 564–567.
[68] W. Li, M. Li, Z. Su, and Z. Zhu, "A deep-learning approach to facial expression recognition with candid images," in Machine Vision Applications (MVA), 2015 14th IAPR International Conference on. IEEE, 2015, pp. 279–282.
[69] H. Nomiya, S. Sakaue, and T. Hochin, "Recognition and intensity estimation of facial expression using ensemble classifiers," in Computer and Information Science (ICIS), 2016 IEEE/ACIS 15th International Conference on. IEEE, 2016, pp. 1–6.
[70] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[71] G.-H. Chen, C.-L. Yang, and S.-L. Xie, "Gradient-based structural similarity for image quality assessment," in Image Processing, 2006 IEEE International Conference on. IEEE, 2006, pp. 2929–2932.
[72] L.-x. Liu and Y.-q. Wang, "A mean-edge structural similarity for image quality assessment," in Fuzzy Systems and Knowledge Discovery, 2009. FSKD'09. Sixth International Conference on, vol. 5. IEEE, 2009, pp. 311–315.
[73] Q. Huynh-Thu and M. Ghanbari, "Scope of validity of psnr in image/video quality assessment," Electronics Letters, vol. 44, no. 13, pp. 800–801, 2008.
[74] D. M. Allen, "Mean square error of prediction as a criterion for selecting variables," Technometrics, vol. 13, no. 3, pp. 469–475, 1971.
[75] C.-H. Cheung and L.-M. Po, "A novel block motion estimation algorithm with controllable quality and searching speed," in Circuits and Systems, 2002. ISCAS 2002. IEEE International Symposium on, vol. 2. IEEE, 2002, pp. II–496.
[76] C.-K. Cheung and L.-M. Po, "Normalized partial distortion search algorithm for block motion estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 3, pp. 417–422, 2000.
[77] T. Hastie, R. Tibshirani, and J. Friedman, "Unsupervised learning," in The Elements of Statistical Learning. Springer, 2009, pp. 485–585.
[78] C. E. Rasmussen, "Gaussian processes for machine learning," 2006.
[79] A. Ben-Hur, D. Horn, H. T. Siegelmann, and V. Vapnik, "Support vector clustering," Journal of Machine Learning Research, vol. 2, no. Dec, pp. 125–137, 2001.
[80] J. C. Platt, "Fast training of support vector machines using sequential minimal optimization," Advances in Kernel Methods, pp. 185–208, 1999.
[81] M. D. Buhmann, "Radial basis functions," Acta Numerica 2000, vol. 9, pp. 1–38, 2000.
[82] A. R. Rivera, J. R. Castillo, and O. O. Chae, "Local directional number pattern for face analysis: Face and expression recognition," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1740–1752, 2013.
[83] T. Jabid, M. H. Kabir, and O. Chae, "Robust facial expression recognition based on local directional pattern," ETRI Journal, vol. 32, no. 5, pp. 784–794, 2010.
[84] M. S. Bartlett, G. Littlewort, I. Fasel, and J. R. Movellan, "Real time face detection and facial expression recognition: Development and applications to human computer interaction," in Computer Vision and Pattern Recognition Workshop, 2003. CVPRW'03. Conference on, vol. 5. IEEE, 2003, pp. 53–53.
[85] A. R. Rivera, J. A. R. Castillo, and O. Chae, "Recognition of face expressions using local principal texture pattern," in 2012 19th IEEE International Conference on Image Processing. IEEE, 2012, pp. 2609–2612.
[86] M. Liu, S. Li, S. Shan, and X. Chen, "Au-aware deep networks for facial expression recognition," in Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on. IEEE, 2013, pp. 1–6.
[87] S. W. Chew, P. Lucey, S. Lucey, J. Saragih, J. F. Cohn, and S. Sridharan, "Person-independent facial expression detection using constrained local models," in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on. IEEE, 2011, pp. 915–920.
[88] L. A. Jeni, D. Takacs, and A. Lorincz, "High quality facial expression recognition in video streams using shape related information only," in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on. IEEE, 2011, pp. 2168–2174.
[89] S. Yang and B. Bhanu, "Understanding discrete facial expressions in video using an emotion avatar image," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 4, pp. 980–992, 2012.
[90] L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, and D. N. Metaxas, "Learning active facial patches for expression analysis," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 2562–2569.
[91] T. Ahonen, A. Hadid, and M. Pietikainen, "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
[92] Z. Xie and G. Liu, "Weighted local binary pattern infrared face recognition based on weber's law," in Image and Graphics (ICIG), 2011 Sixth International Conference on. IEEE, 2011, pp. 429–433.
[93] X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010.
[94] C. H. Chan, J. Kittler, N. Poh, T. Ahonen, and M. Pietikäinen, "(multiscale) local phase quantisation histogram discriminant analysis with score normalisation for robust face recognition," in Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 633–640.
[95] P. Action. (2017) Intervalo de confiança. [Online]. Available: http://www.portalaction.com.br/inferencia/intervalo-de-confianca
[96] D. Le Gall, "Mpeg: A video compression standard for multimedia applications," Communications of the ACM, vol. 34, no. 4, pp. 46–58, 1991.
[97] H. da Cunha Santiago, T. I. Ren, and G. D. Cavalcanti, "Facial expression recognition based on motion estimation," in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 1617–1624.
[98] IJCNN. (2016) IJCNN 2016 program. [Online]. Available: http://www.wcci2016.org/document/ijcnn2016_4.pdf
[99] I. P. Alonso, D. F. Llorca, M. Á. Sotelo, L. M. Bergasa, P. R. de Toro, J. Nuevo, M. Ocaña, and M. Á. G. Garrido, "Combination of feature extraction methods for svm pedestrian detection," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, pp. 292–307, 2007.
[100] J. K. Aggarwal and M. S. Ryoo, "Human activity analysis: A review," ACM Computing Surveys (CSUR), vol. 43, no. 3, 2011.