
D.6 Color difference through HSB

D.6.2 Analysis of the opening angle θ

The cosine of the opening angle θ between the pixel vectors I_l(t) and B_l(t) is given by

\begin{align}
\cos(\theta) &= \frac{I_l^T(t)\,B_l(t)}{\|I_l(t)\|\,\|B_l(t)\|}
 = \frac{\big(s_I(t)+v_I(t)\big)^T\big(s_B(t)+v_B(t)\big)}{\|s_I(t)+v_I(t)\|\,\|s_B(t)+v_B(t)\|} \nonumber\\
&= \frac{s_I^T(t)\,s_B(t) + s_I^T(t)\,v_B(t) + v_I^T(t)\,s_B(t) + v_I^T(t)\,v_B(t)}{\sqrt{s_I^2(t)+v_I^2(t)}\,\sqrt{s_B^2(t)+v_B^2(t)}}. \tag{D.8}
\end{align}
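As an illustration of how the decomposition in (D.8) can be evaluated numerically, the sketch below builds the vectors s + v in the (u_x, u_y, u_v) basis and computes cos(θ) directly. It is a minimal sketch, assuming u_x, u_y, u_v form the standard orthonormal basis of R^3, that the hue h is in radians, and that s and v are the magnitudes of the chromatic and brightness components used in the text; the function names are illustrative, not from the thesis.

\begin{verbatim}
import numpy as np

def cone_vector(h, s, v):
    # Pixel color as the 3-D vector s + v of (D.8): the chromatic part
    # s*(cos h, sin h, 0) lies in the u_x-u_y plane and the brightness
    # part (0, 0, v) lies along u_v.
    return np.array([s * np.cos(h), s * np.sin(h), v])

def cos_theta(h_I, s_I, v_I, h_B, s_B, v_B):
    # cos(theta) between I_l(t) and B_l(t), computed directly as in (D.8).
    I = cone_vector(h_I, s_I, v_I)
    B = cone_vector(h_B, s_B, v_B)
    return I @ B / (np.linalg.norm(I) * np.linalg.norm(B))
\end{verbatim}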

Each inner product in the numerator of (D.8) can be expanded in the basis (u_x, u_y, u_v):

\begin{align}
s_I^T(t)\,s_B(t) &= \big(s_I\cos(h_I)\,u_x + s_I\sin(h_I)\,u_y\big)^T\big(s_B\cos(h_B)\,u_x + s_B\sin(h_B)\,u_y\big) \nonumber\\
&= s_I\cos(h_I)\,u_x^T\big(s_B\cos(h_B)\,u_x + s_B\sin(h_B)\,u_y\big) + s_I\sin(h_I)\,u_y^T\big(s_B\cos(h_B)\,u_x + s_B\sin(h_B)\,u_y\big) \nonumber\\
&= s_I s_B\big(\cos(h_I)\cos(h_B)\,\|u_x\|^2 + \sin(h_I)\sin(h_B)\,\|u_y\|^2\big) + s_I s_B\big(\cos(h_I)\sin(h_B) + \sin(h_I)\cos(h_B)\big)\,u_x^T u_y \nonumber\\
&= s_I s_B\big(\cos(h_I)\cos(h_B) + \sin(h_I)\sin(h_B)\big) \nonumber\\
&= s_I s_B\cos(h_I - h_B), \tag{D.9}\\
s_I^T(t)\,v_B(t) &= \big(s_I\cos(h_I)\,u_x + s_I\sin(h_I)\,u_y\big)^T v_B u_v = s_I v_B\cos(h_I)\,u_x^T u_v + s_I v_B\sin(h_I)\,u_y^T u_v = 0, \tag{D.10}\\
v_I^T(t)\,s_B(t) &= s_B^T(t)\,v_I(t) = \big(s_B\cos(h_B)\,u_x + s_B\sin(h_B)\,u_y\big)^T v_I u_v = s_B v_I\cos(h_B)\,u_x^T u_v + s_B v_I\sin(h_B)\,u_y^T u_v = 0, \tag{D.11}\\
v_I^T(t)\,v_B(t) &= v_I v_B\,u_v^T u_v = v_I v_B. \tag{D.12}
\end{align}

Substituting (D.9)-(D.12) into (D.8) gives

\begin{align}
\cos(\theta) &= \frac{s_I s_B\cos(h_I - h_B) + v_I v_B}{\sqrt{s_I^2 + v_I^2}\,\sqrt{s_B^2 + v_B^2}} \nonumber\\
&= \left(\frac{s_I}{\sqrt{s_I^2 + v_I^2}}\right)\left(\frac{s_B}{\sqrt{s_B^2 + v_B^2}}\right)\cos(h_I - h_B) + \left(\frac{v_I}{\sqrt{s_I^2 + v_I^2}}\right)\left(\frac{v_B}{\sqrt{s_B^2 + v_B^2}}\right) \nonumber\\
&= \sin(\varphi_I)\sin(\varphi_B)\cos(h_I - h_B) + \cos(\varphi_I)\cos(\varphi_B) \nonumber\\
&= \tfrac{1}{2}\big(\cos(\varphi_I - \varphi_B) - \cos(\varphi_I + \varphi_B)\big)\cos(h_I - h_B) + \tfrac{1}{2}\big(\cos(\varphi_I + \varphi_B) + \cos(\varphi_I - \varphi_B)\big) \nonumber\\
&= \tfrac{1}{2}\big(\cos(\varphi_I - \varphi_B)\cos(h_I - h_B) - \cos(\varphi_I + \varphi_B)\cos(h_I - h_B) + \cos(\varphi_I + \varphi_B) + \cos(\varphi_I - \varphi_B)\big) \nonumber\\
&= \tfrac{1}{2}\Big(\cos(\varphi_I - \varphi_B)\big(1 + \cos(h_I - h_B)\big) + \cos(\varphi_I + \varphi_B)\big(1 - \cos(h_I - h_B)\big)\Big) \nonumber\\
&= \cos(\varphi_I - \varphi_B)\cos^2\!\left(\frac{h_I - h_B}{2}\right) + \cos(\varphi_I + \varphi_B)\sin^2\!\left(\frac{h_I - h_B}{2}\right). \tag{D.13}
\end{align}
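As a quick sanity check of (D.13), the closed form can be compared numerically against the direct inner product of (D.8), using ϕ = arctan(s/v) so that sin(ϕ) = s/√(s²+v²) and cos(ϕ) = v/√(s²+v²). This is a self-contained sketch under the same basis assumption as above; the random test values are arbitrary, not data from the thesis.

\begin{verbatim}
import numpy as np

def cos_theta_direct(h_I, s_I, v_I, h_B, s_B, v_B):
    # Direct inner product of (D.8) in the (u_x, u_y, u_v) basis.
    I = np.array([s_I * np.cos(h_I), s_I * np.sin(h_I), v_I])
    B = np.array([s_B * np.cos(h_B), s_B * np.sin(h_B), v_B])
    return I @ B / (np.linalg.norm(I) * np.linalg.norm(B))

def cos_theta_closed_form(h_I, s_I, v_I, h_B, s_B, v_B):
    # Closed form (D.13); phi is the opening angle of each color with
    # respect to the brightness axis u_v.
    phi_I, phi_B = np.arctan2(s_I, v_I), np.arctan2(s_B, v_B)
    dh = h_I - h_B
    return (np.cos(phi_I - phi_B) * np.cos(dh / 2) ** 2
            + np.cos(phi_I + phi_B) * np.sin(dh / 2) ** 2)

rng = np.random.default_rng(0)
for _ in range(5):
    h_I, h_B = rng.uniform(0.0, 2.0 * np.pi, size=2)
    s_I, s_B, v_I, v_B = rng.uniform(0.1, 1.0, size=4)
    assert np.isclose(cos_theta_direct(h_I, s_I, v_I, h_B, s_B, v_B),
                      cos_theta_closed_form(h_I, s_I, v_I, h_B, s_B, v_B))
\end{verbatim}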

Defining the angular differences ϕ_I − ϕ_B and h_I − h_B as

\begin{align}
\Delta\varphi &= \varphi_I - \varphi_B \tag{D.14}\\
\Delta h &= h_I - h_B \tag{D.15}
\end{align}

and substituting them into (D.13), one obtains

\begin{align}
\cos(\theta) = \cos(\Delta\varphi)\cos^2\!\left(\frac{\Delta h}{2}\right) + \cos(2\varphi_B + \Delta\varphi)\sin^2\!\left(\frac{\Delta h}{2}\right). \tag{D.16}
\end{align}

Equation (D.16) depends on the variables ϕ_B, ∆ϕ and ∆h, where

• the value taken by ∆ϕ is due to: (a) a variation in saturation only (see Figure D.8.a), (b) a variation in brightness only (see Figure D.8.b), or (c) a joint variation of both saturation and brightness (see Figure D.8.c);

• the value taken by ∆h, on the other hand, is due to the variation of the hues. This variation is therefore larger when the relation between the colors of the pixels I_l(t) and B_l(t) is complementary (see Figure D.9.a) rather than analogous (see Figure D.9.b), and it is zero when the relation is monochromatic (see Figure D.9.c); a short numerical illustration is sketched right after this list.
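As a hedged numerical illustration of the second point, Equation (D.16) can be evaluated for the three hue relations with ∆ϕ = 0, so that only the hue relation changes. The values of ϕ_B and ∆h below are arbitrary examples chosen for illustration, not taken from the thesis.

\begin{verbatim}
import numpy as np

def cos_theta_d16(phi_B, d_phi, d_h):
    # Equation (D.16): cos(theta) as a function of phi_B, delta-phi, delta-h.
    return (np.cos(d_phi) * np.cos(d_h / 2) ** 2
            + np.cos(2 * phi_B + d_phi) * np.sin(d_h / 2) ** 2)

phi_B = np.deg2rad(40.0)   # arbitrary opening angle of the background color
for name, d_h in [("monochromatic", np.deg2rad(0.0)),
                  ("analogous",     np.deg2rad(30.0)),
                  ("complementary", np.deg2rad(180.0))]:
    # Larger hue differences give a smaller cos(theta), i.e. a wider angle.
    print(f"{name:14s} cos(theta) = {cos_theta_d16(phi_B, 0.0, d_h):.3f}")
\end{verbatim}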

Figure D.8: Possible cases that give rise to ∆ϕ: (a) due solely to a variation in saturation; (b) due solely to a variation in brightness; (c) due to a joint variation of both saturation and brightness.

Figure D.9: Hue relations: (a) complementary; (b) analogous; (c) monochromatic.

Therefore, in general terms one can say that cos(θ) depends on the brightness and chromaticity components of the pixels I_l(t) and B_l(t). If the change-detection problem is now considered in a restricted form, in which a pixel I_l(t) belonging to the background lies within a small neighborhood centered at B_l(t), as represented in Figure D.10.a, then ∆ϕ ≈ 0 (a graphical representation is presented in Figure D.10.b), and Equation (D.16) is approximated by

\begin{align}
\cos(\theta) &\approx \cos^2\!\left(\frac{\Delta h}{2}\right) + \cos(2\varphi_B)\sin^2\!\left(\frac{\Delta h}{2}\right) \nonumber\\
&= 1 - \big(1 - \cos(2\varphi_B)\big)\sin^2\!\left(\frac{\Delta h}{2}\right) \nonumber\\
&= 1 - 2\left(\sin(\varphi_B)\sin\!\left(\frac{\Delta h}{2}\right)\right)^2. \tag{D.17}
\end{align}

Equation (D.17) indicates that, for the change-detection case and considering a constant pixel B_l(t), cos(θ) depends only on the value of the hue difference ∆h. Thus, the chromaticity components of I_l(t) and B_l(t), specifically the hue difference ∆h, define the value of cos(θ).
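The sketch below shows one way this observation could be turned into a simple chromatic-change test. It is illustrative only: the threshold value and the helper names are assumptions of this sketch, not part of the thesis, and it uses the last line of (D.17) as reconstructed above.

\begin{verbatim}
import numpy as np

def cos_theta_approx(phi_B, d_h):
    # Approximation (D.17): for a fixed phi_B it depends only on the hue
    # difference d_h.
    return 1.0 - 2.0 * (np.sin(phi_B) * np.sin(d_h / 2)) ** 2

def hue_change_detected(h_I, h_B, phi_B, threshold=0.98):
    # Flag a chromatic change when cos(theta) drops below an (arbitrary)
    # threshold; the hue difference is wrapped to (-pi, pi].
    d_h = np.angle(np.exp(1j * (h_I - h_B)))
    return cos_theta_approx(phi_B, d_h) < threshold

# Example with made-up hues (radians) and a 40-degree background opening angle:
print(hue_change_detected(0.55, 0.50, np.deg2rad(40.0)))  # small hue shift -> False
print(hue_change_detected(2.60, 0.50, np.deg2rad(40.0)))  # large hue shift -> True
\end{verbatim}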

Figure D.10: Pixels I_l(t) and B_l(t) considering the change-detection problem (a)
