
3.4 Simulations

3.4.1 Results

Figure 3.10 shows the mean spectral distortion for both Isomap and PCA. Note that the proposed approach, using Isomap as the dimensionality reduction technique, yields better results than PCA. Moreover, Isomap produces less distortion even with fewer dimensions than PCA. Table 3.1 reports the confidence interval (±2σ, 95%) for several directions, where Isomap shows less variability than PCA.

On the other hand, as in other studies [69], the spectral distortion increases at high frequencies due to the complexity of the pinna (see Figure 3.11). Nevertheless, in our approach the distortion remains roughly below 5 dB.

Table 3.1: Confidence interval (±2σ, 95%) of the mean spectral distortion for several azimuths.

Azimuth   Mean Spectral Distortion (dB), Isomap   Mean Spectral Distortion (dB), PCA
-180      4.5 ± 1.1                               6.7 ± 1.4
-100      4.7 ± 0.7                               5.7 ± 3.2
0         5.0 ± 1.0                               5.7 ± 3.0
100       4.9 ± 1.0                               6.3 ± 3.7
145       4.7 ± 1.1                               6.0 ± 2.3
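The mean spectral distortion compared in Table 3.1 is commonly computed as the RMS of the dB difference between a reference and a reconstructed magnitude spectrum. The exact definition is not restated in this section, so the following sketch assumes that standard formulation (the function name and spectrum length are illustrative):

```python
import numpy as np

def spectral_distortion_db(h_ref, h_est, eps=1e-12):
    """RMS log-spectral distortion (dB) between two magnitude spectra."""
    diff_db = 20.0 * np.log10((np.abs(h_ref) + eps) / (np.abs(h_est) + eps))
    return float(np.sqrt(np.mean(diff_db ** 2)))

# Sanity checks: identical spectra give 0 dB; a uniform 6 dB gain gives ~6 dB.
h = np.ones(128)
print(spectral_distortion_db(h, h))                    # ~0
print(spectral_distortion_db(h, h * 10 ** (-6 / 20)))  # ~6.0
```

The ±2σ intervals in the table would then follow from the sample standard deviation of this quantity over the evaluated subjects.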

Chapter 3. HRTF personalization using Isomap in the horizontal plane

[Figure: anthropometric measurements. x1: head width; x2: head depth; x3: neck width; x4: shoulder width; d1: pinna width; d2: pinna height; d3: cavum concha width; d4: cavum concha height.]


Figure 3.10: Mean spectral distortion as a function of azimuth, for Isomap and PCA.

[Figure 3.11: Spectral distortion (dB) as a function of frequency (kHz), for Isomap and PCA.]

Chapter 4

Conclusions and Perspectives

As studied in the first two chapters, the spatial audio problem involves both engineering and psychoacoustic parameters, and the key elements in the analysis and synthesis of 3D audio are the HRTFs. Since HRTFs vary widely across individuals, it is necessary to personalize them.

With that aim, Chapter 3 proposed a new method to personalize HRTFs in the horizontal plane from anthropometric features. Besides using Isomap as a nonlinear dimensionality reduction method, the main contribution of this work is a new technique for building the Isomap graph that incorporates important prior information about the HRTFs. As the results show, incorporating prior knowledge into the neighbor selection in Isomap can lead to a better representation of the HRTF manifold. Moreover, the simulations show that the proposed approach outperforms PCA and confirm that Isomap is a promising reduction technique for HRTF analysis and synthesis, capable of uncovering the underlying nonlinear relations of auditory perception.
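The dissertation's specific graph-construction rule is not reproduced in this chapter, so the sketch below is an illustration only: it biases neighbor selection with a hypothetical group label (standing in for prior knowledge about the HRTFs) and then runs the classical Isomap pipeline of geodesic distances followed by MDS. The function name and the penalty rule are assumptions, not the author's method.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial.distance import cdist

def isomap_with_prior(X, groups, k=5, n_components=2):
    """Isomap sketch whose k-NN graph is biased by prior knowledge:
    same-group samples are preferred as neighbors (hypothetical rule)."""
    n = X.shape[0]
    D = cdist(X, X)
    # Penalize cross-group pairs so same-group points are picked first.
    biased = D + (groups[:, None] != groups[None, :]) * D.max()
    W = np.zeros((n, n))  # 0 = no edge (dense csgraph convention)
    for i in range(n):
        nbrs = np.argsort(biased[i])[1:k + 1]
        W[i, nbrs] = D[i, nbrs]
        W[nbrs, i] = D[i, nbrs]
    # Add a minimum spanning tree so the graph is always connected.
    mst = minimum_spanning_tree(D).toarray()
    conn = np.maximum(mst, mst.T)
    W = np.where((W == 0) & (conn > 0), conn, W)
    G = shortest_path(W, directed=False)  # geodesic distance estimates
    # Classical MDS on the squared geodesic distances.
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (G ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

The spanning-tree step is a common safeguard: a graph restricted too aggressively by prior knowledge can become disconnected, which would make some geodesic distances infinite.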

In previous research, in order to highlight the individuality of HRTFs across directions and ears, dimensionality reduction methods were applied separately for each direction and ear. The drawback of applying dimensionality reduction in that way is that the relations existing across different directions and ears are lost. It is therefore worth emphasizing that, to preserve those relations, the approach proposed in this dissertation applies the dimensionality reduction method once over the entire HRTF dataset, covering all subjects, directions, and ears in the horizontal plane.
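Concretely, applying the reduction once over the whole dataset amounts to stacking every (subject, direction, ear) response as one row of a single data matrix before any embedding is computed. A minimal sketch, with array sizes that are hypothetical rather than the database's actual dimensions:

```python
import numpy as np

# Hypothetical dataset shape: 45 subjects, 72 azimuths, 2 ears, 100 bins.
n_subj, n_dir, n_ear, n_freq = 45, 72, 2, 100
hrtfs = np.zeros((n_subj, n_dir, n_ear, n_freq))

# One row per (subject, direction, ear) triple: the reduction method is then
# run once on this single matrix, preserving cross-direction/ear relations.
X = hrtfs.reshape(-1, n_freq)
print(X.shape)  # (6480, 100)

# Map a row index back to its (subject, direction, ear) triple.
subj, direc, ear = np.unravel_index(np.arange(X.shape[0]),
                                    (n_subj, n_dir, n_ear))
```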

It is also worth mentioning that, initially, we attempted to use the LLE (Locally Linear Embedding) dimensionality reduction method instead of Isomap. Although both LLE and Isomap rely on building a graph to obtain a manifold, only the manifold obtained with Isomap was able to adequately represent the prior knowledge incorporated into the graph.
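Both methods are available in scikit-learn, which makes this kind of comparison easy to reproduce on synthetic data. The sketch below contrasts them on a toy manifold (a swiss roll), not on HRTFs; the qualitative difference is that Isomap preserves global geodesic distances while LLE preserves only local linear reconstruction weights.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Toy manifold (not HRTF data): both methods build a neighborhood graph,
# but they preserve different properties of it.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

emb_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
emb_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                                 random_state=0).fit_transform(X)
print(emb_iso.shape, emb_lle.shape)  # (500, 2) (500, 2)
```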


4.1 Perspectives

The HRTF personalization method proposed in this dissertation still needs to be evaluated through psychoacoustic experiments. These experiments should analyze the performance of our approach in reducing front-back and up-down reversals, as well as the localization error.

Another relevant aspect for future research is to extend the proposed HRTF personalization approach beyond the horizontal plane. As studied in Chapter 1, the human ear estimates the direction of a source better in the horizontal plane than in the median plane; in general, our accuracy in localizing sound sources outside the horizontal plane decreases. Accordingly, an important goal for future work will be to incorporate relevant information about our vertical perception into the construction of the Isomap graph.

Like any other dimensionality reduction technique, Isomap has its drawbacks. One disadvantage of Isomap, unlike PCA, is that it generates a low-dimensional space without producing an explicit mapping function. It thus becomes necessary to use some kind of approximation, such as neighborhood-based reconstruction, to rebuild the samples in the high-dimensional space. Furthermore, the lack of an explicit mapping function means that new samples (i.e., samples outside the database) can only be projected into the low-dimensional space through approximations (e.g., the Nyström approximation [86]). In the case of the approach proposed in this dissertation, the Nyström approximation was not able to project new samples adequately. Consequently, a future study could experiment with dimensionality reduction techniques that provide an explicit mapping function.
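The neighborhood-based reconstruction mentioned above can be sketched as follows: a point in the low-dimensional space is mapped back by averaging the high-dimensional samples whose embeddings are its nearest neighbors. The inverse-distance weighting below is one simple choice; the dissertation's exact scheme may differ, and all names are illustrative.

```python
import numpy as np

def reconstruct_high_dim(y_new, Y, X, k=8, eps=1e-9):
    """Map a low-dimensional point y_new back to high-dimensional space as an
    inverse-distance-weighted average of the high-dimensional samples X whose
    embeddings Y are its k nearest neighbors (one simple weighting choice)."""
    d = np.linalg.norm(Y - y_new, axis=1)
    nbrs = np.argsort(d)[:k]
    w = 1.0 / (d[nbrs] + eps)
    w /= w.sum()
    return w @ X[nbrs]
```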

Another limitation of the proposed approach is that the construction of the Isomap graph does not take into account prior knowledge of the frequency characteristics of the HRTFs. This fact, together with the difficulty of predicting the characteristics of the pinna, explains the increase in spectral distortion at high frequencies. As studied in the first two chapters, the factors that determine our auditory perception (e.g., IID, ITD, spectral cues) operate in different frequency bands. Finding a representation that incorporates this prior knowledge of the spectral behavior of HRTFs using a filter bank is therefore left for future work.
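One simple way to start incorporating that band-wise prior knowledge is to evaluate (or weight) the distortion per frequency band rather than over the full spectrum. The band edges below are purely illustrative placeholders for where ITD, ILD, and pinna spectral cues are usually said to dominate, not values taken from this work.

```python
import numpy as np

fs, n_freq = 44100, 256
freqs = np.linspace(0, fs / 2, n_freq)

# Illustrative band edges only (Hz), not values from the dissertation.
bands = [(0, 1500), (1500, 3000), (3000, 5000), (5000, fs / 2)]

def bandwise_distortion(h_ref, h_est, eps=1e-12):
    """Spectral distortion (dB) computed separately inside each band."""
    diff_db = 20.0 * np.log10((np.abs(h_ref) + eps) / (np.abs(h_est) + eps))
    return [float(np.sqrt(np.mean(diff_db[(freqs >= lo) & (freqs < hi)] ** 2)))
            for lo, hi in bands]
```

A filter-bank front end would replace the rectangular masks with proper band filters, but the bookkeeping is the same.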

One of the difficulties in spatial audio synthesis is that, owing to the difficulty of measuring HRTFs, the amount of measured HRTFs available for different subjects is small. Moreover, the resolution and measurement conditions vary across the different HRTF databases. For example, the low spatial resolution at lateral directions in the CIPIC HRTF database affected the construction of the manifold in the proposed approach (i.e., the HRTF clusters are not uniformly distributed over the manifold in Figure 3.6b). As future work, it would be interesting to study the possibility of fusing several HRTF databases measured under different conditions and spatial resolutions.

On the other hand, the regression method using neural networks proved capable of correctly predicting the spectral characteristics of the HRTFs from the anthropometric parameters. However, the spectral distortion in the HRTFs predicted by the neural network still needs to be reduced. The analysis of other machine learning techniques is therefore left for a future study.
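As a baseline for that future comparison, a multilayer perceptron regressor of the general kind used here can be set up in a few lines with scikit-learn. The anthropometric and spectral dimensions below are made up for illustration; this is not the network architecture from the dissertation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Made-up sizes: 8 anthropometric parameters -> 20 spectral features.
A = rng.normal(size=(200, 8))
S = A @ rng.normal(size=(8, 20)) + 0.1 * rng.normal(size=(200, 20))

# Small multilayer perceptron as a regression baseline.
net = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(A[:150], S[:150])
pred = net.predict(A[150:])
print(pred.shape)  # (50, 20)
```

Swapping `MLPRegressor` for support vector regression or a kernel method would be one concrete way to run the comparison suggested above.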


Finally, analyzing the construction of the manifold of near-field HRTFs (i.e., HRTFs that depend on both direction and distance) using Isomap, in order to determine whether Isomap can identify the factors of distance perception, is also left for future work. In that case, the challenge would be to introduce prior knowledge of our distance perception into the construction of the Isomap graph.

Publications

Felipe Grijalva, Luiz Martini, Siome Goldenstein, and Dinei Florencio. Anthropometric based customization of Head-Related Transfer Functions using Isomap in the horizontal plane. In 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 4493-4497, Florence, Italy, May 2014.

Bibliography

[1] R. Preibisch-Effenberger, Die Schallokalisationsfähigkeit des Menschen und ihre audiometrische Verwendung zur klinischen Diagnostik [The human faculty of sound localization and its audiometric application to clinical diagnostics], Ph.D. thesis, Technische Universität Dresden, 1966.

[2] Haustein and Schirmer, “Messeinrichtung zur Untersuchung des Richtungslokalisationsvermögens,” Hochfrequenztechnik und Elektroakustik, no. 79, pp. 96–101, 1970.

[3] Jens Blauert, Spatial hearing: the psychophysics of human sound localization, MIT press, revised edition, 1997.

[4] P. Damaske and B. Wagener, “Richtungshörversuche über einen nachgebildeten Kopf,” Acustica, vol. 21, no. 30, pp. 118, 1969.

[5] Pavel Zahorik, “Auditory display of sound source distance,” in Proc. Int. Conf. Audit. Disp., 2002, pp. 2–5.

[6] Wahidin Wahab and Dadang Gunawan, “Enhanced Individualization of Head-Related Impulse Response Model in Horizontal Plane Based on Multiple Regression Analysis,” in Int. Conf. Comput. Eng. Appl. 2010, vol. 2, pp. 226–230, IEEE.

[7] V. Algazi, R. Duda, D. Thompson, and C. Avendano, “The CIPIC HRTF database,” in Work. Appl. Signal Process. to Audio Acoust., New Platz, NY, USA, 2001, pp. 99–102, IEEE.

[8] Robert S. Woodworth and Harold Schlosberg, Experimental psychology, Oxford and IBH Publishing, 1954.

[9] Ronald Azuma, Mike Daily, and Jimmy Krozel, “Advanced human-computer interfaces for air traffic management and simulation,” in Flight Simul. Technol. Conf., Reston, Virginia, July 1996, American Institute of Aeronautics and Astronautics.

[10] Bruce N. Walker and Jeffrey Lindsay, “Navigation Performance With a Virtual Auditory Display: Effects of Beacon Sound, Capture Radius, and Practice,” Hum. Factors J. Hum. Factors Ergon. Soc., vol. 48, no. 2, pp. 265–278, June 2006.


[11] C Frauenberger and M Noisternig, “3D audio interfaces for the blind,” in Proc. 2003 Int. Conf. Audit. Disp., 2003, pp. 1–4.

[12] Amandine Afonso, Brian F. G. Katz, Alan Blum, Christian Jacquemin, and Michel Denis, “A study of spatial cognition in an immersive virtual audio environment: Comparing blind and blindfolded individuals,” in Proc. 10th Meet. Int. Conf. Audit. Disp., 2005, pp. 1–8.

[13] Gareth R. White, Geraldine Fitzpatrick, and Graham McAllister, “Toward accessible 3D virtual environments for the blind and visually impaired,” in Proc. 3rd Int. Conf. Digit. Interact. Media Entertain. Arts - DIMEA ’08, New York, USA, Sept. 2008, p. 134, ACM Press.

[14] Patrick Roth, Lori Petrucci, Thierry Pun, and André Assimacopoulos, “Auditory browser for blind and visually impaired users,” in CHI ’99 Ext. Abstr. Hum. Factors Comput. Syst., New York, New York, USA, 1999, pp. 218–219, ACM Press.

[15] Stuart Goose and C. Möller, “A 3D audio only interactive Web browser: using spatialization to convey hypermedia document structure,” in Proc. 7th ACM Int. Conf. Multimed. (Part 1), 1999, pp. 363–371.

[16] Jack M Loomis, Reginald G Golledge, and Roberta L Klatzky, “Navigation system for the blind: Auditory display modes and guidance,” Presence Teleoperators Virtual Environ., vol. 7, no. 2, pp. 193–203, 1998.

[17] Anatole Lécuyer, Pascal Mobuchon, Christine Mégard, Jérôme Perret, Claude Andriot, and J.-P. Colinot, “HOMERE: a multimodal system for visually impaired people to explore virtual environments,” in Proc. IEEE Virtual Reality, 2003, pp. 251–258, IEEE Comput. Soc.

[18] Yoshihiro Kawai and Fumiaki Tomita, “A Support System for Visually Impaired Persons Using Acoustic Interface–Recognition of 3-D Spatial Information–,” in Proc. 16th Intl. Conf. Pattern Recognit., 2001, pp. 974–977.

[19] Shraga Shoval, Iwan Ulrich, and Johann Borenstein, “NavBelt and the Guide-Cane [obstacle- avoidance systems for the blind and visually impaired],” Robot. Autom. Mag. IEEE, vol. 10, no. 1, pp. 9–20, 2003.

[20] Brian F. G. Katz, Slim Kammoun, Gaëtan Parseihian, Olivier Gutierrez, Adrien Brilhault, Malika Auvray, Philippe Truillet, Michel Denis, Simon Thorpe, and Christophe Jouffrais, “NAVIG: augmented reality guidance system for the visually impaired,” Virtual Real., vol. 16, no. 4, pp. 253–269, June 2012.

[21] Zhengyou Zhang, “Microsoft Kinect Sensor and Its Effect,” IEEE Multimed., vol. 19, no. 2, pp. 4–10, Feb. 2012.

[22] Durand R. Begault, 3D Sound for Virtual Reality and Multimedia, AP Professional, Cambridge, 1994.


[23] Donald Mershon and L King, “Intensity and reverberation as factors in the auditory perception of egocentric distance,” Attention, Perception, Psychophys., vol. 18, no. 6, pp. 409–415, 1975.

[24] Lord Rayleigh, “On our perception of sound direction,” Philos. Mag. Ser. 6, vol. 13, no. 74, pp. 214–232, Feb. 1907.

[25] George F. Kuhn, “Model for the interaural time differences in the azimuthal plane,” J. Acoust. Soc. Am., vol. 62, no. 1, pp. 157, July 1977.

[26] B. McA. Sayers, “Acoustic-Image Lateralization Judgments with Binaural Tones,” J. Acoust. Soc. Am., vol. 36, no. 5, pp. 923, May 1964.

[27] S. S. Stevens and E. B. Newman, “The Localization of Actual Sources of Sound,” Am. J. Psychol., vol. 48, no. 2, pp. 297–306, 1936.

[28] W. E. Feddersen, “Localization of High-Frequency Tones,” J. Acoust. Soc. Am., vol. 29, no. 9, pp. 988, Sept. 1957.

[29] J C Middlebrooks and D M Greenhaw, “Sound localization by human listeners.,” Annu. Rev. Psychol., vol. 42, no. 1, pp. 135, 1991.

[30] Hans Wallach, “The role of head movements and vestibular and visual cues in sound localization,” J. Exp. Psychol., vol. 27, no. 4, pp. 339–368, 1940.

[31] Willard R. Thurlow, “Effect of Induced Head Movements on Localization of Direction of Sounds,” J. Acoust. Soc. Am., vol. 42, no. 2, pp. 480, Aug. 1967.

[32] V Ralph Algazi, Carlos Avendano, and Richard O Duda, “Elevation localization and head-related transfer function analysis at low frequencies,” J. Acoust. Soc. Am., vol. 109, no. 3, pp. 1110–1122, 2001.

[33] Kanji Watanabe, Kenji Ozawa, Yukio Iwaya, Yo Iti Suzuki, and Kenji Aso, “Estimation of interaural level difference based on anthropometry and its effect on sound localization.,” J. Acoust. Soc. Am., vol. 122, no. 5, pp. 2832–41, Nov. 2007.

[34] Elizabeth M. Wenzel, Marianne Arruda, Doris J. Kistler, and Frederic L. Wightman, “Localization using nonindividualized head-related transfer functions,” J. Acoust. Soc. Am., vol. 94, no. 1, pp. 111–123, July 1993.

[35] Paul D. Coleman, “An analysis of cues to auditory depth perception in free space.,” Psychol. Bull., vol. 60, no. 3, pp. 302, 1963.

[36] D. S. Brungart and W. M. Rabinowitz, “Auditory localization of nearby sources. Head-related transfer functions,” J. Acoust. Soc. Am., vol. 106, no. 3, pp. 1465–1479, Sept. 1999.

[37] B. G. Shinn-Cunningham, “Localizing sound in rooms,” ACM/SIGGRAPH Eurographics

[38] G H Recanzone, “Rapidly induced auditory plasticity: the ventriloquism aftereffect.,” in Proc. Natl. Acad. Sci. U. S. A., Feb. 1998, number 3, pp. 869–75.

[39] Patrick M. Zurek, “The precedence effect,” in Dir. Hear., pp. 85–105, Springer, 1987.

[40] Helmut Haas, “The influence of a single echo on the audibility of speech,” J. Audio Eng. Soc., vol. 20, no. 2, pp. 146–159, 1972.

[41] Heinrich Kuttruff, Room acoustics, CRC Press, 2000.

[42] JM Loomis and RG Golledge, “Personal guidance system for the visually impaired,” in Proc. first Annu. ACM Conf. Assist. Technol. 1994, ACM.

[43] Myung-Suk Song, Cha Zhang, D. Florencio, and Hong-Goo Kang, “An Interactive 3-D Audio System With Loudspeakers,” Multimedia, IEEE Trans., vol. 13, no. 5, pp. 844–855, 2011.

[44] Henrik Møller, “Fundamentals of binaural technology,” Appl. Acoust., vol. 36, no. 3-4, pp. 171–218, 1992.

[45] Daniele Pralong, “The role of individualized headphone calibration for the generation of high fidelity virtual auditory space,” J. Acoust. Soc. Am., vol. 100, no. 6, pp. 3785, Dec. 1996.

[46] David Schonstein, Laurent Ferré, and Brian F. Katz, “Comparison of headphones and equalization for virtual auditory source localization,” J. Acoust. Soc. Am., vol. 123, no. 5, pp. 3724, 2008.

[47] Durand R. Begault, “Perceptual Effects of Synthetic Reverberation on Three-Dimensional Audio Systems,” J. Audio Eng. Soc., vol. 40, no. 11, pp. 895–904, Nov. 1992.

[48] M R Schroeder and B S Atal, “Computer simulation of sound transmission in rooms,” Proc. IEEE, vol. 51, no. 3, pp. 536–537, 1963.

[49] Jerry Bauck and Duane H Cooper, “Generalized transaural stereo and applications,” J. Audio Eng. Soc., vol. 44, no. 9, pp. 683–705, 1996.

[50] S Carlile, C Jin, and V Van Raad, “Continuous virtual auditory space using HRTF interpolation: Acoustic and psychophysical errors,” in Proc. First IEEE Pacific-Rim Conf. Multimed., 2000, pp. 220–223.

[51] Guy-Bart Stan, Jean-Jacques Embrechts, and Dominique Archambeau, “Comparison of different impulse response measurement techniques,” J. Audio Eng. Soc., vol. 50, no. 4, pp. 249–262, 2002.

[52] Bill Gardner, Keith Martin, and Others, “HRTF measurements of a KEMAR dummy-head microphone,” Massachusetts Inst. Technol., vol. 280, no. 280, pp. 1–7, 1994.

[53] BoSun Xie, “ERRATA Chapter 5, HRTF filter models,” in Head-Related Transf. Funct. Virtual Audit. Disp., 2013.


[54] K. Genuit and N. Xiang, “Measurements of artificial head transfer functions for auralization and virtual auditory environment,” Proc. 15th ICA, Trondheim, vol. 2, pp. 469–472, 1995.

[55] Bjarke P. Bovbjerg, Flemming Christensen, Pauli Minnaar, and Xiaoping Chen, “Measuring the Head-Related Transfer Functions of an Artificial Head with a High-Directional Resolution,” in Audio Eng. Soc. Conv. 109, Audio Engineering Society, 2000.

[56] BoSun Xie, XiaoLi Zhong, Dan Rao, and ZhiQiang Liang, “Head-related transfer function database and its analyses,” Sci. China Ser. G Physics, Mech. Astron., vol. 50, no. 3, pp. 267–280, June 2007.

[57] H Mertens, “Directional hearing in stereophony theory and experimental verification,” EBU Rev., vol. 92, pp. 146–158, 1965.

[58] K A J Riederer, “Head-related transfer function measurements,” Master’s Thesis. Helsinki Univ. Technol. Finl., 1998.

[59] Zhong Xiaoli, “Criterion selection in the leading-edge method for evaluating interaural time difference,” Audio Eng., vol. 31, no. 9, pp. 47–52, 2007.

[60] Jack Hebrank, “Spectral cues used in the localization of sound sources on the median plane,” J. Acoust. Soc. Am., vol. 56, no. 6, pp. 1829, Aug. 1974.

[61] Alan V Oppenheim, Ronald W Schafer, John R Buck, and Others, Discrete-time signal processing, vol. 2, Prentice-hall Englewood Cliffs, 1989.

[62] D J Kistler and F L Wightman, “A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction.,” J. Acoust. Soc. Am., vol. 91, no. 3, pp. 1637–47, Mar. 1992.

[63] A. Kulkarni, S. K. Isabelle, and H. S. Colburn, “On the minimum-phase approximation of head-related transfer functions,” in Work. Appl. Signal Process. to Audio Acoustics, 1995, pp. 84–87, IEEE.

[64] Aki Härmä, Julia Jakka, Miikka Tikander, Matti Karjalainen, Tapio Lokki, Jarmo Hiipakka, and Gaëtan Lorho, “Augmented Reality Audio for Mobile and Wearable Appliances,” J. Audio Eng. Soc., vol. 52, no. 6, pp. 618–639, June 2004.

[65] HG Fisher and SJ Freedman, “The role of the pinna in auditory localization.,” J. Aud. Res., 1968.

[66] PMC Morse, Theoretical acoustics, McGraw-Hill, New York, USA, 1986.

[67] V. Ralph Algazi, Richard O. Duda, Ramani Duraiswami, Nail A. Gumerov, and Zhihui Tang, “Approximating the head-related transfer function using simple geometric models of the head and torso,” J. Acoust. Soc. Am., vol. 112, no. 5, pp. 2053, Oct. 2002.


[68] Makoto Otani and Shiro Ise, “Fast calculation system specialized for head-related transfer function based on boundary element method,” J. Acoust. Soc. Am., vol. 119, no. 5, pp. 2589, May 2006.

[69] Takanori Nishino, Kazuhiro Iida, Naoya Inoue, Kazuya Takeda, and Fumitada Itakura, “Estimation of HRTFs on the horizontal plane using physical features,” Appl. Acoust., vol. 68, no. 8, pp. 897–908, 2007.

[70] Graham Grindlay and M. Alex O. Vasilescu, “A Multilinear (Tensor) Framework for HRTF Analysis and Synthesis,” in Acoust. Speech Signal Process. (ICASSP), 2007 IEEE Int. Conf. 2007, vol. 1, pp. 161–164, IEEE.

[71] Lin Li and Qinghua Huang, “HRTF Personalization Modeling based on RBF Neural Network,” in Acoust. Speech Signal Process. (ICASSP), 2013 IEEE Int. Conf. 2013, pp. 3707–3710, IEEE.

[72] Q Huang and Y Fang, “Modeling personalized head-related impulse response using support vector regression,” J. Shanghai Univ., vol. 13, no. 6, pp. 428–432, 2009.

[73] Hongmei Hu, Lin Zhou, Hao Ma, and Zhenyang Wu, “HRTF personalization based on artificial neural network in individual virtual auditory space,” Appl. Acoust., vol. 69, no. 2, pp. 163–172, Feb. 2008.

[74] R. Duraiswami and V. C. Raykar, “The Manifolds of Spatial Hearing,” in Acoust. Speech Signal Process. (ICASSP), 2005 IEEE Int. Conf., 2005, vol. 3, pp. 285–288, IEEE.

[75] Bill Kapralos and Nathan Mekuz, “Application of dimensionality reduction techniques to HRTFs for interactive virtual environments,” in Int. Conf. Adv. Comput. Entertain. Technol. 2007, pp. 256–257, ACM.

[76] Bill Kapralos, Nathan Mekuz, Agnieszka Kopinska, and Saad Khattak, “Dimensionality reduced HRTFs: a comparative study,” in Int. Conf. Adv. Comput. Entertain. Technol., New York, New York, USA, Dec. 2008, p. 59, ACM.

[77] Wolfram Research, Inc., “Wolfram MathWorld,” 2014.

[78] H. S. Seung and D. D. Lee, “The manifold ways of perception,” Science, vol. 290, pp. 2268–2269, 2000.

[79] V. Ralph Algazi, Carlos Avendano, and Richard O. Duda, “Estimation of a Spherical-Head Model from Anthropometry,” J. Audio Eng. Soc., vol. 49, no. 6, pp. 472–479, June 2001.

[80] J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, vol. 290, pp. 2319–2323, Dec. 2000.

[81] W Michael Brown, Shawn Martin, Sara N Pollock, Evangelos A Coutsias, and Jean-Paul Watson, “Algorithmic dimensionality reduction for molecular structure analysis.,” J. Chem. Phys., vol. 129, no. 6, pp. 064118, Aug. 2008.


[82] Lawrence K. Saul and Sam T. Roweis, “Think globally, fit locally: Unsupervised learning of nonlinear manifolds,” J. Mach. Learn. Res., vol. 4, pp. 119–155, 2002.

[83] BoSun Xie, XiaoLi Zhong, Dan Rao, and ZhiQiang Liang, “Head-related transfer function database and its analyses,” Sci. China Ser. G Physics, Mech. Astron., vol. 50, no. 3, pp. 267–280, June 2007.

[84] Laurens van der Maaten, Eric Postma, and Jaap van den Herik, “Dimensionality reduction: A comparative review,” J. Mach. Learn. Res., vol. 10, pp. 1–41, 2009.

[85] M. Zhang, R. A. Kennedy, T. D. Abhayapala, and W. Zhang, “Statistical method to identify
