• Grason-Stadler TympStar Middle Ear Analyzer release 2 immittance meter – microprocessor-based and equipped with three probe-tone frequencies: 226, 678, and 1000 Hz. Tympanometric measures were carried out automatically by the device at 50 decapascals per second (daPa/s). Results were plotted in a graph and printed out. The analysis of ipsilateral acoustic reflexes was done with stimuli calibrated in dB HL, played on a loudspeaker assigned exclusively to the ipsilateral mode. The signal was digitally multiplexed, allowing the probe tone (226 Hz) to be separated from the stimulus, avoiding overlapping waves and the consequent generation of artifacts. The equipment used to analyze the ipsilateral acoustic reflex delivers a maximum output of 110 dB HL. It was calibrated for the altitude of the city of São Paulo, and all measures were taken with the electrical installation meeting the manufacturer's technical specifications (Grason-Stadler, 2001) 20
We know that any pathological agent, especially liquid in the middle ear cavity, causes a negative (absent) acoustic reflex; a negative acoustic reflex together with a type B tympanogram therefore indicates serous otitis media. Likewise, a negative acoustic reflex together with a type C, or even type A, tympanogram accompanied by positive clinical symptoms on examination also indicates serous otitis media. Patients with chronic otitis media (COM), Down syndrome, cleft palate, eardrum rupture, and OME were excluded from the study.
We had 17 patients with Möbius sequence (diagnosed by genetic testing - genotype) come to the Speech and Hearing Therapy School of the Catholic University of Pernambuco (UNICAP): 11 females and 6 males, with a mean age of 6 years and 8 months (range 3 to 13 years), without anatomical alterations of the external ear that would prevent the immittance exam. However, only 13 participants went through the immittance test, which consisted of tympanometry and the acoustic reflex study by means of an Interacoustics AZ7 immittance meter. Before each exam, the patients' guardians signed an informed consent form stating the study objectives and the other necessary information. After that, the participants underwent an otoscopic exam followed by the immittance exam. The procedure used for tympanometry was based on published descriptions 15,16,19,18 and their results were reported
Much has been studied on the role of the acoustic reflex in the communication process. Aim: To examine the responses of the contralateral acoustic reflex in children with normal hearing and phonological disorders; to investigate the relationship with the level of severity of the phonological disorder; and to measure the chances of it affecting all the frequencies tested. Materials and Methods: The study was based on the analysis of medical charts from 70 children with phonological disorders, 24 females and 46 males, aged between 5 and 7 years. Audiological tests were analyzed to exclude children with hearing loss, together with the evaluation of the contralateral acoustic reflex and the level of severity of the phonological disorder. Study Design: Prospective. Results: All children showed changes in the contralateral acoustic reflex. There was no significant relationship between the level of severity of the phonological disorder and changes in the acoustic reflex for either gender. Female children showed no statistically significant relationship between the frequencies, except at 500 Hz. Male children showed a more significant association between changes at the frequencies tested. Conclusion: It is believed that children with phonological disorders exhibit changes in the contralateral acoustic reflex.
We observed that the average contralateral acoustic reflex thresholds from 500 to 4000 Hz in group A individuals were very similar for the right and left ears (Table 2). The mean values for the frequencies of 500 to 4000 Hz are lower than the results found by Silverman, Silman and Miller (1983), who found averages for both ears of 95.1 dB HL, 93.4 dB HL, 95.9 dB HL and 95.7 dB HL at the frequencies of 500 to 4000 Hz. On the other hand, our results are similar to those obtained by Hall and Weaver (1979), where the mean values were 90 dB HL, 89 dB HL, 89 dB HL and 90 dB HL for both ears. The mean acoustic reflex threshold in group B varied from 99.7 dB HL to 114.7 dB HL (Table 3). Several authors agree that the mean acoustic reflex threshold for pure tones is 85 dB HL (Borg et al., 1990; Northern and Gabbard, 1994). In hearing-impaired populations, Higson et al. (1996) found a mean acoustic reflex threshold of 83 dB HL, against 83.4 dB HL in the control group.
There was a statistically significant difference between the CQ and SQ groups with regard to the presence of contralateral acoustic reflexes at 4000 Hz in left ears (p = 0.0094). Absence of contralateral acoustic reflexes predominated at the other frequencies in the CQ group (Table 2). These results are similar to those of other papers in that, when comparing groups with and without altered auditory processing, the mean levels required to evoke the contralateral acoustic reflex in the study groups were higher in both ears than those in the control groups, especially at 4000 Hz. 9 We did not find any explanation in the literature
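A presence/absence comparison of this kind is often run on a 2 × 2 contingency table (group × reflex present/absent) with Fisher's exact test. The source does not state which test produced p = 0.0094, so the following is only a minimal pure-Python sketch, with hypothetical counts, of the one-sided version of that test:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability, with all margins fixed, of a top-left count >= a."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        # hypergeometric probability of exactly x in the top-left cell
        p += comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    return p

# Hypothetical counts: reflexes absent in 12/20 CQ ears vs. 3/20 SQ ears.
p = fisher_exact_one_sided(12, 8, 3, 17)
```

In practice this is usually delegated to a statistics library; the hand-rolled version above only illustrates the hypergeometric sum behind the test.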
Background: A comparative study between the discomfort level and the acoustic reflex in workers. Aim: To observe hearing behavior, through the assessment of the contraction activity of the stapedius muscle and the discomfort level, in individuals who are and are not exposed to occupational noise, with the aim of identifying the influence of noise on the behavior of the stapedius muscle contraction and on hearing sensitivity. Method: This study was developed at the Serviço Social da Indústria - SESI - CE. One hundred and three adults with normal hearing, male and female, aged from 18 to 45 years, were divided into three groups: G1, 41 adults exposed to noise who used individual hearing protection (AIPE); G2, 32 adults exposed to noise who did not use AIPE; G3, 30 adults not exposed to noise. Participants underwent audiologic evaluation, including analysis of the acoustic reflex level (ARL) and discomfort level (DL) at the frequencies of 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, 4000 Hz and WN. For the statistical analysis, the Mann-Whitney, Wilcoxon and Kruskal-Wallis tests were used, with a significance level of 5%. Results: No statistically significant difference was identified in the ARL between the three groups, with mean values ranging from 93 to 103 dB HL; the ARL was significantly lower than the DL, with mean DL values varying from 111 to 119 dB HL for G1, from 113 to 120 dB HL for G2, and from 106 to 114 dB HL for G3; the DL was higher in individuals of G1, followed by individuals of G2 and G3. Conclusion: Exposure to noise does not determine changes in the behavior of the ARL; the DL rises with exposure to occupational noise; the DL is higher than the ARL by 10 to 25 dB.
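The between-group comparison named above (Mann-Whitney) reduces to a rank-sum computation. A minimal pure-Python sketch of the U statistic follows; the ARL values in the example call are hypothetical, not data from the study:

```python
def tied_ranks(values):
    """Ranks (1-based), averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U statistic for two independent samples (smaller of U1, U2)."""
    r = tied_ranks(list(x) + list(y))
    r1 = sum(r[:len(x)])               # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)

# Hypothetical ARL means (dB HL) for exposed vs. non-exposed workers:
u = mann_whitney_u([95, 98, 100, 103], [93, 96, 99, 97])
```

Converting U to a p-value at the 5% level additionally requires the null distribution of U (or its normal approximation), which a statistics library would supply.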
altered auditory processing. Study group values, however, were higher than control group values at the frequencies investigated (except at 500 Hz). As acoustic reflex sensitization is related to the efferent auditory system, and one of the functions of this system is cochlear protection against intense sound, it may be inferred that this system is more effective (increased inhibitory effect) in subjects with altered auditory processing, which would interfere with speech understanding in the presence of competing sounds. Furthermore, efferent auditory pathways are activated in the presence of loud sound, altering the cochlear mechanism by means of the outer hair cells, which would decrease the traveling wave magnitude. 1
The significant association between the phonemes /S/, /Z/ and /L/ and an altered acoustic reflex should be noted. The first two are fricative sounds and the latter is a lateral liquid; all are more complex sounds acquired later, and all share the [-ant] feature. It is possible that altered acoustic reflexes are an obstacle to accurately identifying the components of these sounds and assimilating them into the phonological system.
Acoustic plane waves are the simplest type of wave propagating through a fluid medium. The characteristic property of such waves is that parameters such as acoustic pressure and particle displacement have the same amplitude at all points of any plane perpendicular to the direction of propagation. An example is the wave propagating in a fluid confined in a rigid tube, generated by a vibrating piston positioned at one end of the tube. Any divergent wave in a homogeneous medium also assumes the characteristics of a plane wave when it propagates at long distances from its source (Gerges, 1994). An important remark about acoustic plane waves is that they have characteristics similar to those of longitudinal waves propagating in a bar. Consequently, it is possible to derive the wave equation for a fluid medium assumed to be confined in a rigid tube of constant cross-section (Burnley and Culick, 1997).
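Under these assumptions (one-dimensional propagation along the tube axis x, sound speed c), the resulting equation and its plane-wave solution can be sketched as:

```latex
% One-dimensional wave equation for the acoustic pressure p(x, t)
% in a fluid of sound speed c confined in a rigid tube:
\frac{\partial^2 p}{\partial x^2} = \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}
% A harmonic plane wave travelling in the +x direction, with angular
% frequency \omega and wavenumber k, is a solution:
p(x, t) = A\, e^{\,i(\omega t - k x)}, \qquad k = \frac{\omega}{c}
```

Substituting the plane wave into the equation gives $k^2 = \omega^2 / c^2$, which is the dispersion relation implied by $k = \omega / c$.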
4. Middle ear reflectance assessment in three steps: (A) Obtaining the reflectance curve in the frequency range 200–6000 Hz at an intensity of 60 dB SPL, with each stimulus lasting 0.1–10 s per point; collection was carried out with the chirp acoustic stimulus. (B) Retest to confirm the obtained reflectance curve. (C) The procedure was repeated with the simultaneous presence of contralateral noise, delivered through insert earphones at 30 dB SL relative to the white noise threshold. In the end, three measures were obtained in each ear. Based on these three measures, the difference between the response levels collected with and without contralateral noise was calculated.
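The with/without-noise comparison in step (C) amounts to a per-frequency subtraction. The function name and the reflectance values below are hypothetical; only the subtraction itself mirrors the procedure described:

```python
def noise_shift(quiet, with_noise):
    """Per-frequency difference between reflectance measured in quiet
    and with contralateral noise (positive = reflectance dropped)."""
    return {f: round(quiet[f] - with_noise[f], 3) for f in quiet}

# Hypothetical energy-reflectance values at three probe frequencies (Hz):
quiet = {500: 0.62, 1000: 0.48, 2000: 0.35}
noise = {500: 0.58, 1000: 0.45, 2000: 0.34}
shift = noise_shift(quiet, noise)
```

In the protocol above the subtraction would be applied across the full 200–6000 Hz curve rather than at three illustrative points.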
The use of acoustic barriers depends on the hearing capabilities of the target species. Fish present a continuum of hearing capabilities associated with the evolution of hearing structures (Popper & Fay, 2011). Fish with morphological specializations connecting air-filled cavities, such as the swimbladder, to the inner ear have enhanced hearing capabilities and can detect sound pressure in addition to the kinetic component of sound, i.e. particle motion. The Cyprinidae are notably sensitive to sound and can detect a wide frequency range (up to thousands of Hz) (Popper & Fay, 2011; Popper & Schilt, 2008). Species with no hearing specializations, such as the Salmonidae, are only able to detect particle motion, and their hearing sensitivity is restricted to low-frequency sounds of up to a few hundred hertz, e.g. 400 Hz in the Atlantic salmon Salmo salar L. 1758 (see figure 2.1 in Popper & Schilt, 2008). Due to the large variation in hearing structures, different species may react differently to an acoustic barrier, emphasizing the need for species-dedicated behavioural evaluation tests.
The natural requirement for higher data rates has pushed the market to higher frequency ranges and to coherent communication schemes. This shift revealed the shortcomings of current knowledge of acoustic propagation at frequencies above roughly 2 kHz, and of the effects of random environmental fluctuations that strongly affect signal coherence in the frequency band of interest, above 10 kHz. A large body of work over the last decade was devoted to improving coherent communications. To name just a few initiatives: effort was spent on a variety of channel diversity combiners aimed at enhancing signal-to-noise ratio by adding closely located receivers; brute-force multichannel decision feedback equalizers (M-DFE) were developed to optimally match the acoustic channel at each receiver; and time-reversal-based techniques were used to reduce intersymbol interference. In parallel, work was devoted to understanding environmental effects on the acoustic communication signal, which may lead to fundamental results in the long run. From there, an obvious evolution is to extend point-to-point (P2P) communication to structured networks of underwater acoustic modems, opening up a whole range of new possibilities for message transmission using, for example, routing and multi-hopping in a dense network of nodes.
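The adaptive-equalization idea behind these receivers can be illustrated with a minimal least-mean-squares (LMS) linear equalizer. This is a simpler relative of the decision feedback equalizers mentioned above, not an M-DFE, and the training signal is a toy example:

```python
def lms_equalizer(received, desired, taps=4, mu=0.1):
    """Minimal LMS adaptive linear equalizer: adapts tap weights so the
    filtered `received` signal tracks `desired` (the training sequence)."""
    w = [0.0] * taps
    out = []
    for n in range(len(received)):
        # tapped delay line over the received samples
        x = [received[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))
        e = desired[n] - y                               # instantaneous error
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]   # LMS weight update
        out.append(y)
    return w, out

# Toy check: identity channel with an alternating training sequence.
train = [(-1.0) ** n for n in range(50)]
w, out = lms_equalizer(train, train)
```

A real acoustic-modem receiver would train on a known preamble over a dispersive, time-varying channel and, in the DFE case, also feed back past symbol decisions; none of that is modeled here.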
Previous studies indicated that the primary neural pathways mediating PPI lie in the brainstem and that the inferior colliculus (IC) is crucial [14,22]. The central nucleus of the IC receives auditory input, which is relayed to the external nucleus of the IC before going to the middle layers of the superior colliculus (SC). In turn, the SC sends bilateral projections to the pedunculopontine tegmental nucleus (PPTg) [1,8,10]. The transient activation of these midbrain nuclei by the prepulse is converted into long-lasting inhibition of the giant neurons of the caudal pontine reticular nucleus (PnC), thus reducing the startle response. Large lesions of the IC eliminated the inhibition of acoustic startle by auditory but not by visual prepulses [8,13,20]. Electrical stimulation of the IC before the acoustic startle stimulus attenuated PPI in rats without major effects on startle amplitude [14,22]. Therefore, the IC is a critical part of the auditory pathway mediating acoustic PPI [8,10].
A total of 104 Fado singers participated in this study: 47 males and 57 females; 90 amateurs and 14 professionals, aged 18–67 years. The singers produced spoken tasks consisting of sustained [a, i, u, s, z] plus reading aloud, and sung tasks consisting of sustained [a, i, u] from the song "Nem às paredes confesso". Acoustic voice parameters were compared between males vs. females, professionals vs. amateurs, and young vs. older voices using independent-samples t-tests with α = .05.
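The group comparisons can be sketched with the independent-samples t statistic in its Welch form (which does not assume equal variances). The fundamental-frequency values in the example call are hypothetical, and a full test at α = .05 would also need the degrees of freedom and the t distribution:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical mean F0 (Hz) for male vs. female sustained [a]:
t = welch_t([118, 126, 131, 122], [205, 198, 214, 209])
```

With unbalanced group sizes such as 47 vs. 57 or 90 vs. 14, the Welch form is the safer default, since pooled-variance t-tests are sensitive to unequal variances in unequal groups.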
Underwater Acoustic Sensor Network (UASN): a collection of sensor nodes that communicate among themselves through the emerging underwater acoustic communication technology. Acoustic communication is the best choice underwater when compared to radio and optical waves, which is why it was chosen for underwater communication. Underwater acoustic networks are gaining attention due to their importance in underwater applications for military and commercial purposes.
Immune-mediated inner ear disease (IMIED) produces sensorineural deafness. Patients complain of reduced auditory acuity or reduced sound discrimination. In general, it is bilateral and rapidly progressive, and it may be associated with vestibular symptoms. The mechanism of the syndrome is not completely clear, but it is accepted to be immunologic in nature. It is known that the endolymphatic sac is an immunocompetent organ, and circulating antibodies against antigens of the inner ear, as well as viral endolymph antigens, are found in this condition. However, the sensitivity and specificity of those autoantibodies, and their role in the pathologic process, are poorly explained. Generally, early use of high-dose corticotherapy resolves or reduces the problem. In cases of persistent manifestations resistant to corticotherapy, another option is methotrexate, with results reported as good, fair or, sometimes, ineffective, according to several published papers.
Furthermore, refinement of interior noise in today's vehicles is a very challenging process, largely because the modern passenger cabin is highly refined in terms of its vibro-acoustic behavior. The acoustic signatures of primary noise sources such as the engine, tires, and exhaust, together with their transmission paths and related phenomena, have been well understood and controlled. Consequently, the relative contribution of secondary-level noise sources, such as vibrating structural surfaces like door panels excited by structure-borne inputs such as wheel-suspension forces caused by road undulations and surface irregularities, has become the limiting factor in the quest to improve passenger comfort. The resulting structural door panel vibrations, with their subsequent deformations, are hypothesized to be a secondary cause of passenger cabin noise.
While speech is effectively used for intellectual communion, music is a powerful catalyst and medium for communicating emotion. Singing is a marriage of speech and music that optimizes the animating and commissioning power of both, while the moments of acoustic silence provide the ambience for in-depth contemplation. The musical quality of a melody rendered by the human whistle is a symbolic representation of human vocal singing. The results presented here assess the subjective acoustic impact of a tune from sacred music rendered by a human whistle in comparison with the subjective acoustic effect of musical instruments (such as cello, clarinet, violins and ensemble) from different source locations (namely, the nave and the choir loft of the church). The comparative subjective religious comfort triggered by the acoustic effect is assessed through a derived Acoustic Comfort Impression Index (ACII) and several Acoustic Worship Indices (AWI), namely the Subjective Sacred Factor (SSaF), the Subjective Intelligibility Factor (SInF) and the Subjective Silence Factor (SSiF).