Acoustics Australia

Vol 42 No 2

CONTENTS

August 2014


LETTERS

Infrasound Sensitivity and Harmonic Distortion
Henri Boutin, John Smith and Joe Wolfe
PDF Full Paper

ARTICLES

Native and Non-Native Speech Perception
Daniel Williams and Paola Escudero
PDF Full Paper

In Thrall to the Vocabulary
Anne Cutler
PDF Full Paper

Active Listening: Speech Intelligibility in Noisy Environments
Simon Carlile
PDF Full Paper

Auditory Grammar
Yoshitaka Nakajima, Takayuki Sasaki, Kazuo Ueda and Gerard B. Remijn
PDF Full Paper

Low Frequency Spatialization in Electro-Acoustic Music and Performance: Composition Meets Perception
Roger T. Dean
PDF Full Paper

Neuroscientific Investigations of Musical Rhythm
Daniel J. Cameron and Jessica A. Grahn
PDF Full Paper

Music Training: Lifelong Investment to Protect the Brain From Aging and Hearing Loss
Nina Kraus and Travis White-Schwoch
PDF Full Paper

Pitch Processing in Music and Speech
Barbara Tillmann
PDF Full Paper

Cochlear Implants Can Talk But Cannot Sing In Tune
Jeremy Marozeau, Ninia Simon and Hamish Innes-Brown
PDF Full Paper

Acoustics Forum

Introducing the Noise Database
Elizabeth Beach
PDF Full Paper

News
Workplace Health and Safety News
AAS News
New Products
Future Conference
Sustaining Members
Diary
Advertisers Index


Native and Non-Native Speech Perception

Daniel Williams1 and Paola Escudero2
1 Institute of Humanities & Creative Arts, University of Worcester, Worcester, WR2 6AJ, United Kingdom
2 MARCS Institute, University of Western Sydney, Sydney, Australia

Vol. 42, No. 2 pp 79 - 83 (2014)
ABSTRACT: This review examines research on speech perception by both native and non-native listeners. The development of speech perception in infancy is first considered and a theoretical model that accounts for this is introduced. A brief overview then follows of several research areas under the umbrella of non-native speech perception, namely cross-dialect, cross-language and second-language speech perception. It is shown that non-native and native speech perception is critically shaped by the specific ways in which speakers use acoustic cues in speech production.

In Thrall to the Vocabulary

Anne Cutler
The MARCS Institute, University of Western Sydney, Sydney, Australia

Vol. 42, No. 2 pp 84 - 89 (2014)
ABSTRACT: Vocabularies contain hundreds of thousands of words built from only a handful of phonemes, so longer words inevitably tend to contain shorter ones. Recognising speech thus requires distinguishing intended words from accidentally present ones. Acoustic information in speech is used wherever it contributes significantly to this process, but as this review shows, its contribution differs across languages. For example, identical and equivalently present information distinguishing the same phonemes is used in Polish but not in German, and in English but not in Italian; identical stress cues are used in Dutch but not in English; and expectations about likely embedding patterns differ across English, French and Japanese.
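The notion of accidental embedding (shorter vocabulary words hidden inside longer ones) can be made concrete with a few lines of code. The mini-lexicon below is invented purely for the demonstration; this is a sketch of the idea, not of any model from the paper.

```python
# Toy illustration of "accidental embedding": every substring of a word
# is checked against a (tiny, invented) lexicon. Real spoken-word
# recognition must rule these spurious candidates out on the fly.
LEXICON = {"can", "cancel", "cell", "elder", "eldercare", "care"}

def embedded_words(word):
    """Return all lexicon words accidentally present inside `word`."""
    return sorted(
        word[i:j]
        for i in range(len(word))
        for j in range(i + 1, len(word) + 1)
        if word[i:j] in LEXICON and word[i:j] != word
    )

print(embedded_words("eldercare"))  # ['care', 'elder']
print(embedded_words("cancel"))     # ['can']
```

A listener hearing "eldercare" momentarily has "elder" and "care" as live candidates; the review's point is that which acoustic cues help discard such candidates varies by language.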

Active Listening: Speech Intelligibility in Noisy Environments

Simon Carlile
School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, Australia

Vol. 42, No. 2 pp 90 - 96 (2014)
ABSTRACT: Attention plays a central role in the problem of informational masking, a key element of the cocktail party problem, itself first described more than 60 years ago. This review considers recent research that has illuminated how attention operates, not only on the auditory objects of perception, but on the processes of grouping and streaming that give rise to those objects. Competition between endogenous and exogenous attention, the acoustic and informational separability of the objects making up an auditory scene, and their interaction with the task requirements of the listener all paint a picture of a complex heterarchy of functions.

Auditory Grammar

Yoshitaka Nakajima1, Takayuki Sasaki2, Kazuo Ueda1 and Gerard B. Remijn1
1 Department of Human Science/Research Center for Applied Perceptual Science, Kyushu University, Fukuoka 815-8540, Japan
2 Department of Psychological and Behavioral Science, Miyagi Gakuin Women's University, Sendai 981-8557, Japan

Vol. 42, No. 2 pp 97 - 101 (2014)
ABSTRACT: Auditory streams are considered basic units of auditory percepts, and an auditory stream is a concatenation of auditory events and silences. In our recent book, we proposed a theoretical framework in which auditory units equal to or smaller than auditory events, i.e., auditory subevents, are integrated linearly to form auditory streams. A simple grammar, Auditory Grammar, was introduced to rule out nonsense chains of subevents, e.g., a silence followed immediately by an offset (a termination): a silence represents a state without sound, so placing an offset, i.e., the end of a sound, immediately after it should be prohibited as ungrammatical. By assuming this grammar and a few gestalt principles, including the proximity principle, we can interpret or reinterpret several auditory phenomena from a unified viewpoint, such as the gap transfer illusion, the split-off phenomenon, the auditory continuity effect, and perceptual extraction of a melody in a very reverberant room.

Audio files can be downloaded for:

  1. Figure 2a
  2. Figure 2b
  3. Figure 3
  4. Figure 4
  5. Figure 5
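The idea of ungrammatical subevent chains can be sketched as a simple adjacency check over a sequence of subevent tokens. The token names and the set of forbidden pairs below are illustrative assumptions for the sketch, not the authors' actual grammar.

```python
# Toy finite-state check inspired by the Auditory Grammar idea: an
# auditory stream is a sequence of subevents, and certain adjacent
# pairs are "ungrammatical", e.g. an offset immediately after a
# silence (a sound cannot end when none is sounding).
FORBIDDEN_PAIRS = {
    ("silence", "offset"),   # nothing is sounding, so nothing can end
    ("offset", "offset"),    # a sound cannot end twice in a row
    ("onset", "onset"),      # within one stream, a sound starts once
}

def is_grammatical(stream):
    """True if no forbidden pair of adjacent subevents occurs."""
    return all(pair not in FORBIDDEN_PAIRS
               for pair in zip(stream, stream[1:]))

print(is_grammatical(["onset", "filler", "offset", "silence"]))  # True
print(is_grammatical(["silence", "offset"]))                     # False
```

Illusions such as the gap transfer effect can then be read as the auditory system re-pairing onsets and offsets so that the perceived stream stays grammatical.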

Low Frequency Spatialization in Electro-Acoustic Music and Performance: Composition Meets Perception

Roger T. Dean
austraLYSIS, Sydney, Australia, and MARCS Institute, University of Western Sydney, Sydney, Australia

Vol. 42, No. 2 pp 102 - 110 (2014)
ABSTRACT: The article takes the perspectives of an electro-acoustic musician and an auditory psychologist to consider detection of localization and movement of low frequency sounds in reverberant performance environments. The considerable literature on low frequency localization perception in free field, non-reverberant environments is contrasted with the sparser work on reverberant spaces. A difference of opinion about reverberant environments has developed between, on the one hand, audio engineers and many musicians (broadly believing that low frequency localization capacities are essentially negligible), and on the other, psychoacousticians (broadly believing those capacities are limited but significant). An exploratory auditory psychology experiment is presented which supports the view that detection of both localization and movement in low frequency sounds in ecological performance studio conditions is good. This supports the growing enthusiasm of electro-acoustic musicians for sound performance using several sub-woofers.

Neuroscientific Investigations of Musical Rhythm

Daniel J. Cameron1 and Jessica A. Grahn1,2
1 Brain and Mind Institute, Western University, London, Canada
2 Department of Psychology, Western University, London, Canada

Vol. 42, No. 2 pp 111 - 116 (2014)
ABSTRACT: Music occurs in every human society, unfolds over time, and enables synchronized movements. The neural mechanisms underlying the perception, cognition, and production of musical rhythm have been investigated using a variety of methods. fMRI studies in particular have shown that the motor system is crucially involved in rhythm and beat perception. Studies using other methods demonstrate that oscillatory neural activity entrains to regularities in musical rhythm, and that motor system excitability is modulated by listening to musical rhythm. This review paper describes some of the recent neuroscientific findings regarding musical rhythm, and especially the perception of a regular beat.

Music Training: Lifelong Investment to Protect the Brain From Aging and Hearing Loss

Nina Kraus1,2 and Travis White-Schwoch1
1 Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) and Department of Communication Sciences, Northwestern University, Evanston, IL, USA
2 Department of Neurobiology & Physiology, Northwestern University, Evanston, IL, USA, and Department of Otolaryngology, Northwestern University, Chicago, IL, USA

Vol. 42, No. 2 pp 117 - 123 (2014)
ABSTRACT: Age-related declines in the auditory system contribute strongly to older adults' communication difficulties, especially understanding speech in noisy environments. With the aging population growing rapidly there is an expanding need to discover means to offset or remediate these declines. Music training has emerged as a potential tool to set up the brain for healthy aging. Due to the overlap between neural circuits dedicated to speech and music, and the strong engagement of cognitive, sensorimotor, and reward circuits during music making, music training is thought to be a strong driver of neural plasticity. Comparisons of musicians and non-musicians across the lifespan have revealed that musicians have stronger neural processing of speech across timescales, ranging from the sentence and word level to consonant features on a millisecond level. These advantages are also present in older adult musicians, and they generalise to advantages in memory, attention, speed of processing, and understanding speech in noise. Excitingly, even older adult musicians with hearing loss maintain these neurophysiological and behavioural advantages, outperforming non-musicians with normal hearing on many auditory tasks. Delineating the neurophysiological and behavioural advantages associated with music experience in older adults, both with normal hearing and hearing loss, can inform the development of auditory training strategies to mitigate age-related declines in neural processing. These prospective enhancements can provide viable strategies to mitigate older adults' challenges with everyday communication.

Pitch Processing in Music and Speech

Barbara Tillmann1,2,3
1 CNRS, UMR5292; INSERM, U1028; Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics team, Lyon, F-69000, France
2 University of Lyon 1, Lyon, F-69000, France
3 MARCS Institute, University of Western Sydney, Sydney, Australia

Vol. 42, No. 2 pp 124 - 130 (2014)
ABSTRACT: The present paper proposes an overview of research that investigates pitch processing by considering cognitive processes (related to context, learning, memory and/or knowledge) for both music and language materials. Research investigating cross-domain influences of expertise (either in music or tone languages) and deficits (as in congenital amusia), referred to as positive and negative transfer effects, also contributes to our understanding of the domain-specificity or -generality of mechanisms involved in pitch processing.

Cochlear Implants Can Talk But Cannot Sing In Tune

Jeremy Marozeau, Ninia Simon and Hamish Innes-Brown
The Bionics Institute, East Melbourne, Australia

Vol. 42, No. 2 pp 131 - 135 (2014)
ABSTRACT: The cochlear implant is rightfully considered one of the greatest success stories in Australian biomedical research and development. It provides sound sensation to hundreds of thousands of people around the world, many of whom are able to understand and produce speech. The device was developed to optimise speech perception: parameters such as the choice of frequency bands and the signal processing used were chosen to maximise perceptual differences between vowels. However, these settings are far from suited to the perception of music, which might partly explain why many cochlear implant recipients cannot enjoy music through their implant.
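The band-based processing described in the abstract can be illustrated with a minimal noise-vocoder sketch of the kind often used to simulate implant hearing: split the signal into a few frequency bands, keep only each band's slowly varying envelope, and use it to modulate a band-limited noise carrier. The band edges, brick-wall filtering, and envelope cutoff below are arbitrary illustrative choices, not the parameters of any actual implant.

```python
# Minimal noise-vocoder sketch (pure NumPy). Crude FFT brick-wall
# filters stand in for a real filter bank; illustrative only.
import numpy as np

def bandpass(x, fs, lo, hi):
    """Zero all FFT bins outside [lo, hi] Hz (brick-wall filter)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

def envelope(x, fs, cutoff=50):
    """Half-wave rectify, then low-pass to keep the slow envelope."""
    return bandpass(np.maximum(x, 0.0), fs, 0, cutoff)

def vocode(signal, fs, band_edges):
    """Replace each band's fine structure with modulated noise."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        env = envelope(bandpass(signal, fs, lo, hi), fs)
        out += env * bandpass(noise, fs, lo, hi)
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
vocoded = vocode(tone, fs, [100, 500, 1500, 4000])
```

Because only envelopes survive, speech rhythm and intelligibility are largely preserved while the fine spectral structure that carries musical pitch is discarded, which is the mismatch the abstract describes.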

 

Newsflash

PROPOSED INTERNATIONAL YEAR OF SOUND 2019

Let's make 2019 the International Year of Sound!

A draft prospectus is available. Suggestions for major activities that would be truly international, to strengthen the application, are welcomed.

 

ACOUSTICS 2017

Perth, Western Australia 19-22 November 2017