Testing listening effort for speech comprehension using the individuals’ cognitive spare capacity
Abstract

Most hearing aid fittings today are based almost solely on the patient's audiogram. Although the loss of gain in the cochlea is important, a more optimal fitting requires additional individual parameters of the patient's cochlear loss together with the patient's cognitive abilities to process the auditory signal (Stenfelt & Rönnberg, 2009; Edwards, 2007). Moreover, the evaluation of the fitting is often based on a speech-in-noise task, with the aim of improving the individual patient's signal-to-noise ratio (SNR) threshold. As a consequence, hearing aid fitting may be seen as a process aimed at improving the patient's SNR threshold rather than at improving communication ability. However, subsequent to a hearing aid fitting, there can be great differences in SNR improvement between patients who have identical hearing impairment in terms of threshold data (the audiogram). The reasons are certainly complex, but one contributing factor may be individual differences in cognitive capacity and associated listening effort. Another way to think about amplified hearing is as a means of easing the listener's effort (Sarampalis et al., 2009). When the speech signal is degraded by noise or by a hearing impairment, more high-order cognitive, or top-down, processes are required to perceive and understand the signal, and listening is therefore more effortful. It is assumed that a hearing aid eases the listening effort for a hearing-impaired person. However, it is not clear how to measure listening effort. We here present a test designed to tap into the different cognitive aspects of listening effort: the Auditory Inference Span Test (AIST). The AIST is a dual-task hearing-in-noise test that combines auditory and memory processing and is well suited as a clinical test of listening effort.
Copyright (c) 2011 N. Rönnberg, S. Stenfelt, M. Rudner
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.