Release of version presented at ICA 2019 in Aachen

Released by @m-r-s on 31 Mar 09:11

This code was used to prepare the poster titled
"Spatial Speech Intelligibility Map Rendering for Hearing Device Users with TASCAR, openMHA, and FADE",
presented at ICA 2019 in Aachen.

Abstract:
Listeners interact with a surrounding sound field by turning their heads to optimize its perception according to complex criteria. With impaired hearing, they perceive the sound field differently. In addition, hearing devices may continuously make decisions for their users to ensure sufficient speech recognition performance by providing level- and spatially dependent amplification. To build a model of this complex situation and better understand the interactions between the involved entities, sound fields are modeled with the Toolbox for Acoustic Scene Creation And Rendering (TASCAR), hearing devices with the open Master Hearing Aid (openMHA), and listeners with the Framework for Auditory Discrimination Experiments (FADE). The model simulates the transmission of speech information through the sound field, the hearing device, and the impaired auditory system with the German matrix sentence test and an automatic speech recognizer. For a given position and orientation of a listener, speech reception thresholds (SRTs) for virtual talkers at arbitrary positions in a room are predicted and depicted as a map. The current implementation shows better-ear binaural integration patterns. The results suggest interpreting the SRTs relative to the speech levels that normal-hearing interlocutors would choose in conversation, which in turn depend on the sound level at the position of the interlocutor.
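
To give a concrete picture of the map-rendering idea, here is a minimal Python sketch, not the code in this release: it sweeps virtual talker positions over a grid, takes the more favorable of two per-ear SRT predictions (a simple better-ear rule), and plots the result as a map. The function `predict_srt_per_ear` and its toy distance/angle model are hypothetical stand-ins; in the actual tool chain, TASCAR renders the acoustic scene, openMHA applies the hearing-device processing, and FADE predicts the SRTs with the German matrix sentence test and an automatic speech recognizer.

```python
# Hypothetical sketch of SRT map rendering; predict_srt_per_ear is a
# placeholder for the TASCAR/openMHA/FADE pipeline described in the abstract.
import numpy as np
import matplotlib.pyplot as plt

def predict_srt_per_ear(talker_xy, listener_xy, listener_orientation):
    """Placeholder: return (SRT_left, SRT_right) in dB for a virtual talker
    at talker_xy, given the listener's position and head orientation."""
    dx, dy = np.subtract(talker_xy, listener_xy)
    distance = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx) - listener_orientation
    # Toy model: SRT rises with distance; each ear is favored when the
    # talker is on its side (a crude stand-in for the head shadow).
    base = -20.0 + 20.0 * np.log10(max(distance, 0.1))
    srt_left = base - 3.0 * np.sin(angle)
    srt_right = base + 3.0 * np.sin(angle)
    return srt_left, srt_right

# Grid of virtual talker positions in a 6 m x 6 m room
xs = np.linspace(0.0, 6.0, 61)
ys = np.linspace(0.0, 6.0, 61)
listener = (3.0, 3.0)
orientation = 0.0  # listener facing the +x direction

srt_map = np.empty((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        srt_l, srt_r = predict_srt_per_ear((x, y), listener, orientation)
        # Better-ear integration: keep the more favorable (lower) SRT
        srt_map[i, j] = min(srt_l, srt_r)

plt.imshow(srt_map, origin="lower", extent=[0, 6, 0, 6], cmap="viridis")
plt.colorbar(label="Predicted SRT [dB]")
plt.scatter(*listener, marker="x", color="red", label="listener")
plt.xlabel("x [m]")
plt.ylabel("y [m]")
plt.title("Spatial SRT map (toy better-ear model)")
plt.legend()
plt.show()
```

The absolute values in this toy map are arbitrary; only its structure, lower SRTs near the listener and a left/right asymmetry from the better-ear rule, mirrors the better-ear binaural integration patterns described in the abstract.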