DETERMINING PERSONALIZED HEAD-RELATED TRANSFER FUNCTIONS USING AURALIZATION
by Shubham Kumar
Abstract – This project explores an enhanced way to deliver spatial audio in an observer's horizontal plane using acoustic modeling in place of low-pass filtering, interaural time differences, and simple attenuation functions. The approach described in this paper determines two interpolating functions, used as head-related transfer functions ("HRTFs"), which provide the illusion of spatial audio. The HRTFs are obtained by solving the wave and Helmholtz partial differential equations, whose parameters can be adjusted to account for any source and observer orientation in any anechoic space. In addition, the shape of the observer's head can be modeled to create a personalized HRTF, tailored to the specific observer instead of a standard one. The model successfully attenuated different frequency intervals and phase-shifted the audio signal for each channel, providing sufficient binaural cues to localize sound. Results from this implementation, although obtained through computationally intensive processes, could be extended to virtual reality systems, which would in theory provide a more realistic audio experience at a much lower cost.
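To make the binaural cues mentioned in the abstract concrete, the following is a minimal sketch of the simple cue model that an HRTF subsumes: an interaural time difference (here approximated with Woodworth's spherical-head formula) plus a crude interaural level difference. This is not the PDE-based method of the paper; the head radius, sample rate, and channel gains below are illustrative assumptions.

```python
import math

def woodworth_itd(azimuth_rad, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (seconds) for a source at the given azimuth.
    head_radius and c (speed of sound) are assumed typical values."""
    return (head_radius / c) * (azimuth_rad + math.sin(azimuth_rad))

def render_binaural(mono, azimuth_rad, fs=44100):
    """Apply an ITD (whole-sample delay) and a crude level difference
    to a mono signal. Positive azimuth = source to the observer's right,
    so the right ear receives the signal earlier and louder."""
    itd = woodworth_itd(abs(azimuth_rad))
    delay = int(round(itd * fs))          # ITD expressed in samples
    gain_near, gain_far = 1.0, 0.6        # illustrative level difference
    far = [0.0] * delay + [gain_far * s for s in mono]
    near = [gain_near * s for s in mono] + [0.0] * delay
    if azimuth_rad >= 0:                  # source on the right
        left, right = far, near
    else:
        left, right = near, far
    return left, right
```

For a source directly to the right (azimuth 90°), the formula yields an ITD of roughly 0.65 ms, consistent with the commonly cited maximum for an average human head; a full HRTF additionally encodes the frequency-dependent attenuation and phase shifts that this simple model omits.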