
Hearing for robots and AI

Keno Leon
8 min read · Jul 10, 2020

Teaching computers to hear like humans do.

Along with vision, hearing is one of the most beloved and useful senses we have, but what would it take to grant this sense to a computer or AI program? While we already have plenty of ways to interact with sound on computers, I am more interested in the natural way we perceive sound and how that could be written as a software program; it's just a fun topic.

Game plan: We will first talk about sound in general so we have something to work with, and briefly delve into the biology of sound as it relates to us humans; in a second part we will then implement a basic digital version of whatever we find.

Sound & Hearing

Sound in its simplest form is what happens when energy travels through a physical medium like air or water, creating acoustic vibrations that we later perceive as sound.

Sound is usually described in wave notation, which is useful for understanding and communicating concepts. Let's examine a simple beep sound and how it is represented in notation:

(1) Let's say there is a box that emits a beep (by rapidly vibrating a speaker cone). (2) Those vibrations travel in all directions from the box. (3) Picking just one along the horizontal axis (front view), we can then analyze it further.

Auditory vibrations (and waves in general) can be described via their cyclical components, their amplitude/volume, and their shape (here a sine wave). We can then describe a sound by the number of cycles per second, in this case 20, which equates to a 20 Hz (hertz), 1-second tone. Note: a 20 Hz tone sits at the lower edge of our hearing range; I am using it here for simplicity.
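To make that description concrete, here is a minimal sketch (my own, not from the article) that synthesizes the 1-second, 20 Hz sine tone above using only Python's standard library; the 44100 Hz sample rate and 0.5 amplitude are arbitrary choices of mine:

```python
# A minimal sketch (standard library only) of the 1-second, 20 Hz sine
# tone described above. The 44100 Hz sample rate and 0.5 amplitude are
# my own arbitrary choices, not values from the article.
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQUENCY = 20.0      # cycles per second -> 20 Hz
DURATION = 1.0        # seconds
AMPLITUDE = 0.5       # relative volume, 0.0 to 1.0

def sine_samples():
    """Yield one amplitude value per sample of the sine wave."""
    for i in range(int(SAMPLE_RATE * DURATION)):
        t = i / SAMPLE_RATE  # time in seconds
        yield AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t)

with wave.open("beep_20hz.wav", "w") as wav:
    wav.setnchannels(1)             # mono
    wav.setsampwidth(2)             # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(b"".join(
        struct.pack("<h", int(s * 32767)) for s in sine_samples()
    ))
```

If you play back the resulting beep_20hz.wav, most speakers (and ears) will barely register it, which echoes the note above about 20 Hz sitting at the edge of our range.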

Going back to how we perceive sound: after entering your ear, sound waves get amplified and eventually get picked up by hair cells inside the cochlea (at the organ of Corti, on the basilar membrane); these sound receptors are…
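One well-established detail worth sketching here is that the basilar membrane responds to different frequencies at different positions, effectively sorting incoming sound by frequency. In software, a Fourier transform is the usual stand-in for that kind of decomposition; the sketch below (my own, using NumPy, not code from the article) picks out the dominant frequencies in a signal:

```python
# A rough software analogue (my own sketch, not the author's code) of the
# basilar membrane's frequency separation, using a Fourier transform.
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def dominant_frequencies(samples, top=2):
    """Return the strongest frequency components of a mono signal."""
    spectrum = np.abs(np.fft.rfft(samples))                 # magnitude per bin
    freqs = np.fft.rfftfreq(len(samples), 1 / SAMPLE_RATE)  # bin -> Hz
    strongest = np.argsort(spectrum)[::-1][:top]            # loudest bins first
    return [(freqs[i], spectrum[i]) for i in strongest]

# Example: one second of a 20 Hz tone mixed with a 440 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = 0.5 * np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
for freq, magnitude in dominant_frequencies(signal):
    print(f"{freq:.1f} Hz (magnitude {magnitude:.0f})")
```

Run on the mixed example signal, the top components come out at 20 Hz and 440 Hz, a crude software echo of what the hair cells along the membrane report.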
