Abe Davis is finding new uses for video, reconstructing audio from the tiny vibrations it captures.
No sound? No problem. Abe Davis and a team of researchers from MIT, Microsoft, and Adobe developed an algorithm that can extract audio from silent videos by analyzing the tiny vibrations of the objects as captured by a camera.
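The core idea is that a vibrating object's motion, tracked frame by frame, traces out the sound wave that shook it. As a rough illustration (not the MIT team's actual algorithm, which recovers sub-pixel motion from video), the sketch below turns a hypothetical per-frame displacement signal into an audio waveform by detrending, normalizing, and resampling; the function name and parameters are invented for this example.

```python
import numpy as np

def displacements_to_audio(displacements, frame_rate, audio_rate=44100):
    """Illustrative sketch: convert a per-frame motion signal (e.g. the
    average displacement of an object across video frames) into an
    audio waveform. Hypothetical helper, not the published method."""
    d = np.asarray(displacements, dtype=float)
    d = d - d.mean()              # remove the DC offset (static position)
    peak = np.abs(d).max()
    if peak > 0:
        d = d / peak              # normalize amplitude to [-1, 1]
    # Resample from the camera's frame rate up to audio rate
    t_video = np.arange(len(d)) / frame_rate
    t_audio = np.arange(0.0, t_video[-1], 1.0 / audio_rate)
    return np.interp(t_audio, t_video, d)

# Example: a 440 Hz vibration "filmed" at 2200 frames per second
fps = 2200
frames = np.sin(2 * np.pi * 440 * np.arange(fps) / fps)  # 1 s of motion
audio = displacements_to_audio(frames, fps)
```

The high frame rate matters: by the Nyquist limit, a camera running at 2200 fps can only capture vibrations up to 1100 Hz, which is one reason the team's result with an ordinary point-and-shoot camera is notable.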
In one experiment, the team filmed earbuds playing a song with no discernible sound. The vibrations of the earbuds in the video were enough to recreate a song identifiable by the app Shazam. When the team repeated the experiment with an everyday point-and-shoot camera, rather than an expensive high-speed version, the algorithm could still reconstruct the sound from the vibrations. Davis presented these findings in a paper at SIGGRAPH, a computer-graphics conference, and gave a TED talk where he demoed the visual microphone. And there’s more to come: the latest research from Davis and fellow graduate student Katie Bouman will be out this summer.
Davis is a doctoral student at MIT.