Our team of nine allowed us to put together a stunning VR world, with a custom-designed environment and virtual paintings from painter Nathan DiPietro. Mike Chokran designed the custom sonic bone wand controller, and Cory Robertson brought professional Unity development expertise.
While the VR world was being built, the rest of the team discussed different options for mapping bodily expression to sound. We settled on the idea of painting lines in space to record the voice, with expressive playback being controlled by interacting with the lines using the sonic bone wand.
The Vox Augmento VR experience relies on two software systems working in coordination.
1. Unity powers the visuals and interactions, such as painting lines or colliding with them.
2. Max powers the audio engine, recording vocal samples, playing them back, and applying effects based on controller data sent from Unity.
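The write-up doesn't specify how the two systems talk to each other; a common choice for coordinating Unity with Max is OSC messages over UDP, which Max can receive with a [udpreceive] object. As an illustrative sketch (the address `/wand/position` and port 7400 are assumptions, not the project's actual values), here is a minimal stdlib-only Python encoder for the kind of controller-data packet Unity would send:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all 32-bit floats."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # OSC floats are big-endian
    return msg

# Hypothetical controller update: wand position streamed to Max on localhost
packet = osc_message("/wand/position", 0.25, 1.1, -0.4)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7400))
```

In the real project the sender lives inside Unity (C#), but the wire format would be the same: Max parses the address pattern and routes the float arguments into the audio engine.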
Awarded “Best Technical Achievement”, Seattle VR Hackathon, Fall 2017
Philip Kobernik (initial idea, audio programming)
Andrew Luck (audio design + implementation)
Cory Robertson (Unity programming)
Nathan DiPietro (set design, implementation, 3d painting)
Mike Chokran (3d modeling)
Arunahb Satpathy (ux, graphic design)
Daniel Nelson (ux/pm)
Michael Wolf (ux/pm)
Christopher Nguyen (ux)
Following the Hackathon, I continued working on the audio engine to improve sample playback. Andrew Luck and I submitted Vox Augmento to NIME 2018 and were invited to demo at the conference.
Vox Augmento travelled to Virginia Tech for NIME 2018
Vox Augmento Demo w/ Scrubbing
This demo showcases the kind of sample exploration and performance that is possible with Vox Augmento.
- quickly creating sample banks in VR space
- non-linear sample access
- granular-synthesis-enabled “scrubbing” playback
- squiggly line vibrato pitch modulation
- dramatic pitch modulation
- creating and performing pitched “notes”
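The “scrubbing” playback above is a granular-synthesis technique: the wand's position along a painted line selects a point in the recorded sample, and short windowed grains are overlapped from that point. The actual engine is built in Max, but the core idea can be sketched in Python (function names and parameters here are illustrative, not the project's):

```python
import math

def scrub_grain(sample: list[float], position: float, grain_len: int) -> list[float]:
    """Extract one Hann-windowed grain centred on a normalized scrub position.

    `position` in [0, 1] maps along the recorded sample, which is what makes
    non-linear sample access possible: the wand can jump anywhere on the line.
    """
    centre = int(position * (len(sample) - 1))
    start = max(0, min(centre - grain_len // 2, len(sample) - grain_len))
    grain = []
    for i in range(grain_len):
        # Hann window tapers each grain so overlapped grains cross-fade smoothly
        w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
        grain.append(sample[start + i] * w)
    return grain

def scrub(sample: list[float], positions: list[float], grain_len: int = 512) -> list[float]:
    """Overlap-add one grain per scrub position at a 50% hop."""
    hop = grain_len // 2
    out = [0.0] * (hop * (len(positions) - 1) + grain_len)
    for n, pos in enumerate(positions):
        for i, v in enumerate(scrub_grain(sample, pos, grain_len)):
            out[n * hop + i] += v
    return out
```

Because grains are re-read from arbitrary positions rather than played straight through, the voice can be frozen, stretched, or swept back and forth; pitch modulation (the vibrato and dramatic pitch effects) would additionally resample each grain, which this sketch omits.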