My capture system consists of a Microsoft Kinect v2 and a Max patch that records the OSC data streaming from the Kinect.
For development purposes, climber skeleton articulation data was captured to a text file. This replayable articulation data serves as a mock climber input source, enabling synth engine development without a climber present.
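The replay idea can be sketched as follows. This is a minimal illustration, not the actual patch logic: the log-line format (`<timestamp> <osc_address> <x> <y> <z>`) and the `send` callback are assumptions, since the real capture format lives inside the Max patch.

```python
# Mock climber input: replay logged skeleton messages with their original
# relative timing. Log-line format here is assumed, not the actual format.
import time

def parse_line(line):
    """Parse one logged message into (timestamp, address, values)."""
    parts = line.split()
    return float(parts[0]), parts[1], [float(v) for v in parts[2:]]

def replay(lines, send, speed=1.0):
    """Replay logged messages, preserving their relative timing.

    `send(address, values)` is whatever forwards a message to the synth
    engine (e.g. an OSC client); here it is just a callback.
    """
    messages = [parse_line(l) for l in lines if l.strip()]
    if not messages:
        return
    t0 = messages[0][0]
    start = time.monotonic()
    for t, address, values in messages:
        # Wait until this message's offset from the first has elapsed.
        delay = (t - t0) / speed - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send(address, values)
```

A `speed` greater than 1.0 fast-forwards the capture, which is handy when iterating on the synth engine.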
Once the synth engine is developed, Climber Synth will be able to offer real-time sonic feedback to climbers performing on the tread wall or on any climbing route.
1. Develop synth engine based on skeleton articulation data
Pertinent questions for research:
1. How could sonic feedback facilitate performance on the climbing wall? What kinds of performative movement might it inspire?
2. How could sonic feedback provide real-time “coaching” for a climber’s technical form?
Climber Synth Wekinator Neural Network Jan 2019
In this video I demonstrate the results of integrating Wekinator into my Climber Synth patch to manage the many-to-many mapping from climber skeleton input data to sonic parameter output data.
For the input data, I calculate the difference between each limb endpoint and the torso: [hand/foot position] - [torso position]. Each difference is visualized as a teal sphere. These four relative offsets (one per hand and foot) are my input features derived from the Kinect skeleton data.
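The feature extraction amounts to subtracting the torso position from each limb endpoint. A minimal sketch, with illustrative joint names (the Kinect SDK's actual labels differ):

```python
# Input features: each limb endpoint's offset from the torso.
# Joint names are illustrative placeholders, not the Kinect's exact labels.

def limb_offsets(joints):
    """Return the four [limb] - [torso] offsets as a flat feature list.

    `joints` maps a joint name to an (x, y, z) position from skeleton data.
    """
    torso = joints["torso"]
    features = []
    for limb in ("hand_left", "hand_right", "foot_left", "foot_right"):
        x, y, z = joints[limb]
        features.extend((x - torso[0], y - torso[1], z - torso[2]))
    return features  # 4 limbs x 3 axes = 12 input features
```

Because the offsets are relative to the torso, the features describe body shape rather than the climber's absolute position on the wall.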
I used Wekinator to train a neural network on 700 training examples associating various input sets with various output sets. Once trained, the model can be enabled to react in real time to incoming input data.
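Conceptually, the trained model is a small feedforward network mapping the input features to continuous output parameters. The sketch below shows only the forward pass; the layer sizes, weights, and activation choices are illustrative assumptions, not Wekinator's internals.

```python
# Illustrative forward pass of a small feedforward (MLP) model, the kind
# of many-to-many mapping Wekinator learns. Weights would come from
# training; shapes and activations here are assumptions.
import math

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Map a feature vector to output parameters via one hidden layer."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)) + b)
            for row, b in zip(w_out, b_out)]
```

The point of delegating this to Wekinator is that the 700 examples define the mapping by demonstration, instead of hand-wiring twelve inputs to five outputs in the patch.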
For the sonic output in this example I am using [blotar~] from the PeRColate package by Dan Trueman and R. Luke DuBois. My five output parameters are mapped to note, vibrato frequency, filter ratio, pluck amplitude, and distortion gain.
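The last step is rescaling the model's outputs into each synth parameter's usable range. A sketch, assuming normalized 0..1 outputs; the ranges below are placeholders, not the values used in the actual patch:

```python
# Map five normalized model outputs (assumed 0..1) onto blotar~ parameter
# ranges. All (lo, hi) values below are illustrative placeholders.
PARAM_RANGES = [
    ("note", 36.0, 84.0),         # MIDI note number
    ("vibrato_freq", 0.0, 12.0),  # Hz
    ("filter_ratio", 0.1, 4.0),
    ("pluck_amp", 0.0, 1.0),
    ("distortion_gain", 0.0, 10.0),
]

def scale_outputs(outputs):
    """Linearly rescale each 0..1 model output into its parameter range."""
    return {name: lo + v * (hi - lo)
            for (name, lo, hi), v in zip(PARAM_RANGES, outputs)}
```

Keeping the ranges in one table makes it easy to retune the sonic response without retraining the model.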