Sonify your work by triggering, generating, and processing sonic textures or musical forms through interactivity. Learn several basic programming techniques for adding sound to your projects using input devices and numerical values. Touchscreens, microphones, cameras, gyroscopes, MIDI controllers, or any other stream or set of incoming data can be used to drive sound. Sonifying this information adds a whole new sensory dimension to interactive installations and performances. Please download the demo files below:
My background in music technology, new media, and software development has led me to adopt music, sound, moving images, and interactivity as the primary modalities in my performances, artistic practice, and research. This includes building and using software- and hardware-based instruments that respond to touch, light, or video signals. With these tools I compose music and produce interactive performances and installations. Originally from the United Kingdom, I am currently based in Minneapolis, Minnesota, where I am a professor of interactive media at St. Thomas University.
Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques. For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
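As a rough illustration of this principle, here is a minimal Processing sketch using the Minim library. It treats mouseX as a stand-in for an incoming data value and shortens the interval between triggered clicks as the value rises, much like a Geiger counter. The file name click.wav is a placeholder for any short percussive sample in the sketch's data folder.

import ddf.minim.*;

Minim minim;
AudioSample click;
int lastClick = 0;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  click = minim.loadSample("click.wav", 512);  // placeholder sample
}

void draw() {
  background(0);
  // Treat mouseX as the incoming data value: higher values mean shorter intervals between clicks
  int interval = int(map(mouseX, 0, width, 1000, 30));
  if (millis() - lastClick > interval) {
    click.trigger();
    lastClick = millis();
  }
}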
Triggering sound is the most basic technique for the sonification of interactive media. Generally this means triggering or looping digitally sampled audio files, but it might also involve analogue electronics, including tape players, record players, or electronic instruments like synthesizers and drum machines. In addition, kinetic devices, such as solenoids, may be programmed with a microcontroller to strike physical objects.
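As a minimal sketch of this approach (assuming Processing with the Minim library and two placeholder files, hit.wav and loop.mp3, in the sketch's data folder), any key press fires a one-shot sample while a mouse click toggles a loop:

import ddf.minim.*;

Minim minim;
AudioSample hit;         // short one-shot sample
AudioPlayer loopPlayer;  // longer file to loop

void setup() {
  size(400, 200);
  minim = new Minim(this);
  hit = minim.loadSample("hit.wav", 512);  // placeholder file names
  loopPlayer = minim.loadFile("loop.mp3");
}

void draw() {
  background(0);
}

void keyPressed() {
  hit.trigger();          // fire the sample on every key press
}

void mousePressed() {
  if (loopPlayer.isPlaying()) {
    loopPlayer.pause();   // toggle the loop on and off with the mouse
  } else {
    loopPlayer.loop();
  }
}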
The Personality Translator by Ai Minnesota Students and John Keston
Generating sound electronically is usually more complex than triggering samples because the waveforms must be modelled and manipulated using algorithms. This process can be simplified by using hardware or software synthesizers. Max and Pd streamline sound synthesis by including hundreds of objects that produce and process sound waves. Libraries like Minim also include classes to generate rudimentary waveforms (sine, triangle, sawtooth, and pulse).
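As a rough sketch of this approach (assuming Processing with Minim's ugens package), the example below patches a single oscillator to the line out and maps the mouse to frequency and amplitude, much like the trackpad theremin demo listed below; swapping Waves.SINE for Waves.TRIANGLE, Waves.SAW, or Waves.SQUARE changes the timbre.

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil osc;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(440, 0.0, Waves.SINE);  // start silent at 440 Hz
  osc.patch(out);
}

void draw() {
  background(0);
  // mouseX sweeps the pitch, mouseY controls loudness (theremin-style)
  osc.setFrequency(map(mouseX, 0, width, 110, 880));
  osc.setAmplitude(map(mouseY, height, 0, 0.0, 0.8));
}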
GMS (Gestural Music Sequencer) is software written in Processing that generates music in the form of MIDI notes based on real-time analysis of a video signal. The dynamics and pitch of each note are determined by the position of the brightest pixel in the frame. The pitches may be selected from a scale or manipulated through adjustable probability distributions. Durations are either based on video analysis or a separate set of probability distributions.
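The sketch below is not the GMS source, just a minimal illustration of the brightest-pixel idea: assuming the Processing video library and Minim, it drives an oscillator rather than emitting MIDI notes, mapping the brightest pixel's horizontal position to pitch and its brightness to loudness, in the spirit of the video camera theremin demo below.

import processing.video.*;
import ddf.minim.*;
import ddf.minim.ugens.*;

Capture cam;
Minim minim;
AudioOutput out;
Oscil osc;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(220, 0.0, Waves.TRIANGLE);
  osc.patch(out);
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    cam.loadPixels();

    // Scan the frame for the brightest pixel
    int brightestIndex = 0;
    float maxBrightness = 0;
    for (int i = 0; i < cam.pixels.length; i++) {
      float b = brightness(cam.pixels[i]);
      if (b > maxBrightness) {
        maxBrightness = b;
        brightestIndex = i;
      }
    }
    int x = brightestIndex % cam.width;

    // Horizontal position sets pitch, peak brightness sets loudness
    osc.setFrequency(map(x, 0, cam.width, 110, 880));
    osc.setAmplitude(map(maxBrightness, 0, 255, 0.0, 0.6));
  }
}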
Trackpad Theremin Demo
Video Camera Theremin Demo
Continuous Controller (CC) Theremin Demo
Tone Generator Example
Using kslider and MIDI to Frequency (see the conversion sketch after this list)
Simple MIDI Controlled Monosynth
Step Sequencer Demo
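The MIDI-to-frequency conversion used in demos like these is the standard equal-temperament formula, which can be written as a small standalone Processing helper (this is not code taken from the demo patch itself):

// Convert a MIDI note number to a frequency in Hz.
// A4 (MIDI note 69) is tuned to 440 Hz; each semitone is a factor of 2^(1/12).
float midiToFrequency(int note) {
  return 440.0 * pow(2, (note - 69) / 12.0);
}

// Example: middle C (MIDI note 60) works out to roughly 261.63 Hz
// osc.setFrequency(midiToFrequency(60));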
Electronically amplified sound is, by its nature, already processed: speakers, amplifiers, mixers, EQs, and gain stages all tailor the original signal to some extent. DSP (digital signal processing) continually evolves, providing new ways of manipulating sound, but analogue techniques are still used and often preferred by artists and engineers. The convenience of DSP allows it to be used to simulate analogue techniques as well as to produce entirely new and distinct effects.
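As a small sketch of DSP-style processing in Processing/Minim (the looping source file name groove.mp3 is a placeholder), the example below runs a file player through Minim's MoogFilter, with the mouse sweeping the cutoff frequency and resonance, roughly mirroring the lowpass resonant filter demo listed below.

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
FilePlayer source;
MoogFilter lowpass;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  source = new FilePlayer(minim.loadFileStream("groove.mp3"));  // placeholder file
  lowpass = new MoogFilter(1000, 0.5);
  lowpass.type = MoogFilter.Type.LP;   // low-pass mode
  source.patch(lowpass).patch(out);    // file -> filter -> speakers
  source.loop();
}

void draw() {
  background(0);
  // mouseX sweeps the cutoff frequency, mouseY sets the resonance
  lowpass.frequency.setLastValue(map(mouseX, 0, width, 100, 8000));
  lowpass.resonance.setLastValue(map(mouseY, height, 0, 0.0, 0.9));
}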
Isikles by John Keston and Lister Rossel
Post-prepared Piano by John Keston and Piotr Szyhalski
Using MIDI CC data
Applying amplitude to radius (see the sketch after this list)
Lowpass Resonant Filter Demo
Bandpass Resonant Filter Demo
Dry/Wet Reverberation Demo
Delay (Echo) Demo
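The amplitude-to-radius idea can also be sketched outside the demo patch: assuming Processing with Minim, the example below reads the level of the line input each frame and uses it to scale the size of a circle, so louder sounds draw a larger shape.

import ddf.minim.*;

Minim minim;
AudioInput in;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  in = minim.getLineIn();
}

void draw() {
  background(0);
  // in.mix.level() returns the current amplitude of the input (roughly 0.0 to 1.0)
  float diameter = map(in.mix.level(), 0, 1, 10, width);
  ellipse(width / 2, height / 2, diameter, diameter);
}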
Making Music by Dennis DeSantis
The Sound Book by Trevor Cox
Music with Context: Audiovisual Scores for Improvising Musicians by John Keston
Sonic Experience by Jean-François Augoyard