INTERACTIVITY SONIFIED
instint.johnkeston.com
  • WELCOME!

    Sonify your work by triggering, generating, and processing sonic textures or musical forms through interactivity. Learn several basic programming techniques for adding sound to your projects through input devices and numerical values. Touchscreens, microphones, cameras, gyroscopes, MIDI controllers, or any other stream of incoming data can be used to add sound. Sonifying this information adds a whole new sensory dimension to interactive installations and performances. Please download the demo files below:

    Processing.org files
    Cycling '74 Max files
    The Missing P5 Sketches

  • INTRODUCTION

    My background in music technology, new media, and software development has led me to adopt music, sound, moving images, and interactivity as the primary modalities in my performances, artistic practice, and research. This includes building and using software- and hardware-based instruments that respond to touch, light, or video signals. With these tools I compose music and produce interactive performances and installations. Originally from the United Kingdom, I am currently based in Minneapolis, Minnesota, where I am a professor of interactive media at St. Thomas University.

  • WHAT IS SONIFICATION?

    Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques. For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
    — Wikipedia

  • WHERE IS IT USED?

    • Geiger Counters
    • Clocks (Ticks and Chimes)
    • Medical Devices (EKGs, etc.)
    • Sonar and Radar
    • Seismometers
    • Software (Interface Sounds)
    • Data Sonification
    • Installations (Sound Art)
  • LANGUAGES / IDEs

    PROTOCOLS

    • MIDI (Musical Instruments)
    • OSC (Open Sound Control)
    • HTTP (Hypertext Transfer Protocol)
    • USB (Universal Serial Bus)

    PLATFORMS

  • TRIGGERING SOUNDS

    Triggering sound is the most basic technique for the sonification of interactive media. Generally this means triggering or looping digitally sampled audio files, but it might also involve analogue electronics such as tape players, record players, or electronic instruments like synthesizers and drum machines. In addition, kinetic devices, such as solenoids, may be programmed with a microcontroller to strike physical objects.
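    A minimal sketch of the key-to-sample triggering idea, written in plain Java so it stays self-contained. Each "sample" here is a short synthesized burst standing in for a loaded audio file; in a real Processing sketch one would load files with a sound library (such as Minim) and trigger them on key presses. The key mapping, frequencies, and burst length are all illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Keyboard-triggered samples: each key maps to a buffer of audio samples
// that would be handed to the audio output stage when triggered.
public class TriggerDemo {
    static final int SAMPLE_RATE = 44100;
    final Map<Character, float[]> samples = new HashMap<>();

    TriggerDemo() {
        // Map keys to percussive bursts at different frequencies,
        // standing in for kick/snare/hat recordings.
        samples.put('a', burst(60.0f));   // low "kick"
        samples.put('s', burst(200.0f));  // mid "snare"
        samples.put('d', burst(1000.0f)); // high "hat"
    }

    // Synthesize a 100 ms decaying sine burst at the given frequency.
    static float[] burst(float freq) {
        int n = SAMPLE_RATE / 10;
        float[] buf = new float[n];
        for (int i = 0; i < n; i++) {
            float env = 1.0f - (float) i / n; // linear decay envelope
            buf[i] = env * (float) Math.sin(2 * Math.PI * freq * i / SAMPLE_RATE);
        }
        return buf;
    }

    // Triggering simply returns the mapped buffer (empty if the key is unmapped).
    float[] trigger(char key) {
        return samples.getOrDefault(key, new float[0]);
    }
}
```

The same structure carries over to looping: instead of playing the buffer once, the playback position wraps around to the start when it reaches the end.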

  • The Personality Translator by Ai Minnesota Students and John Keston

  • Felix's Machines from Felix Thorn on Vimeo.

  • TRIGGERING SOUND WITH PROCESSING

    Using Mouse Coordinates
    Using Keyboard Controls
    Triggering Sound with Moving Objects
    Leap Motion Demo

    TRIGGERING SOUND WITH MAX

    Triggering and Looping Samples
    Recording and Playing Back Sound
    MSP Sampling Tutorial 1
    MSP Sampling Tutorial 2

  • GENERATING SOUNDS

    Generating sound electronically is usually more complex than triggering samples because the waveforms must be modelled and manipulated using algorithms. This process can be simplified by using hardware or software synthesizers. Max and Pd streamline sound synthesis by including hundreds of objects that produce and process sound waves. Libraries like Minim also include classes to generate rudimentary waveforms (sine, triangle, sawtooth, and pulse).
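    The four rudimentary waveforms named above can be sketched as functions of a normalized phase in [0, 1); evaluating them over one cycle yields the single-cycle tables that oscillators play back. These function names and conventions are illustrative, not any particular library's API.

```java
// One cycle of each basic waveform as a function of phase in [0, 1),
// returning amplitudes in [-1, 1].
public class Waveforms {
    static float sine(float phase)     { return (float) Math.sin(2 * Math.PI * phase); }
    static float saw(float phase)      { return 2f * phase - 1f; }            // ramp up
    static float pulse(float phase)    { return phase < 0.5f ? 1f : -1f; }    // square (50% duty)
    static float triangle(float phase) { return 1f - 4f * Math.abs(phase - 0.5f); }
}
```

To generate a tone at frequency f, an oscillator advances the phase by f / sampleRate per sample and wraps it back into [0, 1).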

  • GENERATING SOUND EXAMPLES

    GMS (Gestural Music Sequencer) is software written in Processing that generates music in the form of MIDI notes based on real-time analysis of video signals. The dynamics and pitch of each note are determined by the position of the brightest pixel in the frame. The pitches may be selected from a scale or manipulated through adjustable probability distributions. Durations are either based on video analysis or on a separate set of probability distributions.
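    One way the brightest-pixel mapping can be sketched: the pixel's x position selects a pitch from a scale and its brightness sets the velocity (dynamics). The scale, note range, and scaling below are illustrative assumptions, not GMS's actual code.

```java
// Map brightest-pixel data to MIDI note and velocity values.
public class BrightestPixelMapper {
    static final int[] C_MAJOR = {0, 2, 4, 5, 7, 9, 11}; // scale degrees (semitones)

    // Map x in [0, width) onto two octaves of C major starting at C3 (MIDI 48).
    static int xToNote(int x, int width) {
        int steps = C_MAJOR.length * 2;
        int idx = x * steps / width;
        return 48 + 12 * (idx / C_MAJOR.length) + C_MAJOR[idx % C_MAJOR.length];
    }

    // Map brightness in [0, 255] to MIDI velocity [1, 127].
    static int brightnessToVelocity(int b) {
        return Math.max(1, b * 127 / 255);
    }
}
```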

    TX81Z Patch Degrader
    Vocalise Sintetica at ECHOFLUXX 14, Prague, CZ
    Duets with the Singing Ringing Tree

  • GENERATING SOUND WITH PROCESSING

    Trackpad Theremin Demo
    Video Camera Theremin Demo
    Continuous Controller (CC) Theremin Demo
    Tone Generator Example

    GENERATING SOUND WITH MAX

    Using kslider and MIDI to Frequency
    Simple MIDI Controlled Monosynth
    Step Sequencer Demo
    Polysynth Demo
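    The MIDI-to-frequency conversion used in the kslider demo above follows the standard equal-temperament formula, f = 440 · 2^((n − 69)/12), where MIDI note 69 is A4 at 440 Hz. A one-line equivalent in Java:

```java
// Convert a MIDI note number to a frequency in Hz (equal temperament, A4 = 440 Hz).
public class Mtof {
    static double mtof(int note) {
        return 440.0 * Math.pow(2.0, (note - 69) / 12.0);
    }
}
```

Each octave (12 semitones) doubles the frequency, so note 81 yields 880 Hz and note 57 yields 220 Hz.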

  • SIGNAL PROCESSING

    Electronically amplified sound is inherently processed: the speakers, amplifiers, mixers, EQs, and gain stages all tailor the original signal to some extent. DSP, or digital signal processing, continually evolves, providing new ways of manipulating sounds, but analogue techniques are still used and often preferred by artists and engineers. The convenience of DSP allows it to simulate analogue techniques as well as produce entirely new and distinct effects.

  • TYPES OF SIGNAL PROCESSING

    • Change frequency (pitch)
    • Change amplitude (loudness)
    • Reverse playback
    • Filtering (lowpass, highpass, bandpass, state variable)
    • Delay (echo, loopers)
    • Reverberation (reverb)
    • Time stretch (lengthen without changing frequency)
    • Granulate (granular synthesis)
    • Modulation (Low Frequency Oscillators)
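    One item from the list above, delay (echo), can be sketched in a few lines: each output sample is the input plus an attenuated copy of the output from delaySamples earlier, so the feedback term produces repeating, decaying echoes. Parameter names are illustrative.

```java
// Simple feedback delay (echo): out[n] = in[n] + feedback * out[n - delaySamples].
public class Echo {
    static float[] echo(float[] in, int delaySamples, float feedback) {
        float[] out = new float[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = in[i];
            if (i >= delaySamples) out[i] += feedback * out[i - delaySamples];
        }
        return out;
    }
}
```

Feeding an impulse through this with a feedback below 1.0 yields a train of echoes that halve (or otherwise shrink) in amplitude on each repeat; a feedback at or above 1.0 would grow without bound, which is why loopers and delays clamp it.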
  • Isikles by John Keston and Lister Rossel

  • Post-prepared Piano by John Keston and Piotr Szyhalski

  • SIGNAL PROCESSING WITH PROCESSING

    Using MIDI CC data
    Applying amplitude to radius

    SIGNAL PROCESSING WITH MAX

    Lowpass Resonant Filter Demo
    Bandpass Resonant Filter Demo
    Dry/Wet Reverberation Demo
    Delay (Echo) Demo