
VOGST

Voice Gesture Sketching Tool

For most designers and artists, sketching interactive sound concepts is difficult because of the limits of their technical knowledge. Even sound synthesis experts and researchers in musical gesture have not yet developed a quick way to communicate their sonic ideas. The first problem is that a self-produced vocal sound is heard differently by the person producing it than by those listening. Because the source of the sound is situated in the body, i.e. the vocal cords, it creates vibrations and resonances that modify our perception. This phenomenon, which Michel Chion named ergo-audition, can be an obstacle to communicating the sound we wish to convey. Everyone has experienced surprise on hearing their own voice played back. Another problem is recording and then sketching with the vocal sound. When one sketches an object or a building, the lines can be redrawn, corrected and changed until the right form is found. But how can this be done with sound?

The goal of the VOGST project is to develop a tool for sketching and improvising sonic interactions through voice and gesture. Since the core material of sonic interaction design is the relationship between sound, artefact and gesture, the project's main effort goes into facilitating the design of gesture-sound relationships. To this end, a simple abstract object called VOGST, with embedded computing and sound technology, will be developed. It will be able to record the voice through a microphone while capturing the gesture coupled to that sound. The gesture can be recorded, replayed and manipulated through the VOGST object itself, and also via an interface developed in the Max/MSP software. The tool will be tested with interaction designers in a workshop setting, in order to evaluate design problems and specify the next prototype iteration. The project builds on the author's previous research in the area of Sonic Interaction Design.
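The coupling of a recorded voice with a simultaneously captured gesture, and their later replay, can be sketched in code. The following is a minimal illustrative sketch, not the actual VOGST implementation: all names and the data layout (a mono audio buffer plus timestamped gesture samples, re-aligned by interpolation at replay time) are assumptions.

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class VoiceGestureSketch:
    """Couples a recorded voice buffer with timestamped gesture samples.

    Hypothetical sketch: structure and names are illustrative only.
    """
    sample_rate: int = 44100
    audio: list = field(default_factory=list)    # mono PCM samples
    gesture: list = field(default_factory=list)  # (time_sec, value) pairs

    def record(self, audio_chunk, gesture_value):
        """Append an audio chunk and tag it with the current gesture value."""
        t = len(self.audio) / self.sample_rate
        self.audio.extend(audio_chunk)
        self.gesture.append((t, gesture_value))

    def gesture_at(self, t):
        """Linearly interpolate the gesture value at time t (seconds)."""
        times = [g[0] for g in self.gesture]
        i = bisect.bisect_right(times, t)
        if i == 0:
            return self.gesture[0][1]
        if i == len(self.gesture):
            return self.gesture[-1][1]
        t0, v0 = self.gesture[i - 1]
        t1, v1 = self.gesture[i]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    def replay(self):
        """Yield (sample, gesture_value) pairs, re-coupling gesture to sound."""
        for n, s in enumerate(self.audio):
            yield s, self.gesture_at(n / self.sample_rate)
```

Keeping gesture as sparse timestamped samples, rather than one value per audio sample, lets the gesture stream be edited or replaced independently of the voice recording, which matches the sketch-and-correct workflow described above.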

Project lead: Karmen Franinović

Partners: Frédéric Bevilacqua (Ircam, Paris) and Michel Rinott (IDHO, HIT Holon)

Funding: the European Science Foundation
