The Sound

Once the tracking system has obtained the ID and recognized the subject, the gathered information will be used to create and trigger a personal representation of the subject, expressed as auditory and visual events. The visual representation is explained in Visual Objects. The sound will be generated by a set of acoustic musical instruments housed in a metallic resonator inside the circular structures placed in the Space.
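
The trigger flow described above could be sketched roughly as follows. This is only a minimal illustration in Python under assumed names; SubjectProfile, on_subject_recognized and the placeholder trigger functions are hypothetical and not part of the installation's actual software.

    from dataclasses import dataclass, field

    @dataclass
    class SubjectProfile:
        subject_id: str
        timbre: str                                   # "percussive", "melodic" or "pedal"
        pitches: list = field(default_factory=list)   # pitches chosen for this subject

    def trigger_sound_event(profile: SubjectProfile) -> None:
        # Placeholder: would command the acoustic instruments in the resonator.
        print(f"sound: {profile.timbre} {profile.pitches}")

    def trigger_visual_event(profile: SubjectProfile) -> None:
        # Placeholder: the visual side is described in Visual Objects.
        print(f"visual: subject {profile.subject_id}")

    def on_subject_recognized(subject_id: str, profiles: dict) -> None:
        """Called once the tracking system has obtained and recognized an ID."""
        profile = profiles.get(subject_id)
        if profile is None:
            return  # unknown subject: no personal representation yet
        trigger_sound_event(profile)
        trigger_visual_event(profile)

    # Usage example with an invented subject ID.
    profiles = {"A17": SubjectProfile("A17", "melodic", [60, 64, 67])}
    on_subject_recognized("A17", profiles)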

[Figure: picts/waves.png]

The instruments will be designed and built specifically for the Space. The types of sound produced will be divided into three categories (a schematic mapping is sketched after the list):

- Percussive: similar to wooden blocks and marimba-like sounds, possibly including a membranophone.

- Melodic: a near-sinusoidal sound with a minimum of (but some) partials. A modified theremin controlled by a robot produces the sound we are looking for.

- Low-frequency, sustained: to be used as a pedal tone, simply filling the space when needed. Although having several frequencies available would be preferable, a single one would be enough.
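
The schematic mapping mentioned above is sketched here; the category values and instrument names are illustrative assumptions, not identifiers taken from the project.

    from enum import Enum

    class SoundCategory(Enum):
        PERCUSSIVE = "percussive"  # wooden blocks, marimba-like, membranophone
        MELODIC = "melodic"        # near-sinusoidal tone with few partials (robot theremin)
        PEDAL = "pedal"            # low-frequency sustained tone, fills space when needed

    # Hypothetical assignment of each instrument in the resonator to a category.
    instrument_category = {
        "wood_blocks": SoundCategory.PERCUSSIVE,
        "robot_theremin": SoundCategory.MELODIC,
        "low_drone": SoundCategory.PEDAL,
    }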

[Figure: picts/flow.png]