Acoustic mirrors (AM), also known as sound mirrors, are monolithic concrete structures designed by William Henry Bragg and Lawrence Bragg to use sound for detection and localisation. Their initial application was military, serving as early warning systems for approaching enemy aircraft; they quickly became obsolete with the advent of the first radars. This interactive installation explores acoustic mirrors by encouraging the audience to move around the room in order to both compose and interpret their past compositions. In this immersive and sensorial environment, the inaccurate and delayed detection of acoustic mirrors is portrayed through the human's flawed capacity to use sound as a spatialisation tool. Generative sound synthesis and visual feedback are used to enhance the experience.
This interactive installation is the result of a master's thesis (Design and Multimedia, University of Coimbra) that uses the AM as a catalyst for exploring the relationship between computing, sound, human perception, and aesthetics. In investigating the AM, we encountered various dualities that inspired the experience we developed. Their passive operation, the sea that typically surrounds them, and their lack of significant technological resources contrast with the structural strength that characterises them, the wartime context in which they were placed, and the agents they aimed to identify. The detection of sound before these agents became visible readily translates into “sound before image.” We sought to illustrate this phrase, and the delay between stimulus and perception it suggests, through a reactive interaction combined with asynchronous performances by the audience.
The system supporting this installation is subdivided into three sectors: 1. detection of bodies and their movement in the room; 2. sound composition and reproduction; 3. construction and visual representation (when feasible). Upon starting the experience, the audience's path through the room is recorded. To help the audience become familiar with the designed parameterisation (derived from their position along two axes, the speed at which they move, etc.), the system reactively provides sound output. After this exploration, the audience is asked to reproduce previously recorded paths, guided by a composition generated from the record of a prior interaction and by their memory of the sound feedback initially presented to them. The results contribute to a set of outputs that reveal trends and show how effectively, or ineffectively, the system integrates the human into algorithmic processes.
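To make the recording-and-reproduction stage concrete, the following is a minimal Java sketch of how a visitor's path could be stored as timestamped positions and later stepped through when another visitor tries to reproduce it. The class and field names (PathRecorder, PathSample) are illustrative assumptions, not the installation's actual code.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: record a visitor's path as timestamped 2D samples,
// then step through it again for the asynchronous reproduction stage.
public class PathRecorder {
    public static class PathSample {
        final long timeMs;   // time since the interaction started
        final float x, y;    // normalised position in the room, 0..1 on each axis
        PathSample(long timeMs, float x, float y) {
            this.timeMs = timeMs;
            this.x = x;
            this.y = y;
        }
    }

    private final List<PathSample> samples = new ArrayList<>();
    private final long startMs = System.currentTimeMillis();

    // Called whenever the tracking system reports a new position.
    public void record(float x, float y) {
        samples.add(new PathSample(System.currentTimeMillis() - startMs, x, y));
    }

    // Returns the recorded sample closest to (but not after) the given playback time,
    // so a previously recorded path can be replayed against a new visitor's movement.
    public PathSample sampleAt(long playbackMs) {
        PathSample last = null;
        for (PathSample s : samples) {
            if (s.timeMs > playbackMs) break;
            last = s;
        }
        return last; // null until the first sample time is reached
    }
}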
Visual capture and system responsiveness are handled by a Kinect 2 and a Processing (Java) application that communicates with a SuperCollider patch responsible for the composition and the generative sound synthesis.
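As a rough illustration of this pipeline, the Processing sketch fragment below forwards a normalised position and speed to SuperCollider over OSC, using the common oscP5/netP5 libraries and sclang's default port 57120. The message address "/mirror/state" and the parameter layout are assumptions for the example, not the installation's actual protocol.

// Processing sketch fragment: forward tracked position and speed to SuperCollider via OSC.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress superCollider;

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 12000);                      // local listening port
  superCollider = new NetAddress("127.0.0.1", 57120);  // sclang's default port
}

// x, y in 0..1 (position on the two room axes), speed in 0..1 (normalised movement speed)
void sendState(float x, float y, float speed) {
  OscMessage msg = new OscMessage("/mirror/state");
  msg.add(x);
  msg.add(y);
  msg.add(speed);
  oscP5.send(msg, superCollider);
}

void draw() {
  // In the installation these values would come from the Kinect 2 tracking;
  // here the mouse stands in for a body moving through the room.
  sendState(mouseX / (float) width, mouseY / (float) height, 0.5);
}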
In a room of ideally 16 m², the installation will include: 1. a camera capturing the audience's movement; 2. a video projection of the interaction and its outputs; 3. four speakers; 4. a computer running the system.
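Since the room uses four speakers, spatialisation presumably distributes the sound among them according to the visitor's position. The following is a small, hypothetical Java sketch of equal-power panning across four corner speakers; it is one possible mapping, not the installation's actual routing.

// Hypothetical equal-power panning over four corner speakers (FL, FR, RL, RR)
// driven by a normalised position (x, y) in the room, both in 0..1.
public final class QuadPanner {
    // Returns gains for {front-left, front-right, rear-left, rear-right};
    // the squared gains always sum to 1, so total power stays constant.
    public static float[] gains(float x, float y) {
        float fl = (float) Math.sqrt((1 - x) * (1 - y));
        float fr = (float) Math.sqrt(x * (1 - y));
        float rl = (float) Math.sqrt((1 - x) * y);
        float rr = (float) Math.sqrt(x * y);
        return new float[] { fl, fr, rl, rr };
    }

    public static void main(String[] args) {
        // A visitor standing near the front-left corner mostly excites that speaker.
        float[] g = gains(0.1f, 0.2f);
        System.out.printf("FL=%.2f FR=%.2f RL=%.2f RR=%.2f%n", g[0], g[1], g[2], g[3]);
    }
}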