Architecture is a field that is highly dependent on a system of visual orders. In line with this point of view, we begin by asking whether the visual appeal of architecture has overshadowed other qualities and criteria by which architectural design may be experienced. One such undervalued and often overlooked criterion is sound. With this project, we would like to explore the power of sound within the spaces we design. By learning from an extensive set of well-executed acoustic spaces, can we train a neural network to produce novel acoustic solutions for a variety of sites? Can AI play a role in deciphering the relationship between architecture and sound, and, furthermore, create new possibilities and inspirations for architects? By concentrating our research and methods on the specific typology of the concert hall, we can begin to address these questions at the intersection of architecture, sound, and artificial intelligence.
Our goal is a platform for architectural-acoustic experimentation. Based on an initial data set of 2,000 concert hall interiors, our neural network will be trained to generate its own interpretation of acoustic spaces by adapting existing generic volumes into acoustic forms. This AI-driven tool can help inform the early stages of design for concert halls and other spaces for acoustic music. We can also imagine how expanding or altering the data set would change the outcomes. Yet however self-driven the process may become, we remain committed to considering the opportunities for collaboration and shared authorship between ourselves and the AI-driven platform.