Audio Metaphor

A multi-agent system for generating soundscapes by sonifying text.

2013
Software, sound
Exhibited: ISEA 2013, Sydney, Australia

Audio Metaphor is an autonomous system for creating novel soundscape compositions. It blends concepts from natural language queries derived from Twitter with semantically linked sound recordings from online audio databases. Text analysis is used to create audio file search queries, an audio file segmentation algorithm based on general soundscape composition categories cuts up the results, and a multi-agent composition engine processes and combines the cut-up audio files into representations of the natural language queries.

Examples:

a quenching rain drenched my burning head (2 min 12 sec)


children were in the garden playing games (1 min 05 sec)


the city in the bush (29 sec)

Additional System Details:

Audio Metaphor creates unique soundscape compositions that represent the words of a natural language request using the following process pipeline (illustrative sketches of the steps follow the list):

* Transforms a natural language request into audio file search queries using a simple word feature extraction algorithm.
* Searches online for audio file recommendations from the Freesound database.
* Segments audio file recommendations into soundscape regions using an SVM classifier and a heuristic that joins adjacent segments with the same label.
* Processes and combines audio segments into a soundscape composition using a multi-agent approach. Each agent is responsible for one segmented audio file and uses a heuristic, modelled after production notes from Canadian composer Barry Truax, to process the different regions.
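As a rough illustration of the first step, the sketch below strips stop words from a request and pairs the remaining content words into two-word search queries. The stop-word list and the pairing rule are assumptions for illustration, not the system's exact feature extraction.

```python
# Minimal sketch of turning a natural language request into search queries.
# The stop-word list and the pairing of adjacent content words are
# illustrative assumptions, not Audio Metaphor's actual algorithm.

STOP_WORDS = {"a", "an", "the", "in", "my", "were", "was", "of", "and"}

def extract_queries(request: str) -> list[str]:
    """Keep content words and pair adjacent ones into two-word queries."""
    words = [w for w in request.lower().split() if w not in STOP_WORDS]
    pairs = [" ".join(words[i:i + 2]) for i in range(0, len(words) - 1, 2)]
    return pairs if pairs else words

print(extract_queries("children were in the garden playing games"))
# e.g. ['children garden', 'playing games']
```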
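For the search step, a query term can be sent to Freesound's text-search endpoint. The sketch below assumes the current APIv2 interface and a personal API token; the original installation may have used an earlier version of the API.

```python
# Minimal sketch of querying Freesound for candidate recordings, assuming the
# APIv2 text-search endpoint and a personal API token.
import requests

FREESOUND_SEARCH = "https://freesound.org/apiv2/search/text/"

def search_freesound(query: str, token: str, limit: int = 5):
    """Return (id, name) pairs for the top matching sounds."""
    resp = requests.get(FREESOUND_SEARCH, params={
        "query": query,
        "fields": "id,name",
        "page_size": limit,
        "token": token,
    })
    resp.raise_for_status()
    return [(s["id"], s["name"]) for s in resp.json()["results"]]

# e.g. search_freesound("quenching rain", token="YOUR_API_KEY")
```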
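The segment-joining heuristic can be pictured as merging runs of identically labelled regions returned by the classifier. A minimal sketch, with assumed (start, end, label) tuples standing in for the SVM output:

```python
# Minimal sketch of joining adjacent, identically labelled segments.
# Segments are assumed to be (start_sec, end_sec, label) tuples in time order;
# the real classifier output format may differ.

def join_segments(segments):
    joined = []
    for start, end, label in segments:
        if joined and joined[-1][2] == label:
            prev_start, _, _ = joined[-1]
            joined[-1] = (prev_start, end, label)   # extend the previous run
        else:
            joined.append((start, end, label))
    return joined

print(join_segments([(0, 2, "background"), (2, 5, "background"), (5, 7, "foreground")]))
# [(0, 5, 'background'), (5, 7, 'foreground')]
```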
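The multi-agent stage can be thought of as one agent per source file, each deciding how to treat its regions before the layers are mixed. The per-label gain values below are placeholders for the Truax-inspired heuristic, which is not spelled out here.

```python
# Minimal sketch of the multi-agent composition stage: one agent per segmented
# audio file, each producing a processing plan for its regions before mixing.
# The gain values are placeholders, not the actual Truax-inspired heuristic.

class SegmentAgent:
    GAIN = {"foreground": 1.0, "background": 0.4}  # illustrative levels only

    def __init__(self, segments):
        self.segments = segments  # (start_sec, end_sec, label) tuples

    def render(self):
        # Return which regions to play and at what level.
        return [(start, end, self.GAIN.get(label, 0.7))
                for start, end, label in self.segments]

agents = [SegmentAgent([(0, 5, "background"), (5, 7, "foreground")])]
composition = [agent.render() for agent in agents]
print(composition)
```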

Links to Documentation:

http://audiometaphor.ca/aume/

Technical Details:

Black box space (or similar)
High-speed internet connection w/ Ethernet connection
Powered Yamaha HM50 monitors (or similar)
Subwoofer
Mixer (4 in / 4 out minimum)
4-channel EQ
4-channel compressor
2000+ lumen projector (pref. short throw)
Projection wall or screen
Mac computer w/ OS X 10.6 or above, i5 or greater chipset, 8 GB RAM or greater