Mediascape

Mediascape uses a recombinant generative computational system to sequence and display video clips of nature and landscape in an ever-changing visual flow.

2014
Sound, music, video
Exhibited: Expressive Conference 2014, Vancouver
Working with: Jim Bizzocchi, Arne Eigenfeldt

During an installation of the work, audience members could select from different audio tracks (on headphones) while watching the visuals. This new work builds on earlier creative experiments in metacreation and generative video by members of the Generative Media Project, who came together to produce an engaging work of audio-visual art in the ambient video genre.

The ambient video aesthetic echoes Brian Eno’s description of ambient music: “Ambient Music must be able to accommodate many levels of listening attention without enforcing one in particular; it must be as ignorable as it is interesting” [Brian Eno, liner notes to Music for Airports, PVC 7908 (AMB 001), 1978].

Sample output: Blurred Lines (Descriptive), a video by Jim Bizzocchi (hosted on Vimeo).

The video clips were shot specifically for this project, processed to achieve the visual style Bizzocchi prefers for ambient video, and gathered into the database the system draws from. Text-based tags, selected by the artist for each clip, drive the selection and sequencing of the visuals and also trigger the selection, processing, and playback of the music and soundscapes. The soundscapes and musical compositions are generated by two additional computational systems that augment the image.
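The published description does not specify the sequencing algorithm, but a minimal sketch (in Python) suggests how artist-selected tags could drive recombinant clip selection. The clip names, tags, and overlap-weighted choice below are illustrative assumptions, not the project’s actual logic:

```python
import random

# Hypothetical clip database: each filename maps to its artist-selected tags.
# Names and tags are illustrative, not the actual Mediascape metadata.
CLIPS = {
    "fog_valley.mov":   {"fog", "mountain", "slow", "grey"},
    "tide_pool.mov":    {"water", "coast", "close-up", "bright"},
    "cloud_drift.mov":  {"sky", "cloud", "slow", "grey"},
    "river_stones.mov": {"water", "river", "close-up"},
}

def next_clip(current, clips=CLIPS):
    """Choose the next clip, weighting candidates by tag overlap with the
    current clip so that successive images feel visually continuous."""
    candidates = [name for name in clips if name != current]
    weights = [len(clips[current] & clips[name]) + 1 for name in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# An endless recombinant sequence: each choice depends only on the previous
# clip, so the flow keeps varying without a fixed playlist.
clip = "fog_valley.mov"
for _ in range(5):
    clip = next_clip(clip)
    print(clip)
```

Weighting rather than hard-matching the tags keeps the sequence coherent without making it deterministic, which is one plausible way to realize the “ever-changing visual flow” described above.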

MediaScapeCom

Soundscape compositions are generated by the AUME system, which uses semantic and sentiment analysis of tags to retrieve environmental sound recordings from a database; the recordings are segmented and categorized by the system for recombination to accompany the visuals. Music scores are generated by PAT, which produces complete compositions derived from analysis of a musical corpus. At the exhibition, viewers are presented with four headsets, each of which corresponds to a different audio track for the video: Descriptive, Metaphoric, Contrapuntal, and Music.
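AUME’s semantic and sentiment analysis is not detailed here, so the sketch below substitutes a plain tag-overlap ranking to illustrate the retrieve-and-recombine idea; the sound filenames, tags, and scoring are hypothetical:

```python
# Hypothetical environmental-sound database: each recording has been
# segmented and categorized with descriptive tags. All entries are
# illustrative; AUME's actual retrieval uses semantic and sentiment
# analysis rather than the plain tag overlap shown here.
SOUNDS = {
    "shoreline_waves.wav": {"water", "coast", "waves"},
    "wind_in_pines.wav":   {"wind", "forest", "slow"},
    "creek_closeup.wav":   {"water", "river", "close-up"},
    "distant_gulls.wav":   {"coast", "birds", "bright"},
}

def retrieve_segments(clip_tags, n=2, sounds=SOUNDS):
    """Rank recordings by shared tags with the current video clip and
    return the top n segments for recombination into a soundscape layer."""
    ranked = sorted(sounds, key=lambda name: len(sounds[name] & clip_tags),
                    reverse=True)
    return ranked[:n]

print(retrieve_segments({"water", "coast", "close-up"}))
# -> ['shoreline_waves.wav', 'creek_closeup.wav']
```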

As a collective, we are inspired and intrigued by the emergent behaviours of both the separate and the integrated audio/video systems. While the individual systems have different levels of autonomy and perceived creative ability, our goal is to create a work that the audience responds to as they would to any work of art. The response will be subjective. It may be emotional, critical, or intellectual. We hope it will be an engaging experience that does not rely merely on the novelty of, or an explanation of, our computer-generated system.

Developing the computational elements of the systems should be seen as a creative act as much as a technical one, and as such one that can be appreciated in its own right. One of the artist’s roles is to effectively implement the desired aesthetic goals for the work within a logical computational framework. The framework in this case is multi-layered and complex, involving systems that analyse and respond to media sources directly as well as in combination with artist-generated cues (including tags and algorithmic processes). The computational systems will have an impact beyond their contribution to the work itself. To the extent that the resulting work confounds the artists’ expectations (for better or for worse), these systems may inspire a new aesthetic perspective or insight that will enhance the work as it evolves, or lead to a new direction for the artists.

VideoOSCEvent
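
The VideoOSCEvent name suggests that the video system signals the audio generators via Open Sound Control (OSC). A hypothetical sketch of such a clip-change broadcast, using the python-osc library, might look like the following; the host, port, address pattern, and message payload are assumptions, not documented details of the installation:

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical sketch prompted by the VideoOSCEvent component name: the
# video sequencer announces each clip transition over OSC so the soundscape
# and music generators can respond to the incoming visual material.
# Host, port, address pattern, and payload layout are all assumptions.
client = SimpleUDPClient("127.0.0.1", 9000)

def announce_clip(name, tags, duration_s):
    """Send one OSC message per clip change: filename, comma-joined
    tags, and the clip's duration in seconds."""
    client.send_message("/mediascape/clip",
                        [name, ",".join(sorted(tags)), duration_s])

announce_clip("fog_valley.mov", {"fog", "mountain", "slow"}, 42.0)
```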