(WIP) MAIAP: Multimodal AI-based Interface for Audiovisual Performance
Date: July 2024-
Categories: AI Art, Audiovisual Interface, Audiovisual Performance, Generative
I have been designing a novel interface for using image and sound AI models in audiovisual performance. As an early experiment, I built a prototype that generates images and sounds with pre-trained models in response to real-time webcam and microphone input. The prototype was demonstrated at one of Exit Points' performances, organized by Michael Palumbo, on August 30, 2024 at Arraymusic.
The prototype uses SpecVQGAN for sound generation and StreamDiffusion with Stable Diffusion XL Turbo for image generation, both with modifications.
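The post does not include the prototype's code, but the overall shape of a real-time generation pipeline like the one described (camera frames in, fast img2img model out) can be sketched as a simple frame loop with throttling. Everything below is a hypothetical illustration: `next_frame`, `generate`, and `show` are stand-ins for the webcam capture, the StreamDiffusion/SDXL Turbo call, and the display step, not the project's actual API.

```python
import time
from dataclasses import dataclass
from typing import Callable, Any

@dataclass
class LoopStats:
    frames_in: int = 0   # frames pulled from the camera
    frames_out: int = 0  # frames actually sent through the model

def run_loop(next_frame: Callable[[], Any],
             generate: Callable[[Any], Any],
             show: Callable[[Any], None],
             max_frames: int,
             min_interval_s: float = 0.0) -> LoopStats:
    """Pull frames, run them through a generator, and display the results.

    `min_interval_s` throttles generation so a slow model does not fall
    behind the camera: frames that arrive too soon are simply dropped,
    which keeps the output responsive to the *latest* input.
    """
    stats = LoopStats()
    last = -float("inf")  # ensure the first frame is always processed
    for _ in range(max_frames):
        frame = next_frame()
        stats.frames_in += 1
        now = time.monotonic()
        if now - last < min_interval_s:
            continue  # drop this frame: the model is still "busy"
        last = now
        show(generate(frame))
        stats.frames_out += 1
    return stats
```

With `min_interval_s=0` every frame is generated; with a large interval only the first frame in each window passes through, which is the trade-off a real-time diffusion setup has to make when inference is slower than the camera's frame rate.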
Exit Points #52 Ensemble 2 Performance Footage
My Generated Sound
Exit Points #52 Switchemups Session (Visualization Only)
Credits for the performance
- MAYSUN: Percussion, Electronics
- Danika Lorén: Opera
- Emmanuel Lacopo: Electric Guitar
- Michael Palumbo: Modular Synth
- Sihwa Park: AI-generated Sound, Visualization
- Video and audio recordings for the first video were provided courtesy of Arraymusic and Michael Palumbo.
This project is supported by the Connected Minds program and the Canada First Research Excellence Fund.