CoDi: Any-to-Any Generation via Composable Diffusion

Zineng Tang1*, Ziyi Yang2, Chenguang Zhu2, Michael Zeng2, Mohit Bansal1
1University of North Carolina at Chapel Hill, 2Microsoft Azure Cognitive Services Research
*Work done during an internship at Microsoft and at UNC. Corresponding authors.
NeurIPS 2023

Abstract

We present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel and its input is not limited to a subset of modalities like text or image. Despite the absence of training datasets for many combinations of modalities, we propose to align modalities in both the input and output space. This allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a novel composable generation strategy which involves building a shared multimodal space by bridging alignment in the diffusion process, enabling the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Highly customizable and flexible, CoDi achieves strong joint-modality generation quality, and outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis.
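The input-space alignment mentioned above ("bridging" the prompt encoders of different modalities into one shared embedding space, with text as the anchor) can be pictured with a simple contrastive objective. The sketch below is only an illustration under that reading, not the released implementation; the encoder outputs, batch pairing, and temperature are placeholders.

import torch
import torch.nn.functional as F

def contrastive_align(text_emb: torch.Tensor, other_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss that pulls paired text / other-modality
    prompt embeddings together, so any modality can later act as a condition."""
    text_emb = F.normalize(text_emb, dim=-1)      # (B, D)
    other_emb = F.normalize(other_emb, dim=-1)    # (B, D)
    logits = text_emb @ other_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))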

Model Architecture

Composable Diffusion uses a multi-stage training scheme: it only needs to be trained on a linear number of tasks, yet at inference it can condition on and generate any combination of input and output modalities.
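The reason training stays linear while inference covers every combination: each diffuser only ever needs to be trained with one (aligned) condition at a time, and at inference the embeddings of whatever inputs are provided can be fused, e.g. by weighted interpolation, into a single condition vector. Below is a minimal sketch of that fusion step, with illustrative names and equal default weights.

import torch

def compose_conditions(embeddings: list[torch.Tensor],
                       weights: list[float] | None = None) -> torch.Tensor:
    """Fuse aligned condition embeddings from an arbitrary subset of modalities."""
    if weights is None:
        weights = [1.0 / len(embeddings)] * len(embeddings)
    stacked = torch.stack(embeddings, dim=0)                      # (K, B, D)
    w = torch.tensor(weights, device=stacked.device).view(-1, 1, 1)
    return (w * stacked).sum(dim=0)                               # (B, D)

# e.g. condition a video diffuser on text + audio without ever training on
# that pair: cond = compose_conditions([text_enc(txt), audio_enc(wav)])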

Multi-Outputs Joint Generation

The model takes one or more prompts (video, image, text, or audio) and generates multiple aligned outputs, such as video with accompanying sound, as sketched below.
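Joint generation of aligned outputs can be pictured as parallel diffusers that exchange information at every denoising step: each branch cross-attends to a projection of the other branch's current latent, which is what keeps, for example, video and audio in sync. The single step below is a hedged sketch; video_unet, audio_unet, and the projection modules are placeholders rather than CoDi's actual components.

import torch

def joint_denoise_step(z_video, z_audio, t, cond,
                       video_unet, audio_unet, env_proj_v, env_proj_a):
    # Project each latent into a shared "environment" space so the other
    # branch can attend to it during its own noise prediction.
    env_v = env_proj_v(z_video)
    env_a = env_proj_a(z_audio)
    eps_v = video_unet(z_video, t, cond, cross=env_a)   # video step sees audio state
    eps_a = audio_unet(z_audio, t, cond, cross=env_v)   # audio step sees video state
    return eps_v, eps_a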


Text + Image + Audio → Video + Audio

"Teddy bear on a skateboard, 4k, high resolution"


Text + Audio + Image → Text + Image

"Teddy bear on a skateboard, 4k, high resolution"

"A toy on the street sitting on a board"


Audio + Image → Text + Image

"Playing piano in a forest."


Text + Image → Text + Image

"Cyberpunk vibe."

"Cyberpunk, city, movie scene, retro ambience."


Text → Video + Audio

"Fireworks in the sky."


Text → Video + Audio

"Dive in coral reef."


Text → Video + Audio

"Train coming into station."


Text → Text + Audio + Image

"Sea shore sound ambience."

"Wave crashes the shore, sea gulls."


Text → Text + Audio + Image

"Street ambience."

"Noisy street, cars, traffics.."


Multiple Conditioning

The model takes multiple inputs (video, image, text, or audio) and generates an output conditioned on all of them; see the sketch after this paragraph.
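Tying the pieces together, a multi-conditioned sample such as "Text + Audio → Video" would encode both prompts into the shared space, fuse them with the compose_conditions helper sketched above, and hand the result to a single conditional sampler. Everything below is a hypothetical wrapper; none of the argument names correspond to the released interface.

import torch

def generate_from_text_and_audio(text_enc, audio_enc, video_sampler,
                                 text_prompt: str, audio_wave: torch.Tensor):
    text_c = text_enc([text_prompt])                 # (1, D) aligned text embedding
    audio_c = audio_enc(audio_wave.unsqueeze(0))     # (1, D) aligned audio embedding
    cond = compose_conditions([text_c, audio_c])     # fuse the two conditions
    return video_sampler(cond)                       # any conditional diffusion sampler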


Text + Audio → Image

"Oil painting, cosmic horror painting, elegant intricate artstation concept art by craig mullins detailed"


Text + Image → Image

"Gently flowers in a vase, still life, by Albert Williams"


Text + Audio → Video

"Forward moving camera view."


Text + Image → Video

"Red gorgonian and tropical fish."


Text + Image → Video

"Eating on a coffee table."


Video + Audio → Text

"Panda eating bamboo, people laughing."


Image + Audio → Audio


Text + Image → Audio

"Horn, blow whistle"


Single-to-Single Generation

The model takes a single prompt (video, image, text, or audio) and generates a single output.


Text → Image

"Concept art by sylvain sarrailh of a haunted japan temple in a forest"


Audio → Image


Image → Video


Image → Audio


Audio → Text

"A magical sound, game."


Image → Text

"Mountain view, sunset."


BibTeX


@inproceedings{tang2023anytoany,
	title={Any-to-Any Generation via Composable Diffusion},
	author={Zineng Tang and Ziyi Yang and Chenguang Zhu and Michael Zeng and Mohit Bansal},
	booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
	year={2023},
	url={https://openreview.net/forum?id=2EDqbSCnmF}
}