Neural network architecture design requires making many crucial decisions. A common desideratum is that similar decisions, with minor modifications, can be reused across a variety of tasks and applications. To satisfy this, architectures must provide promising latency-performance trade-offs, support a variety of tasks, scale efficiently with respect to the amount of data and compute, leverage available data from other tasks, and run efficiently on diverse hardware. To this end, we introduce AsCAN---a hybrid architecture combining both convolutional and transformer blocks. We revisit the key design principles of hybrid architectures and propose a simple and effective \emph{asymmetric} architecture, where the distribution of convolutional and transformer blocks is asymmetric: more convolutional blocks in the earlier stages, followed by more transformer blocks in the later stages. AsCAN supports a variety of tasks, including recognition, segmentation, and class-conditional image generation, and offers a superior trade-off between performance and latency. We then scale the same architecture to a large-scale text-to-image task and show state-of-the-art performance compared to the most recent public and commercial models. Notably, even without any computational optimization of the transformer blocks, our models still yield faster inference than existing works featuring efficient attention mechanisms, highlighting the advantages and value of our approach.
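To make the asymmetric layout concrete, below is a minimal PyTorch sketch (not the official implementation). The internals of the C and T blocks, the per-stage "CCC"/"CCT"/"CTT"/"TTT" patterns, channel widths, and head counts are illustrative assumptions, and the sketch omits the stem and the downsampling between stages.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """'C' block: a simple depthwise-separable convolutional residual block (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),  # depthwise 3x3
            nn.BatchNorm2d(dim),
            nn.Conv2d(dim, dim, 1),                         # pointwise 1x1
            nn.GELU(),
        )

    def forward(self, x):
        return x + self.body(x)

class TransformerBlock(nn.Module):
    """'T' block: self-attention + MLP over flattened spatial tokens (assumed form)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        b, c, h, w = x.shape
        tok = x.flatten(2).transpose(1, 2)        # (B, HW, C) tokens
        n = self.norm1(tok)
        tok = tok + self.attn(n, n, n, need_weights=False)[0]
        tok = tok + self.mlp(self.norm2(tok))
        return tok.transpose(1, 2).reshape(b, c, h, w)

def make_stage(pattern, dim):
    """Build a stage from a string over {'C', 'T'}, e.g. 'CCT'."""
    return nn.Sequential(*[ConvBlock(dim) if p == "C" else TransformerBlock(dim) for p in pattern])

# Asymmetric schedule: convolution-heavy early stages, transformer-heavy later stages
# (patterns and widths are placeholders; downsampling between stages is omitted).
stage_patterns = ["CCC", "CCT", "CTT", "TTT"]
stage_dims = [64, 128, 256, 512]
stages = nn.ModuleList([make_stage(p, d) for p, d in zip(stage_patterns, stage_dims)])

# Example forward pass through the first stage.
x = torch.randn(1, 64, 56, 56)
y = stages[0](x)  # shape preserved: (1, 64, 56, 56)
```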
AsCAN architectures for Image Classification & Text-to-Image Generation. (a) The architecture for image classification and the details of the convolutional (C) and transformer (T) blocks. AsCAN consists of a Stem (convolutional layers) and four stages, followed by pooling and a classifier. (b) The UNet architecture for image generation. The Down blocks (the first three blocks from the left) are mirrored by the Up blocks (the first three blocks from the right). (c) The details of the C and T blocks used in the UNet. For the T block that performs cross-attention between latent image features and the textual embedding, the $Q$ matrix comes from the textual embedding. Note that, compared to image classification, the C and T blocks for image generation only add extra components to incorporate the input time-step and textual embeddings.
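The sketch below illustrates, under stated assumptions, how the generation-time C and T blocks extend their classification counterparts with conditioning: the C block ingests a time-step embedding, and the T block adds a cross-attention layer over the textual embedding. This is not the paper's code; the query/key assignment follows the common diffusion-UNet convention (image-derived queries), whereas the figure caption describes queries coming from the textual embedding, so the exact projection layout in the paper may differ.

```python
import torch
import torch.nn as nn

class CondConvBlock(nn.Module):
    """Generation 'C' block: convolutional block plus time-step conditioning (assumed form)."""
    def __init__(self, dim, t_dim):
        super().__init__()
        self.to_scale_shift = nn.Linear(t_dim, 2 * dim)  # time embedding -> (scale, shift)
        self.body = nn.Sequential(nn.GroupNorm(8, dim), nn.SiLU(), nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, x, t_emb):
        scale, shift = self.to_scale_shift(t_emb).chunk(2, dim=-1)
        h = x * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return x + self.body(h)

class CondTransformerBlock(nn.Module):
    """Generation 'T' block: self-attention plus cross-attention over text tokens (assumed form)."""
    def __init__(self, dim, txt_dim, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.txt_proj = nn.Linear(txt_dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, txt):
        b, c, h, w = x.shape
        tok = x.flatten(2).transpose(1, 2)                    # (B, HW, C) image tokens
        n = self.norm1(tok)
        tok = tok + self.self_attn(n, n, n, need_weights=False)[0]
        txt = self.txt_proj(txt)                              # (B, L, C) text tokens
        tok = tok + self.cross_attn(self.norm2(tok), txt, txt, need_weights=False)[0]
        tok = tok + self.mlp(self.norm3(tok))
        return tok.transpose(1, 2).reshape(b, c, h, w)

# Usage: condition a 32x32 latent feature map on a time-step embedding and 77 text tokens
# (all dimensions here are illustrative placeholders).
x = torch.randn(2, 128, 32, 32)
t_emb = torch.randn(2, 256)
txt = torch.randn(2, 77, 768)
x = CondConvBlock(128, 256)(x, t_emb)
x = CondTransformerBlock(128, 768)(x, txt)
```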
If you find our work useful, please consider citing:
@inproceedings{kag2024ascan,
  title     = {Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation},
  author    = {Kag, Anil and Coskun, Huseyin and Chen, Jierun and Cao, Junli and Menapace, Willi and Siarohin, Aliaksandr and Tulyakov, Sergey and Ren, Jian},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2024}
}