Meta-Transformer for Unified Multimodal Learning
As a foundation model, Meta-Transformer can handle data from 12 modalities, which enables it to support a wide range of applications. As shown in the figure, Meta-Transformer can serve a variety of downstream tasks, including stock analysis.
Table 1: Meta-Transformer is capable of handling up to 12 modalities, including natural language, RGB images, point clouds, audio, video, tabular data, graphs, time-series data, hyper-spectral images, IMU data, medical images, and infrared images.
This repository is built to explore the potential and extensibility of Transformers for multimodal learning. We exploit the Transformer's ability to handle variable-length sequences, propose a Data-to-Sequence tokenization that follows a meta-scheme, and apply it to 12 modalities: text, image, point cloud, audio, video, infrared, hyper-spectral, X-Ray, tabular, graph, time-series, and Inertial Measurement Unit (IMU) data.
After obtaining the token sequences, we employ a modality-shared encoder to extract representations across the different modalities. With task-specific heads, Meta-Transformer can then handle various tasks on these modalities, such as classification, detection, and segmentation.
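For illustration only, the sketch below mocks this tokenizer → shared encoder → task head pipeline for RGB images. The ImageTokenizer class, its hyperparameters, and the 1000-class linear head are assumptions made for this example and are not the repository's actual tokenizer or head implementations; the encoder stack mirrors the loading snippet further below.

import torch
import torch.nn as nn
from timm.models.vision_transformer import Block

# Hypothetical Data-to-Sequence tokenizer for RGB images: splits an image into
# 16x16 patches and projects each patch to a 768-dim token. This is an
# illustrative stand-in, not the repository's per-modality tokenizer.
class ImageTokenizer(nn.Module):
    def __init__(self, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                         # x: (B, 3, H, W)
        tokens = self.proj(x)                     # (B, 768, H/16, W/16)
        return tokens.flatten(2).transpose(1, 2)  # (B, N, 768)

# Modality-shared encoder: a stack of standard Transformer blocks.
encoder = nn.Sequential(*[Block(dim=768, num_heads=12, mlp_ratio=4., qkv_bias=True)
                          for _ in range(12)])

# Task-specific head, e.g. a linear classifier over mean-pooled tokens
# (1000 classes is an arbitrary choice for this sketch).
head = nn.Linear(768, 1000)

x = torch.randn(2, 3, 224, 224)         # dummy image batch
tokens = ImageTokenizer()(x)            # Data-to-Sequence tokenization
features = encoder(tokens)              # modality-shared representations
logits = head(features.mean(dim=1))     # classification logits, shape (2, 1000)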
Model | Pretraining | Scale | #Param | Download |
---|---|---|---|---|
Meta-Transformer-B16 | LAION-2B | Base | 85M | ckpt |
Meta-Transformer-L14 | LAION-2B | Large | 302M | ckpt |
import torch
import torch.nn as nn
from timm.models.vision_transformer import Block

# Load the pretrained weights of the modality-shared encoder (Base, patch size 16).
ckpt = torch.load("Meta-Transformer_base_patch16_encoder.pth")

# Rebuild the encoder as a stack of 12 standard ViT blocks (dim 768, 12 heads),
# matching the Meta-Transformer-B16 configuration, then load the weights.
encoder = nn.Sequential(*[
    Block(
        dim=768,
        num_heads=12,
        mlp_ratio=4.,
        qkv_bias=True,
        norm_layer=nn.LayerNorm,
        act_layer=nn.GELU
    )
    for _ in range(12)])
encoder.load_state_dict(ckpt, strict=True)
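As a quick sanity check (not part of the repository's documented API), the loaded encoder from the snippet above can be applied to any token sequence with embedding dimension 768, regardless of which modality produced the tokens:

# Continues from the snippet above; the shape (1, 196, 768) here is just an
# example, e.g. 14x14 image patches from a 224x224 image.
dummy_tokens = torch.randn(1, 196, 768)
with torch.no_grad():
    features = encoder(dummy_tokens)    # (1, 196, 768) shared representations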
To contact us, please do not hesitate to send an email to yiyuanzhang.ai@gmail.com, kaixionggong@gmail.com, zhangkaipeng@pjlab.org.cn, or xyyue@ie.cuhk.edu.hk!
If the code and paper help your research, please kindly cite:
@article{zhang2023metatransformer,
  title={Meta-Transformer: A Unified Framework for Multimodal Learning},
  author={Zhang, Yiyuan and Gong, Kaixiong and Zhang, Kaipeng and Li, Hongsheng and Qiao, Yu and Ouyang, Wanli and Yue, Xiangyu},
  year={2023},
  journal={arXiv preprint arXiv:2307.10802},
}
This project is released under the Apache 2.0 license.
This code is developed based on excellent open-source projects including MMClassification, MMDetection, MMSegmentation, OpenPoints, Time-Series-Library, Graphormer, SpectralFormer, and ViT-Adapter.