OpenDriveLab / UniAD
[CVPR 2023 Best Paper] Planning-oriented Autonomous Driving
Paper Title Change: To avoid confusion with "goal-point" navigation in robotics, we changed the title from "Goal-oriented" to "Planning-oriented", as suggested by the reviewers. Thank you!
[2023/06/12] Bugfix [Ref: #21]: Previously, the performance of the stage-1 model (track_map) could not be reproduced when trained from scratch, because `loss_past_traj` was mistakenly added and `img_neck` and BN were frozen. By removing `loss_past_traj` and unfreezing `img_neck` and BN during training, the reported results can be reproduced (AMOTA: 0.393, stage1_train_log).
[2023/04/18] New feature: You can replace BEVFormer with other BEV encoding methods, e.g., LSS, as long as you provide `bev_embed` and `bev_pos` in track_train and track_inference. Make sure your BEV features and ours have the same shape.
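As a rough sketch of the interface such a replacement must satisfy: the function name and exact shapes below are assumptions, not part of the repo (the default BEVFormer base config uses a 200×200 BEV grid with 256-dim embeddings; check the shapes against your own config before wiring this in).

```python
import torch

# Hypothetical drop-in BEV encoder stub. It only illustrates the expected
# output layout: a flattened BEV feature map plus a positional encoding of
# the same spatial grid, as consumed by track_train / track_inference.
def custom_bev_encoder(img_feats, bev_h=200, bev_w=200, embed_dims=256):
    """Return (bev_embed, bev_pos) with BEVFormer-like shapes (assumed)."""
    bs = img_feats.shape[0]
    # bev_embed: one token per BEV cell, shape (bev_h * bev_w, bs, embed_dims).
    bev_embed = torch.zeros(bev_h * bev_w, bs, embed_dims)
    # bev_pos: positional encoding over the BEV grid, (bs, embed_dims, bev_h, bev_w).
    bev_pos = torch.zeros(bs, embed_dims, bev_h, bev_w)
    return bev_embed, bev_pos
```

Whatever encoder you plug in (LSS or otherwise), the key constraint from the note above is only that the two returned tensors match the shapes the tracking head already expects.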
[2023/04/18] Base-model checkpoints are released.
[2023/03/29] Code & model initial release v1.0
[2023/03/21]
[2022/12/21] UniAD paper is available on arXiv.
UniAD is trained in two stages. Pretrained checkpoints of both stages will be released and the results of each model are listed in the following tables.
We first train the perception modules (i.e., track and map) to obtain a stable weight initialization for the next stage. BEV features are aggregated over 5 frames (queue_length = 5).
Method | Encoder | Tracking AMOTA | Mapping IoU-lane | Config | Download
---|---|---|---|---|---
UniAD-B | R101 | 0.390 | 0.297 | base-stage1 | base-stage1
We then optimize all task modules together, including track, map, motion, occupancy, and planning. BEV features are aggregated over 3 frames (queue_length = 3).
Method | Encoder | Tracking AMOTA | Mapping IoU-lane | Motion minADE | Occupancy IoU-n. | Planning avg.Col. | Config | Download
---|---|---|---|---|---|---|---|---
UniAD-B | R101 | 0.358 | 0.317 | 0.709 | 64.1 | 0.25 | base-stage2 | base-stage2
Place the downloaded checkpoints in the `UniAD/ckpts/` directory.

- To evaluate a model, follow the `evaluation` section in TRAIN_EVAL.md.
- To train from a checkpoint, set the `load_from` field to `path/of/ckpt` in the config and follow the `train` section in TRAIN_EVAL.md to start training.

The overall pipeline of UniAD is controlled by uniad_e2e.py, which coordinates all the task modules in `UniAD/projects/mmdet3d_plugin/uniad/dense_heads`. If you are interested in the implementation of a specific task module, please refer to its corresponding file, e.g., motion_head.
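For orientation, setting the checkpoint field in an MMDetection-style Python config might look like the fragment below; `path/of/ckpt` is the placeholder from the text above, not a real path, so substitute a checkpoint you actually downloaded into `UniAD/ckpts/`.

```python
# In your UniAD config file (MMDetection-style Python config).
# 'path/of/ckpt' is a placeholder; point it at a downloaded checkpoint.
load_from = 'path/of/ckpt'
```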
All assets and code are under the Apache 2.0 license unless specified otherwise.
If this project helps your research, please consider citing our paper with the following BibTeX:
@inproceedings{hu2023_uniad,
title={Planning-oriented Autonomous Driving},
author={Yihan Hu and Jiazhi Yang and Li Chen and Keyu Li and Chonghao Sima and Xizhou Zhu and Siqi Chai and Senyao Du and Tianwei Lin and Wenhai Wang and Lewei Lu and Xiaosong Jia and Qiang Liu and Jifeng Dai and Yu Qiao and Hongyang Li},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023},
}