Developing 3D-VL generalists capable of understanding 3D scenes and following natural language instructions
to perform a wide range of tasks has been a long-standing goal in the 3D-VL community.
Despite recent progress, 3D-VL models still lag behind their 2D counterparts in capability and robustness,
falling short of the generalist standard. A key obstacle to developing 3D-VL generalists is data scalability,
hindered by the lack of an efficient scene representation.
We propose LEO-VL, a 3D-VL model built upon condensed feature grid (CFG),
an efficient scene representation that bridges 2D perception and 3D spatial structure while significantly reducing token overhead.
This efficiency unlocks large-scale training toward 3D-VL generalists,
for which we curate over 700k high-quality 3D-VL data samples spanning four domains of real-world indoor scenes
and five tasks, including captioning and dialogue.
LEO-VL achieves state-of-the-art performance on a variety of 3D QA benchmarks,
including SQA3D, MSQA, and Beacon3D. Ablation studies confirm the efficiency of our representation,
the importance of task and scene diversity, and the validity of our data curation principles.
Furthermore, we introduce SceneDPO, a novel post-training objective that enhances the robustness of 3D-VL models.
We hope our findings contribute to the advancement of scalable and robust 3D-VL generalists.