MemoryVLA | arxiv 2025.08.26 | Paper Reading
$\pi_{0.5}$ | arxiv 2025.04.22 | Paper Reading

π0.5: a Vision-Language-Action Model with Open-World Generalization

This paper presents a foundation VLA model, trained primarily on real-robot data, aimed at enabling robots to generalize to household environments.

InternVLA M1 | arxiv 2025.10.15 | Paper Reading
SP-VLA | arxiv 2025.10.03 | Paper Reading

SP-VLA: A Joint Model Scheduling and Token Pruning Approach for VLA Model Acceleration

This paper builds on an existing dual-system design, using dynamic model scheduling together with dynamic token pruning to cut computation while improving model accuracy.

OpenVLA-OFT | RSS 2025 | Paper Reading
OpenVLA | CoRL 2024 | Paper Reading
FiSVLA | arxiv 2025.06.02 | Paper Reading
CoA-TAM | NIPS 2025 | Paper Reading
BridgeVLA | NIPS 2025 | Paper Reading

BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models

Building on OpenVLA, this paper projects the embedding spaces into a shared space to enable more effective learning.

SimpleVLA | arxiv 2025.09.11 | Paper Reading