- microsoft/bitnet-b1.58-2B-4T
  Text Generation • 0.8B • Updated • 5.71k • 1.24k
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models
  Paper • 2504.10449 • Published • 15
- nvidia/Llama-3.1-Nemotron-8B-UltraLong-2M-Instruct
  Text Generation • 8B • Updated • 358 • 15
- ReTool: Reinforcement Learning for Strategic Tool Use in LLMs
  Paper • 2504.11536 • Published • 63
Collections including paper arxiv:2510.25889
- Don't Blind Your VLA: Aligning Visual Representations for OOD Generalization
  Paper • 2510.25616 • Published • 96
- π_RL: Online RL Fine-tuning for Flow-based Vision-Language-Action Models
  Paper • 2510.25889 • Published • 65
- EBT-Policy: Energy Unlocks Emergent Physical Reasoning Capabilities
  Paper • 2510.27545 • Published • 48
- π_RL: Online RL Fine-tuning for Flow-based Vision-Language-Action Models
  Paper • 2510.25889 • Published • 65
- Dual-Stream Diffusion for World-Model Augmented Vision-Language-Action Model
  Paper • 2510.27607 • Published • 8
- A Survey on Efficient Vision-Language-Action Models
  Paper • 2510.24795 • Published • 5
- Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach
  Paper • 2512.02834 • Published • 40
- VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
  Paper • 2509.09372 • Published • 243
- VLA-R1: Enhancing Reasoning in Vision-Language-Action Models
  Paper • 2510.01623 • Published • 10
- The Landscape of Agentic Reinforcement Learning for LLMs: A Survey
  Paper • 2509.02547 • Published • 228
- WMPO: World Model-based Policy Optimization for Vision-Language-Action Models
  Paper • 2511.09515 • Published • 18
- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 29
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 14
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 44
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 23
- ARE: Scaling Up Agent Environments and Evaluations
  Paper • 2509.17158 • Published • 35
- ARTDECO: Towards Efficient and High-Fidelity On-the-Fly 3D Reconstruction with Structured Scene Representation
  Paper • 2510.08551 • Published • 33
- Why Low-Precision Transformer Training Fails: An Analysis on Flash Attention
  Paper • 2510.04212 • Published • 23
- ERA: Transforming VLMs into Embodied Agents via Embodied Prior Learning and Online Reinforcement Learning
  Paper • 2510.12693 • Published • 27