
ROLL: Reinforcement Learning Optimization for Large-Scale Learning
🚀 An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models 🚀
ROLL is an efficient and user-friendly RL library designed for Large Language Models (LLMs) utilizing large-scale GPU resources. It significantly enhances LLM performance in key areas such as human preference alignment, complex reasoning, and multi-turn agentic interaction scenarios.
Leveraging a multi-role distributed architecture with Ray for flexible resource allocation and heterogeneous task scheduling, ROLL integrates cutting-edge technologies like Megatron-Core, SGLang and vLLM to accelerate model training and inference.
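The snippet below is a minimal sketch of the multi-role idea using plain Ray, not ROLL's actual API: the role classes, method names, and GPU counts are illustrative assumptions only. Each role (e.g. a trainer wrapping Megatron-Core, a rollout worker wrapping an SGLang/vLLM engine) runs as its own Ray actor, so heterogeneous tasks can be placed and scheduled independently across the cluster.

```python
import ray

ray.init()


@ray.remote(num_gpus=1)
class TrainerRole:
    """Hypothetical role that would wrap a training backend (e.g. Megatron-Core)."""

    def train_step(self, rollouts):
        # ... update the policy on the collected rollouts ...
        return {"loss": 0.0, "num_rollouts": len(rollouts)}


@ray.remote(num_gpus=1)
class RolloutRole:
    """Hypothetical role that would wrap an inference engine (e.g. SGLang or vLLM)."""

    def generate(self, prompts):
        # ... sample responses from the current policy ...
        return [p + " <response>" for p in prompts]


# Roles are scheduled independently; Ray assigns each actor its requested GPUs.
trainer = TrainerRole.remote()
rollout = RolloutRole.remote()

responses = ray.get(rollout.generate.remote(["What is 2+2?"]))
metrics = ray.get(trainer.train_step.remote(responses))
print(metrics)
```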
[08/11/2025] 🎉 Our paper is released; see Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning.
[06/09/2025] 🎉 The ROLL tech report is now available! Access the report here.