VRBench: A Benchmark for Multi-Step
Reasoning in Long Narrative Videos

ICCV 2025
1 Shanghai Artificial Intelligence Laboratory    2 Nanjing University   
3 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences   

Overview of VRBench. We present VRBench, a long narrative video benchmark for multi-step reasoning. VRBench includes 1,010 manually filtered narrative videos, covering 8 languages and 7 video categories that are suitable for reasoning about temporal relations. We also provide high-quality stepwise reasoning annotations, labeled and reviewed by human experts. Each video is annotated with 8-10 complex question-answer pairs, multi-step reasoning chains, and fine-grained timestamps. To fully evaluate models' multi-step reasoning capabilities, we propose a multi-phase evaluation pipeline that assesses results at both the process and outcome levels. VRBench is the first video reasoning benchmark that supports both multi-step annotation and evaluation.
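
To make the annotation format described above more concrete, here is a minimal sketch of what a single VRBench-style annotation record could look like. The field names, option count, timestamps, and question text are illustrative assumptions for exposition, not the released schema; they only mirror what the overview describes (question-answer pairs of one of the 7 reasoning types, a multi-step reasoning chain, and fine-grained timestamps per step).

```python
# Illustrative sketch of one VRBench-style annotation record.
# All field names and values are assumptions, not the official release format.
example_annotation = {
    "video_id": "example_0001",            # hypothetical identifier
    "question_type": "Event Attribution",  # one of the 7 reasoning types
    "question": "Why does the protagonist return to the harbor at night?",
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": "C",
    "reasoning_steps": [
        {"step": 1, "description": "The protagonist receives a letter.",
         "timestamp": ["00:12:05", "00:13:40"]},
        {"step": 2, "description": "The letter mentions a meeting at the harbor.",
         "timestamp": ["00:41:10", "00:42:02"]},
        {"step": 3, "description": "She leaves home after sunset.",
         "timestamp": ["01:05:30", "01:06:15"]},
    ],
}
```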

Abstract

We propose VRBench, the first long narrative video benchmark crafted for evaluating large models' multi-step reasoning capabilities, addressing limitations of existing evaluations that overlook temporal reasoning and procedural validity. It comprises 1,010 long videos (with an average duration of 1.6 hours), along with 9,468 human-labeled multi-step question-answering pairs and 30,292 reasoning steps with timestamps. These videos are curated via a multi-stage filtering process, including expert inter-rater review, to prioritize plot coherence. We develop a human-AI collaborative framework that generates coherent reasoning chains, each requiring multiple temporally grounded steps, spanning seven types (e.g., event attribution, implicit inference). VRBench designs a multi-phase evaluation pipeline that assesses models at both the outcome and process levels. In addition to multiple-choice questions (MCQs) for the final results, we propose a process-level LLM-guided scoring metric that comprehensively evaluates the quality of the reasoning chain across multiple dimensions. Through extensive evaluations of 12 LLMs and 16 VLMs on VRBench, we conduct a thorough analysis and provide valuable insights that advance the field of multi-step reasoning.
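
As a rough illustration of the two-level protocol, the sketch below assumes an outcome score from MCQ accuracy and a process score averaged over LLM-judged dimensions, with the two averaged into an overall number. The judging rubric, dimension names, and function signatures are placeholders, not the paper's implementation; the final averaging is consistent with how the Overall column relates to the MCQ and Open-End columns in the results table below.

```python
# Hedged sketch of a two-level (outcome + process) evaluation in the spirit of VRBench.
# The exact judging prompt, rubric, and dimension names are assumptions; only the
# outcome/process split and the averaging into an overall score follow the description above.
from statistics import mean

def outcome_score(predictions: list[str], answers: list[str]) -> float:
    """Outcome level: MCQ accuracy, scaled to 0-100."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

def process_score(judged_chains: list[dict[str, float]]) -> float:
    """Process level: average of LLM-judged per-dimension scores (0-100)
    for each reasoning chain; the dimensions themselves are illustrative."""
    return mean(mean(scores.values()) for scores in judged_chains)

def overall_score(mcq: float, open_end: float) -> float:
    """Overall: mean of the outcome (MCQ) and process (open-ended) scores."""
    return (mcq + open_end) / 2.0

# Toy usage: overall_score(64.10, 54.57) -> 59.335, matching the rounding
# pattern of the Overall column in the table below.
```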

VRBench Evaluation Results

Scores (0-100) are reported overall, by evaluation metric (outcome-level MCQ and process-level open-ended scoring), and by reasoning taxonomy.

| Model | Overall | MCQ (Outcome) | Open-End (Process) | Event Attribution | Counting Problems | Hypothetical Reasoning | Implicit Inferences | Information Synopsis | Event Prediction | Logical Linkage |
|---|---|---|---|---|---|---|---|---|---|---|
| LLMs: Proprietary Models | | | | | | | | | | |
| GPT-4o | 59.34 | 64.10 | 54.57 | 54.25 | 34.00 | 67.92 | 63.07 | 74.83 | 65.64 | 70.09 |
| o1-preview | 63.64 | 70.02 | 57.25 | 58.67 | 34.01 | 73.40 | 68.71 | 72.43 | 67.06 | 72.69 |
| Gemini-2.0-Flash-Thinking | 63.79 | 69.38 | 58.20 | 63.56 | 35.78 | 72.24 | 71.89 | 75.60 | 66.66 | 69.99 |
| Claude-3.7-Sonnet | 61.50 | 67.39 | 55.60 | 59.23 | 36.92 | 66.51 | 68.33 | 69.81 | 62.58 | 69.03 |
| LLMs: Open-Source Models | | | | | | | | | | |
| DeepSeek-V3 | 60.40 | 65.14 | 55.65 | 55.86 | 37.35 | 68.72 | 65.40 | 70.89 | 65.95 | 69.46 |
| DeepSeek-R1 | 63.75 | 70.26 | 57.24 | 59.00 | 39.20 | 74.04 | 67.10 | 70.05 | 70.44 | 70.73 |
| Qwen2.5-7B-Instruct | 50.70 | 50.40 | 50.99 | 47.60 | 35.41 | 59.58 | 56.14 | 63.17 | 56.99 | 60.58 |
| Qwen2.5-72B-Instruct | 56.18 | 58.44 | 53.92 | 53.39 | 36.61 | 64.87 | 61.13 | 69.71 | 60.49 | 66.93 |
| QwQ-32B-preview | 34.87 | 18.41 | 51.33 | 35.50 | 38.50 | 44.64 | 42.03 | 57.38 | 41.42 | 42.08 |
| QwQ-32B | 55.96 | 56.14 | 55.77 | 54.14 | 36.93 | 62.89 | 61.19 | 74.96 | 62.19 | 63.54 |
| InternLM3-8B-Instruct | 48.79 | 44.97 | 52.61 | 45.65 | 35.65 | 57.12 | 54.75 | 60.58 | 53.82 | 59.38 |
| Llama3.3-70B-Instruct | 62.16 | 69.68 | 54.64 | 62.20 | 37.54 | 71.21 | 66.98 | 78.04 | 67.90 | 70.39 |
| VLMs: Proprietary Models | | | | | | | | | | |
| GPT-4o | 70.68 | 83.25 | 58.10 | 68.63 | 38.52 | 78.68 | 72.45 | 80.01 | 74.01 | 75.19 |
| Gemini-2.0-Pro | 76.61 | 85.32 | 67.90 | 73.11 | 65.23 | 83.02 | 77.74 | 89.14 | 79.94 | 77.90 |
| Claude-3.7-Sonnet | 70.17 | 82.10 | 58.23 | 65.13 | 34.98 | 74.61 | 73.14 | 77.38 | 73.34 | 72.29 |
| VLMs: Open-Source Models | | | | | | | | | | |
| Qwen2-VL-7B | 60.75 | 84.61 | 36.88 | 59.52 | 28.88 | 69.14 | 66.66 | 85.29 | 67.34 | 71.74 |
| Qwen2.5-VL-7B | 63.06 | 82.63 | 43.49 | 60.94 | 34.14 | 69.69 | 67.30 | 83.93 | 67.23 | 71.26 |
| Qwen2.5-VL-72B | 69.23 | 82.48 | 55.98 | 65.81 | 41.61 | 74.54 | 69.03 | 90.04 | 71.89 | 77.88 |
| DeepSeek-VL2 | 24.57 | 19.65 | 29.49 | 20.04 | 17.69 | 23.43 | 26.00 | 30.02 | 23.87 | 25.07 |
| InternVL2.5-8B | 56.11 | 80.61 | 31.61 | 58.79 | 31.40 | 62.92 | 63.04 | 85.99 | 62.32 | 68.19 |
| InternVL2.5-78B | 66.10 | 84.82 | 47.37 | 66.45 | 34.48 | 74.62 | 70.87 | 87.54 | 71.43 | 74.68 |
| Phi-3.5-Vision | 48.02 | 52.04 | 44.00 | 43.47 | 30.23 | 50.27 | 50.18 | 71.42 | 56.16 | 46.99 |
| Aria | 60.27 | 84.07 | 36.47 | 59.84 | 32.88 | 70.55 | 65.25 | 86.21 | 67.59 | 70.39 |
| H2OVL Mississippi-2B | 32.61 | 23.43 | 41.79 | 31.63 | 29.72 | 38.83 | 38.40 | 60.05 | 39.64 | 41.12 |
| VideoChat-Flash-7B | 57.30 | 84.81 | 29.79 | 60.88 | 33.45 | 63.67 | 63.36 | 79.36 | 63.87 | 64.97 |
| InternVideo2.5 | 56.89 | 85.35 | 28.46 | 60.54 | 33.16 | 64.50 | 63.83 | 84.59 | 62.42 | 64.58 |
| LongVA-7B | 56.57 | 80.74 | 32.40 | 56.63 | 28.06 | 63.33 | 63.33 | 76.93 | 62.63 | 66.80 |
| LongVA-7B-DPO | 58.79 | 80.93 | 36.65 | 57.10 | 27.42 | 65.79 | 65.21 | 79.37 | 64.67 | 68.56 |

Annotation Examples in VRBench

Figure: annotation examples from VRBench.

BibTeX

If you find our work useful, please consider citing our paper:

@article{yu2025vrbench,
      title={VRBench: A Benchmark for Multi-Step Reasoning in Long Narrative Videos},
      author={Yu, Jiashuo and Wu, Yue and Chu, Meng and Ren, Zhifei and Huang, Zizheng and Chu, Pei and Zhang, Ruijie and He, Yinan and Li, Qirui and Li, Songze and others},
      journal={arXiv preprint arXiv:2506.10857},
      year={2025}
    }