System 2 Reasoning At Scale
December 2025, NeurIPS Workshop, TBD
System 2 Reasoning At Scale is a one-day workshop focused on improving reasoning in neural networks, in particular the challenges and strategies for achieving System-2 reasoning in transformer-like models. The workshop addresses issues such as distinguishing memorization from rule-based learning, and understanding syntactic generalization and compositionality. It also covers the importance, for AI safety, of understanding how systematic models are in their decisions, integrating neural networks with symbolic reasoning, and developing new architectures for enhanced reasoning capabilities.
Tentative Schedule
Coming soon!
Speakers
TBD
Panelists
TBD
Organizers
TBD (organizers affiliated with Stanford, TogetherAI, AI2, and MIT)
Call for Papers
Authors are welcome to submit a 4-page (short) or 8-page (long) paper based on in-progress work, or a relevant paper being presented at the main conference, that aims to answer the following questions:
- What do we need in order to imbue language models with System-2 reasoning capabilities?
- Do we need this kind of capability?
- Are scale and the "bitter lesson" going to dictate how the future of AI technology will unfold?
- Do we need a different mechanism for implementing System-2 reasoning, or should it be a property that emerges from a possibly different training method?
- Where should a system like this be implemented? Implicitly inside the model, or explicitly in some engineered system around the model, like search or graph of thought?
- How do we benchmark System-2-like generalization? How do we avoid data contamination?
We also welcome review and position papers that may foster discussion. Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations. Accepted papers will be made publicly available as *non-archival* reports, allowing future submission to archival conferences or journals.
Submission Guidelines
TBD
Sponsors
TBD