System 2 Reasoning At Scale

December 15, 2024. NeurIPS Workshop, Vancouver, Canada


System 2 Reasoning At Scale is a one-day workshop focused on improving reasoning in neural networks, particularly the challenges of and strategies for achieving System-2 reasoning in transformer-like models. The workshop addresses issues such as distinguishing memorization from rule-based learning, understanding syntactic generalization and compositionality, and assessing how systematic models are in their decisions, a question that matters for AI safety. It also covers integrating neural networks with symbolic reasoning and developing new architectures for enhanced reasoning capabilities.



Call for Papers

Authors are welcome to submit a 4-page (short) or 8-page (long) paper based on in-progress work, or a relevant paper being presented at the main conference, that addresses the following questions:

  • What do we need to imbue language models with System-2 reasoning capabilities?
  • Do we need this kind of capability?
  • Are scale and the “bitter lesson” going to dictate how the future of AI technology will unfold?
  • Do we need a different mechanism for implementing System-2 reasoning, or should it be a property that emerges from a possibly different training method?
  • Where should a system like this be implemented? Implicitly inside the model, or explicitly in some engineered system around the model, like search or graph of thought?
  • How do we benchmark System-2-like generalization? How do we avoid data contamination?

We also welcome review and position papers that may foster discussion. Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations. Accepted papers will be made publicly available as *non-archival* reports, allowing future submission to archival conferences or journals.


Submission Guidelines

Please upload submissions on OpenReview.

All submissions must be in PDF format and must be formatted using the NeurIPS 2024 LaTeX style file. We accept both short (4 pages) and long (8 pages) papers. Short papers should make a point that can be described in a few pages: for example, a small, focused contribution, a negative result, an opinion piece, or an interesting application nugget. The page limit includes all figures and tables; additional pages containing acknowledgments, funding disclosures, and references are allowed. The OpenReview-based review process will be double-blind to avoid potential conflicts of interest.

Arbitrary-length appendices are allowed at the end of the paper. Note that reviewers are encouraged but not required to review them.

Submission deadline: Sept 23, 2024

Notification: Oct 9, 2024

In case of any issues, feel free to email the workshop organizers at: s2-reasoning-neurips2024@googlegroups.com.


Speakers

We already have an exciting lineup of speakers who will discuss the challenges of integrating neural networks with symbolic reasoning and of developing new architectures for enhanced reasoning capabilities. Stay tuned for more to come soon!

Melanie Mitchell
Santa Fe Institute
Joshua Tenenbaum
Massachusetts Institute of Technology
Tal Linzen
NYU/Google
Jason Weston
Meta/NYU

Panelists

We will have a panel discussion on the challenges of integrating neural networks with symbolic reasoning and developing better AI models. More panelists to be confirmed!

Melanie Mitchell
Santa Fe Institute
Joshua Tenenbaum
Massachusetts Institute of Technology
Dzmitry Bahdanau
ServiceNow
Tal Linzen
NYU/Google

Tentative Schedule

Time Event
8:55 - 9:00 Opening Remarks
9:00 - 9:50 Keynote 1: Tal Linzen
9:50 - 10:40 Keynote 2: Joshua Tenenbaum
10:40 - 11:00 Coffee Break
11:00 - 11:50 Keynote 3: Melanie Mitchell
11:50 - 12:50 Poster Session
12:50 - 13:50 Lunch Break
13:50 - 14:40 Keynote 4: Jason Weston
14:40 - 15:00 Break
15:00 - 15:50 Keynote 5: François Chollet
15:50 - 16:00 Closing Remarks
16:00 - 17:30 Panel

Organizers

Shikhar Murty
Stanford
Federico Bianchi
OpenEvidence
Shunyu Yao
Princeton
Yejin Choi
University of Washington / AI2