System 2 Reasoning At Scale

December 15, 2024, West Ballroom B. NeurIPS Workshop, Vancouver, Canada


System 2 Reasoning At Scale is a one-day workshop focused on improving reasoning in neural networks, particularly the challenges and strategies for achieving System-2 reasoning in transformer-based models. The workshop addresses issues such as distinguishing memorization from rule-based learning, understanding syntactic generalization, and compositionality. It also covers the importance, for AI safety, of understanding how systematic models are in their decisions, as well as integrating neural networks with symbolic reasoning and developing new architectures for enhanced reasoning capabilities.



Tentative Schedule

Time Event
9:00 - 9:15 Poster Setup
9:15 - 9:20 Opening Remarks
9:20 - 9:30 Lightning Talk: softmax is not enough (for sharp out-of-distribution)
9:30 - 9:40 Lightning Talk: Compositional Generalization Across Distributional Shifts with Sparse Tree Operations
9:40 - 9:50 Lightning Talk: System 1.5: Designing Metacognition in Artificial Intelligence
9:50 - 10:00 Lightning Talk: Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning
10:00 - 10:35 Keynote 1: Joshua Tenenbaum
10:35 - 10:55 Coffee break & Poster
10:55 - 11:30 Keynote 2: Melanie Mitchell: On Understanding and Abstraction in Humans and AI Systems
11:30 - 1:00 Poster Session
1:00 - 2:00 Lunch Break
2:00 - 2:35 Keynote 3: Jason Weston: Self-Training Methods for System 2 Reasoning
2:35 - 2:45 Basis
2:45 - 3:00 Break & Poster
3:00 - 3:35 Keynote 4: François Chollet: ARC Prize 2024: What we learned
3:35 - 5:00 Panel
5:00 - 5:30 Poster & Social

Speakers

We already have an exciting lineup of speakers who will discuss the challenges of integrating neural networks with symbolic reasoning and developing new architectures for enhanced reasoning capabilities. Stay tuned for more to come soon!

Melanie Mitchell
Santa Fe Institute
Joshua Tenenbaum
Massachusetts Institute of Technology
Jason Weston
Meta/NYU

Panelists

We will have a panel discussion on the challenges of integrating neural networks with symbolic reasoning and developing better AI models! More panelists to be confirmed!

Melanie Mitchell
Santa Fe Institute
Joshua Tenenbaum
Massachusetts Institute of Technology
Dzmitry Bahdanau
ServiceNow
Jason Weston
Meta/NYU

Organizers

Shikhar Murty
Stanford
Federico Bianchi
Formerly Stanford
Shunyu Yao
Princeton
Yejin Choi
University of Washington / AI2

Call for Papers

Authors are welcome to submit a 4-page (short) or 8-page (long) paper based on in-progress work, or a relevant paper being presented at the main conference, that aims to answer the following questions:

  • What do we need to imbue language models with System-2 reasoning capabilities?
  • Do we need this kind of capability?
  • Are scale and the “bitter lesson” going to dictate how the future of AI technology will unfold?
  • Do we need a different mechanism for implementing System-2 reasoning, or should it be a property that emerges from a possibly different training method?
  • Where should a system like this be implemented? Implicitly inside the model, or explicitly in some engineered system around the model, like search or graph of thought?
  • How do we benchmark System-2-like generalization? How do we avoid data contamination?

We welcome review and position papers that may foster discussion. Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations. Accepted papers will be made publicly available as *non-archival* reports, allowing future submission to archival conferences or journals.


Submission Guidelines

Please upload submissions on OpenReview.

All submissions must be in PDF format and must be formatted using the NeurIPS 2024 LaTeX style file. We accept both short (4-page) and long (8-page) papers. A short paper generally makes a point that can be described in a few pages: for example, a small, focused contribution, a negative result, an opinion piece, or an interesting application nugget. The page limit includes all figures and tables; additional pages containing acknowledgments, funding disclosures, and references are allowed. The OpenReview-based review process will be double-blind to avoid potential conflicts of interest.

Arbitrary-length appendices are allowed at the end of the paper. Note that reviewers are encouraged but not required to review them.

Submission deadline: Sept 26, 2024 (extended)

Notification: Oct 9, 2024

In case of any issues, feel free to email the workshop organizers at: s2-reasoning-neurips2024@googlegroups.com.


Sponsors