System 2 Reasoning At Scale
December 15, 2024. NeurIPS Workshop, Vancouver, Canada
System 2 Reasoning At Scale is a one-day workshop focused on improving reasoning in neural networks, particularly the challenges and strategies for achieving System-2 reasoning in transformer-like models. The workshop addresses issues such as distinguishing memorization from rule-based learning, understanding syntactic generalization and compositionality, and assessing how systematic models are in their decisions, which matters for AI safety. It also covers integrating neural networks with symbolic reasoning and developing new architectures for enhanced reasoning capabilities.
Tentative Schedule
| Time | Event |
|---|---|
| 8:55 - 9:00 | Opening Remarks |
| 9:00 - 9:30 | Keynote 1: Tal Linzen |
| 9:30 - 10:00 | Lightning Talks (4 papers x 5) |
| 10:00 - 10:35 | Keynote 2: Joshua Tenenbaum |
| 10:35 - 10:55 | Coffee Break & Posters |
| 10:55 - 11:30 | Keynote 3: Melanie Mitchell |
| 11:30 - 1:00 | Poster Session |
| 1:00 - 2:00 | Lunch Break |
| 2:00 - 2:35 | Keynote 4: Jason Weston |
| 2:35 - 2:45 | ARC Basis |
| 2:45 - 3:00 | Break & Posters |
| 3:00 - 3:35 | Keynote 5: François Chollet |
| 3:35 - 5:00 | Panel |
| 5:00 - 5:30 | Posters & Social |
Speakers
We already have an exciting lineup of speakers who will be discussing the challenges of integrating neural networks with symbolic reasoning and developing new architectures for enhanced reasoning capabilities. Stay tuned for more to come soon!
Melanie Mitchell (Santa Fe Institute)
Joshua Tenenbaum (Massachusetts Institute of Technology)
Tal Linzen (NYU/Google)
Jason Weston (Meta/NYU)
Panelists
We will have a panel discussion on the challenges of integrating neural networks with symbolic reasoning and developing better AI models. More panelists to be confirmed!
Santa Fe Institute
Massachusetts Institute of Technology
ServiceNow
NYU/Google
Meta/NYU
Organizers
Stanford
OpenEvidence
Stanford
AI2
MIT
Princeton
Stanford
University of Washington / AI2
Call for Papers
Authors are welcome to submit a 4-page (short) or 8-page (long) paper based on in-progress work, or a relevant paper being presented at the main conference, that aims to answer the following questions:
- What do we need to imbue language models with System-2 reasoning capabilities?
- Do we need this kind of capability?
- Are scale and the “bitter lesson” going to dictate how the future of AI technology will unfold?
- Do we need a different mechanism for implementing System-2 reasoning, or should it be a property that emerges from a possibly different training method?
- Where should a system like this be implemented? Implicitly inside the model, or explicitly in some engineered system around the model, like search or graph of thought?
- How do we benchmark System-2-like generalization? How do we avoid data contamination?
We welcome review and position papers that may foster discussion. Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations. Accepted papers will be made publicly available as *non-archival* reports, allowing future submission to archival conferences or journals.
Submission Guidelines
Please upload submissions on OpenReview.
All submissions must be in PDF format and must be formatted using the NeurIPS 2024 LaTeX style file. We accept both short (4 pages) and long (8 pages) papers. Short papers generally make a point that can be described in a few pages: for example, a small, focused contribution, a negative result, an opinion piece, or an interesting application nugget. The page limit includes all figures and tables; additional pages containing acknowledgments, funding disclosures, and references are allowed. The OpenReview-based review process will be double-blind to avoid potential conflicts of interest.
Arbitrary-length appendices are allowed at the end of the paper. Note that reviewers are encouraged but not required to review them.
Submission deadline: Sept 26, 2024 (extended)
Notification: Oct 9, 2024
In case of any issues, feel free to email the workshop organizers at: s2-reasoning-neurips2024@googlegroups.com.