Speakers: Please contact us about file upload for your slides.
| Time | Speaker | Title |
|------|---------|-------|
| 11/14, 10:00am | All Participants | Welcome |
| 11/14, 10:30am | Ila Fiete (MIT) | New models of content-addressable memory from biology to transformers [Video][CC] |
| 11/14, 11:15am | Sho Yaida (Meta) | Effective Theory of Transformers [Slides][Video][CC] |
| 11/16, 9:45am | Guillaume Lajoie (Univ de Montreal - Mila, Quebec AI Institute) | Rich and Lazy neurons: network connectivity structure and the double implications of feature learning for generalization [Video][CC] |
| 11/16, 10:30am | Dmitry Krotov (MIT-IBM Watson AI Lab & IBM Research) | Dense Associative Memory for novel Transformer architectures [Video][CC] |
| 11/16, 11:15am | Yue Lu (Harvard) | Understanding the Universality Phenomenon in High-Dimensional Estimation and Learning: Some Recent Progress [Video][CC] |
| 11/17, 11:00am | Sho Yaida (Meta) | Tutorial on Transformers with Indices [Slides][Video][CC] |
| 11/20, 12:15pm | Haim Sompolinsky (Hebrew Univ.) | Statistical Mechanics of Deep Learning (KITP Blackboard Lunch) [Video] |
| 11/21, 9:45am | Tankut Can (Institute for Advanced Study) | LLM-assisted study of human memory for meaningful narratives [Video][CC] |
| 11/21, 10:30am | Gautam Reddy (Princeton) | Data dependence and abrupt transitions during in-context learning [Video][CC] |
| 11/21, 11:15am | Dan Lee (Cornell Tech) | Perceptrons Revisited [Video][CC] |
| 11/22, 9:45am | Qianyi Li (Harvard) | Beyond the Kernel Regime: Analytical Approaches to Single and Sequential Task Learning [Video][CC] |
| 11/22, 10:30am | Bruno Olshausen (UC Berkeley) | On incorporating mathematical and biological structure into neural network models [Video][CC] |
| 11/22, 11:15am | Francesca Mignacco (Princeton & CUNY) | Statistical physical insights into the dynamics of learning algorithms [Slides][Video][CC] |
| 11/22, 3:00pm | Jamie Simon (UC Berkeley) | Tutorial on "kernel/lazy" vs "rich/feature-learning" regimes in wide nets [Video] |
| 11/28, 9:45am | Mikhail Belkin (UCSD) | Toward a practical theory of deep learning: feature learning in deep neural networks and backpropagation-free algorithms that learn features [Embargoed] |
| 11/28, 10:30am | Dmitri Chklovskii (Flatiron Institute) | Reimagining the neuron as a controller: A novel model for Neuroscience and AI [Video][CC] |
| 11/28, 11:15am | Mate Lengyel (University of Cambridge) | Continual learning - the biological way [Slides][Video][CC] |
| 11/29, 11:00am | Cathy Chen (UC Berkeley) & Ariel Goldstein (Hebrew University) | Discussion Session on NLP, LLMs and the human brain |
| 11/30, 9:45am | Brice Menard (Johns Hopkins University) | Insights into how/what CNNs learn |
| 11/30, 10:30am | David Klindt (Stanford) | Identifying Interpretable Visual Features in Artificial and Biological Neural Systems [Video] |
| 11/30, 11:15am | Michael Bonner (Johns Hopkins University) | A high-dimensional view of computational neuroscience [Video][CC] |
| 12/01, 10:00am | Alexander Van Meegen (Harvard), Blake Bordelon (Harvard), Francesca Mignacco (Princeton) & Stefano Sarao Mannelli (UCL) | Dynamical Mean Field Theory for Neural Networks |
| 12/01, 1:00pm | Mikhail Belkin (UCSD) | Backpropagation-free algorithms that learn features |
| 12/05, 9:45am | Andrew Saxe (UCL) | The Neural Race Reduction: Feature learning dynamics in deep architectures [Video][CC] |
| 12/05, 10:30am | Cengiz Pehlevan (Harvard) | Translating Theory to Practical Deep Learning: Depthwise Hyperparameter Transfer [Video][CC] |
| 12/05, 11:15am | Elad Schneidman (Weizmann Institute of Science) | Learning the code of large neural populations with shallow networks and homeostatic random projections [Video][CC] |
| 12/06, 1:30pm | Jascha Sohl-Dickstein | Question and answer session on AI |
| 12/07, 9:45am | Alex Koulakov (CSHL) | Brain evolution as a machine learning algorithm [Video][CC] |
| 12/07, 10:30am | Matthieu Wyart (EPFL) | Role of compositionality of data on supervised and unsupervised learning [Embargoed] |
| 12/07, 11:15am | Gabriel Kreiman (Harvard) | Some ideas and speculation about robustness, development, and learning in brains and artificial neural networks [Video] |
| 12/08, 1:30pm | Alex Atanasov (Harvard) | Tutorial on Neural Scaling Laws [Video][CC] |
| 12/11-12/13 | Particle Theory Initiative | PTI23 x DEEPLEARNING23 |
| 12/12, 10:30am | Mackenzie Mathis (EPFL) | Foundation Models for Neuroscience: a case study with animal pose estimation |
| 12/12, 11:15am | Inbar Seroussi (Tel-Aviv University) | Can neural network training be thermodynamically optimal? [Video][CC] |
| 12/14, 9:45am | Sara Solla (Northwestern University) | From Bayes to Gibbs: a thermodynamic theory of learning [Video][CC] |
| 12/14, 10:30am | Ard Louis (University of Oxford) | Inductive bias towards simplicity and feature learning in DNNs [Video][CC] |
| 12/14, 11:15am | Yuhai Tu (IBM) | Understanding Deep Learning as a physicist: what would Einstein do? [Video][CC] |
| 12/19, 9:45am | Stefano Fusi (Columbia) | The geometry of abstraction in brain and machines [Video][CC] |
| 12/19, 10:30am | Bruno Loureiro (ENS Paris) | How two-layer neural networks learn, one (giant) step at a time [Video][CC] |
| 12/19, 11:15am | Yasaman Bahri (Google DeepMind) | A taxonomy for scaling laws in deep neural networks [Video][CC] |
| 12/21, 9:45am | Hidenori Tanaka (Harvard / NTT) | Experimental Physics of AI: Can Generative Models Imagine? [Video][CC] |
| 12/21, 10:30am | Alexander Mathis (EPFL) | Perspectives on deep learning and motor neuroscience [Video][CC] |
| 12/21, 11:15am | SueYeon Chung (NYU) | Theory of Neural Manifolds: A Multi-Level Probe of Representations in Biological and Artificial Neural Networks |