Rich Sutton’s 2019 essay “The Bitter Lesson,” written on the eve of the LLM explosion, argues that general methods that leverage computation will outperform specialized, knowledge-engineered models. In just a few years we have moved from “foundation models” for science to adopting agentic AI patterns. How will scientists work with general-purpose AI models in the future? In this talk, I will share recent work from Google Research in Science AI, where scientific rigor is of utmost importance. In a domain-specific example, we built and evaluated a retrieval-augmented generation (RAG) system to answer detailed questions in high-temperature superconductivity. In the general case, we deployed a system that evolves “scorable” scientific problems: the system iterates on computational notebooks to evolve predictive models. We find that it generalizes across diverse scientific domains such as epidemiology, seismology, and applied mathematics. I will share researchers’ experiences using the system and how their expectations of Science AI have evolved.
| 11:45am - 12:15pm: | Food and community socializing. |
| 12:15pm - 1:15pm: | Presentation with Q&A. Available hybrid via Zoom. |
| 1:30pm - 2:15pm: | Student meeting with speaker, held in the same location. |
Dan Liebling (he/him) is a Staff Research Engineer at Google Research in the Science AI group. His work brings a human-centered approach to scientific computing in areas such as computational notebooks and AI-assisted writing. Prior to his work in Science AI, Dan led a research group in human-centered machine translation at Google Research and was a member of the cross-department UW Translation Studies Advisory Board. Before Google, Dan worked on HCI and information retrieval at Microsoft Research. Dan holds an M.S. in Computer Science from the Paul G. Allen School of Computer Science and Engineering, and a B.S. in Engineering and Applied Science from Caltech, where he studied computational neuroscience.