You may also download or print a PDF copy of the full programme below:
🖨️ View / Print Full Programme (PDF)
Date: 8 – 11 July 2025
Venue: NUS University Town
Daily Programme Schedule
Morning Sessions
All morning sessions are to be held at Auditorium 1
| Time | Speaker | Title | Abstract | Topic |
|---|---|---|---|---|
| 8:00am | Registration | |||
| 8:50am | Opening Remarks | |||
| Session Chair: Kristin Persson | ||||
| 9:00–9:40am | Gábor Csányi | Machine learning force fields show extreme generalisation |
I will introduce the general problem of first-principles force fields: creating surrogate models for quantum mechanics that yield the energy of a configuration of atoms in 3D space, as we would find them in materials or molecules. Over the last decade significant advances have been made in the attainable accuracy, and today we can model materials and molecules with a per-atom energy accuracy of up to 1 part in 10,000 and a speedup of a million or more compared to the explicit quantum mechanical calculation, enabling molecular dynamics on large length and time scales. The most surprising aspect of the best model is its extreme generalisation: fitted only on small periodic crystals, it shows stable trajectories on arbitrary chemical systems, from water to nanoparticles and proteins. I will show some of the technical details behind the success of our models: equivariant many-body graph polynomials with very few and weak nonlinearities. The relationship between the architectural elements and the extreme generalisation is still largely a mystery. The locality of the graph structure is key to its success, as are high body order and message passing. The force fields get significantly better with more data, yet model size and complexity can remain largely the same. Integrating explicit long-range electrostatics with such general "foundational" force fields remains a challenge, as does combining large datasets for materials and organic molecules, owing to the incompatibility of leading DFT approximations.
|
AI for Chemistry |
| 9:40–10:00am | Rafael Gómez-Bombarelli | ML Gradients for Computational Chemistry |
In the physical sciences, ML has found great synergy with physics-based simulations. Simulations can provide abundant, reproducible and scalable training data, while machine learning models can act as surrogates of expensive simulators, allowing improvements in speed and in the accessible length- and time-scales. In this talk, we describe recent work on the use of differentiable programming and deep learning models to fuse optimization and learning in molecular simulations, including the use of differentiable uncertainty to power active learning, alchemical ML potentials that capture atomic disorder in solids, learning collective variables or transport operators for accelerated simulations, and using discrete generative models and reinforcement learning for surface reconstruction. Lastly, we encourage a discussion around the need to relate (ML-accelerated) simulations to tangible impact in chemicals and materials.
|
AI for Chemistry |
| 10:00–10:20am | Boris Kozinsky | Physics-informed digital twins of materials systems |
Discovery and understanding of next-generation materials requires a challenging combination of the high accuracy of first-principles calculations with the ability to reach large size and time scales. We pursue a multi-tier development strategy in which machine learning algorithms are combined with exact physical symmetries and constraints to significantly accelerate computations of electronic structure and atomistic dynamics. First, current DFT approximations fall short of the required accuracy and efficiency for predictive calculations of defect properties, band gaps, stability and electrochemical potentials of materials for energy storage and conversion. To advance the capability of DFT we introduce non-local charge density descriptors that satisfy exact constraints and learn exchange-correlation functionals called CIDER. These models are orders of magnitude faster in self-consistent calculations for solids than hybrid functionals but similar in accuracy. On a different level, we introduced equivariant neural network interatomic potentials (examples include NequIP, Allegro, SevenNet, GNoME, MACE) that are transforming how MD simulations are used to describe and design complex and reactive systems. We developed machine learning models for generalized potential and coarse-grained free energy functions with arbitrary dependence on external fields and temperature. We apply and demonstrate these methods via first principles ML MD simulations of dynamics of phase transformations, heterogeneous reactions, ferroelectric transitions, nuclear quantum effects, and soft materials.
|
AI for Chemistry |
| Tea Break | ||||
| Session Chair: Rafael Gómez-Bombarelli | ||||
| 10:40–11:20am | Weinan E | Building AI-Powered Infrastructure for Scientific Research |
I will discuss the progress we have made towards building a new generation of AI-powered infrastructure for scientific research.
|
AI for Science |
| 11:20–11:40am | Marin Soljačić | Novel computing paradigms |
Certain novel schemes for computing that use photons (instead of electrons).
|
Unconventional Computing |
| 11:40am–12:00pm | Richard Parker | AI for MRO (Maintenance, Repair and Overhaul) |
N/A
|
AI for Science |
| Break | ||||
| Session Chair: Boris Kozinsky | ||||
| 12:20–1:00pm | Gerbrand Ceder | AI and autonomous laboratories for materials synthesis |
Computational materials science has seen tremendous progress since the early days of density functional theory (DFT). Stable algorithms enabled high-throughput computing, which in turn enabled machine-learned potentials (MLPs). Though far from perfect at this point, MLPs hold tremendous promise for accelerating materials simulation and discovery. Such progress has not been paralleled on the experimental side, making experiment the gating factor in materials development. In response we built the A-Lab, an autonomous facility for the closed-loop synthesis of inorganic materials from powder precursors. All synthesis and characterization actions in the A-Lab, including powder mixing and grinding, firing, characterization by XRD and SEM, and all sample transfers between them, are fully automated, leading to a lab that can synthesize and structurally characterize compounds within 10–20 hours of initiation. The A-Lab leverages ab-initio computations through an API with the Materials Project, historical data sets text-mined from the literature, machine learning for optimization of synthesis routes and interpretation of characterization data, and active learning to plan and interpret the outcomes of experiments performed using robotics. The automation of synthesis and analysis can be further integrated into scientific workflows similar to computational workflows.
|
Self-Driving Labs |
| 1:00–1:40pm | Kristin Persson | Fueling The Era of Data-Driven Materials Design and Synthesis |
Fueled by the increased availability of materials data, machine learning is poised to revolutionize materials science by enabling accelerated discovery, design, and optimization of materials. As one of the first and most visible materials data providers, the Materials Project (www.materialsproject.org) uses supercomputing and an industry-standard software infrastructure together with state-of-the-art quantum mechanical theory to compute the properties of all known inorganic materials and beyond. The data, currently covering over 160,000 materials and millions of properties, are offered for free to the community together with online analysis and design algorithms. Serving a rapidly expanding community of more than 600,000 registered users, the Materials Project delivers millions of data records daily through its API, fostering data-rich research across materials science. This wealth of data is inspiring the development of machine learning algorithms aimed at predicting material properties, characteristics, and synthesizability. However, truly accelerating materials innovation also requires rapid synthesis, testing and feedback, seamlessly coupled to existing data-driven predictions and computations. Data-driven methodologies to guide synthesis efforts are needed, as well as rapid interrogation and recording of results, including ‘non-successful’ ones. This talk will outline the rise of data-driven materials design and predictive synthesis, showcase successes, and comment on current pitfalls and future directions.
|
AI for Chemistry |
| Lunch | ||||
Afternoon Sessions
(Parallel Sessions)
Morning Sessions
All morning sessions are to be held at Auditorium 1
| Time | Speaker | Title | Abstract | Topic |
|---|---|---|---|---|
| Session Chair: Zhang Yang | ||||
| 9:00–9:40am | Klaus Robert Müller | Explainable AI for the Sciences |
In recent years, machine learning (ML) and artificial intelligence (AI) methods have begun to play an increasingly enabling role in the sciences and in industry. In particular, the advent of large and/or complex data corpora has given rise to new technological challenges and possibilities. In his talk, Müller will touch upon the topic of ML applications in the sciences, in particular in medicine, physics, and chemistry. He will focus on techniques from explainable AI and their use for extracting information from machine learning models in order to further our understanding by explaining nonlinear ML models. Finally, Müller will briefly discuss perspectives and limitations.
|
AI for Science |
| 9:40–10:00am | Jack Wells | NVIDIA’s Role in Supporting Scientific Computing Infrastructure |
NVIDIA, as a full-stack computing platform company, is at the forefront of accelerating scientific discovery by providing comprehensive solutions that span hardware, software, and cloud services. The increasing diversity and scale of scientific applications, and the introduction of AI into scientific applications and workflows, introduce significant complexity to scientific computing infrastructure. NVIDIA addresses these challenges through the development of microservices, reference architectures & workflows, and AI development frameworks. By abstracting away complexity, NVIDIA enables scientists to focus on research rather than computing infrastructure management. Streamlined deployment and optimized performance shorten the time from hypothesis to discovery. NVIDIA’s ongoing scientific software development exemplifies its commitment to accelerating scientific discovery. These solutions empower researchers to harness the full potential of AI and high-performance computing, driving faster and more impactful scientific breakthroughs.
|
AI for Science |
| 10:00–10:20am | Yu Xie | Scalable emulation of protein equilibrium ensembles with generative deep learning |
Following the sequence and structure revolutions, predicting the dynamical mechanisms of proteins that implement biological function remains an outstanding scientific challenge. Several experimental techniques and molecular dynamics (MD) simulations can, in principle, determine conformational states, binding configurations and their probabilities, but suffer from low throughput. Here we develop a Biomolecular Emulator (BioEmu), a generative deep learning system that can generate thousands of statistically independent samples from the protein structure ensemble per hour on a single graphics processing unit. By leveraging novel training methods and vast data (protein structures, over 200 milliseconds of MD simulation, and experimental protein stabilities), BioEmu's protein ensembles represent equilibrium across a range of challenging and practically relevant metrics. Qualitatively, BioEmu samples many functionally relevant conformational changes, ranging from the formation of cryptic pockets, through the unfolding of specific protein regions, to large-scale domain rearrangements. Quantitatively, BioEmu samples protein conformations with relative free energy errors of around 1 kcal/mol, as validated against millisecond-timescale MD simulations and experimentally measured protein stabilities. By simultaneously emulating structural ensembles and thermodynamic properties, BioEmu reveals mechanistic insights, such as the causes of fold destabilization of mutants, and can efficiently provide experimentally testable hypotheses.
|
AI for Biology |
| Tea Break | ||||
| Session Chair: Ron Dror | ||||
| 10:40–11:20am | Alex Aliper | From Algorithm to Human Clinical Trials: Accelerating Drug Discovery and Development With Generative AI and Robotics |
In this talk we will cover the application of AI to disease modeling, target discovery, indication prioritization, indication expansion, and small-molecule drug design. We will explore key case studies and cover the current state of the industry, highlighting its limitations, bottlenecks, and opportunities for advancing drug discovery. We will also discuss the applications of generative AI to the development of foundation models for chemistry and multi-species multi-omics life models for aging and fundamental biological research.
|
AI for Biology |
| 11:20–11:40am | Wei Lu | Compute-in-memory devices and architectures for efficient information processing |
Modern computing needs are increasingly limited by the latency and energy costs of memory access. Emerging memory devices such as resistive random-access memory (RRAM) have shown potential to enable efficient computing architectures, as data can be mapped to the conductance values of RRAM devices and computation can be performed directly in-memory. Specifically, by converting input activations into voltage pulses, vector-matrix multiplications (VMM) can be performed in the analog domain, in place and in parallel, thus achieving high energy efficiency during operation. In this presentation, I will discuss how practical neural network models can be mapped onto realistic RRAM arrays in a modular design. System performance metrics, including throughput and energy efficiency, will be discussed. Challenges such as quantization effects, finite array size, and device non-idealities will be analyzed, and techniques such as fine-grained structured pruning and tensor-train factoring will be explored to address memory capacity concerns. At the architecture level, an effective compiler needs to be developed to map the network graph onto the tiled weight-stationary architecture, and examples from different generations of networks will be presented.
|
Unconventional Computing |
| 11:40–12:00pm | Hsin-Yuan Huang | The vast world of quantum advantage |
While quantum devices promise extraordinary capabilities, discerning genuine advantages from mere illusions remains a formidable challenge. In this endeavor, quantum theorists are like prophets, striving to foretell a future where quantum technologies will transform our world. Most people understand quantum advantage primarily as offering faster computation. This talk explores the vast world of quantum advantages that extends far beyond computation to include fundamental advantages in learning, sensing, cryptography, and memory storage. I will also demonstrate how some quantum advantages are inherently unpredictable using classical resources alone, suggesting a landscape far richer than currently envisioned. As quantum technologies proliferate, these unpredictable advantages may enable transformative applications beyond the reach of our classical imagination.
|
Unconventional Computing |
| 12:00–12:20pm | Yujie Huang | KronosAI |
KronosAI has achieved state-of-the-art results for silicon photonics. We are building foundation model-based physics solvers that are faster, more accurate and more generalizable than traditional numerical solvers. Yujie will discuss insights gained from KronosAI's training, and how this work is paving the way for sustainable, eco-friendly engineering by integrating inverse design and design space exploration into everyday practice.
|
AI for Physics |
| Break | ||||
| Poster Session & Lunch | ||||
Afternoon Sessions
(Parallel Sessions)
Morning Sessions
All morning sessions are to be held at Auditorium 1
| Time | Speaker | Title | Abstract | Topic |
|---|---|---|---|---|
| Session Chair: Terence O'Kane | ||||
| 9:00–9:40am | Stan Posey | Directions in Energy Efficient AI for Driving Earth Digital Twins |
AI is becoming a critical component of Earth system science workflows that are experiencing rapid growth in data from model output of increasing resolution in weather and climate models, and Earth observation systems that produce orders of magnitude more data than their previous generations. Efforts are underway in the weather and climate modeling community to refine the horizontal resolution of atmospheric GCMs toward km-scale, to explicitly resolve certain small-scale convective cloud processes and provide more realistic local information on climate change. At the same time, exascale HPC systems have arrived and in most cases are designed with GPU accelerator technology that offers reasonable simulation turnaround times balanced with efficiency in energy consumption. Ultimately, output from global storm-resolving models at km-scale will become the essential driver behind the deployment of Earth digital twins for programs like the EC Destination Earth and NVIDIA Earth-2. For model emulation of the Earth system, AI models become increasingly accurate as they train on more data, yet computational and storage requirements in data-distributed computing environments with energy-efficiency considerations are the current challenges for the HPC vendor community. This talk will describe advances in HPC for GPU-accelerated numerical models, AI software and system features for large-scale data handling, and ML model training and inference that, when combined, provide the critical components towards the vision of Earth system digital twins.
|
AI for Climate and Weather |
| 9:40–10:00am | David Finkelshtein | Quantitative Investing in the Era of AI Revolution | | AI for Finance |
| 10:00–10:20am | Antonio Helio Castro Neto | From 2D to 3D: from semiconductors to cement | | AI for Materials Science |
| 10:20–10:40am – Tea Break | ||||
| Session Chair: Megan Stanley | ||||
| 10:40–11:00am | Chen Chen | Preparations for Next-Generation Weather and Climate Modelling at the Centre for Climate Research Singapore (CCRS) |
The tropical urban nature of the weather and climate in the Singapore region places particular requirements on future observations, models and IT infrastructure. Significant progress has been made in recent years to meet these requirements through added-value km-scale Numerical Weather Prediction (NWP) and regional climate projections based largely on physical climate modelling. However, recent advances in AI4Weather/Climate create exciting opportunities for further added value. This talk will provide an overview of CCRS' plans to move from the current-generation physical climate/weather modelling system (based on the Unified Model, UM) to next-generation weather and climate modelling approaches employing a combination of physical, hybrid AI and fully data-driven models suitable for applications in the Singapore region.
|
AI for Climate and Weather |
| 11:00–11:20am | Seok Min Lim | AI x Cybersecurity | ||
| 11:20am–12:40pm – Poster Session | ||||
| 12:40pm – Lunch | ||||
Afternoon Sessions
(Parallel Sessions)
| 4:00–4:20pm – Tea Break |
Morning Sessions
All morning sessions are to be held at Auditorium 1
| Time | Speaker | Title | Abstract | Topic |
|---|---|---|---|---|
| Session Chair: Teck Leong Tan | ||||
| 9:00–9:20am | Xavier Bresson | Graph Transformers for Molecular Science -- Overcoming Limitations in Graph Representation Learning |
Graph Neural Networks (GNNs) have shown great potential in graph representation learning but are limited by over-squashing and poor long-range dependency capture. In this work, we introduce Graph ViT, a novel approach that leverages Vision Transformers (ViTs). This new architecture addresses the standard challenges by effectively capturing long-range dependencies, improving memory and computational efficiency, and offering high expressive power for graph isomorphism. These advantages enable Graph ViT to outperform traditional message-passing GNNs, especially in molecular science applications.
|
ML Algorithmic Advances |
| 9:20–9:40am | Wessel Bruinsma | A Foundation Model for the Earth System: Air Pollution and Ocean Waves |
Aurora is a foundation model for the Earth system pretrained on a large and diverse collection of geophysical data. The key ability of Aurora is that the model can be fine-tuned to produce forecasts for a wide variety of environmental forecasting applications, often matching or even outperforming state-of-the-art traditional approaches at a fraction of the computational cost. In this first part of a two-part talk on Aurora, I will discuss the concept of a foundation model for the Earth system and show how Aurora can be fine-tuned to produce state-of-the-art operational forecasts for air pollution and ocean waves.
|
AI for Climate and Weather |
| 9:40–10:00am | Alexandre Tkatchenko | Realizing Schrödinger's Dream with AI-Enabled Molecular Simulations |
The convergence of accurate quantum-mechanical (QM) models (and codes) with efficient machine learning (ML) methods seems to promise a paradigm shift in molecular simulations. Many challenging applications are now being tackled by increasingly powerful QM/ML methodologies. These include modeling covalent materials, molecules, molecular crystals, surfaces, and even whole proteins in explicit water. In this talk, I will attempt to provide a reality check on these recent advances and on the developments required to enable fully quantum dynamics of complex functional (bio)molecular systems. Multiple challenges are highlighted that should keep theorists in business for the foreseeable future.
|
AI for Physics |
| 10:00–10:40am | Tea Break |||
| Session Chair: Xujie Si | ||||
| 10:40–11:00am | Yang-Hui He | The AI Mathematician |
We argue that AI can assist mathematics in three ways: theorem-proving, conjecture formulation, and language processing. Inspired by initial experiments in geometry and string theory in 2017, we summarize how this emerging field has grown over the past years, and show how various machine-learning algorithms can help with pattern detection across disciplines ranging from algebraic geometry to representation theory, to combinatorics, and to number theory. At the heart of the programme is the question of how AI helps with theoretical discovery, and the implications for the future of mathematics.
|
AI for Mathematics |
| 11:00–11:20am | Sergei Gukov | What kind of game is mathematics? |
It comes as no surprise that solving challenging research-level math problems drives progress in mathematics. What is more surprising, though, is that solving such long-standing open problems also contributes to an entirely different field: the development of the next generation AI systems. We live in an exciting time where mathematics and AI can greatly benefit each other, and the goal of the talk is to explain how and why, drawing on specific examples from knot theory and combinatorial group theory. Based on recent work with Ali Shehper, Anibal Medina-Mardones, Lucas Fagan, Bartłomiej Lewandowski, Angus Gruen, Yang Qiu, Piotr Kucharski, and Zhenghan Wang.
|
AI for Mathematics |
| 11:20–11:40am | Nicola Marzari | The electronic-structure genome of inorganic materials |
The structure and properties of inorganic materials have been extensively explored in the last decade with machine learning models built on computational databases. Typically, the descriptors are based on atomic positions, and the properties are thermodynamic quantities such as energies, forces, and stresses. This extremely successful paradigm has even given rise to foundational (i.e., universal) machine learning models. Here, we switch our attention to electronic-structure properties and electronic-structure descriptors. For this, we have built robust and reliable protocols able to automatically map the electronic structure of a material (typically calculated at the level of density-functional theory) into the exact but also minimal set of maximally localized Wannier functions. The latter provide the most compact representation of any desired manifold of electronic-structure bands. We have constructed more than 1.3M Wannier functions for 20,000+ inorganic, stoichiometric, and experimentally known materials, drawn from the Materials Cloud MC3D database. For these, we explore materials or materials combinations that could deliver optimal performance as thermoelectrics, as nonlinear Hall materials, and as heterojunctions for solar cells.
|
AI for Materials Science |
| 11:40–12:00pm | Sunny Lu | AI X Blockchain |
N/A
|
AI for Finance |
| 12:00–2:40pm – Lunch | ||||
Afternoon Sessions
(Parallel Sessions)