The goal of emulating biology has stimulated many investigations into properties that lead to robust assembly. Much of the work on designing self-assembly pursues a strategy of tuning the interactions among the various components to stabilize a given target structure. In this talk, I will discuss some work that approaches this problem from a distinct viewpoint, namely, through the lens of nonequilibrium control processes in which external perturbations are tuned to drive the assembly dynamics to states unreachable in equilibrium. I will discuss computational strategies for carrying out an optimization to control the steady state of interacting particle systems using only external fields. In addition, I will introduce a framework, based on observable properties alone, to quantify the dissipative costs of maintaining a nonequilibrium steady state.
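The control problem above can be illustrated with a deliberately minimal sketch: a single overdamped Langevin particle whose "external field" is just the center `c` of a harmonic trap, adjusted iteratively so that an observable of the steady state (here the mean position) matches a target value. The feedback rule and all names are illustrative assumptions, not the optimization scheme of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def steady_state_mean(c, n_steps=20000, dt=0.01, beta=1.0):
    """Estimate the steady-state mean position of overdamped Langevin
    dynamics in the controlled potential U(x) = (x - c)^2 / 2,
    via Euler-Maruyama integration (first half discarded as burn-in)."""
    x = 0.0
    samples = []
    for t in range(n_steps):
        x += -(x - c) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        if t > n_steps // 2:
            samples.append(x)
    return np.mean(samples)

target = 1.0   # desired steady-state mean position
c = 0.0        # control field parameter, to be tuned
for it in range(20):
    m = steady_state_mean(c)
    c += 0.5 * (target - m)   # simple feedback update toward the target
```

For this toy potential the steady-state mean equals `c`, so the iteration converges to `c` near the target; in an interacting many-particle system the same loop structure would wrap a far more expensive simulation and a gradient-based update.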

In probability theory, the notion of weak convergence is often used to characterize when two probability distributions are equivalent: the average values of well-behaved test functions must agree under the two distributions being compared. In coarse-grained modeling, Noid and Voth developed a thermodynamic equivalence principle that has a similar requirement. Nevertheless, there are many functions of the fine-grained system that we simply cannot evaluate on the coarse-grained degrees of freedom. In this talk, I will describe an approach that combines accelerated sampling of a coarse-grained model with invertible neural networks to invert a coarse-graining map in a statistically precise fashion. I will show that for non-trivial biomolecular systems, we can recover the fine-grained free energy surface from coarse-grained sampling.
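One ingredient mentioned above, invertible neural networks, can be illustrated with a single RealNVP-style affine coupling layer: it is invertible by construction, which is what makes a statistically exact map between coarse- and fine-grained coordinates conceivable. The split of the input into `x1`/`x2` (loosely, retained versus reconstructed coordinates) and all parameter names are illustrative assumptions, not the architecture used in the talk.

```python
import numpy as np

class AffineCoupling:
    """One affine coupling layer: x1 passes through unchanged and
    parameterizes a scale/shift applied to x2, so the map is exactly
    invertible regardless of the network weights."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_s = 0.1 * rng.normal(size=(dim, dim))  # log-scale weights
        self.W_t = 0.1 * rng.normal(size=(dim, dim))  # shift weights

    def forward(self, x1, x2):
        s = np.tanh(x1 @ self.W_s)   # bounded log-scale for stability
        t = x1 @ self.W_t
        return x1, x2 * np.exp(s) + t

    def inverse(self, y1, y2):
        s = np.tanh(y1 @ self.W_s)
        t = y1 @ self.W_t
        return y1, (y2 - t) * np.exp(-s)

layer = AffineCoupling(dim=3)
x1, x2 = np.random.default_rng(1).normal(size=(2, 3))
y1, y2 = layer.forward(x1, x2)
z1, z2 = layer.inverse(y1, y2)   # recovers x1, x2 up to floating point
```

Stacking such layers (alternating which half is transformed) gives a flexible yet exactly invertible network, and the Jacobian of each layer is triangular, so likelihoods remain tractable.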

In many applications in computational physics and chemistry, we seek to estimate expectation values of observables that yield mechanistic insight about reactions, transitions, and other “rare” events. These problems are often plagued by metastability; slow relaxation between metastable basins leads to slow convergence of estimators of such expectations. In this talk, I will focus on efforts to exploit developments in generative modeling to sample distributions that are challenging to sample with local dynamics (e.g., MCMC or molecular dynamics) due to metastability. I will discuss the problem of sampling when there is not a large, pre-existing data set on which to train. By simultaneously sampling with traditional methods and learning a sampler, we assess the prospects of neural network driven sampling to accelerate convergence and to aid exploration of high-dimensional distributions. This is joint work with Marylou Gabrié and Eric Vanden-Eijnden.
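The idea of simultaneously sampling with traditional methods and learning a sampler can be sketched, under strong simplifications, as an adaptive MCMC scheme: local random-walk moves alternate with independence proposals drawn from a distribution that is periodically refit to the samples collected so far. Here the "learned" sampler is just a Gaussian rather than a generative neural network, and the target is a one-dimensional bimodal stand-in for a metastable distribution; every detail is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Bimodal target: equal mixture of unit Gaussians at -3 and +3."""
    return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

x = -3.0                 # start stuck in the left mode
mu, sigma = 0.0, 5.0     # parameters of the adaptive proposal
samples = []
for step in range(20000):
    if step % 2 == 0:    # local random-walk (metastable) move
        prop = x + 0.5 * rng.normal()
        log_ratio = log_target(prop) - log_target(x)
    else:                # independence move from the adapted proposal
        prop = mu + sigma * rng.normal()
        logq = lambda z: -0.5 * ((z - mu) / sigma) ** 2
        log_ratio = (log_target(prop) - log_target(x)
                     + logq(x) - logq(prop))
    if np.log(rng.uniform()) < log_ratio:
        x = prop
    samples.append(x)
    if step % 1000 == 999:  # periodically "train" the sampler on the data so far
        mu, sigma = np.mean(samples), np.std(samples) + 2.0

frac_right = np.mean(np.array(samples[10000:]) > 0)  # close to 0.5 once mixing
```

The local moves alone essentially never cross the barrier, while the adapted global proposal restores mode-hopping; replacing the Gaussian with a normalizing flow trained on the running samples is the neural-network version of the same feedback loop.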

A pedagogical overview of the mean-field approach to studying the dynamics and trainability of neural networks.

The surprising flexibility and undeniable empirical success of machine learning algorithms have inspired many theoretical explanations for the efficacy of neural networks. Here, I will briefly introduce one perspective that provides not only asymptotic guarantees of trainability and accuracy in high-dimensional learning problems, but also some prescriptions and design principles for learning. Bolstered by the favorable scaling of these algorithms in high-dimensional problems, I will turn to a central problem in computational condensed matter physics—that of computing reaction pathways. From the perspective of an applied mathematician, these problems typically appear hopeless; they are not only high-dimensional, but also dominated by rare events. However, with neural networks in the toolkit, at least the dimensionality is somewhat less intimidating. I will describe an algorithm that combines stochastic gradient descent with importance sampling to optimize a function representation of a reaction pathway for an arbitrary system. Finally, I will provide numerical evidence of the power and limitations of this approach.
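A toy version of the path-optimization idea, under heavy simplification: represent a path between the two minima of a 2D potential with a small sine-series ansatz and run stochastic gradient descent on randomly sampled points along the path (plain uniform sampling stands in for the importance sampling scheme described above, and the sine series stands in for a neural-network function representation). The potential, ansatz, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x, y):
    """Toy two-well potential; minima near (-1, 0.5) and (1, 0.5),
    with the low-energy channel bending along y = x**2 / 2."""
    return (x**2 - 1)**2 + 5.0 * (y - 0.5 * x**2)**2

K = 4                # number of sine modes in the path ansatz
c = np.zeros(K)      # coefficients to be learned

def path(t, c):
    """Path from (-1, 0.5) to (1, 0.5); endpoints are pinned because
    sin(k*pi*t) vanishes at t = 0 and t = 1."""
    x = -1.0 + 2.0 * t
    y = 0.5 + sum(c[k] * np.sin((k + 1) * np.pi * t) for k in range(K))
    return x, y

def grad(c, t):
    """Finite-difference gradient of V(path(t)) w.r.t. the coefficients."""
    g, eps = np.zeros(K), 1e-5
    for k in range(K):
        cp, cm = c.copy(), c.copy()
        cp[k] += eps
        cm[k] -= eps
        g[k] = (V(*path(t, cp)) - V(*path(t, cm))) / (2 * eps)
    return g

for step in range(2000):            # SGD on the mean energy along the path
    t = rng.uniform(size=8)         # minibatch of random points on the path
    c -= 0.05 * np.mean([grad(c, ti) for ti in t], axis=0)

ts = np.linspace(0, 1, 101)
E_opt = np.mean([V(*path(t, c)) for t in ts])
E_straight = np.mean([V(*path(t, np.zeros(K))) for t in ts])  # E_opt is lower
```

The optimized path bends into the curved channel and substantially lowers the mean energy relative to the straight-line guess; the talk's setting replaces the sine series with a neural network and biases the sampled points toward the rare, high-barrier regions that dominate the objective.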

Grant M. Rotskoff 2023
