Schedule
8:00-8:50 Breakfast & Registration, ISEB Lobby and outdoor plaza
Plenary Session I
ISEB 1010
Chair: Chris Miles (UC Irvine)
8:50-9:00 Welcome and Opening Remarks
9:00-9:40 Franca Hoffmann (Caltech)
Covariance-modulated Optimal Transport and Gradient Flows
9:45-10:25 Hayden Schaeffer (UC Los Angeles)
Randomized Methods for Data-Discovery and Dynamical Systems
9:00-9:40 Franca Hoffmann (Caltech)
Covariance-modulated Optimal Transport and Gradient Flows
We present a variant of the dynamical optimal transport problem in which the energy to be minimized is modulated by the covariance matrix of the current distribution. Such transport metrics arise naturally in mean-field limits of certain ensemble Kalman methods for solving inverse problems. We show that the transport problem splits into two coupled minimization problems, up to degrees of freedom given by rotations: one for the evolution of the mean and covariance of the interpolating curve, and one for its shape. On the level of the gradient flows, a similar splitting into the evolution of moments and shapes of the distribution can be observed. These flows exhibit better convergence properties than the classical Wasserstein metric, namely exponential convergence rates independent of the Gaussian target.
9:45-10:25 Hayden Schaeffer (UC Los Angeles)
Randomized Methods for Data-Discovery and Dynamical Systems
As the field of “artificial intelligence for scientific discovery” or “scientific machine learning” grows, so too does the need for robust, stable, and consistent algorithms. One of the long-term goals is to provide automated approaches to support and accelerate growth in data-based discovery, high-consequence decision making, and prototyping. In this talk, I will discuss sparsity-promoting random feature methods and their applications to scientific modeling and engineering design problems. These methods address some of the challenges of approximating high-dimensional systems using kernels when one has limited data with noise and outliers. In particular, I will show that the algorithms perform well on benchmark tests for a wide range of scientific applications. In addition, our methods come with theoretical guarantees of success in terms of generalization and complexity bounds. Some applications of interest include learning governing equations from time-series data, high-dimensional surrogate modeling, and time-series forecasting.
10:30-10:45 Coffee Break, ISEB Outdoor Plaza
Morning Contributed Sessions
Track 1
ISEB 1010
Chair: Daniel Z. Huang (Caltech)
10:45-11:00 Yizhe Zhu
Overparameterized Random Feature Regression with Nearly Orthogonal Data
11:05-11:20 Tingwei Meng
Leveraging Multi-time Hamilton-Jacobi PDEs for Certain Scientific Machine Learning Problems
11:25-11:40 Harsh Sharma
Physics-preserving Learning of Reduced-order Models for Large-scale Dynamical Systems
11:45-12:00 Yifan Chen
Gradient flows for sampling: affine invariance and numerical approximation
10:45-11:00 Yizhe Zhu
Overparameterized Random Feature Regression with Nearly Orthogonal Data
We consider the random feature ridge regression (RFRR) given by a two-layer neural network with random Gaussian initialization. We study the non-asymptotic behaviors of the RFRR with nearly orthogonal deterministic unit-length input data vectors in the overparameterized regime, where the width of the first layer is much larger than the sample size. We establish non-asymptotic concentration results of the training errors, cross-validations, and generalization errors of RFRR around their corresponding quantities of the kernel ridge regression (KRR), respectively, where the KRR is given by an expected kernel from a nonlinear random feature map. We then approximate the performance of the KRR by a polynomial kernel matrix obtained from the Hermite polynomial expansion of the activation function, whose degree only depends on the orthogonality among different input vectors. This polynomial kernel essentially determines the asymptotic behavior of the RFRR and the KRR. Our results hold for a general class of activation functions and input data with nearly orthogonal properties. Based on these approximations, we obtain a lower bound for the generalization error of the RFRR under a nonlinear student-teacher model.
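As a minimal illustration of the object under study (not the paper's experimental setup; the ReLU activation, width, and ridge parameter below are illustrative assumptions), RFRR fixes a random Gaussian first layer and solves a ridge regression on the resulting features:

```python
import numpy as np

def rfrr_fit_predict(X_train, y_train, X_test, width=2000, lam=1e-6, seed=0):
    """Random feature ridge regression: a two-layer network whose first layer
    is a fixed random Gaussian matrix; only the output layer is trained (ridge)."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    W = rng.normal(size=(d, width)) / np.sqrt(d)   # random Gaussian first layer
    phi = lambda X: np.maximum(X @ W, 0.0)         # ReLU random feature map
    F = phi(X_train)
    # ridge regression on the random features (overparameterized: width >> samples)
    theta = np.linalg.solve(F.T @ F + lam * np.eye(width), F.T @ y_train)
    return phi(X_test) @ theta
```

In the overparameterized regime studied in the talk the width far exceeds the sample size, so the fit nearly interpolates the training data.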
11:05-11:20 Tingwei Meng
Leveraging Multi-time Hamilton-Jacobi PDEs for Certain Scientific Machine Learning Problems
Hamilton-Jacobi partial differential equations (HJ PDEs) have deep connections with a wide range of fields, including optimal control, differential games, and imaging sciences. By considering the time variable to be a higher dimensional quantity, HJ PDEs can be extended to the multi-time case. In this talk, I will talk about a novel theoretical connection between specific optimization problems arising in machine learning and the multi-time Hopf formula, which corresponds to a representation of the solution to certain multi-time HJ PDEs. Through this connection, we increase the interpretability of the training process of certain machine learning applications by showing that when we solve these learning problems, we also solve a multi-time HJ PDE and, by extension, its corresponding optimal control problem. As a first exploration of this connection, we develop the relation between the regularized linear regression problem and the Linear Quadratic Regulator (LQR). We then leverage our theoretical connection to adapt standard LQR solvers (namely, those based on the Riccati ordinary differential equations) to design new training approaches for machine learning. Finally, we provide some numerical examples that demonstrate the versatility and possible computational advantages of our Riccati-based approach in the context of continual learning, post-training calibration, transfer learning, and sparse dynamics identification. This is a joint work with Paula Chen, Zongren Zou, Jerome Darbon, George Em Karniadakis.
11:25-11:40 Harsh Sharma
Physics-preserving Learning of Reduced-order Models for Large-scale Dynamical Systems
Computational modeling, simulation, and control of physical systems characterized by Hamiltonian and Lagrangian mechanics are essential for many science and engineering applications such as plasma physics, climate modeling, and robotics. This work presents a nonintrusive physics-preserving method to learn reduced-order models (ROMs) of large-scale nonlinear dynamical systems. Traditional intrusive projection-based model reduction approaches construct physics-preserving ROMs by projecting the governing equations of the full model onto a subspace. This projection requires complete knowledge of the full model operators and full access to manipulate the code. In contrast, the proposed physics-preserving learning approach embeds the physics into the operator inference framework to develop a data-driven model reduction method that preserves the underlying geometric structure. The proposed method is gray-box in that it utilizes knowledge of the Hamiltonian/Lagrangian structure at the partial differential equation level. However, it does not require access to computer code; only data is needed to learn the models. Our numerical results demonstrate structure-preserving operator inference on the cubic nonlinear Schrödinger equation, the sine–Gordon equation, and a large-scale discretization of a soft robot fishtail with 779,232 degrees of freedom. Accurate predictions far outside the training time interval for nonlinear examples illustrate the generalizability of our learned models.
11:45-12:00 Yifan Chen
Gradient flows for sampling: affine invariance and numerical approximation
Sampling a target distribution with an unknown normalization constant is a fundamental problem in data-driven inference. Using dynamical systems to generate solutions that gradually approach the target has been a compelling idea. In this talk, we focus on probability gradient flows as the dynamical system and study several related basic questions in sampling distributions. Any implementation of a gradient flow needs an energy functional, a metric, and a numerical approximation scheme. We show how the KL divergence is a special and unique energy functional and how the affine invariance property of the metric can improve convergence. We also discuss numerical approximations that lead to implementable methods such as interacting particles, parametric variational inference, and Kalman approaches.
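As one concrete instance of this recipe, choosing the KL divergence as the energy, the Wasserstein metric, and a particle-based Euler discretization recovers the classical unadjusted Langevin algorithm; the Gaussian target, step size, and particle count below are illustrative assumptions, not the methods of the talk:

```python
import numpy as np

def langevin_sample(grad_log_p, n_particles=5000, n_steps=2000, dt=0.01, seed=0):
    """Unadjusted Langevin algorithm: particle discretization of the Wasserstein
    gradient flow of KL(rho || p), i.e. dX = grad log p(X) dt + sqrt(2) dW."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_particles)  # initial ensemble
    for _ in range(n_steps):
        x = x + dt * grad_log_p(x) + np.sqrt(2 * dt) * rng.normal(size=n_particles)
    return x

# illustrative target N(2, 1), for which grad log p(x) = -(x - 2)
samples = langevin_sample(lambda x: -(x - 2.0))
```

The ensemble of particles approximates the target distribution after the flow has equilibrated; affine-invariant or Kalman variants modify the drift and noise but follow the same pattern.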
Track 2
ISEB 1200
Chair: Yuhua Zhu (UCSD)
10:45-11:00 James K. Alcala
Moving anchor acceleration methods in extragradient-type algorithms
11:05-11:20 Kevin Bui
A Stochastic ADMM Algorithm for Large-Scale Ptychography with Weighted Difference of Anisotropic and Isotropic Total Variation
11:25-11:40 Nicholas Nelsen
On the Sample Complexity of Linear Operator Learning
11:45-12:00 Jingrong Wei
Accelerated Gradient and Skew-Symmetric Splitting Methods for a Class of Monotone Operator Equations
10:45-11:00 James K. Alcala
Moving anchor acceleration methods in extragradient-type algorithms
Our work introduces a moving anchor technique to extragradient algorithms for smooth structured minimax problems. First, our moving anchor technique is introduced into the original algorithmic anchoring framework known as EAG. We match the optimal order of convergence in terms of worst-case complexity on the squared gradient, O(1/k^2). As many problems of practical interest are nonconvex-nonconcave, the recently developed FEG class of algorithms brings order-optimal methods developed within EAG to the nonconvex-nonconcave problem settings. We introduce the moving anchor methods to the FEG class of algorithms and again obtain order-optimal complexity results. In both problem settings, a variety of numerical examples demonstrate the efficacy of our algorithms. A proximal-point version of our algorithms is also developed.
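For context, the plain extragradient step that the anchored variants (EAG, FEG) build on can be sketched as follows; the bilinear test problem, step size, and iteration count are illustrative assumptions, not the algorithms of the talk:

```python
import numpy as np

def extragradient(F, z0, eta=0.1, n_steps=2000):
    """Extragradient for a monotone operator F: a look-ahead (extrapolation)
    step, then an update using the operator at the extrapolated point."""
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        z_half = z - eta * F(z)   # extrapolation step
        z = z - eta * F(z_half)   # update with look-ahead gradient
    return z

# bilinear saddle problem min_x max_y x*y, whose operator is F(x, y) = (y, -x);
# plain gradient descent-ascent diverges here, extragradient converges to (0, 0)
F = lambda z: np.array([z[1], -z[0]])
z_star = extragradient(F, [1.0, 1.0])
```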
11:05-11:20 Kevin Bui
A Stochastic ADMM Algorithm for Large-Scale Ptychography with Weighted Difference of Anisotropic and Isotropic Total Variation
Ptychography is an imaging technique that has various scientific applications, ranging from biology to optics. The method scans the object of interest in a series of overlapping positions, thereby generating a set of multiple Fourier magnitude measurements that are potentially corrupted by noise. From these measurements, an image of the object can be reconstructed depending on how the related inverse problem is formulated and solved. In this paper, we propose a class of variational models that incorporate the weighted anisotropic--isotropic total variation (AITV), an effective regularizer for image recovery. This class of models is applicable to measurements corrupted by either Gaussian or Poisson noise. To make the models applicable to a large number of ptychographic scans, we design an efficient stochastic alternating direction method of multipliers algorithm. Numerical experiments demonstrate that, from a large set of highly corrupted Fourier measurements, the proposed stochastic algorithm with AITV regularization can reconstruct complex-valued images with satisfactory quality, especially for the phase components.
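For orientation, the weighted anisotropic--isotropic total variation is commonly written as an L1 minus weighted L2 penalty on the discrete image gradient (the notation below is assumed for illustration, not taken from the paper):

```latex
% Weighted anisotropic--isotropic TV of a discrete image u,
% with gradient components (u_x, u_y) at each pixel i:
\[
  \mathrm{AITV}_\alpha(u)
  \;=\; \sum_i \Big( |(\nabla u)_{i,x}| + |(\nabla u)_{i,y}| \Big)
  \;-\; \alpha \sum_i \sqrt{(\nabla u)_{i,x}^2 + (\nabla u)_{i,y}^2},
  \qquad \alpha \in [0,1].
\]
```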
11:25-11:40 Nicholas Nelsen
On the Sample Complexity of Linear Operator Learning
This talk establishes generalization error guarantees for learning linear operators between function spaces. The theoretical treatment is based on a Bayesian inverse problems analysis. Several fundamental insights emerge from the theory that are united by a common theme: smoothness of the problem, smoothness of the training data, and smoothness of the test data. These principles have implications for the robustness of learned models under data distribution shifts and how the accuracy of such models should be evaluated. The theory is applied to answer several basic questions. For example, can the operators of differentiation or integration of functions be learned from noisy data pairs? If so, how much data is required? Numerical evidence validates the findings.
11:45-12:00 Jingrong Wei
Accelerated Gradient and Skew-Symmetric Splitting Methods for a Class of Monotone Operator Equations
A class of monotone operator equations that can be decomposed into the sum of a gradient of a strongly convex function and a linear, skew-symmetric operator is considered in this work. Based on a discretization of the generalized gradient flow, gradient and skew-symmetric splitting (GSS) methods are proposed and proved to converge at a linear rate. To further accelerate the convergence, an accelerated gradient flow is proposed and accelerated gradient and skew-symmetric splitting (AGSS) methods are developed, which extend acceleration techniques from existing work on convex minimization to a more general class of monotone operator equations. In particular, when applied to smooth saddle point systems with bilinear coupling, an accelerated transformed primal-dual (ATPD) method is proposed and shown to achieve linear rates with optimal lower iteration complexity.
Track 3
ISEB 1310
Chair: Weitao Chen (UCR)
10:45-11:00 Abigail Hickok
An Intrinsic Approach to Scalar Curvature Estimation
11:05-11:20 Varun Khurana
Linearized Wasserstein dimensionality reduction with approximation guarantees
11:25-11:40 Dhruv Kohli
A bottom-up manifold learning framework to embed closed and non-orientable manifolds into their intrinsic dimensions
11:45-12:00 Justin Marks
In Pursuit of the Grassmann Manifold Projection Mean
10:45-11:00 Abigail Hickok
An Intrinsic Approach to Scalar Curvature Estimation
I will discuss recent research in which we introduce an intrinsic estimator for the scalar curvature of a data set presented as a finite metric space (e.g., a distance matrix, a point cloud, or a network). Our estimator depends only on the metric structure of the data, and not on an embedding in Euclidean space. Our estimator is consistent in the sense that for points sampled randomly from a compact Riemannian manifold, the estimator converges to the scalar curvature as the number of points increases. Additionally, our estimator is stable with respect to perturbations of the metric, which justifies its use in applications. We validate our estimator experimentally on synthetic data sampled from manifolds with known curvature.
11:05-11:20 Varun Khurana
Linearized Wasserstein dimensionality reduction with approximation guarantees
We introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space. The algorithm is motivated by the observation that many datasets are naturally interpreted as probability measures rather than points in R^n, and that finding low-dimensional descriptions of such datasets requires manifold learning algorithms in the Wasserstein space. Most available algorithms are based on computing the pairwise Wasserstein distance matrix, which can be computationally challenging for large datasets in high dimensions. Our algorithm leverages approximation schemes such as Sinkhorn distances and linearized optimal transport to speed up computations, and in particular, avoids computing a pairwise distance matrix. We provide guarantees on the embedding quality under such approximations, including when explicit descriptions of the probability measures are not available and one must deal with finite samples instead. Experiments demonstrate that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size. We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
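For reference, the Sinkhorn approximation mentioned above replaces exact optimal transport with an entropically regularized problem solved by alternating diagonal scalings; the problem size, cost matrix, and regularization strength below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iters=500):
    """Entropic optimal transport: alternately scale the Gibbs kernel
    K = exp(-C/eps) so the plan P = diag(u) K diag(v) matches both marginals."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)   # match column marginal nu
        u = mu / (K @ v)     # match row marginal mu
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)  # transport plan and entropic transport cost
```

Each iteration costs only a matrix-vector product, which is what makes such approximations attractive compared with solving exact pairwise OT problems.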
11:25-11:40 Dhruv Kohli
A bottom-up manifold learning framework to embed closed and non-orientable manifolds into their intrinsic dimensions
Manifold learning algorithms aim to map high-dimensional data into lower dimensions while minimizing some measure of local (and possibly global) distortion of the underlying mapping. These algorithms generally follow either a top-down or a bottom-up approach. The former (such as UMAP and t-SNE) start with an initial global embedding and refine it iteratively to minimize some measure of local distortion, while the latter (such as LTSA) start with low-distortion local views of the data and align them to produce a global embedding. In the context of bottom-up manifold learning, we first provide an iterative framework for aligning views that enables tearing of manifolds such as the torus, the Klein bottle, etc., so as to embed them into their intrinsic dimension. Using a simple automated procedure that assigns the same color to points on the tear that are adjacent in the data, we equip these embeddings with gluing instructions, thus allowing a user to infer the original topology from the torn embedding. Finally, we show the embeddings of several synthetic and real-world datasets produced by two algorithms developed from the above framework, namely, Low Distortion Local Eigenmaps (LDLE) and Riemannian Alignment of Tangent Spaces (RATS). We compare them with the embeddings produced by several commonly used algorithms such as UMAP, t-SNE, ISOMAP, and LTSA.
11:45-12:00 Justin Marks
In Pursuit of the Grassmann Manifold Projection Mean
Applications of geometric data analysis often involve producing collections of subspaces, such as illumination spaces for digital imagery. For a given collection of subspaces, a natural task is to find the average of the collection. A robust suite of algorithms has been developed to generate measures of center for a collection of subspaces of fixed dimension, or equivalently, a collection of points on a particular Grassmann manifold. These measures of center include the Flag Mean, the Flag Median, the Normal Mean, and the Karcher Mean. In this talk, we discuss the pursuit of a Projection Mean, which hinges upon the identification of a Grassmannian tangent space projection operator that is invariant to the choice of orthonormal basis representative.
Track 4
ISEB 2020
Chair: Zirui Zhang (UCI)
10:45-11:00 Yat Tin Chow
Resolution analysis in some scattering problems and enhanced resolution in certain scenarios
11:05-11:20 Siting Liu
A numerical algorithm for an Inverse mean-field game problem
11:25-11:40 Mingtao Xia
Adaptive spectral methods in unbounded domains
11:45-12:00 Qihao Ye
Monotone meshfree methods for linear elliptic equations in non-divergence form via nonlocal relaxation
10:45-11:00 Yat Tin Chow
Resolution analysis in some scattering problems and enhanced resolution in certain scenarios
In this talk, we explore image resolution and the ill-posedness of inverse scattering problems. In particular, we would like to discuss how certain properties of the inclusion might induce high-resolution imaging. We first explore why resolution matters, and then observe the super-resolution phenomenon for certain high-contrast inclusions. We then discuss how the local sensitivity (and resolution) around a point is related to the extrinsic curvature of the surface of the inclusion around that point. Along the way, we also discuss concentration of plasmon resonance (in a certain manner) at boundary points of high curvature by quantizing a Hamiltonian flow with the help of the Heisenberg picture and insights from quantum integrable systems. We then turn to a transmission eigenvalue problem, and observe concentration of almost transmission eigenfunctions along the boundary. The results discussed in this talk are joint works with Habib Ammari (ETH Zurich), Hongyu Liu (CityU of HK), Keji Liu (Shanghai Key Lab), Mahesh Sunkula (Purdue), and Jun Zou (CUHK).
11:05-11:20 Siting Liu
A numerical algorithm for an Inverse mean-field game problem
In this talk, we consider a novel inverse problem in mean-field games (MFG). We aim to recover the MFG model parameters that govern the underlying interactions among the population from a limited set of noisy partial observations of the population dynamics under limited aperture. Due to its severe ill-posedness, obtaining a good-quality reconstruction is very difficult. Nonetheless, it is vital to recover the model parameters stably and efficiently in order to uncover the underlying causes of population dynamics for practical needs.
Our work focuses on the simultaneous recovery of running cost and interaction energy in the MFG equations from a finite number of boundary measurements of population profile and boundary movement. To achieve this goal, we formalize the inverse problem as a constrained optimization problem of a least squares residual functional under suitable norms. We then develop a fast and robust operator splitting algorithm to solve the optimization using techniques including harmonic extensions, three-operator splitting scheme, and primal-dual hybrid gradient method. Numerical experiments illustrate the effectiveness and robustness of the algorithm.
11:25-11:40 Mingtao Xia
Adaptive spectral methods in unbounded domains
To address the numerical difficulty of solving PDEs in unbounded domains, we devise efficient adaptive techniques for spectral methods so that spatiotemporal PDEs in unbounded domains can be solved efficiently and accurately. We propose a scaling technique, a moving technique, and a p-adaptive technique to adaptively cluster enough collocation points in a region of interest and adjust the spectral expansion order in order to achieve fast spectral convergence. Our scaling and p-adaptive algorithms employ an indicator in the frequency domain that determines when scaling is needed, informs the tuning of a scaling factor to redistribute collocation points to adapt to the diffusive behavior of the solution, and determines when the expansion order needs adjusting to maintain accuracy and reduce computational cost. Our moving technique adopts an exterior-error indicator and moves the collocation points to capture translation. Both the frequency and exterior-error indicators are defined using only the numerical solutions. We apply our methods to a number of different models, including diffusive and moving Fermi-Dirac distributions, nonlinear Dirac solitary waves, Schrödinger equations in quantum mechanics, and an unbounded-domain cellular proliferation model with possible blowup behavior, to demonstrate their effectiveness and advantages over non-adaptive spectral methods.
11:45-12:00 Qihao Ye
Monotone meshfree methods for linear elliptic equations in non-divergence form via nonlocal relaxation
We design a monotone meshfree finite difference method for linear elliptic PDEs in non-divergence form on point clouds via a nonlocal relaxation method. The key idea is a combination of a nonlocal integral relaxation of the PDE problem with a robust meshfree discretization on point clouds. Minimal positive stencils are obtained through a linear optimization procedure that automatically guarantees the stability and, therefore, the convergence of the meshfree discretization. A major theoretical contribution is the existence of consistent and positive stencils for a given point cloud geometry. We provide sufficient conditions for the existence of positive stencils by finding neighbors within an ellipse (2d) or ellipsoid (3d) surrounding each interior point, generalizing the study for Poisson’s equation by Seibold in 2008. It is well-known that wide stencils are in general needed for constructing consistent and monotone finite difference schemes for linear elliptic equations. Our result represents a significant improvement in the stencil width estimate for positive-type finite difference methods for linear elliptic equations in the near-degenerate regime (when the ellipticity constant becomes small), compared to previously known works in this area. Numerical algorithms and practical guidance are provided with an eye on the case of small ellipticity constant. Numerical results will be presented in both 2d and 3d, examining a range of ellipticity constants including the near-degenerate regime.
Track 5
ISEB 4020
Chair: Kristin Kurianski (CSUF)
10:45-11:00 Robert Bowden
Sheaf-based Opinion Dynamics
11:05-11:20 Weiqi Chu
Non-Markovian opinion models inspired by random processes on networks
11:25-11:40 Wen Jian Chung
Human immunodeficiency virus (HIV) dynamics in secondary lymphoid tissues and the evolution of cytotoxic T lymphocyte (CTL) escape mutants
11:45-12:00 Lihong Zhao
Global Sensitivity Analysis of Strategies to Mitigate Covid-19 Transmission on a Structured College Campus
10:45-11:00 Robert Bowden
Sheaf-based Opinion Dynamics
We construct a novel hypergraph Laplacian matrix to generalize graph-based opinion dynamics models to higher-order social networks. Through the lens of topology, opinion dynamics occurs on a sheaf of opinions over a graph, giving room to extend the dynamics to hypergraphs. Sheaf cohomology tools allow us to write down Hodge k-Laplacians, which we modify to eventually construct the hypergraph Laplacian. As an application, we begin to examine the Bounded Confidence hypergraph Laplacian, which generalizes the Hegselmann-Krause opinion dynamics model to hypergraphs. We prove that Ricci curvature dictates fragmentation of the social network. Graph Laplacian-based dynamics play a central role in many network models, and so the hypergraph Laplacian, combined with the generality of sheaves, provides a direct path to incorporating higher-order network structure into a wide variety of network dynamics models.
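For context, the classical (graph-based) Hegselmann-Krause update that the hypergraph construction generalizes can be sketched as follows; the confidence radius and initial opinions are illustrative assumptions:

```python
import numpy as np

def hegselmann_krause(x0, conf=0.3, n_steps=50):
    """Classical Hegselmann-Krause bounded-confidence dynamics: at each step,
    every agent replaces its opinion by the average of all opinions
    within its confidence radius (including its own)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        close = np.abs(x[:, None] - x[None, :]) <= conf  # bounded-confidence neighbors
        x = (close @ x) / close.sum(axis=1)              # average over neighbors
    return x
```

Clusters of opinions farther apart than the confidence radius never interact, which is the fragmentation behavior the talk relates to Ricci curvature in the hypergraph setting.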
11:05-11:20 Weiqi Chu
Non-Markovian opinion models inspired by random processes on networks
The study of opinion dynamics models opinion evolution as dynamical processes on social networks. In social networks, nodes encode social entities (such as people and Twitter accounts), while edges encode relationships or events between entities. Traditional models of opinion dynamics consider how opinions evolve either on time-independent networks or on temporal networks with edges that follow Poisson statistics. However, in many real-life networks, interactions between individuals (and hence the edges in a network) follow non-Poisson processes, which leads to dynamics on networks with memory-dependent effects (such as stereotypes). In this talk, we model social interactions as random processes on temporal networks and derive the opinion model that is governed by waiting-time distributions (WTDs) between social events. When the random processes have non-Poisson interevent statistics, the corresponding opinion models naturally yield non-Markovian dynamics. We analyze the convergence to consensus of these models and illustrate a variety of induced opinion models arising from common WTDs (including Dirac delta, exponential, and heavy-tailed distributions). When the opinion model does not have an explicit form (such as models induced by heavy-tailed WTDs), we provide a discrete-time approximation method and derive an associated set of discrete-time opinion-dynamics models.
11:25-11:40 Wen Jian Chung
Human immunodeficiency virus (HIV) dynamics in secondary lymphoid tissues and the evolution of cytotoxic T lymphocyte (CTL) escape mutants
The human immunodeficiency virus (HIV) can replicate both in the follicular and the extrafollicular compartments of secondary lymphoid tissues. Yet, virus is concentrated in the follicular compartment in the absence of antiretroviral therapy, in part due to the lack of cytotoxic T lymphocyte (CTL)-mediated activity there. CTL home to the extrafollicular compartment, where they can suppress virus load to relatively low levels. We use mathematical models to show that this compartmentalization can explain seemingly counterintuitive observations. First, it can explain the observed constancy of the viral decline slope during antiviral therapy irrespective of the presence of CTL in SIV-infected macaques, under the assumption that CTL-mediated lysis significantly contributes to virus suppression. Second, it can account for the relatively long times it takes for CTL escape mutants to emerge during chronic infection even if CTL-mediated lysis is responsible for virus suppression. The reason is the heterogeneity in CTL activity, and the consequent heterogeneity in selection pressure, between the follicular and extrafollicular compartments. Hence, to understand HIV dynamics more thoroughly, this analysis highlights the importance of measuring virus populations separately in the extrafollicular and follicular compartments rather than using virus load in peripheral blood as an observable; the latter hides the heterogeneity between compartments that might be responsible for the particular patterns seen in the dynamics and evolution of HIV in vivo.
11:45-12:00 Lihong Zhao
Global Sensitivity Analysis of Strategies to Mitigate Covid-19 Transmission on a Structured College Campus
In response to the COVID-19 pandemic, many higher educational institutions moved their courses from face-to-face instruction to online or hybrid instruction in hopes of slowing disease spread. The advent of multiple highly effective vaccines offers the promise of a return to “normal” in-person operations, but it is not clear if -- or for how long -- campuses should employ non-pharmaceutical interventions such as indoor mask mandates or capping the size of in-person classes. We developed an ODE-based model of COVID-19 dynamics on a college campus that interacts with the outside world and conducted global sensitivity analysis to evaluate how both pharmaceutical and non-pharmaceutical interventions impact disease spread.
Track 6
ISEB 5020
Chair: Federico Bocci (UCI)
10:45-11:00 Badal Joshi
Absolute concentration robustness in covalent modification networks
11:05-11:20 German Enciso
Ultrasensitivity bounds in biochemical reaction cascades
11:25-11:40 Luisa Gianuca
Getting drugs to the brain: a differential equation model
11:45-12:00 Pedro Aceves Sanchez
Emergence of Vascular Networks
10:45-11:00 Badal Joshi
Absolute concentration robustness in covalent modification networks
Shinar and Feinberg defined the notion of absolute concentration robustness (ACR) in reaction networks to mean that the concentration of a certain species (called the ACR species) is invariant across all positive steady states. This means that even though the steady-state values depend on initial concentrations, the ACR species concentration at steady state does not. Shinar and Feinberg gave a simple sufficient condition for the existence of an ACR species: the network has a deficiency of one and two non-terminal complexes differ in the ACR species. We study each condition separately and show that the deficiency condition is not necessary; many biologically important networks do not have a deficiency of one. Moreover, the second condition is related to the existence of a bifunctional enzyme, although in a non-trivial manner. Previous experimental and modeling work had identified a bifunctional enzyme as being implicated in a mechanism for ACR in some reaction networks. We define bifunctionality in a large class of networks called covalent modification networks, which includes n-site futile cycles or phosphorylation-dephosphorylation cycles. Such networks can have arbitrary deficiency. We give necessary and sufficient conditions for ACR in this class of networks.
11:05-11:20 German Enciso
Ultrasensitivity bounds in biochemical reaction cascades
In this short talk I will present recent results on an inequality establishing bounds for ultrasensitive responses in cascades of biochemical reactions. Specifically, the inequality postulates that the Hill coefficient of a composition of two sigmoidal functions is at most the product of the Hill coefficients of the two functions. We prove this inequality in the context of Hill function dose responses, find a counterexample for other functions, and provide computational evidence for the inequality in other families of dose responses.
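In symbols (the notation here is assumed for illustration, not taken from the talk): writing n_H(f) for the Hill coefficient of a sigmoidal dose response f, the claimed bound for a two-step cascade reads:

```latex
% Bound on the effective Hill coefficient of a cascade of two
% sigmoidal dose responses (notation assumed for illustration):
\[
  n_{H}(f \circ g) \;\le\; n_{H}(f)\, n_{H}(g),
  \qquad
  \text{e.g. for Hill functions } h_n(x) = \frac{x^n}{K^n + x^n}.
\]
```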
11:25-11:40 Luisa Gianuca
Getting drugs to the brain: a differential equation model
The brain is protected from unwanted invasions by protective cells known as the “blood–brain barrier”. These cells regulate which molecules and cells can move between the blood and the brain, making it difficult to deliver therapies to the brain. In this project we describe a model given by a system of differential equations that can be used to design therapies for degenerative diseases such as Parkinson’s. The drug is delivered in sound-sensitive nanocarriers that release their drug load when stimulated by an external trigger such as ultrasound. The differential equations of the model are derived by balancing gain and loss terms due to transport into or out of the brain and to chemical reactions among the constituents. The system of equations is then solved numerically using open-source software. The goal is to optimize drug delivery while minimizing possible negative side effects of the ultrasound.
11:45-12:00 Pedro Aceves Sanchez
Emergence of Vascular Networks
The emergence of vascular networks is a long-standing problem that has been the subject of intense research in the past decades, in large part because of its widespread applications in tissue regeneration, wound healing, and cancer treatment. The mechanisms involved in the formation of vascular networks are complex, and despite the vast amount of research devoted to them, many remain poorly understood. Our aim is to bring insight into the study of vascular networks by defining heuristic rules, as simple as possible, and simulating them numerically to test their relevance to the vascularization process. We will introduce a hybrid agent-based/continuum model coupling blood flow, oxygen flow, capillary network dynamics, and tissue dynamics, and we will show a few simulations demonstrating the capability of our model to capture the main features of vascular networks.
12:05-13:35 Lunch and Poster Session, ISEB Outdoor Plaza
Plenary Session II
ISEB 1010
Chair: Anna Ma (UC Irvine)
13:35-14:15 Deanna Needell (UC Los Angeles)
Towards Transparency, Fairness, and Efficiency in Machine Learning
14:20-15:00 Qing Nie (UC Irvine)
Multiscale spatiotemporal reconstruction of single-cell genomics data
13:35-14:15 Deanna Needell (UC Los Angeles)
Towards Transparency, Fairness, and Efficiency in Machine Learning
In this talk, we will address several areas of recent work centered around the themes of transparency and fairness in machine learning as well as practical efficiency for methods with high dimensional data. We will discuss recent results involving linear algebraic tools for learning, such as methods in non-negative matrix factorization and CUR decompositions. We will showcase our derived theoretical guarantees as well as practical applications of those approaches. These methods allow for natural transparency and human interpretability while still offering strong performance. Then, we will discuss new directions in debiasing of word embeddings for natural language processing as well as an example in large-scale optimization that allows for population subgroups to have better predictors than when treated within the population as a whole. We will conclude with work on compression and reconstruction of large-scale tensorial data from practical measurement schemes. Throughout the talk, we will include example applications from collaborations with community partners. This talk will also include discussion of recent leadership experience, initiatives, and related work.
14:20-15:00 Qing Nie (UC Irvine)
Multiscale spatiotemporal reconstruction of single-cell genomics data
Cells make fate decisions in response to dynamic environments, and multicellular structures emerge from multiscale interplays among cells and genes in space and time. The recent single-cell genomics technology provides an unprecedented opportunity to profile cells. However, those measurements are taken as static snapshots of many individual cells that often lose spatiotemporal information. How to obtain temporal relationships among cells from such measurements? How to recover spatial interactions among cells, such as cell-cell communication? In this talk I will present our newly developed computational tools that dissect transition properties of cells and infer cell-cell communication based on nonspatial single-cell genomics data. In addition, I will present methods to derive multicellular spatiotemporal patterns from spatial transcriptomics datasets. Through applications of those methods to several complex systems in development, regeneration, and diseases, we show the discovery power of such methods and identify areas for further development for spatiotemporal reconstruction of single-cell genomics data.
15:05-15:25 Conference Picture & Coffee Break, ISEB Outdoor Plaza
Afternoon Contributed Sessions
Track 1
ISEB 1010
Chair: Daniel Z. Huang (Caltech)
15:25-15:40 Daniel Zhengyu Huang
Efficient derivative-free Bayesian inference for large-scale inverse problems
15:45-16:00 Dongjin Lee
Multifidelity method for coherent risk assessment in nonlinear systems with high-dimensional random variables
16:05-16:20 Scott Little
Koopman von Neumann Operator for AdS-CFT Stochastic Feynman-Kac Mellin Transform
16:25-16:40 Zhichao Wang
High-Dimensional Asymptotics of Feature Learning in the Early Phase of Neural Network Training
15:25-15:40 Daniel Zhengyu Huang
Efficient derivative-free Bayesian inference for large-scale inverse problems
We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model, which is often given as a black box or is impractical to differentiate. We propose a framework, which is built on Kalman methodology and Fisher-Rao Gradient flow, to efficiently calibrate and provide uncertainty estimations of such models with noisy observation data.
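As a hedged sketch of the Kalman-based, derivative-free idea (not the authors' specific algorithm), one iteration of ensemble Kalman inversion updates a particle ensemble using only forward-model evaluations; the linear black box `G`, the noise level, and all parameter values below are hypothetical.

```python
import numpy as np

def eki_step(theta, G, y, Gamma, rng):
    """One ensemble Kalman inversion update: derivative-free, each particle is
    nudged toward perturbed data using ensemble cross-covariances."""
    g = np.array([G(t) for t in theta])        # forward-model evaluations only
    dth = theta - theta.mean(axis=0)
    dg = g - g.mean(axis=0)
    J = len(theta)
    C_tg = dth.T @ dg / (J - 1)                # cov(theta, G(theta))
    C_gg = dg.T @ dg / (J - 1)                 # cov(G(theta), G(theta))
    K = C_tg @ np.linalg.inv(C_gg + Gamma)     # Kalman-style gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return theta + (y_pert - g) @ K.T

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.0, 2.0]])         # hypothetical linear black box
truth = np.array([1.0, -1.0])
y = A @ truth                                  # synthetic observation data
Gamma = 0.01 * np.eye(2)                       # observational noise covariance
theta = rng.standard_normal((200, 2))          # prior ensemble
for _ in range(20):
    theta = eki_step(theta, lambda t: A @ t, y, Gamma, rng)
print(theta.mean(axis=0))  # ensemble mean approaches the truth [1, -1]
```

The ensemble spread gives a (rough) uncertainty estimate alongside the calibrated mean, which is the appeal of such methods for expensive black-box models.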
15:45-16:00 Dongjin Lee
Multifidelity method for coherent risk assessment in nonlinear systems with high-dimensional random variables
We present novel computational methods for Conditional Value-at-Risk (CVaR) estimation for nonlinear systems under high-dimensional dependent random inputs [1]. The methods are built on a novel surrogate model: a fusion of dimensionally decomposed generalized polynomial chaos expansion [2] and Kriging (called DD-GPCE-Kriging) is proposed as an accurate approximation for highly nonlinear and nonsmooth random output functions. We integrate DD-GPCE-Kriging with two sampling-based CVaR estimation methods: standard Monte Carlo simulation (MCS) and multifidelity importance sampling (MFIS). The proposed MCS-based method samples from the computationally efficient DD-GPCE-Kriging surrogate and is shown to be accurate in the presence of high-dimensional and dependent random inputs. Inevitably, sampling from a surrogate model introduces a bias. For cases of high bias, we propose the MFIS-based method, where the DD-GPCE-Kriging surrogate determines a biasing density efficiently. The high-fidelity model is then used to obtain an importance-sampling-based CVaR estimate. To further speed up the construction of the biasing density, we compute DD-GPCE-Kriging using computationally cheap low-fidelity model evaluations. Numerical results for mathematical functions confirm that the DD-GPCE-Kriging-based methods provide accurate and computationally efficient CVaR estimates. The scalability of the proposed methods and their applicability to complex engineering problems are demonstrated by solving a three-dimensional composite T-joint problem with 20 (partly dependent) random inputs. In the composite problem, the proposed MFIS-based method achieves a speedup factor of 24x compared to standard MCS using the high-fidelity model, while producing an accurate CVaR estimate with a 0.98% error.
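For reference, the sampling-based CVaR estimator at the core of both methods can be sketched in a few lines; the Gaussian loss model below is a stand-in for the expensive model outputs, not the composite T-joint problem.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Monte Carlo CVaR: mean of losses at or above the alpha-quantile (VaR)."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(0)
losses = rng.standard_normal(200_000)   # stand-in for expensive model outputs
print(cvar(losses, alpha=0.95))         # ≈ 2.06 for a standard normal loss
```

In the surrogate-based approach, `losses` would come from cheap DD-GPCE-Kriging evaluations rather than the high-fidelity model.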
16:05-16:20 Scott Little
Koopman von Neumann Operator for AdS-CFT Stochastic Feynman-Kac Mellin Transform
Koopman Operator Theory (KOT) is currently a major theory and application of nonlinear dynamics, based on the elegant theorem developed by Koopman and von Neumann in the 1930s as a precursor to the Feynman-Kac path-integral formula, known as the KvN integral. It was introduced by Koopman in 1931 to study Hamiltonian systems and was crucial to the development of ergodic theory. More recently, KOT has been incorporated into dynamic and chaotic systems theory by Mezić. The linear Koopman operator theory uses an infinite-dimensional state space to control a finite-dimensional nonlinear dynamic system via a dynamic mode decomposition of data-snapshot eigenvalues and mode functions. This data can be stochastic and non-deterministic. Stochastic string theory is referred to as “postmodern” string theory: the strings are treated not as discrete objects but as probabilistic spaces, to account for quantum uncertainties and nonlinear effects. The previous papers in this series contain a proof relating the Anti-de Sitter spacetime / Conformal Field Theory correspondence (AdS/CFT duality) to a Feynman-Kac stochastic string solution in Mellin transform space. This proof includes a Witten diagram and string background-field SLE chaotic fractal boundaries in triangular quantum wells. The third paper focused on a proof correlating the stochastic Feynman-Kac AdS/CFT solution to the Boltzmann machine, a machine learning model derived from statistical mechanics and used as an energy-node neural network. There is previous work on correlations of Boltzmann machines to AdS/CFT holographic solutions. This paper will focus on a brief literature review and a proof of the KOT KvN integral correlation to the previous Feynman-Kac stochastic string solutions.
Future work will include correlating the KOT to the AdS/CFT duality and to the Boltzmann machine using analytical data, and will also study Koopman D-branes with the Dirac-Born-Infeld action, the string action in a magnetic field defined on a KAM torus with chaotic SLE6 boundary conditions.
16:25-16:40 Zhichao Wang
High-Dimensional Asymptotics of Feature Learning in the Early Phase of Neural Network Training
In this talk, I will present the benefit of feature (representation) learning due to gradient descent training of the first-layer parameters in a two-layer neural network, where all the weights are randomly initialized, and the training objective is the empirical MSE loss. We consider the ``early phase'' of learning in the proportional asymptotic limit, where sample size, feature dimension and width are growing at the same rate, and the number of gradient steps $t$ remains finite. In an idealized student-teacher setting, we show that gradient updates in the early phase contain a rank-1 ``spike'', which results in an alignment between the first-layer weights and the teacher model $f^*$. To quantify the impact of this alignment, we compute the asymptotic prediction risk of ridge regression on the trained conjugate kernel features. We consider two scalings of first-layer learning rate $\eta$. For small $\eta$, we establish a Gaussian equivalence property for the trained feature map and prove that the learned kernel improves upon the initial random features model, but cannot defeat the best linear model on the input after finitely many gradient steps. Whereas for sufficiently large $\eta$, we prove that even after one gradient step, the same ridge estimator on trained features can go beyond this ``linear regime'' and outperform a wide range of (fixed) kernels for certain $f^*$. Our analysis precisely demonstrates the advantage of learned representation over random features and highlights the role of learning rate scaling in the initial phase of training.
Track 2
ISEB 1200
Chair: Heather Zinn-Brooks (HMC)
15:25-15:40 Claire Chang
The Sensitivity of a Family of Ranking Methods
15:45-16:00 Jiajie (Jerry) Luo
Persistent Homology for Resource Coverage: A Case Study of Access to Polling Sites
16:05-16:20 Haixiao Wang
Exact recovery for general Stochastic Block Model on non-uniform random hypergraphs
16:25-16:40 Yiyun He
Algorithmically Effective Differentially Private Synthetic Data
15:25-15:40 Claire Chang
The Sensitivity of a Family of Ranking Methods
Ranking from pairwise comparisons is a particularly rich subset of ranking problems, which involves combining potentially incomplete or contradictory information into a single list. This group of problems has applications as diverse as choosing political candidates, ranking web pages, filtering job applicants, and comparing sports teams. In this work, we focus on the sensitivity of a family of ranking methods for pairwise comparisons which encompasses the Massey, Colley, and Markov methods. We will accomplish two objectives. First, we will consider a network diffusion interpretation for this family. Second, we will analyze the sensitivity of this family by studying the “maximal upset”, where the direction of an arc between the highest and lowest ranked alternatives is flipped. Through these analyses, we will build intuition to answer the question “what are the characteristics of robust ranking methods?” to ensure fair rankings in a variety of applications.
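As one concrete member of this family, the Colley method rates alternatives by solving a small linear system built from win/loss records; a minimal sketch with hypothetical game data:

```python
import numpy as np

def colley(n_teams, games):
    """Colley ratings: solve (2I + C) r = b, where C encodes games played
    and b encodes win-loss records. `games` lists (winner, loser) pairs."""
    M = 2.0 * np.eye(n_teams)
    b = np.ones(n_teams)
    for w, l in games:
        M[w, w] += 1.0; M[l, l] += 1.0     # each game adds to both diagonals
        M[w, l] -= 1.0; M[l, w] -= 1.0     # and couples the two teams
        b[w] += 0.5; b[l] -= 0.5           # b_i = 1 + (wins_i - losses_i) / 2
    return np.linalg.solve(M, b)

# Hypothetical round-robin: team 0 beats 1 and 2; team 1 beats 2.
r = colley(3, [(0, 1), (0, 2), (1, 2)])
print(r)                 # [0.7, 0.5, 0.3]
print(np.argsort(-r))    # ranking: 0 > 1 > 2
```

Flipping the arc between the top and bottom alternatives, as in the “maximal upset”, amounts to swapping one (winner, loser) pair and re-solving, which makes sensitivity experiments straightforward.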
15:45-16:00 Jiajie (Jerry) Luo
Persistent Homology for Resource Coverage: A Case Study of Access to Polling Sites
16:05-16:20 Haixiao Wang
Exact recovery for general Stochastic Block Model on non-uniform random hypergraphs
Consider the community detection problem in random hypergraphs under the non-uniform hypergraph stochastic block model (HSBM), where each hyperedge appears independently with some given probability depending only on the community structure of the vertices within this hyperedge. We establish, for the first time in the literature, a sharp threshold for exact recovery and show that a phase transition occurs around this threshold. We also provide two efficient algorithms that successfully achieve exact recovery when the model is recoverable. Both of our algorithms consist of two stages: initial estimation and refinement. They share the same first stage, with the membership assignment obtained from a spectral algorithm. This initial stage assigns all but a vanishing fraction of vertices correctly. In the second stage, the agnostic algorithm refines iteratively based on an estimator minimizing the Kullback-Leibler divergence, while the other algorithm refines according to Maximum A Posteriori estimation, for which knowledge of the generating probabilities is needed. The theoretical analysis of our algorithms relies on the concentration and regularization of the adjacency matrix for non-uniform random hypergraphs, which could be of independent interest. In addition, some open problems in this area will be discussed.
16:25-16:40 Yiyun He
Algorithmically Effective Differentially Private Synthetic Data
We present a highly effective algorithmic approach for generating $\varepsilon$-differentially private synthetic data in a bounded metric space with near-optimal utility guarantees under the 1-Wasserstein distance. In particular, for a dataset $X$ in the hypercube $[0,1]^d$, our algorithm generates synthetic dataset $Y$ such that the expected 1-Wasserstein distance between the empirical measure of $X$ and $Y$ is $O((\varepsilon n)^{-1/d})$ for $d\geq 2$, and is $O(\log^2(\varepsilon n)(\varepsilon n)^{-1})$ for $d=1$. The accuracy guarantee is optimal up to a constant factor for $d\geq 2$, and up to a logarithmic factor for $d=1$. Our algorithm has a fast running time of $O(\varepsilon n)$ for all $d\geq 1$ and demonstrates improved accuracy compared to the method in \cite{boedihardjo2022private} for $d\geq 2$.
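For context (this is the classical baseline, not the algorithm of the talk), a Laplace-perturbed histogram already yields $\varepsilon$-differentially private synthetic data in $d=1$, and the 1-Wasserstein utility between equal-size empirical measures can be computed directly from sorted samples; all parameter choices below are illustrative.

```python
import numpy as np

def dp_synthetic_1d(x, eps, bins=64, rng=None):
    """Baseline eps-DP synthetic data on [0,1]: perturb histogram counts with
    Laplace(1/eps) noise (sensitivity 1), clip, renormalize, and resample."""
    rng = rng or np.random.default_rng(0)
    counts, edges = np.histogram(x, bins=bins, range=(0.0, 1.0))
    noisy = counts + rng.laplace(scale=1.0 / eps, size=bins)
    p = np.clip(noisy, 0.0, None)
    p /= p.sum()
    idx = rng.choice(bins, size=len(x), p=p)
    return edges[idx] + rng.uniform(0.0, 1.0 / bins, size=len(x))

def w1(u, v):
    """1-Wasserstein distance between equal-size 1-D empirical measures."""
    return np.abs(np.sort(u) - np.sort(v)).mean()

rng = np.random.default_rng(1)
x = rng.beta(2, 5, size=10_000)          # hypothetical private dataset in [0,1]
y = dp_synthetic_1d(x, eps=1.0, rng=rng)
print(w1(x, y))                          # small: dominated by the bin width 1/64
```

Privacy of the output follows from the Laplace mechanism plus post-processing; the talk's contribution is achieving near-optimal accuracy rates with fast running time in general bounded metric spaces.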
Track 3
ISEB 1310
Chair: Xiaochuan Tian (UCSD)
15:25-15:40 Nathan Schroeder
Local Shape Optimization Problems on Spherical and Annular Domains
15:45-16:00 Zhaolong Han
Nonlocal half-ball vector operators on bounded domains: Poincar\'e inequality and its applications
16:05-16:20 Samuel Shen
Fast delivery of big climate data to classrooms and households: The 4DVD technology
16:25-16:40 Kai-Wen Tu
On the update of a curriculum for introductory-level numerical computing
15:25-15:40 Nathan Schroeder
Local Shape Optimization Problems on Spherical and Annular Domains
For a given shape functional $J$ and initial shape $\Omega$, the \textit{local shape optimization problem} seeks conditions on a deformation field $V$ that guarantee $\Omega$ optimizes $J$ when restricted to the perturbation $\Omega_t =\{ x + tV(x) : x \in \Omega \}$, where $t \in (-\delta, \delta)$ for some $\delta > 0$. We report on some sufficient conditions for local optimization when $\Omega$ is either a spherical or annular domain in either $\mathbb{R}^2$ or $\mathbb{R}^3$ and $J$ is a branch of a multiple Steklov eigenvalue. The theoretical results are complemented by numerical visualizations of the Steklov eigenvalue branches under perturbation as well as the shape derivatives of the eigenvalue branches at the initial shape.
15:45-16:00 Zhaolong Han
Nonlocal half-ball vector operators on bounded domains: Poincar\'e inequality and its applications
This work contributes to nonlocal vector calculus as an indispensable mathematical tool for the study of nonlocal models that arise in a variety of applications. We define the nonlocal half-ball gradient, divergence and curl operators with general kernel functions (integrable or fractional type with finite or infinite supports) and study the associated nonlocal vector identities. We study the nonlocal function space on bounded domains associated with zero Dirichlet boundary conditions and the half-ball gradient operator, and show that it is a separable Hilbert space in which smooth functions are dense. A major result is the nonlocal Poincar\'e inequality, based on which a few applications are discussed, including nonlocal convection-diffusion, a nonlocal correspondence model of linear elasticity, and nonlocal Helmholtz decomposition on bounded domains.
16:05-16:20 Samuel Shen
Fast delivery of big climate data to classrooms and households: The 4DVD technology
This presentation is a demonstration of the 4-Dimensional Visual Delivery (4DVD) technology (www.4dvd.org), a software system designed to visually deliver big climate data at an extremely fast speed. The system visualizes and delivers netCDF climate data in a 4-dimensional space-time domain. It allows users to quickly visualize the data before downloading it for further analysis. Users can zoom in and out or use other graphics options to help identify desired climate dynamics patterns. Once a spatial map at a given time or a historical climate time series at a given location has been identified as useful, the data can be downloaded. The fast speed of the 4DVD software system is achieved by optimally harnessing distributed computing, database, and storage technologies. In this way, climate data are at your fingertips: https://sciences.sdsu.edu/climate-data-at-your-fingertips/ . We will demonstrate how to use 4DVD as a tool for teaching mathematics and statistics.
16:25-16:40 Kai-Wen Tu
On the update of a curriculum for introductory-level numerical computing
For this talk, we will discuss the gradual update of a curriculum, from a traditional introductory-level numerical computing course to one that provides an early introduction to elementary computing concepts for application areas of growing importance in recent decades, such as parallel computing, image processing, signal processing, time series analysis, differential equations, neural networks, and classification. As stated in a 2018 SIAM workshop report [1] entitled “Research and Education in Computational Science and Engineering”, “mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society . . .”. This curriculum is intended to nurture and broaden the computing interest of students from diverse disciplines early on, as well as bridge the current gap in preparing them for specialized and advanced study in one or more of the above areas later. To cover a multitude of wide-ranging topics, the curriculum takes a gentle and simplified approach for multi-disciplinary (science, engineering, business, and social science) students who may have limited mathematical background beyond calculus and basic linear algebra. Mathematical derivations and proofs are given when suitable for a first course in numerical computing. The synergy between numerical computing and data science is evident in the development of this curriculum, as pointed out in the 2018 study.
Track 4
ISEB 2020
Chair: Weitao Chen (UCR)
15:25-15:40 Lingyun Ding
Shear dispersion of multispecies electrolyte solutions in channel domain
15:45-16:00 Junyuan Lin
Diffusion-based Metrics for Mining Protein-Protein Interaction Networks with Application to the Disease Module Identification DREAM Challenge
16:05-16:20 Brittany Leathers
The Immersed Boundary Double Layer Method for flows with rigid bodies
16:25-16:40 Samuel Christensen
Physical Analysis of Microfluidic Devices
15:25-15:40 Lingyun Ding
Shear dispersion of multispecies electrolyte solutions in channel domain
In multispecies electrolyte solutions, even in the absence of an external electric field, differences in ion diffusivities can induce an electric potential and generate additional fluxes for each species. This electro-diffusion process is well described by the advection-Nernst-Planck equation. This study aims to analyze the long-time behavior of the governing equation under the electroneutrality and zero-current conditions and to investigate how the diffusion-induced electric potential and the shear flow enhance the effective diffusion coefficients of each species in channel domains. To achieve this goal, we use the homogenization method to derive a reduced model of the advection-Nernst-Planck equation in the channel domain. The effective equation has several interesting properties. First, it generalizes the result of Taylor dispersion, with a nonlinear diffusion tensor taking the place of a scalar diffusion coefficient. Second, the effective equation reveals that the system in the absence of flow is asymptotically equivalent to the system with a strong flow and scaled physical parameters. Furthermore, when the background concentration is much greater than the perturbed concentration, the effective equation reduces to a multidimensional diffusion equation, consistent with classical Taylor dispersion theory. However, when the concentration approaches zero at infinity, the nonlinearity of the equation can result in several phenomena not present in the advection-diffusion equation, including upstream migration of some species, spontaneous separation of ions, and a non-monotonic dependence of the effective diffusivity on the Peclet number. Lastly, the dependence of the effective diffusivity on concentration and ion diffusivity suggests a method to infer the concentration ratio of each component and the ion diffusivities by measuring the effective diffusivity.
15:45-16:00 Junyuan Lin
Diffusion-based Metrics for Mining Protein-Protein Interaction Networks with Application to the Disease Module Identification DREAM Challenge
In this project, we present the award-winning method that ranked No. 1 in the Disease Module Identification DREAM international bioinformatics challenge. We defined the Diffusion State Distance metric on Protein-Protein Interaction networks to measure proximity and applied a modified Algebraic Multigrid (AMG) method to calculate the distance between each pair of nodes efficiently. Finally, we applied spectral clustering to partition the protein network into modules to predict their functionalities. This is joint work led by Lenore Cowen (Tufts) and Xiaozhe Hu (Tufts), and our team was invited by the data challenge organizers to publish the results in Nature Methods.
16:05-16:20 Brittany Leathers
The Immersed Boundary Double Layer Method for flows with rigid bodies
The Immersed Boundary (IB) method is useful for problems that involve fluid-structure interactions or complex geometries. By using a regular Cartesian grid that is independent of the geometry, the IB framework yields a robust scheme that can efficiently handle immersed deformable structures. The IB method has also been adapted to problems with prescribed motion. IB methods for these problems traditionally involve penalty forces or they are formulated as constraint problems. In the latter approach, one must find the unknown forces by solving an equation that corresponds to a poorly conditioned first-kind integral equation. This operation can require a large number of iterations of a Krylov method, and since a time-dependent problem requires this solve at each time step, this method can be prohibitively inefficient without preconditioning. In this talk, we introduce a new, well-conditioned IB formulation for flows with rigid bodies, which we call the Immersed Boundary Double Layer (IBDL) method. In this formulation, the equation for the unknown boundary distribution corresponds to a well-conditioned second-kind integral equation that can be solved efficiently with a small number of iterations of a Krylov method without preconditioning. Furthermore, the iteration count is independent of both the mesh size and boundary point spacing. Additionally, while the original constraint method applies only to Dirichlet problems, the IBDL formulation can also be used for Neumann problems.
16:25-16:40 Samuel Christensen
Physical Analysis of Microfluidic Devices
Within microcentrifuge devices, a microfluidic vortex separates larger particles from a heterogeneous suspension using inertial migration, a phenomenon that causes particles to migrate across streamlines. The ability to selectively capture particles based on size differences of a few microns makes microcentrifuges useful diagnostic tools for trapping rare cells within blood samples. However, rational design of microcentrifuges has been held back from its full potential by a lack of quantitative modeling of particle capture mechanics. Here we use an asymptotic method, in which particles are accurately modeled as singularities in a linearized flow field, to rapidly calculate particle trajectories within microcentrifuges. Our predictions for trapping thresholds and trajectories agree well with published experimental data. Our results clarify how capture reflects a balance between advection of particles within a background flow and their inertial focusing, and show why the close proximity of trapped and untrapped incoming streamlines makes it challenging to design microcentrifuges with sharp trapping thresholds.
Track 5
ISEB 4020
Chair: Kristin Kurianski (CSUF)
15:25-15:40 Anuradha Agarwal
Modeling Spatial–Temporal Distribution of HIV Particles on Cervicovaginal Mucus (CVM)
15:45-16:00 Mayte Bonilla-Quintana
Biophysical modeling of shape changes in the postsynaptic spine
16:05-16:20 Alexander Klotz
Biophysics-inspired topology
16:25-16:40 Zirui Zhang
Parameter Inference in Diffusion-Reaction Models of Glioblastoma Using Physics-Informed Neural Networks
15:25-15:40 Anuradha Agarwal
Modeling Spatial–Temporal Distribution of HIV Particles on Cervicovaginal Mucus (CVM)
Human Immunodeficiency Virus (HIV) epidemics remain devastating around the world. Since there is no cure for HIV, preventive therapy has received tremendous attention. To find the immune cells that are its primary target, the virus needs to cross the cervicovaginal mucus (CVM) layer, which acts as a barrier preventing the virus from moving freely. Drug-filled nanoparticles that destroy viruses in the CVM are one of the essential preventive therapies. In this study, we develop mathematical models to describe how the virus is transported through the CVM and how this transport is affected by the CVM’s acidity. Since the motion of the virus in acidic CVM is hindered, accurate modeling must incorporate the hindrance due to adherence in acidic media. We model the temporal dynamics of the virus concentration using two model components, diffusion and hindrance, where diffusion is modeled using Fick’s law and hindrance is modeled with pH dependence. We will use our model to evaluate the effects of nanoparticle-based therapy on virus distribution and transport across the CVM. Our objective is to show that the proper implementation of nanoparticle-based therapy can significantly control virus entry through the CVM, thereby preventing the establishment of HIV infection. Such preventive approaches can help curb the global HIV epidemic.
15:45-16:00 Mayte Bonilla-Quintana
Biophysical modeling of shape changes in the postsynaptic spine
Mathematical models allow us to investigate the behavior of complex biological phenomena for which experiments are challenging to perform and are limited to observing one event at a time. Particularly, models can enhance our knowledge of the biochemical and mechanical mechanisms underlying memory and learning through changes in the strength of the connections between neurons (synapses). Interestingly, strength changes in the synapse correlate with shape changes. In this work, we developed a 3D model of the postsynaptic spine, the part of the neuron that receives input from other neurons. Our model describes the interaction between proteins through partial differential equations with moving boundaries [1]. In this model, the shape of the postsynapse is dictated by an imbalance between the force generated by chemical reactions of the proteins and a force generated by the membrane, which counteracts deformations. Such a model allows us to combine different experimental observations and study the postsynapse shape changes arising from input changes. Finally, we generate experimentally testable predictions on the features of the synapse that promote its efficient function.
16:05-16:20 Alexander Klotz
Biophysics-inspired topology
While most DNA has either linear or circular topology, a variety of exotic DNA topologies exist, including branched, knotted, and linked molecular architectures. Knotted DNA is investigated as a model system to study polymer entanglement, and trypanosome parasites store their mitochondrial DNA in a topologically linked chainmail network called a kinetoplast. Here, I will discuss a few recent mathematical results inspired by the biophysics of topologically complex DNA. These include investigations into the ropelength of complex knots (the relationship between the crossing number and minimum contour length of a physical knot), as well as the percolation threshold of Borromean networks (inseparable linked-ring systems in which no two rings share a common topological link).
16:25-16:40 Zirui Zhang
Parameter Inference in Diffusion-Reaction Models of Glioblastoma Using Physics-Informed Neural Networks
Glioblastoma is an aggressive brain tumor that proliferates and infiltrates into the surrounding normal brain tissue. The growth of glioblastoma is commonly modeled mathematically by diffusion-reaction type partial differential equations (PDEs). These models can be used to predict tumor progression and guide treatment decisions for individual patients. However, this requires parameters and brain anatomies that are patient-specific. Inferring patient-specific biophysical parameters from medical scans is a very challenging inverse modeling problem because of the lack of temporal data, the complexity of the brain geometry, and the need to perform the inference rapidly in order to limit the time between imaging and diagnosis. Physics-informed neural networks (PINNs) have emerged as a new method to solve PDE parameter inference problems efficiently. PINNs embed both the data and the PDE into the loss function of the neural network via automatic differentiation, thus seamlessly integrating the two. In this work, we use PINNs to solve the diffusion-reaction PDE model of glioblastoma and infer biophysical parameters from numerical data. The complex brain geometry is handled by the diffuse domain method. We demonstrate the efficiency, accuracy, and robustness of our approach.
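Schematically (our notation, not necessarily the talk's exact formulation), a PINN for a Fisher-KPP-type glioblastoma model $u_t = \nabla\cdot(D\nabla u) + \rho\, u(1-u)$ minimizes a composite loss in which the unknown biophysical parameters $D$ and $\rho$ are trained jointly with the network weights $\theta$:

```latex
\mathcal{L}(\theta, D, \rho)
  = \frac{1}{N_d}\sum_{i=1}^{N_d}\bigl|u_\theta(x_i,t_i)-u_i\bigr|^2
  + \frac{\lambda}{N_r}\sum_{j=1}^{N_r}
    \Bigl|\partial_t u_\theta - \nabla\cdot\bigl(D\,\nabla u_\theta\bigr)
          - \rho\, u_\theta\bigl(1-u_\theta\bigr)\Bigr|^2_{(x_j,t_j)}
```

Here the first sum is the data misfit at the $N_d$ measurement points, the second is the PDE residual (computed by automatic differentiation) at $N_r$ collocation points, and $\lambda$ balances the two terms.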
Track 6
ISEB 5020
Chair: Sui Tang (UCSB)
15:25-15:40 Charles Kulick
Scalable Agent-Based Modeling with Gaussian Processes
15:45-16:00 Nathaniel Linden
Multimodel modeling: Accounting for model uncertainty in biology with multiple models
16:05-16:20 Krista Faber
Mathematics side of GOATA
16:25-16:40 Alexander Mayer
The Role of RNA Condensation in Reducing Gene Expression Noise
15:25-15:40 Charles Kulick
Scalable Agent-Based Modeling with Gaussian Processes
We approach the data-driven learning problem for a general second-order ODE agent-based system (potentially with multiple species). By modeling with interaction kernels, we can use a Gaussian process approach to learn a nonparametric model for the dynamical system with built-in uncertainty quantification. In this talk, we develop the modeling framework, present theoretical analysis of the learning methodology, and report empirical investigations into scalability and practicality using a biological predator-prey model.
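Gaussian process regression with built-in uncertainty quantification can be sketched in a few lines. The squared-exponential kernel, the length scale, and the sine function standing in for an unknown interaction kernel below are illustrative placeholders, not the speaker's model:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # squared-exponential kernel k(a, b) = sf^2 exp(-(a - b)^2 / (2 ell^2))
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    # standard GP posterior: mean and pointwise standard deviation
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x = np.linspace(0, 4, 9)
y = np.sin(x)  # stand-in for an unknown interaction kernel to be recovered
mean, std = gp_predict(x, y, np.array([1.7, 8.0]))
print(mean, std)
```

Near the training data (x = 1.7) the posterior mean tracks the underlying function and the predictive standard deviation is small; far outside it (x = 8.0) the prediction reverts to the prior and the uncertainty grows, which is the built-in uncertainty quantification the abstract refers to.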
15:45-16:00 Nathaniel Linden
Multimodel modeling: Accounting for model uncertainty in biology with multiple models
Uncertainty in a model formulation due to modeling assumptions or unknown system mechanisms is often overlooked when applying mathematical models in biology and medicine. For instance, in diabetes diagnostics, mathematical models have long been used to infer metabolic health metrics from available clinical data. However, these approaches often rely on a single model to approximate the physiological system, ignoring model uncertainty. Usually, a family of models based on varied assumptions and formulations is available to represent one biological system. Given a family of models, the standard practice is to select the single best model based on rankings by information metrics that weigh the quality of fit to the data (i.e., prediction error) against model complexity (the number of parameters). In this work, we instead focus on leveraging the whole family of models to develop robust predictors in the face of model uncertainty. We compare several approaches, including Bayesian model averaging and probability distribution fusion, to create robust predictors based on all models in a model family. In this presentation, we highlight and demonstrate these methods to predict metrics of metabolic health from available clinical measurements.
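One common way to average over a model family rather than select a single winner is to weight each model's prediction by its Akaike weight, derived from the same information metrics used for ranking. The AIC scores and point predictions below are hypothetical, and this is only one of the fusion strategies the abstract mentions:

```python
import math

def akaike_weights(aics):
    # w_i proportional to exp(-(AIC_i - AIC_min) / 2); weights sum to 1
    m = min(aics)
    raw = [math.exp(-(a - m) / 2) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

# hypothetical AIC scores and point predictions from three candidate models
aics = [100.0, 102.0, 110.0]
preds = [5.1, 4.8, 6.0]

w = akaike_weights(aics)
averaged = sum(wi * p for wi, p in zip(w, preds))
print(w, averaged)
```

Models close in AIC share the weight, so the averaged prediction hedges between them, while a clearly worse model (here the third) contributes almost nothing.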
16:05-16:20 Krista Faber
Mathematics side of GOATA
Every day, people walk with their feet pointed outward, hips tucked in, and backs hunched over. This leads to fractures, poor bone health, and muscle deficiency. Luckily, there is GOATA, which stands for Greatest Of All Times Athletics. Though we may not all be athletes, we can still adjust our bodies to live longer via fractals and 22.5-degree angles.
16:25-16:40 Alexander Mayer
The Role of RNA Condensation in Reducing Gene Expression Noise
Biomolecular condensates have been shown to play a fundamental role in localizing biochemistry in a cell. RNA is a common constituent of condensates and can determine their biophysical properties. The functions of biomolecular condensates are varied, including activating, inhibiting, and localizing reactions. Recent theoretical work has shown that the phase separation of proteins into droplets can diminish cell-to-cell variability in protein abundance. However, the extent to which phase separation involving mRNAs may also buffer noise has yet to be explored. Here, we introduce a phenomenological model for the phase separation of mRNAs into RNP condensates and quantify noise suppression as a function of gene expression kinetic parameters. Through stochastic simulations, we highlight the ability of condensates formed from just a handful of mRNAs to regulate the abundance and suppress the fluctuations of proteins. We place particular emphasis on how this mechanism can facilitate efficient transcription by reducing noise even in the situation of infrequent transcriptional bursts, by exploiting the physics of a concentration-dependent, deterministic phase separation threshold. We investigate two biologically relevant models in which phase separation acts either to "buffer" noise by storing mRNA in inert droplets or to "filter" phase-separated mRNAs by accelerating their decay, and quantify expression noise as a function of kinetic parameters. In either case, the most efficient expression occurs when bursts produce mRNAs close to the phase separation threshold, which we find to be broadly consistent with observations of an RNP-droplet-forming cyclin in multinucleate Ashbya gossypii cells. We finally consider the contribution of noise in the phase separation threshold, and show that protein copy-number noise can be efficiently suppressed by phase separation threshold fluctuations in certain conditions.
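A bare-bones stochastic (Gillespie-type) simulation of the "buffer" model described in the abstract might look like the following. The threshold K and all kinetic rates are made-up illustrative values, and bursts are drawn uniformly rather than geometrically for simplicity; this is a sketch of the mechanism, not the authors' model:

```python
import random

def simulate_buffer(T=200.0, K=5, kb=0.5, burst=4, gm=0.2, kp=2.0, gp=0.1, seed=1):
    """Bursty transcription where mRNAs above the phase-separation threshold K
    are stored in inert droplets (the 'buffer' model): only free, dilute-phase
    mRNA (at most K copies) is translated or degraded."""
    rng = random.Random(seed)
    t, m, p = 0.0, 0, 0          # time, total mRNA, protein
    free_trace, p_trace = [], []
    while t < T:
        free = min(m, K)         # dilute-phase mRNA is capped at the threshold
        rates = [kb,             # transcriptional burst arrival
                 gm * free,      # decay of free mRNA (droplet mRNA is inert)
                 kp * free,      # translation from free mRNA only
                 gp * p]         # protein decay
        total = sum(rates)
        t += rng.expovariate(total)
        u = rng.uniform(0.0, total)
        if u < rates[0]:
            m += rng.randint(1, burst)   # burst of new transcripts
        elif u < rates[0] + rates[1]:
            m -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            p += 1
        else:
            p -= 1
        free_trace.append(min(m, K))
        p_trace.append(p)
    return free_trace, p_trace

free_trace, p_trace = simulate_buffer()
print(max(free_trace), sum(p_trace) / len(p_trace))
```

Because translation is driven by the capped free-mRNA pool rather than the total count, large bursts are absorbed into droplets and the effective translation rate saturates at the threshold, which is the noise-buffering effect the abstract quantifies.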
16:45-17:05 Coffee Break, ISEB Outdoor Plaza
Plenary Session III & Poster Awards
ISEB 1010
Chair: Anna Ma (UC Irvine)
17:05-17:45 Treena Basu (Occidental College)
An Unusual Application of Machine Learning: The Educational Data Mining Context
17:50-18:00 Poster Awards and Closing Remarks