Rough Stochastic Differential Equations with Jumps
The theory of rough paths provides a framework for the study of nonlinear systems driven by highly oscillatory (deterministic) signals. The corresponding analysis is inherently distinct from that of classical stochastic calculus, and neither theory alone is able to satisfactorily handle hybrid systems driven by both rough and stochastic noise (as sometimes appear, e.g., in stochastic filtering, pathwise stochastic control, and mixed Black-Scholes models). The introduction of the stochastic sewing lemma (Khoa Lê, 2020) has paved the way for a theory which can efficiently handle such hybrid systems. In this talk, we will discuss how this can be done in a general setting which allows for jump discontinuities in both sources of noise. The talk is based on joint work with Jost Pieper.
Ariel Neufeld, Nanyang Technological University
Markov Decision Processes under Model Uncertainty
In this talk we introduce a general framework for Markov decision problems under model uncertainty in a discrete-time infinite horizon setting. By providing a dynamic programming principle we obtain a local-to-global paradigm: solving a local (i.e., one-time-step) robust optimization problem leads to an optimizer of the global (i.e., infinite-time-step) robust stochastic optimal control problem, as well as to a corresponding worst-case measure. Moreover, we apply this framework to portfolio optimization involving data of the S&P 500. We present two different types of ambiguity sets; one is fully data-driven, given by a Wasserstein ball around the empirical measure, while the second is described by a parametric set of multivariate normal distributions, where the corresponding uncertainty sets of the parameters are estimated from the data. It turns out that in scenarios where the market is volatile or bearish, the optimal portfolio strategies from the corresponding robust optimization problem outperform the ones without model uncertainty, showcasing the importance of taking model uncertainty into account. This talk is based on joint work with Julian Sester and Mario Sikic.
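The local-to-global paradigm hinges on the one-time-step robust optimization. A minimal sketch of that local step, assuming a finite state/action space and a finite ambiguity set of transition kernels (simplifications of the talk's general setting; the function and its arguments are our own illustrative names):

```python
def robust_bellman_step(value, reward, ambiguity_set, actions, discount=0.9):
    """One robust Bellman update: for each state, maximize over actions the
    worst-case expected reward-to-go over a finite ambiguity set of kernels.

    value: dict state -> current value estimate.
    reward: callable (state, action) -> float.
    ambiguity_set: list of kernels; each maps (state, action) to a dict
        next_state -> probability.
    Returns the updated value function and the maximizing action per state."""
    new_value, policy = {}, {}
    for s in value:
        best = None
        for a in actions:
            worst = min(
                sum(p * (reward(s, a) + discount * value[t])
                    for t, p in kernel(s, a).items())
                for kernel in ambiguity_set)
            if best is None or worst > best:
                best, policy[s] = worst, a
        new_value[s] = best
    return new_value, policy
```

Iterating this update is exactly the local-to-global idea: since the min-max update is a discounted contraction, repeated local steps converge to the global robust value function and a worst-case kernel.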
Bruno Bouchard, Paris Dauphine University
A \(C^1\)-Itô Formula for Flows of Semimartingale Distributions
We provide an Itô formula for \(C^1\)-functionals on the flows of conditional marginal distributions of a continuous semimartingale. This is based on the notion of the weak Dirichlet process and extends the \(C^1\)-Itô formula in Gozzi and Russo (2006) to functionals which also depend on marginal distributions of semimartingales. As a first application, we study a class of McKean-Vlasov optimal control problems and establish a verification theorem that requires only \(C^1\)-regularity of the value function, which is equivalently the (viscosity) solution of the associated HJB master equation. Co-authors: Xiaolu Tan and Jixin Wang, Department of Mathematics, The Chinese University of Hong Kong.
Christoph Czichowsky, London School of Economics
Equilibrium Asset Pricing with Stochastic Investment Opportunities and Proportional Transaction Costs
In view of the increasing automation of financial markets, understanding the tradeoff between frequent trading and transaction costs becomes increasingly important. In this talk, we consider the impact of proportional transaction costs on risk-sharing equilibria between two agents with mean-variance preferences. The state of the economy is modelled by an underlying diffusion process leading to stochastic investment opportunities for the agents. Our model allows us to observe new effects resulting from the interplay between the transaction costs and the stochastic investment opportunities. In particular, this requires solving a new two-dimensional free boundary problem rather than the one-dimensional versions that appear in models with constant coefficients. The talk is based on joint work with Justin Gwee and Mihalis Zervos.
David Prömel, University of Mannheim
Well-Posedness of Stochastic Volterra Equations with Non-Lipschitz Coefficients
The study of stochastic Volterra equations with non-Lipschitz continuous coefficients has recently attracted considerable attention, motivated by their successful application as well-suited volatility models in mathematical finance. While stochastic Volterra equations with Lipschitz continuous coefficients are well-studied integral equations, many fundamental questions remain open in the case of less regular coefficients. In this talk we discuss the existence of strong and weak solutions as well as pathwise uniqueness for stochastic Volterra equations with time-inhomogeneous non-Lipschitz coefficients. The talk is based on joint work with David Scheffels.
Fenghui Yu, Delft University of Technology
Execution Probabilities in a Limit Order Book with State-Dependent Order Flows
We delve into the computation of execution probabilities for limit orders positioned at various price levels within the limit order book, a critical aspect in execution optimization. We adopt a generic stochastic model to capture the dynamics of the order book as a series of queueing systems. This generic model is state-dependent and encompasses stylized factors. We subsequently derive semi-analytical expressions to compute the relevant probabilities within the context of state-dependent stochastic order flows. These probabilities cover various scenarios, including the probability of a change in the mid-price, the execution probabilities of orders posted at the best quotes, and those posted at a price level deeper in the book, before the opposite best quote moves. Lastly, we conduct extensive numerical experiments using real order book data from the foreign exchange spot market.
Guojing Wang, Soochow University
Pricing CDS Index Tranches under Contagion Model with Regime Switching
The contagion credit risk model is usually used to describe the contagion effect among different financial institutions. In this paper, we introduce a default contagion model in which the default process is described by a Cox process with regime switching, and the default intensity process increases by a positive jump once a considered firm defaults. We derive closed-form expressions for the distribution of the default times and for the pricing formulas of CDS index tranches. Based on market data, we present some numerical results for the CDS index tranches. (Joint work with Jiayuan Qian.)
Hoi Ying Wong, The Chinese University of Hong Kong
Robust Dividend Policy: Equivalence of Epstein-Zin and Maenhout Preferences
The classic optimal dividend problem aims to maximize the expected discounted dividend stream over the lifetime of a company. Since dividend payments are irreversible, this problem corresponds to a singular control problem with a risk-neutral utility function applied to the discounted dividend stream. In cases where the company's surplus process encounters model ambiguity under the Brownian filtration, we explore robust dividend payment strategies in worst-case scenarios. Two possible robust preferences are examined and compared. Using Epstein-Zin (EZ) preferences, we formulate the robust dividend problem as a recursive utility function with the EZ aggregator within a singular control framework. We investigate the existence and uniqueness of solutions to the EZ dividend problem. By employing backward stochastic differential equation (BSDE) theory, we demonstrate that the EZ formulation is equivalent to the maximin problem involving risk-neutral utility on the discounted dividend stream, incorporating Maenhout's regularization, which reflects investors' ambiguity aversion. In other words, we establish a connection between ambiguity aversion in a robust singular control problem and risk aversion in EZ preferences. Additionally, considering the equivalent Maenhout preferences, we solve the robust dividend problem using a Hamilton-Jacobi-Bellman approach combined with a variational inequality (VI). Our solution is obtained through a novel shooting method that simultaneously satisfies the VI and the boundary conditions. This is a joint work with Kexin Chen and Kyunghyun Park.
Jean-Pierre Fouque, University of California, Santa Barbara
Reinforcement Learning for Mean Field Game and Control Problems
We present our recent results on multi-scale reinforcement learning algorithms for mean field game and mean field control problems in finite state and action spaces. The proof of convergence sheds some light on the difference between MFG and MFC. Joint work with Andrea Angiuli, Mathieu Laurière, and Mengrui Zhang.
Jianfeng Zhang, University of Southern California
Set Values for Nonzero Sum Games
Nonzero sum games typically have multiple Nash equilibria, and more importantly, different equilibria may induce different values. In this talk we propose to study the set of values over all possible equilibria, called the set value of games. This set value is by nature unique, and it shares many nice properties of the value function for a standard control problem. In particular, it satisfies the dynamic programming principle and the regularity/stability. We shall also discuss how to characterize the dynamic set value function of games through set valued Hamiltonians and set valued PDEs. The talk is based on a series of works, joint with Feinstein, Iseri, Qiao, and Rudloff.
Jingjie Zhang, University of International Business and Economics
A Quantitative Comparison of Unemployment Benefit Extension and Level Increase
We use mean-field game theory to quantitatively compare two unemployment insurance (UI) extension policies commonly used during recessions: raising benefit levels versus extending the duration of benefits. Our heterogeneous-agent model features costly job search and individual savings. Our calibrated model matches the savings distribution observed in the U.S. After requiring that the two UI policies have the same cost, we find: (1) a large, one-time lump-sum payment results in lower unemployment rates relative to a moderate, long-lived increase in benefits, and (2) both policies deliver approximately the same ex-ante expected utility to unemployed individuals. (Joint work with Erhan Bayraktar and Indrajit Mitra)
Johannes Muhle-Karbe, Imperial College London
Concave Cross Impact
The price impact of large orders is well known to be a concave function of trade size. We discuss how to extend models consistent with this “square-root law” to multivariate settings with cross impact, where trading each asset also impacts the prices of the others. In this context, we derive consistency conditions that rule out price manipulation, discuss how cross impact affects optimal trading strategies, and illustrate these results using CFM metaorder data. (Joint work in progress with Natascha Hey and Iacopo Mastromatteo)
Johannes Wiesel, Carnegie Mellon University
Empirical Martingale Projections via the Adapted Wasserstein Distance
Given a collection of multidimensional pairs \((X_i, Y_i)_{1\leq i\leq n}\), we study the problem of projecting the associated suitably smoothed empirical measure onto the space of martingale couplings (i.e. distributions satisfying \(\mathbb{E}[Y|X]=X\)) using the adapted Wasserstein distance. We call the resulting distance the smoothed empirical martingale projection distance (SE-MPD), for which we obtain an explicit characterization. We also show that the space of martingale couplings remains invariant under the smoothing operation. We study the asymptotic limit of the SE-MPD, which converges at a parametric rate as the sample size increases if the pairs are either i.i.d. or satisfy appropriate mixing assumptions. Finite-sample results are also investigated. Using these results, we introduce a novel consistent martingale coupling hypothesis test, which we apply to test the existence of arbitrage opportunities in recently introduced neural network-based generative models for asset pricing calibration.
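The martingale constraint \(\mathbb{E}[Y|X]=X\) behind the projection can be probed on a finite sample by a naive binning diagnostic. This is not the SE-MPD itself, only an illustration of the kind of deviation the hypothesis test is sensitive to; the function name and binning scheme are our own:

```python
def martingale_deviation(pairs, n_bins=10):
    """Crude proxy for the martingale property E[Y|X] = X in one dimension:
    bin the sample on X and average |mean(Y in bin) - mean(X in bin)|.
    Small values are consistent with a martingale coupling; this is NOT
    the adapted-Wasserstein projection distance of the talk."""
    xs = sorted(p[0] for p in pairs)
    lo, hi = xs[0], xs[-1]
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for x, y in pairs:
        i = min(int((x - lo) / width), n_bins - 1)
        bins[i].append((x, y))
    devs = [abs(sum(y for _, y in b) / len(b) - sum(x for x, _ in b) / len(b))
            for b in bins if b]
    return sum(devs) / len(devs)
```

On a sample with \(Y = X + \varepsilon\), \(\mathbb{E}[\varepsilon|X]=0\), the statistic vanishes as the sample grows, while a systematic drift in \(Y\) keeps it bounded away from zero.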
Juan Li, Shandong University
Mean field stochastic control problems under sublinear expectation
In this talk we study Pontryagin's stochastic maximum principle for a mean-field optimal control problem under Peng's \(G\)-expectation. The dynamics of the controlled state process are given by a stochastic differential equation driven by a \(G\)-Brownian motion, whose coefficients depend not only on the control and the controlled state process but also on its law under the \(G\)-expectation. The associated cost functional is also of mean-field type. Under the assumption of a convex control state space we study the stochastic maximum principle, which gives a necessary optimality condition for control processes. Under additional convexity assumptions on the Hamiltonian, it is shown that this necessary condition is also sufficient. The main difficulty we have to overcome consists in the differentiation of the \(G\)-expectation of parameterized random variables. Based on joint work with Rainer Buckdahn (UBO, France) and Bowen He (SDU, China).
Julien Claisse, Paris Dauphine University
Mean-field Optimization Regularized by Fisher Information
Recently there has been rising interest in mean-field optimization, in particular because of its role in analyzing the training of neural networks. In this talk, by adding the Fisher information (in other words, the Schrödinger kinetic energy) as a regularizer, we relate the mean-field optimization problem to a so-called mean-field Schrödinger (MFS) dynamics. We develop a free energy method to show that the marginal distributions of the MFS dynamics converge exponentially quickly towards the unique minimizer of the regularized optimization problem. We shall see that the MFS dynamics is a gradient flow on the space of probability measures with respect to the relative entropy. Finally, we propose a Monte Carlo method to sample the marginal distributions of the MFS dynamics. This is a joint work with Giovanni Conforti, Zhenjie Ren and Songbo Wang.
Lan Wu, Peking University
TBA
TBA
Martin Schweizer, ETH Zurich
A Stochastic Fubini Theorem via Measure-Valued Integrands
We present a new stochastic Fubini theorem in a setting where the stochastic Fubini property is obtained in a weak sense. This involves defining a stochastic integral, with respect to a fixed semimartingale, of measure-valued integrand processes. Potential applications will perhaps be mentioned.
Mathieu Rosenbaum, École Polytechnique
The Two Square Root Laws of Market Impact and the Role of Sophisticated Market Participants
The goal of this work is to disentangle the roles of volume and participation rate in the price response of the market to a sequence of orders. To do so, we use an approach where price dynamics are derived from the order flow via no arbitrage constraints and make connections with the rough volatility paradigm. We also introduce in the model sophisticated market participants having superior abilities to analyse market dynamics. Our results lead to two square root laws of market impact, with respect to executed volume and with respect to participation rate. This is joint work with Bruno Durin and Grégoire Szymanski.
Michael Kupper, University of Konstanz
Discrete Approximation of Risk-Based Pricing under Uncertainty
TBA
Min Dai, The Hong Kong Polytechnic University
Learning Optimal Investment Strategy with Transaction Costs via a Randomized Dynkin Game
We develop a reinforcement learning method to learn the optimal investment strategy in the presence of transaction costs. Exploiting a connection between singular control and a Dynkin game for portfolio choice with transaction costs, we design a reinforcement learning algorithm that learns the optimal strategy through a related randomized Dynkin game, where a regularization term is incorporated to encourage exploration. Numerical results are presented to demonstrate our algorithm.
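The effect of randomizing stopping decisions can be illustrated in a one-player toy version: replace the hard stop/continue maximum in the stopping-value iteration by an entropy-regularized soft maximum, so the stopping rule becomes a probability. This is a sketch under strong simplifying assumptions (finite state space, known model, single player rather than the talk's two-player Dynkin game; all names are hypothetical):

```python
import math

def soft_stopping_values(payoff, kernel, states, discount=0.95, temp=0.1,
                         n_iter=500):
    """Value iteration for an entropy-regularized (randomized) optimal
    stopping problem: V(s) = softmax_temp(payoff(s), discount * E[V]).
    The soft maximum temp*log(exp(a/temp)+exp(b/temp)) encourages
    exploration; as temp -> 0 it recovers the hard stopping problem.
    kernel: state -> dict next_state -> probability."""
    v = {s: 0.0 for s in states}
    for _ in range(n_iter):
        new = {}
        for s in states:
            stop = payoff(s)
            cont = discount * sum(p * v[t] for t, p in kernel(s).items())
            m = max(stop, cont)  # stabilized log-sum-exp soft maximum
            new[s] = m + temp * math.log(
                math.exp((stop - m) / temp) + math.exp((cont - m) / temp))
        v = new
    return v
```

In the model-free setting of the talk, the expectation in the continuation value would be learned from samples rather than computed from a known kernel.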
Nizar Touzi, New York University
Mean Field Games Approach to Systemic Risk
Connections between economic agents are desirable for risk diversification purposes. However, they may also be responsible for default contagion, a major concern at the heart of the financial stability of the system. We develop a model of systemic connections based on the strategic interaction between economic agents. We solve explicitly the mean-field limit of the problem in the case of uncorrelated idiosyncratic risks, and we provide a quasi-explicit solution of the mean field game. This allows us to provide an approximate Nash equilibrium for the finite-population problem. We also solve the case of idiosyncratic risks correlated through a common factor by analyzing an appropriate notion of no-arbitrage in this context.
Paolo Guasoni, Dublin City University
Evaluating and Mitigating Transaction Costs with Recurrent Neural Networks
We develop a method for evaluating and mitigating the effect of transaction costs on trading strategies with many assets. An iteration procedure yields cost-adjusted portfolio returns, enabling the formulation of portfolio-choice problems as the optimization of Recurrent Neural Networks (RNNs). This method reproduces the theoretical results available for one risky asset and the numerical approximations available for two risky assets through finite elements. Crucially, the RNN model scales to several assets and is fully interpretable, as its parameters identify the no-trade region. Importance sampling significantly enhances the model's performance, especially with several assets. An application to equally-weighted funds demonstrates the method's ability to reduce both tracking error and tracking difference from an empirical target. (Joint work with Ran Li.)
Ping Li, Beihang University
Compound Heat-drought Risk and the Issuing Spread of China’s Municipal Corporate Bonds
This paper uses a dynamic copula model to measure compound heat-drought risk and examines its relationship with the issuing spread of China's municipal corporate bonds. Local government financing vehicles that are more likely to be affected by compound heat-drought risk pay higher costs to issue municipal corporate bonds than vehicles unlikely to be affected. Mechanism analyses find that compound heat-drought risk damages the solvency of local government financing vehicles and the implicit guarantee ability of local governments. Higher investor attention increases the spread of municipal corporate bonds. Further, the green coverage rate of a prefecture-level city can mitigate the negative compound heat-drought risk premium. These conclusions are of great significance for understanding the economic consequences of compound heat-drought risk.
Ruixun Zhang, Peking University
On Consistency of Signatures Using Lasso and Applications in Option Pricing
Signature transforms are iterated path integrals of continuous and discrete-time time series data, and their universal nonlinearity linearizes the problem of feature selection. This paper revisits some statistical properties of the signature transform for stochastic integrals within a Lasso regression framework, both theoretically and numerically. Our study shows that, for processes and time series that are closer to Brownian motion or random walks with weaker inter-dimensional correlations, the Lasso regression is more consistent for their signatures defined by Itô integrals; for mean-reverting processes and time series, their signatures defined by Stratonovich integrals yield more consistency in the Lasso regression. We apply these results to learning nonlinear functions and option pricing. Our findings highlight the importance of choosing appropriate definitions of signatures and stochastic models in statistical inference and machine learning. This is joint work with Xin Guo, Binnan Wang, and Chaoyi Zhao.
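The distinction between Itô- and Stratonovich-type signatures can be made concrete for a discrete path: the level-2 iterated sums differ only in whether the left point or the midpoint of each step multiplies the increment. A minimal sketch, truncated at level 2 (the paper's pipeline, which runs a Lasso regression on such features, is not reproduced here):

```python
def signature_level2(path, stratonovich=False):
    """Level-1 and level-2 signature terms of a d-dimensional discrete path.
    path: list of d-tuples.  Returns (level1, level2) where level1[i] is the
    total increment of coordinate i, and level2[(i, j)] is the iterated sum
    over steps of X^i (left point for Ito-type, midpoint for
    Stratonovich-type, relative to the start) times the increment of X^j."""
    d = len(path[0])
    x0 = path[0]
    level1 = [path[-1][i] - x0[i] for i in range(d)]
    level2 = {(i, j): 0.0 for i in range(d) for j in range(d)}
    for k in range(len(path) - 1):
        for i in range(d):
            left = path[k][i] - x0[i]
            mid = 0.5 * (path[k][i] + path[k + 1][i]) - x0[i]
            xi = mid if stratonovich else left
            for j in range(d):
                level2[(i, j)] += xi * (path[k + 1][j] - path[k][j])
    return level1, level2
```

For the Stratonovich-type sums, the shuffle identity \(S^{ij} + S^{ji} = S^i S^j\) holds exactly, which is one way the two definitions lead to different feature sets for the Lasso.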
Sizhou Wu, Nanyang Technological University
Modern numerical methods for semilinear PIDEs using Monte Carlo and random neural networks for high-dimensional option pricing
In this talk, we introduce two modern numerical algorithms to approximate semilinear parabolic partial integro-differential equations (PIDEs). The first algorithm is the so-called multilevel Picard (MLP) approximation based on Monte Carlo simulation, and the other one is the random deep-splitting algorithm using random neural networks. For both algorithms, we provide full error analysis and complexity analysis. Furthermore, we present a numerical example in which both algorithms are applied to price high-dimensional financial derivatives in a jump-diffusion model.
This talk is based on joint works with Ariel Neufeld and Philipp Schmocker.
Tianyang Nie, Shandong University
Indefinite partially observed \(LQ\) mean-field game
We investigate an indefinite linear-quadratic partially observed mean-field game with common noise, where both the state-average and the control-average are considered. All weighting matrices in the cost functional can be indefinite. We obtain the decentralized optimal strategies by the Hamiltonian approach and demonstrate the well-posedness of the Hamiltonian system by virtue of a relaxed compensator. The related consistency condition and the feedback form of the decentralized optimal strategies are derived. Moreover, we prove that the decentralized optimal strategies constitute an \(\varepsilon\)-Nash equilibrium by using the relaxed compensator. The talk is based on joint work with Dr. Tian Chen and Prof. Zhen Wu.
Xiang Yu, The Hong Kong Polytechnic University
Continuous time \(q\)-learning for mean-field control problems
In this talk, we study \(q\)-learning, recently coined as the continuous-time counterpart of \(Q\)-learning by Jia and Zhou (2023), for mean-field control problems in the setting of entropy-regularized reinforcement learning. In contrast to the single agent's control problem in Jia and Zhou (2023), the mean-field interaction of agents renders the definition of the \(q\)-function more subtle, for which we reveal that two distinct \(q\)-functions naturally arise: (i) the integrated \(q\)-function, as the first-order approximation of the integrated \(Q\)-function introduced in Gu, Guo, Wei and Xu (2023), which can be learnt by a weak martingale condition involving test policies; and (ii) the essential \(q\)-function that is employed in policy improvement iterations. We show that the two \(q\)-functions are related via an integral representation under all test policies. Based on the weak martingale condition and our proposed method of searching for test policies, some model-free learning algorithms are devised. In two financial application examples, one within and one beyond the \(LQ\)-control framework, we obtain the exact parameterization of the optimal value function and \(q\)-functions and illustrate our algorithms with simulation experiments. This is a joint work with Xiaoli Wei (Harbin Institute of Technology).
Xianhua Peng, Peking University
Reinforcement learning for financial index tracking
We propose the first discrete-time infinite-horizon dynamic formulation of the financial index tracking problem under both return-based tracking error and value-based tracking error. The formulation overcomes the limitations of existing models by incorporating the intertemporal dynamics of market information variables not limited to prices, allowing exact calculation of transaction costs, accounting for the tradeoff between overall tracking error and transaction costs, allowing effective use of data in a long time period, etc. The formulation also allows novel decision variables of cash injection or withdrawal. We propose to solve the portfolio rebalancing equation using a Banach fixed-point iteration, which allows us to accurately calculate transaction costs specified as nonlinear functions of trading volumes in practice. We propose an extension of a deep reinforcement learning (RL) method to solve the dynamic formulation. Our RL method resolves the issue of data limitation, resulting from the availability of only a single sample path of financial data, through a novel training scheme. A comprehensive empirical study based on a 17-year-long testing set demonstrates that the proposed method outperforms a benchmark method in terms of tracking accuracy and has the potential to earn extra profit through the cash withdrawal strategy. This is a joint work with Chenyin Gong, Xuedong He, and Yi Wu.
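The fixed-point step can be sketched in a single-asset reduction: the trade volume must solve a self-consistency equation, because the nonlinear transaction cost it incurs changes the post-cost wealth being targeted. The cost function and all parameters below are hypothetical placeholders, not the paper's specification:

```python
def rebalance_trade(p0, W0, w, cost, tol=1e-10, max_iter=1000):
    """Banach fixed-point iteration for the trade volume v solving
        v = w * (W0 - cost(v)) - p0,
    i.e. the post-trade position p0 + v equals a target fraction w of
    post-cost wealth.  cost: nonlinear cost of trading volume v (e.g. a
    power law).  Illustrative single-asset reduction of the rebalancing
    equation, not the paper's full multi-variable version."""
    v = w * W0 - p0  # cost-free first guess
    for _ in range(max_iter):
        v_new = w * (W0 - cost(v)) - p0
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    raise RuntimeError("fixed-point iteration did not converge")
```

For moderate cost levels the map is a contraction, so the iteration converges geometrically; this is what makes exact (rather than linearized) cost accounting tractable inside the RL training loop.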
Xiaoli Wei, Harbin Institute of Technology
Continuous-Time \(q\)-Learning for Mean-Field Control Problems with Common Noise
This paper investigates continuous-time entropy-regularized reinforcement learning (RL) for mean-field control problems with common noise. We study the continuous-time counterpart of the \(Q\)-function in the mean-field model, coined as the \(q\)-function in Jia and Zhou (2023) in the single-agent model. It is shown that the controlled common noise gives rise to a nonlocal term of the policy in the exploratory HJB equation, rendering the policy improvement iteration intricate. To devise the model-free RL algorithm, we introduce the integrated \(q\)-function (\(Iq\)-function) on distributions of both state and action, and an optimal policy can be identified as a two-layer fixed point of the argmax operator of the \(Iq\)-function. The martingale characterization of the value function and the \(Iq\)-function is established by exhausting all test policies. This allows us to propose several algorithms, including an Actor-Critic learning algorithm, in which the policy is updated in the Actor step based on the policy improvement rule induced by the linear derivative of the \(Iq\)-function with respect to the action distribution, and the value function and \(Iq\)-function are updated in the Critic step based on the orthogonal martingale loss function involving all test policies. In two examples, within and beyond the LQ-control framework, we implement and compare simulation experiments of our RL algorithms with satisfactory performance. This is a joint work with Zhenjie Ren, Xiang Yu and Xunyu Zhou.
Xiaolu Tan, The Chinese University of Hong Kong
On the Regularity of Solutions of some Linear Parabolic Path-Dependent PDEs
We study a class of linear parabolic path-dependent PDEs (PPDEs) defined on the space of càdlàg paths \(x\), in which the coefficient functions at time \(t\) depend on \(x(t)\) and \(\int_0^t x(s)\, dA_s\), for some (deterministic) continuous function \(A\) of bounded variation. Under uniform ellipticity and Hölder regularity conditions on the coefficients, together with some technical conditions on \(A\), we obtain the existence of a smooth solution to the PPDE by appealing to the notion of Dupire's derivatives. This provides a generalization of the existing literature, which studies the case \(A_t = t\), and complements our previous work on the regularity of approximate viscosity solutions for parabolic PPDEs. As a by-product, we also obtain existence and uniqueness of weak solutions for a class of path-dependent SDEs. This is a joint work with Bruno Bouchard.
Xin Guo, University of California, Berkeley
Alpha Potential Games: A New Paradigm for N-Player Games
Static potential games, pioneered by Monderer and Shapley (1996), are non-cooperative static games in which there exists an auxiliary function, called the static potential function, such that any player's change in utility upon unilaterally deviating from her policy can be evaluated through the change in the value of this potential function. The introduction of the potential function is powerful, as it simplifies the otherwise challenging task of searching for Nash equilibria in multi-agent non-cooperative games: maximizers of potential functions lead to the game's Nash equilibria.
In this talk, we propose a new, analogous framework called the \(\alpha\)-potential game for dynamic \(N\)-player games, with the potential function in the static setting replaced by an \(\alpha\)-potential function. We will present analytical criteria for any game to be an \(\alpha\)-potential game, and identify several important classes of Markov \(\alpha\)-potential games.
We will provide detailed analysis for games with mean-field interactions, distributed games, and crowd aversion games, in which \(\alpha\) is shown to depend on the number of players, the admissible policies, and the cost structure. We will also show the changes of \(\alpha\) from open-loop to closed-loop settings.
Xingye Yue, Soochow University
Trinomial Tree Method for \(G\)-Expectation and Sampling for \(G\)-Normal Random Variables after Measurement
Given a \(1\)-dimensional \(G\)-normal random variable (RV) \(X\), a trinomial tree method is proposed to approximate the \(G\)-expectation \(E[\phi(X)]\) for any test function \(\phi\). The method is stable and convergent, and there is no need to worry about the boundary condition. Furthermore, the whole numerical process actually yields samples for estimating the expectation \(E[\phi(X)]\). As is well known, direct sampling of a \(G\)-normal RV \(X\) is infeasible due to the uncertainty of its distribution, but for a "measurement" \(E[\phi(X)]\), sampling is feasible. This is just like a measurement of a quantum state: before measurement, the state is uncertain and unknown; after measurement, it is definite.
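A trinomial tree with volatility uncertainty \(\sigma \in [\underline\sigma, \overline\sigma]\) can be sketched as follows: because the one-step trinomial expectation is linear in the variance, the supremum over \(\sigma^2\) is attained at an endpoint at every node, so the backward induction only compares two candidates. This is an illustrative scheme in the spirit of the talk, not necessarily its exact method:

```python
import math

def g_expectation(phi, sig_low, sig_high, T=1.0, n=200):
    """Trinomial-tree approximation of the G-expectation E[phi(X)] for a
    1-D G-normal X with volatility uncertainty [sig_low, sig_high] and
    "time horizon" T (so X ~ N(0, [sig_low^2 T, sig_high^2 T])).
    Spatial step dx = sig_high * sqrt(dt) keeps all branch probabilities
    in [0, 1]; at each node the variance maximizing the one-step value is
    chosen, always an endpoint by linearity in sigma^2."""
    dt = T / n
    dx = sig_high * math.sqrt(dt)
    vals = [phi(j * dx) for j in range(-n, n + 1)]  # terminal layer
    for step in range(n):
        new = []
        m = n - step - 1                  # half-width of the earlier layer
        for j in range(-m, m + 1):
            i = j + (n - step)            # index of node j in current layer
            up, mid, down = vals[i + 1], vals[i], vals[i - 1]
            best = None
            for sig in (sig_low, sig_high):
                p = sig * sig * dt / (2 * dx * dx)
                v = p * up + (1 - 2 * p) * mid + p * down
                if best is None or v > best:
                    best = v
            new.append(best)
        vals = new
    return vals[0]
```

Two sanity checks follow from the \(G\)-heat equation: for convex \(\phi(x)=x^2\) the scheme returns \(\overline\sigma^2 T\), and for concave \(\phi(x)=-x^2\) it returns \(-\underline\sigma^2 T\), matching \(E[X^2]=\overline\sigma^2 T\) and \(-E[-X^2]=\underline\sigma^2 T\).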
Yan Zeng, Sun Yat-sen University
The Impact of Intermediaries on Insurance Demand and Pricing
This paper studies the impact on insurance demand and pricing of an independent insurance intermediary who holds a fiduciary duty to an unsophisticated insurance buyer. The intermediary utilizes two remuneration systems, namely a fee-for-advice system and a commission system. Insurance contracting between the buyer (represented by the intermediary) and an insurer is formulated as a Stackelberg insurance game. We obtain the buyer's equilibrium indemnity and the insurer's equilibrium pricing strategy in closed form, and subsequently conduct comparative statics analysis with respect to the intermediary's level of fiduciary duty and remuneration. Our results show that the phenomenon of over-insuring despite high premiums, a long-standing paradox in insurance economics, can also be attributed to the deficiency of the intermediary's fiduciary duty.
Yufei Zhang, Imperial College London
Some Recent Progress in Continuous-Time Reinforcement Learning
Recently, reinforcement learning (RL) has attracted substantial research interest. Much of the attention and success, however, has been for the discrete-time setting. Continuous-time RL, despite its natural analytical connection to stochastic control, has remained largely unexplored, with limited progress. In particular, characterising sample efficiency for continuous-time RL algorithms remains a challenging open problem. In this talk, we will discuss some recent advances in the convergence rate analysis for the episodic linear-convex RL problem, and report regret bounds for various learning algorithms. The approach is probabilistic, involving analysing learning efficiency using concentration inequalities for correlated observations, and quantifying the performance of feedback controls derived from estimated models. The latter is achieved via the stability of the associated forward-backward stochastic differential equation.
Yufeng Shi, Shandong University
Deep learning approaches in numerical computation of backward stochastic differential equation and applications in financial asset pricing
Deep learning can effectively overcome the curse of dimensionality in numerical methods for nonlinear partial differential equations and backward stochastic differential equations (BSDEs), and has become an important research direction in numerical computation in recent years. We study numerical methods for high-dimensional backward doubly stochastic differential equations and high-dimensional mean-field backward doubly stochastic differential equations, in which a deep neural network is introduced as the key step of the numerical solution. Based on the numerical method for BSDEs, we study an option-data-driven g-pricing modeling method and verify the model's performance with SPX options. Moreover, with the help of deep learning's data-mining capabilities, we establish a model for extracting price-change information from high-frequency order books and conduct an empirical study using high-frequency order book data.
Zhenjie Ren, Paris Dauphine University
Self-Interacting Approximation to McKean-Vlasov Long Time Limit
Motivated by the mean-field optimization model of the training of two-layer neural networks, we propose a novel method to approximate the invariant measures of a class of McKean-Vlasov diffusions. We introduce a proxy process that substitutes the mean-field interaction with self-interaction through a weighted occupation measure of the particle's past. If the McKean-Vlasov diffusion is the gradient flow of a convex mean-field potential functional, we show that the self-interacting process converges exponentially towards its unique invariant measure, which is close to that of the McKean-Vlasov diffusion. As an application, we show how to learn the optimal weights of a two-layer neural network by training a single neuron.
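The self-interaction idea can be sketched in a few lines. The toy simulation below is not the construction from the talk: the quadratic mean-field potential, the weights, and all step sizes are illustrative assumptions. It replaces the mean-field term of a one-dimensional gradient-flow diffusion with the mean of the particle's own weighted occupation measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical convex mean-field potential: V(x, mu) = x**2/2 + (alpha/2)*(x - mean(mu))**2,
# whose gradient flow gives drift -x - alpha*(x - mean(mu)).
alpha, sigma, dt, n_steps = 0.5, 1.0, 0.01, 200_000
lam = 0.001          # weight assigned per unit time to the occupation measure

x = 0.0
m = 0.0              # running mean of the weighted occupation measure of the past
w = 0.0              # total weight accumulated so far
samples = []
for k in range(n_steps):
    w_new = w + lam * dt
    m = (w * m + lam * dt * x) / w_new   # update the occupation-measure mean
    w = w_new
    drift = -x - alpha * (x - m)         # self-interaction replaces the mean-field term
    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if k > n_steps // 2:                 # discard burn-in
        samples.append(x)

print(np.mean(samples), np.std(samples))
```

For this potential the limiting measure is a centred Gaussian, so the empirical mean of the post-burn-in samples should be near zero.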
Zhenya Liu, Renmin University of China
When will China’s economy overtake the United States?
The uncertainty of the world economy is increasing due to various kinds of shocks. The competition and power transition between the United States and China, the two largest economies in the world, will impact the international economic order. We construct stochastic process models to predict the GDP paths of both countries. We estimate the model parameters from historical data and, based on first passage time theory, predict when China's GDP will surpass that of the United States. We use a change point detection method to handle different economic regimes. The changes in the model parameters provide valuable economic information, and our predictions become more reliable once the change points are taken into account.
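As a toy illustration of the first-passage-time idea (with made-up drifts, volatilities, and GDP levels, not the estimates from the talk, and without change points), one can model each country's GDP as a geometric Brownian motion and simulate the first time the log GDP ratio crosses zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only (NOT estimated from data):
g_us, s_us = 0.04, 0.02          # US GDP drift / volatility
g_cn, s_cn = 0.07, 0.04          # China GDP drift / volatility
gdp_us0, gdp_cn0 = 25.0, 18.0    # initial GDP levels, trillions (hypothetical)

dt, horizon, n_paths = 0.25, 60.0, 10_000
n_steps = int(horizon / dt)

# The log of the GDP ratio is a Brownian motion with drift (independent shocks assumed)
mu = (g_cn - 0.5 * s_cn**2) - (g_us - 0.5 * s_us**2)
vol = np.sqrt(s_cn**2 + s_us**2)
x0 = np.log(gdp_cn0 / gdp_us0)   # negative: China starts behind

steps = mu * dt + vol * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = x0 + np.cumsum(steps, axis=1)
crossed = paths >= 0.0
hit = crossed.any(axis=1)                      # did the path overtake within the horizon?
times = (np.argmax(crossed, axis=1) + 1) * dt  # first passage time of each hitting path
median_time = np.median(times[hit])
print(f"P(overtake within {horizon:.0f}y) = {hit.mean():.2f}, median passage ~ {median_time:.1f}y")
```

The first passage time of a Brownian motion with drift has an explicit (inverse Gaussian) law, so the Monte Carlo estimate can be checked against the closed form.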
Zihao Gu, Shanghai Jiao Tong University
Quadratic BSDEs with mean reflection driven by G-Brownian motion
In this talk, we present the well-posedness of a class of backward stochastic differential equations driven by G-Brownian motion (G-BSDEs for short) with mean reflection. In particular, the generator is allowed to have quadratic growth in z. Combining a representation of the solution with G-BMO martingale techniques, a fixed-point argument, and the θ-method, we provide existence and uniqueness results for such G-BSDEs under both bounded and unbounded terminal conditions. In addition, the mean constraint can be generalized to a risk constraint for financial applications. This is joint work with Yiqing Lin and Kun Xu.
Ziteng Cheng, University of Toronto
Deep Learning Conditional Distributions on Continuous Spaces
We investigate sample-based learning of conditional distributions on multi-dimensional unit boxes, allowing for different dimensions of the input and output spaces. Our approach leverages grouping data in proximity to varying query points, employing two distinct methods: one based on a fixed-radius ball and the other on nearest neighbors. We establish upper bounds for the convergence rates of both methods and, from these bounds, deduce optimal configurations for the radius and the number of neighbors. To obtain a Lipschitz continuous estimator of the conditional distribution for enhanced certifiability, we incorporate the nearest neighbor method into neural network training. For improved efficiency, the training process utilizes approximate nearest neighbor search with random binary space partitioning. Additionally, it employs the Sinkhorn algorithm and a sparsity-enforced transport plan for computing the training objective. Our empirical findings reveal that, with a suitably designed structure, the neural network displays adaptive continuity. This is joint work with Cyril Benezet (ENSIIE) and Sebastian Jaimungal (University of Toronto).
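A minimal one-dimensional version of the nearest-neighbor ingredient, without the neural network, the approximate search, or the Sinkhorn step, might look as follows; the synthetic data model and the choice k = 200 are illustrative assumptions, not the configurations derived in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data on the unit box: Y | X = x is (illustratively) Gaussian around x, clipped
n = 20_000
X = rng.uniform(size=n)
Y = np.clip(X + 0.1 * rng.standard_normal(n), 0.0, 1.0)

def knn_conditional_cdf(x_query, y_grid, k=200):
    """Empirical conditional CDF of Y given X near x_query, from the k nearest samples."""
    idx = np.argsort(np.abs(X - x_query))[:k]   # k nearest neighbors of the query point
    y_near = np.sort(Y[idx])
    return np.searchsorted(y_near, y_grid, side="right") / k

grid = np.linspace(0.0, 1.0, 101)
cdf = knn_conditional_cdf(0.5, grid)
median = grid[np.searchsorted(cdf, 0.5)]   # conditional median implied by the estimate
print(median)
```

For this synthetic model the conditional median at x = 0.5 is 0.5, which the estimator should recover up to sampling and grid error.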
Zuoquan Xu, The Hong Kong Polytechnic University
State-dependent temperature control for Langevin diffusions
We study the temperature control problem for Langevin diffusions in the context of non-convex optimization. The classical optimal control of such a problem is of the bang-bang type, which is overly sensitive to errors. A remedy is to allow the diffusions to explore other temperature values and hence smooth out the bang-bang control. We accomplish this by a stochastic relaxed control formulation incorporating randomization of the temperature control and regularizing its entropy. We derive a state-dependent, truncated exponential distribution, which can be used to sample temperatures in a Langevin algorithm, in terms of the solution to an HJB partial differential equation. We carry out a numerical experiment on a one-dimensional baseline example, in which the HJB equation can be easily solved, to compare the performance of the algorithm with three other available algorithms in search of a global optimum. This is a joint work with Xuefeng Gao (The Chinese University of Hong Kong) and Xun Yu Zhou (Columbia University).
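The sampling step can be sketched as follows. This toy one-dimensional Langevin loop draws the temperature from a truncated exponential distribution by inverse-CDF sampling; the state-dependent rate used here is a made-up stand-in for the HJB-derived density of the talk, and the objective is an arbitrary non-convex test function:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):        # toy non-convex objective, global minimum near x = -1.12
    return 0.25 * x**4 - 0.5 * x**2 + 0.3 * x

def grad_f(x):
    return x**3 - x + 0.3

def sample_truncated_exp(rate, lo, hi, rng):
    """Inverse-CDF sample from an exponential density truncated to [lo, hi]."""
    u = rng.uniform()
    z = np.exp(-rate * lo) - u * (np.exp(-rate * lo) - np.exp(-rate * hi))
    return -np.log(z) / rate

x, dt = 2.0, 0.01
best_x, best_f = x, f(x)
for _ in range(50_000):
    # Illustrative state-dependent rate: smaller rate (hotter on average)
    # where the gradient is large -- NOT the HJB solution from the talk.
    rate = 1.0 / (0.1 + abs(grad_f(x)))
    temp = sample_truncated_exp(rate, 0.01, 2.0, rng)
    x += -grad_f(x) * dt + np.sqrt(2.0 * temp * dt) * rng.standard_normal()
    if f(x) < best_f:
        best_x, best_f = x, f(x)
print(best_x, best_f)
```

Starting in the basin of the local minimum near x ≈ 0.8, the randomized temperature lets the chain cross the barrier and locate the global minimizer near x ≈ -1.12.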