Chaospy - Toolbox for performing uncertainty quantification.
Providing reproducibility in deep learning frameworks
MATLAB/Octave library for stochastic optimization algorithms: Version 1.0.20
A model-free Monte Carlo approach to price and hedge American options, equipped with the Heston model, OHMC, and LSM
Riemannian stochastic optimization algorithms: Version 1.0.3
[AAAI 2020 Oral] Low-variance Black-box Gradient Estimates for the Plackett-Luce Distribution
Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables. (ICLR 2019)
Pricing and analysis of a financial derivative issued by Credit Suisse, using Monte Carlo simulation, geometric Brownian motion, the Heston model, and the CIR model; estimates Greeks such as delta and gamma, and incorporates a local volatility model with variance reduction (for the MH4518 project).
A platform for distributed optimization experiments using OpenMPI
Simple MATLAB toolbox for deep learning network: Version 1.0.3
Framework to model two stage stochastic unit commitment optimization problems.
In this paper, we propose Filter Gradient Descent (FGD), an efficient stochastic optimization algorithm that makes consistent estimates of the local gradient by solving an adaptive filtering problem with different filter designs.
Code for the ICML 2024 paper: "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models"
Introduction to options pricing theory and advanced numerical methods for pricing both vanilla and exotic options.
Variance reduction in energy estimators accelerates the exponential convergence in deep learning (ICLR'21)
PyTorch implementation for "Differentiable Antithetic Sampling for Variance Reduction in Stochastic Variational Inference" (https://arxiv.org/abs/1810.02555).
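Several repositories above rely on antithetic sampling. A minimal, self-contained sketch of the basic technique (not code from any linked repository; the function name and signature are illustrative): each uniform draw `u` is paired with its mirror `1 - u`, and averaging the two negatively correlated evaluations lowers the variance of the Monte Carlo estimate.

```python
import numpy as np

def antithetic_estimate(f, n_pairs, seed=0):
    """Estimate E[f(U)], U ~ Uniform(0, 1), using antithetic pairs (u, 1 - u)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_pairs)
    # f(u) and f(1 - u) are negatively correlated for monotone f,
    # so their average has lower variance than two independent draws.
    return np.mean((f(u) + f(1.0 - u)) / 2.0)

# Example: E[exp(U)] = e - 1
est = antithetic_estimate(np.exp, 100_000)
```

For monotone integrands like `exp`, the antithetic estimator typically cuts the variance by well over an order of magnitude relative to plain Monte Carlo with the same number of function evaluations.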
Chance-constrained control and pricing for natural gas networks using Julia/JuMP.
Reproduced PyTorch implementation for ICML 2017 Paper "Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning."
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning. NeurIPS, 2022
This project focuses on applying advanced simulation methods for derivatives pricing. It includes Monte Carlo simulation, variance reduction techniques, distribution sampling methods, Euler schemes, and Milstein schemes.
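As a concrete illustration of the variance reduction techniques mentioned in several of these derivatives-pricing projects, here is a hedged sketch of a control variate for a European call under geometric Brownian motion (the function and parameter names are this sketch's own, not any repository's API). The discounted terminal price serves as the control, since its risk-neutral expectation equals the spot price `S0`:

```python
import numpy as np

def mc_call_control_variate(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price of a European call with a control variate."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal price under risk-neutral GBM dynamics.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    disc = np.exp(-r * T)
    payoff = disc * np.maximum(ST - K, 0.0)
    control = disc * ST  # E[disc * ST] = S0 under the risk-neutral measure
    # Estimated optimal coefficient b* = Cov(payoff, control) / Var(control).
    c = np.cov(payoff, control)
    b = c[0, 1] / c[1, 1]
    # Subtracting b * (control - E[control]) keeps the estimator unbiased
    # while removing the variance explained by the control.
    return np.mean(payoff - b * (control - S0))

price = mc_call_control_variate(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

Because the call payoff is strongly correlated with the terminal price, the control variate removes most of the sampling noise; the same pattern extends to the Euler and Milstein path discretizations used for exotics.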