
Creating a custom Environment for Reinforcement learning (RL) using VectorBT #469

Open
Athe-kunal opened this issue Jun 27, 2022 · 1 comment


Athe-kunal commented Jun 27, 2022

I am trying to build a custom environment for training an RL trading agent. VectorBT provides a lot of functionality for building portfolios, so I want to build on top of it. In RL, the agent chooses an action that it believes will maximize return going forward; the environment simulates a single step based on that one action, and this repeats until the end date is reached. I could use vbt.Portfolio.from_orders to build a portfolio, but that call would have to run at every time step, leaving multiple portfolios over a single episode. Is there a way to mitigate this design problem with VectorBT, such as an online or accumulating portfolio? The RL agent has to observe the next portfolio value to make its decision, so the portfolio needs to update incrementally. I am happy to answer any further questions.
Thanks in advance.
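Since vectorbt's Portfolio is designed for vectorized backtests over a complete order history rather than step-by-step updates, one workaround is to keep the cash/position accounting yourself inside the environment, and only build a full vbt.Portfolio from the accumulated orders at episode end for analysis. A minimal sketch of that idea, assuming a Gym-style step/reset interface (the class name `TradingEnv` and all method names here are hypothetical, and the accounting is deliberately simplified: single asset, no fees, no shorting or leverage):

```python
# Minimal sketch of an "accumulating" RL trading environment.
# Cash and position are updated incrementally at each step instead of
# rebuilding a portfolio object; the per-step order sizes are collected
# in self.orders so that, at episode end, they could be passed in one
# call to vbt.Portfolio.from_orders for full analysis.

class TradingEnv:
    def __init__(self, prices, init_cash=10_000.0):
        self.prices = list(prices)
        self.init_cash = init_cash
        self.reset()

    def reset(self):
        self.t = 0
        self.cash = self.init_cash
        self.position = 0.0   # units of the asset currently held
        self.orders = []      # per-step order sizes, for from_orders later
        return self._observe()

    def _observe(self):
        # The agent observes the current price and portfolio value.
        return (self.prices[self.t], self.portfolio_value())

    def portfolio_value(self):
        return self.cash + self.position * self.prices[self.t]

    def step(self, size):
        """Execute an order of `size` units (negative = sell) at the
        current price, then advance one bar."""
        price = self.prices[self.t]
        size = max(size, -self.position)     # simplification: no shorting
        size = min(size, self.cash / price)  # simplification: no leverage
        self.cash -= size * price
        self.position += size
        self.orders.append(size)

        prev_value = self.portfolio_value()
        self.t += 1
        done = self.t == len(self.prices) - 1
        reward = self.portfolio_value() - prev_value  # mark-to-market PnL
        return self._observe(), reward, done


env = TradingEnv([100.0, 110.0, 105.0])
env.reset()
obs, reward, done = env.step(10.0)  # buy 10 units at the first price
```

This keeps the per-step loop cheap (no portfolio reconstruction), while the `orders` list preserves everything needed to reproduce the episode in vectorbt afterwards.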


eromoe commented Jan 16, 2023

Did you find a solution?
