This is the public repository for the code accompanying the paper *Practically High Performant Neural Adaptive Video Streaming*, winner of the Best Paper award at ACM CoNEXT 2024.
It is a clean implementation of an adaptive bitrate (ABR) reinforcement learning environment in OpenAI's Gym. The code is based in part on the Park Project, Pensieve, and Puffer.
If you use any of this code (or any of the deployment code) as part of a research project, please cite the original paper:
```bibtex
@article{plume2024,
  author = {Patel, Sagar and Zhang, Junyang and Narodytska, Nina and Jyothi, Sangeetha Abdu},
  title = {Practically High Performant Neural Adaptive Video Streaming},
  year = {2024},
  issue_date = {December 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {2},
  number = {CoNEXT4},
  url = {https://doi.org/10.1145/3696401},
  doi = {10.1145/3696401},
  journal = {Proc. ACM Netw.},
  month = nov,
  articleno = {30},
  numpages = {23},
  keywords = {deep reinforcement learning, video streaming}
}
```
The code is split into two directories: `controlled_abr` and `puffer_abr`. `controlled_abr` corresponds to the controlled Trace-Bench environment in the paper (where the traces are generated in a controlled manner), while `puffer_abr` is the simulation environment that uses the logs produced by Puffer, a free and open-source live TV streaming website and research study at Stanford University. This is the environment that implements Gelato.

For more details and usage, please clone the repo and see the `README.md` in each of those directories.
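Since both directories expose Gym-style environments, interaction follows the standard `reset`/`step` loop. The sketch below illustrates that loop with a toy stand-in environment; `StubABREnv`, its bitrate ladder, observation fields, and reward are all hypothetical and are *not* the repo's actual API — see the per-directory READMEs for the real interfaces.

```python
# Illustrative Gym-style interaction loop for an ABR environment.
# StubABREnv is a toy stand-in (NOT the repo's API): real observations and
# rewards come from throughput traces / Puffer logs and QoE metrics.
import random

class StubABREnv:
    """Toy adaptive-bitrate environment with a Gym-like reset/step API."""
    BITRATES_KBPS = [300, 750, 1200, 2850, 4300]  # hypothetical ladder

    def reset(self):
        self.chunks_left = 5
        self.throughput_kbps = 2000.0
        return self._obs()

    def _obs(self):
        return {"throughput_kbps": self.throughput_kbps,
                "chunks_left": self.chunks_left}

    def step(self, action):
        # The action indexes the bitrate ladder; the reward is a crude QoE
        # proxy: quality minus a penalty when the chosen bitrate exceeds
        # the currently available throughput.
        bitrate = self.BITRATES_KBPS[action]
        reward = (bitrate
                  - max(0.0, bitrate - self.throughput_kbps)) / 1000.0
        # Random-walk the throughput to mimic a varying network trace.
        self.throughput_kbps = max(
            300.0, self.throughput_kbps + random.uniform(-400.0, 400.0))
        self.chunks_left -= 1
        done = self.chunks_left == 0
        return self._obs(), reward, done, {}

env = StubABREnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    # Trivial policy: highest bitrate at or below the observed throughput
    # (index 0 always qualifies, since throughput never drops below 300).
    action = max(i for i, b in enumerate(env.BITRATES_KBPS)
                 if b <= obs["throughput_kbps"])
    obs, reward, done, _ = env.step(action)
    total_reward += reward
```

A learned policy such as Gelato would replace the throughput-threshold rule above with a neural network mapping observations to bitrate choices.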
We will add more documentation and functionality in the future:

- Add a function to integrate user-provided throughput traces
- Add documentation for evaluating classical ABR policies
- Add documentation for the provided plotting functions