A Streamlit dashboard to display information from the Github Copilot Usage API endpoints.
This repository is currently using deprecated endpoints to collect Copilot usage information and will not be able to show information past the 1st of February 2025.
We are working on refactoring the dashboard and its lambda to make use of the new endpoints and their data structure.
| Documentation | Link |
|---|---|
| Old Endpoint | https://docs.github.com/en/rest/copilot/copilot-usage?apiVersion=2022-11-28 |
| New Endpoint | https://docs.github.com/en/rest/copilot/copilot-metrics?apiVersion=2022-11-28 |
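As a rough illustration of the shape of the new API, the sketch below queries the organisation-level metrics endpoint with Python requests. It is a minimal example, not the dashboard's implementation; the GITHUB_TOKEN variable is assumed here purely for illustration and must hold a token (e.g. a GitHub App installation token) with access to Copilot metrics.

```python
# Minimal sketch: fetch organisation-level Copilot metrics from the new endpoint.
import os

import requests

org = os.environ["GITHUB_ORG"]
token = os.environ["GITHUB_TOKEN"]  # illustrative only; not part of this project's configuration

response = requests.get(
    f"https://api.github.com/orgs/{org}/copilot/metrics",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    timeout=30,
)
response.raise_for_status()
metrics = response.json()  # one entry per day of usage data
print(f"Received {len(metrics)} days of metrics")
```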
While we work on refactoring the Copilot Dashboard, this repository will contain a temporary solution.
This solution will display high-level information from the new API endpoints.
The old dashboard pages (Organisation Usage & Team Usage) have been switched to use example data so that they can still be viewed.
The data on these pages is completely synthetic.
Please note that the code available in this repository for creating a dashboard to track GitHub Copilot usage within your organisation is in its early stages of development. It may not fully comply with all Civil Service / ONS best practices for software development. Currently, it is being used by a limited number of individuals within ONS. Additionally, due to the limited number of users, this project has not been tested for WCAG 2.1 compliance or accessibility. Please consider this when using the project.
We are sharing this piece of code at this stage to enable other Civil Service entities to utilise it as soon as possible.
Feel free to fork this repository and use it as you see fit. If you wish to contribute to this work, please make a pull request, and we will consider adding you as an external collaborator.
This project uses poetry for package management.
Instructions to install Poetry
This project uses MkDocs for documentation which gets deployed to GitHub Pages at a repository level.
For more information about MkDocs, see the below documentation.
There is a guide to getting started on this repository's GitHub Pages site.
Prior to running outside of Docker, ensure you have the necessary environment variables set up locally where you are running the application. For example, on Linux or macOS you can run the following, providing appropriate values for the variables:
export AWS_ACCESS_KEY_ID=<aws_access_key_id>
export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
export AWS_DEFAULT_REGION=eu-west-2
export AWS_SECRET_NAME=<aws_secret_name>
export GITHUB_ORG=ONSDigital
export GITHUB_APP_CLIENT_ID=<github_app_client_id>
export GITHUB_APP_CLIENT_SECRET=<github_app_client_secret>
export AWS_ACCOUNT_NAME=sdp-sandbox
export APP_URL=http://localhost:8501
Please Note:
- APP_URL should point to the URL at which the app is running. For example, use http://localhost:8501 when running locally, or the appropriate domain URL on AWS.
- The GITHUB_APP_CLIENT_ID and GITHUB_APP_CLIENT_SECRET can be found in the GitHub App's settings under developer settings.
- Navigate into the project's folder and create a virtual environment using
  python3 -m venv venv
- Activate the virtual environment using
  source venv/bin/activate
- Install all project dependencies using
  make install
- When running the project locally, you need to edit app.py. When creating an instance of boto3.Session(), you must pass which AWS credential profile to use, as found in ~/.aws/credentials (see the sketch after this list).
  When running locally:
  session = boto3.Session(profile_name="<profile_name>")
  When running from a container:
  session = boto3.Session()
- Run the project using
  streamlit run src/app.py
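As mentioned in the step above, one way to avoid editing app.py for each environment is to read the profile from an optional environment variable. This is only a sketch assuming a hypothetical AWS_PROFILE_NAME variable, not the project's current behaviour:

```python
# Sketch: use a named AWS profile locally, fall back to the default
# credential chain (environment variables / task role) in a container.
import os

import boto3

profile = os.environ.get("AWS_PROFILE_NAME")  # hypothetical variable, for illustration only
session = boto3.Session(profile_name=profile) if profile else boto3.Session()
```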
- Build a Docker Image
  docker build -t copilot-usage-dashboard .
- Check the image exists
  docker images
  Example Output:
  REPOSITORY                TAG      IMAGE ID       CREATED         SIZE
  copilot-usage-dashboard   latest   afa0494f35a5   7 minutes ago   1.02GB
- Run the image locally, mapping local port 8501 to container port 8501 and passing in AWS credentials to download a .pem file from AWS Secrets Manager to the running container. These credentials should also allow access to S3 for historic reporting.
  docker run -p 8501:8501 \
    -e AWS_ACCESS_KEY_ID=<aws_access_key_id> \
    -e AWS_SECRET_ACCESS_KEY=<aws_secret_access_key> \
    -e AWS_DEFAULT_REGION=eu-west-2 \
    -e AWS_SECRET_NAME=<aws_secret_name> \
    -e GITHUB_ORG=ONSDigital \
    -e GITHUB_APP_CLIENT_ID=<github_app_client_id> \
    -e GITHUB_APP_CLIENT_SECRET=<github_app_client_secret> \
    -e AWS_ACCOUNT_NAME=sdp-sandbox \
    -e APP_URL=http://localhost:8501 \
    copilot-usage-dashboard
- Check the container is running
  docker ps
  Example Output:
  CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS         PORTS                                       NAMES
  ae4aaf1daee6   copilot-usage-dashboard   "/app/start_dashboar…"   7 seconds ago   Up 6 seconds   0.0.0.0:8501->8501/tcp, :::8501->8501/tcp   quirky_faraday
- To view the running app in a browser, navigate to the URL shown in the container output:
  You can now view your Streamlit app in your browser. URL: http://0.0.0.0:8501
- To stop the container, use the container ID
  docker stop ae4aaf1daee6
When you make changes to the application a new container image must be pushed to ECR.
This script assumes you have a ~/.aws/credentials file set up with profiles of the credentials for pushing to ECR, and that a suitably named repository (environmentname-toolname) has already been created in ECR. In the credentials file, you should use the profile that matches the IAM user with permissions to push to ECR.
Set up the environment for the correct credentials. Ensure the script is executable:
chmod a+x set_aws_env.sh
Run the script:
./set_aws_env.sh <aws-profile e.g ons_sdp_dev_ecr> <environment e.g sdp-dev>
Verify the output is as expected:
Environment variables are set as:
export AWS_ACCESS_KEY_ID=MYACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETACCESSKEY
export AWS_DEFAULT_REGION=eu-west-2
export APP_NAME=sdp-dev-copilot-usage
Ensure the script to build and push the image is executable:
chmod a+x publish_container.sh
Check the version of the image you want to build (verify the next available release by looking in ECR)
Run the script, which will build an image locally, connect to ECR, push the image and then check the image is uploaded correctly.
./publish_container.sh <AWS Profile - e.g ons_sdp_dev_ecr> <AWS_ACCOUNT_NUMBER> <AWS Env - e.g sdp-dev> <image version - e.g v0.0.1>
These instructions assume:
- You have a repository set up in your AWS account named copilot-usage-dashboard.
- You have created an AWS IAM user with permissions to read/write to ECR (e.g. the AmazonEC2ContainerRegistryFullAccess policy) and have created the necessary access keys for this user. The credentials for this user are stored in ~/.aws/credentials and can be used by passing --profile <aws-credentials-profile>; if these are the only credentials in your file, then the profile name is default.
You can find the AWS repo push commands under your repository in ECR by selecting the "View Push Commands" button. This will display a guide to the following (replace <aws-credentials-profile>, <aws-account-id> and <version> accordingly):
- Get an authentication token and authenticate your docker client for pushing images to ECR:
  aws ecr --profile <aws-credentials-profile> get-login-password --region eu-west-2 | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.eu-west-2.amazonaws.com
- Tag your latest built docker image for ECR (assumes you have run docker build -t copilot-usage-dashboard . locally first)
  docker tag copilot-usage-dashboard:latest <aws-account-id>.dkr.ecr.eu-west-2.amazonaws.com/copilot-usage-dashboard:<version>
  Note: To find the <version> to build, look at the latest tagged version in ECR and increment appropriately
- Push the version up to ECR
  docker push <aws-account-id>.dkr.ecr.eu-west-2.amazonaws.com/copilot-usage-dashboard:<version>
When running the dashboard, you can toggle between using Live and Example Data.
To use real data from the GitHub API, the project must be supplied with a copilot-usage-dashboard.pem file in AWS Secrets Manager (as mentioned here).
This project also supports historic reporting outside of the 28 days which the API supplies. For more information on setup, please see this README.md.
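For reference, retrieving the .pem contents from Secrets Manager with boto3 looks roughly like the following sketch, assuming AWS_SECRET_NAME points at the secret described in the deployment section below:

```python
# Sketch: read the GitHub App private key (.pem contents) from AWS Secrets Manager.
import os

import boto3

session = boto3.Session()  # or boto3.Session(profile_name="<profile_name>") when running locally
secrets_client = session.client(
    "secretsmanager", region_name=os.environ.get("AWS_DEFAULT_REGION", "eu-west-2")
)
pem_contents = secrets_client.get_secret_value(SecretId=os.environ["AWS_SECRET_NAME"])["SecretString"]
```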
This page shows Copilot usage at a team level.
The user will be prompted to log in to GitHub on the page.
Logged-in users will only be able to see teams that they are a member of.
If the logged-in user is part of an admin team, they can search for and view any team's metrics. See Updating Admin Teams for more information.
Please Note:
- The team must have a minimum of 5 users with active Copilot licenses to have any data.
- The team must be in the organisation the tool is running in.
Currently, there are two admin teams: keh-dev and sdp-dev.
These teams are defined in admin_teams.json in the copilot-usage-dashboard bucket.
To add another admin team, simply add the team name to admin_teams.json.
admin_teams.json is in the following format and must be created manually on a fresh deployment:
["team-A", "team-B"]
A .pem file is used to allow the project to make authorised GitHub API requests by means of GitHub App authentication. The project authenticates as a GitHub App installation (documentation).
In order to get a .pem file, a GitHub App must be created and installed in the organisation that the app will be managing. This app should have Read and Write Administration organisation permission and Read-only GitHub Copilot Business organisation permission.
This file should be uploaded to AWS Secrets Manager as below.
It is vital that a callback URL is added to allow a login through GitHub when using the /team_usage page.
To do this, add <app_url>/team_usage as a callback URL within your GitHub App's settings.
If you receive an error about an invalid callback URI, this callback URL either doesn't exist or is incorrect.
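For context, the GitHub App installation authentication flow described above can be sketched as follows. This is an illustrative outline using PyJWT and requests, not the project's actual implementation, and the function and variable names are hypothetical:

```python
# Sketch: authenticate as a GitHub App installation using the .pem private key.
import time

import jwt  # PyJWT
import requests


def get_installation_token(client_id: str, private_key_pem: str, org: str) -> str:
    # 1. Create a short-lived JWT signed with the App's private key.
    now = int(time.time())
    app_jwt = jwt.encode(
        {"iat": now - 60, "exp": now + 9 * 60, "iss": client_id},
        private_key_pem,
        algorithm="RS256",
    )
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {app_jwt}",
        "X-GitHub-Api-Version": "2022-11-28",
    }
    # 2. Look up the App's installation on the organisation.
    installation = requests.get(
        f"https://api.github.com/orgs/{org}/installation", headers=headers, timeout=30
    )
    installation.raise_for_status()
    # 3. Exchange the JWT for an installation access token.
    token_response = requests.post(
        f"https://api.github.com/app/installations/{installation.json()['id']}/access_tokens",
        headers=headers,
        timeout=30,
    )
    token_response.raise_for_status()
    return token_response.json()["token"]
```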
The deployment of the service is defined in Infrastructure as Code (IaC) using Terraform. The service is deployed as a container on an AWS Fargate Service Cluster.
When first deploying the service to AWS the following prerequisites are expected to be in place or added.
The Terraform in this repository expects that underlying AWS infrastructure is present in AWS to deploy on top of, i.e.:
- Route53 DNS Records
- Web Application Firewall and appropriate Rules and Rule Groups
- Virtual Private Cloud with Private and Public Subnets
- Security Groups
- Application Load Balancer
- ECS Service Cluster
That infrastructure is defined in the repository sdp-infrastructure
The following users must be provisioned in AWS IAM:
- ecr-user
- Used for interaction with the Elastic Container Registry from AWS cli
- ecs-app-user
- Used for terraform staging of the resources required to deploy the service
The following groups and permissions must be defined and applied to the above users:
- ecr-user-group
- EC2 Container Registry Access
- ecs-application-user-group
- Dynamo DB Access
- EC2 Access
- ECS Access
- ECS Task Execution Role Policy
- Route53 Access
- S3 Access
- Cloudwatch Logs All Access (Custom Policy)
- IAM Access
- Secrets Manager Access
Further to the above an IAM Role must be defined to allow ECS tasks to be executed:
- ecsTaskExecutionRole
To store the state and implement a state-locking mechanism for the service resources, a Terraform backend is deployed in AWS (an S3 object and a DynamoDB table).
The service requires access to an associated GitHub App secret; this secret is created when the GitHub App is installed in the appropriate GitHub organisation. The contents of the generated .pem file are stored in AWS Secrets Manager and retrieved by this service to interact with GitHub securely.
AWS Secrets Manager must be set up with a secret:
- /sdp/tools/copilot-usage/copilot-usage-dashboard.pem
- A plaintext secret, containing the contents of the .pem file created when the GitHub App was installed.
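As a rough sketch, the secret can be created with boto3 once the GitHub App's .pem file has been downloaded (the local filename here is assumed):

```python
# Sketch: store the GitHub App .pem contents as a plaintext secret in Secrets Manager.
import boto3

with open("copilot-usage-dashboard.pem", "r", encoding="utf-8") as pem_file:
    pem_contents = pem_file.read()

boto3.Session().client("secretsmanager", region_name="eu-west-2").create_secret(
    Name="/sdp/tools/copilot-usage/copilot-usage-dashboard.pem",
    SecretString=pem_contents,
)
```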
There are associated README files in each of the Terraform modules in this repository.
- terraform/service/main.tf
- This provisions the resources required to launch the service.
Depending upon which environment you are deploying to, you will want to run your Terraform by pointing at the appropriate environment tfvars file.
Example service tfvars file: dashboard/env/sandbox/example_tfvars.txt
If the application has been modified then the following can be performed to update the running service:
- Build a new version of the container image and upload to ECR as per the instructions earlier in this guide.
- Change directory to the dashboard terraform
  cd terraform/dashboard
- In the appropriate environment variable file (env/sandbox/sandbox.tfvars, env/dev/dev.tfvars or env/prod/prod.tfvars):
  - Change the container_ver variable to the new version of your container.
  - Change the force_deployment variable to true.
- Initialise Terraform for the appropriate environment, using the matching backend config file (backend-dev.tfbackend or backend-prod.tfbackend):
  terraform init -backend-config=env/dev/backend-dev.tfbackend -reconfigure
  The -reconfigure option ensures that the backend state is reconfigured to point to the appropriate S3 bucket.
  Please Note: This step requires an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be loaded into the environment if not already in place. This can be done using:
  export AWS_ACCESS_KEY_ID="<aws_access_key_id>"
  export AWS_SECRET_ACCESS_KEY="<aws_secret_access_key>"
- Refresh the local state to ensure it is in sync with the backend
  terraform refresh -var-file=env/dev/dev.tfvars
- Plan the changes, ensuring you use the correct environment config (depending upon which env you are configuring):
  E.g. for the dev environment run
  terraform plan -var-file=env/dev/dev.tfvars
- Apply the changes, ensuring you use the correct environment config (depending upon which env you are configuring):
  E.g. for the dev environment run
  terraform apply -var-file=env/dev/dev.tfvars
- When the Terraform has applied successfully, the running task will have been replaced by a task running the container version you specified in the tfvars file.
Delete the service resources by running the following, ensuring you reference the correct environment files for the backend-config and var files:
cd terraform/dashboard
terraform init -backend-config=env/dev/backend-dev.tfbackend -reconfigure
terraform refresh -var-file=env/dev/dev.tfvars
terraform destroy -var-file=env/dev/dev.tfvars
To view all commands
make all
Linting tools must first be installed before they can be used
make install-dev
To clean residue files
make clean
To format your code
make format
To run all linting tools
make lint
To run a specific linter (black, ruff, pylint)
make black
make ruff
make pylint
To run mypy (static type checking)
make mypy
To run the application locally
make run-local