Lightweight Python code to move data
We focus on the needs & constraints of Python-first data platform teams: how to load data from any source, achieve data democracy, modernise legacy systems, and reduce cloud costs.
NEW DLT+ CORE CONCEPTS & REFERENCE ARCHITECTURES
Try our reference architectures for running dlt pipelines in production
Why our commercial product is in Early Access and how you can engage
Thousands of companies run dlt in production, but when it comes to upgrading their Pythonic data platforms, many end up building essential components in-house to meet requirements.
To address this challenge, our Early Access program initially introduces dlt+ concepts such as Project, Cache, Iceberg, and AI workflows, designed to be easy to use and loved by the dlt community.
As a next step, we aim to offer complete off-the-shelf reference architectures built with dlt and dlt+. Together with our Early Access design partners, we are currently developing, among others: on-prem database sync and CDC for 50+ databases, a Pythonic Iceberg warehouse upgrade, streaming ingestion with a lakehouse, a vendor-free Iceberg lakehouse, and a data science platform.
If you're looking to upgrade your data platform, consider joining us in Early Access.

OPEN SOURCE DLT
pip install dlt and go
With over 1M downloads per month, dlt 1.4 is the most popular production-ready Python library for moving data. Add dlt to your Python scripts to load data from varied, often messy data sources into well-structured, live datasets. Unlike non-Python solutions, dlt needs no backends or containers, and it does not replace your data platform, deployments, or security models. Simply import dlt in a Python file or a Jupyter Notebook cell and load data from any source that produces Python data structures: APIs, files, databases, and more.
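For a sense of what this looks like in practice, here is a minimal sketch using dlt's public pipeline API; the sample rows, pipeline name, table name, and dataset name are illustrative assumptions, with DuckDB chosen as a convenient local destination.

```python
# Minimal dlt pipeline sketch: load plain Python dicts into DuckDB.
# The data and names below are illustrative, not from the page above.
import dlt

# Any iterable of Python data structures works as a source:
# API responses, parsed files, database rows, and so on.
data = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
]

# Create a pipeline that loads into a local DuckDB file;
# no backend services or containers are required.
pipeline = dlt.pipeline(
    pipeline_name="quickstart",
    destination="duckdb",
    dataset_name="mydata",
)

# dlt infers and evolves the schema, then loads the rows.
load_info = pipeline.run(data, table_name="users")
print(load_info)
```

Running this script once produces a DuckDB database you can query immediately; re-running it simply loads new data into the same live dataset.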

The current machine learning revolution has been enabled by the Cambrian explosion of Python open-source tools that have become so accessible that a wide range of practitioners can use them. As a simple-to-use Python library, dlt is the first tool that this new wave of people can use. By leveraging this library, we can extend the machine learning revolution into enterprise data.

- Julien Chaumond
- CTO/Co-Founder at Hugging Face
Python and machine learning under security constraints are key to our success. We found that our cloud ETL provider could not meet our needs. dlt is a lightweight yet powerful open source tool we can run together with Snowflake. Our event streaming and batch data loading performs at scale and low cost. Now anyone who knows Python can self-serve to fulfil their data needs.

- Maximilian Eber
- CPTO & Co-Founder at Taktile
OPEN SOURCE DLT TOOLING
Access any data you want in Python
Today it is easier to pip install dlt and write a custom pipeline than to set up and configure a traditional ETL platform. In Jan '25 we crossed 20,000 total custom dlt sources created by the community since we launched dlt in summer '23. Because dlt is code, we continue to automate engineering work and pass the productivity gains on to organisations using dlt. Our REST API Source toolkit is a short, declarative, configuration-driven way of creating sources, and dlt-init-openapi is a tool that generates pipeline code from any OpenAPI spec.
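To illustrate the declarative style, here is a hedged sketch built on the REST API Source toolkit; the import path and config shape follow dlt's rest_api module, while the base URL, resource names, and pipeline settings are hypothetical placeholders.

```python
# Sketch of a declarative REST API source in dlt.
# The endpoint and resource names are hypothetical examples.
import dlt
from dlt.sources.rest_api import rest_api_source

# One dict describes the whole source: client settings plus the
# list of endpoints to load. Each resource becomes a table.
source = rest_api_source(
    {
        "client": {
            "base_url": "https://api.example.com/v1/",  # hypothetical API
        },
        "resources": [
            "customers",
            "orders",
        ],
    }
)

pipeline = dlt.pipeline(
    pipeline_name="rest_api_example",
    destination="duckdb",
    dataset_name="api_data",
)

pipeline.run(source)
```

A handful of configuration lines replaces the request, pagination, and schema-handling code you would otherwise write by hand, which is what makes community sources this quick to create.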
dlt has enabled me to completely rewrite all of our core SaaS service pipelines in 2 weeks and have data pipelines in production with full confidence. We also achieved data democracy for our data platform. Our product, business, and operation teams can independently satisfy a majority of their data needs through no-code self-service. The teams built multi-touch attribution for how Harness acquires customers, and models for how Harness customers utilize licenses. If the teams want to build anything else to push the company forward, they don't need to wait for permission or data access to do it.

- Alex Butler
- Senior Data Engineer at Harness