DigitalOcean

Software Development

New York, NY 129,993 followers

The simplest scalable cloud. ☁️

About us

DigitalOcean simplifies cloud computing so businesses can spend more time creating software that changes the world. With its mission-critical infrastructure and fully managed offerings, DigitalOcean helps developers at startups and growing digital businesses rapidly build, deploy and scale, whether creating a digital presence or building digital products. DigitalOcean combines the power of simplicity, security, community and customer support so customers can spend less time managing their infrastructure and more time building innovative applications that drive business growth.

Website
https://www.digitalocean.com
Industry
Software Development
Company size
1,001-5,000 employees
Headquarters
New York, NY
Type
Public Company
Founded
2012
Specialties
Cloud Computing, Cloud Servers, Virtual Hosting, Cloud Hosting, Cloud Infrastructure, Simple Hosting, and Virtual Servers


Updates

  • Welcome to Cloud Chats, your weekly roundup of cloud news, trends, and the fun of building in the cloud. In this episode, we dive into Mercury, the first commercial-scale diffusion large language model, and how it's changing the game. We also check out an AI that can juggle up to 50 complex tasks at once, and get excited about fresh OpenAI API announcements, including the new Responses API and Agents SDK, plus some playful experiments with Anthropic's Model Context Protocol (MCP). Join our Discord community at https://lnkd.in/eVp7S26B for live discussions, Q&As, and direct access to fellow developers and DigitalOcean experts.

    Cloud Chats Episode 65: Mercury LLM, Multi-Task AI and OpenAI API Surprises

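
    Since the episode calls out OpenAI's new Responses API, here is a minimal sketch of what a call looks like with OpenAI's Python SDK; the model name and prompt are illustrative assumptions, not details from the episode:

        # Minimal Responses API call (requires a recent openai package
        # with Responses API support; set OPENAI_API_KEY in the environment).
        from openai import OpenAI

        client = OpenAI()
        response = client.responses.create(
            model="gpt-4o-mini",  # illustrative model choice
            input="Summarize diffusion LLMs in one sentence.",
        )
        print(response.output_text)  # convenience accessor for the text output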

  • DigitalOcean reposted this

    View profile for YASH SHARMA

    Engineering DigitalOcean | Core Maintainer @meshery (CNCF project) | Ex-SWE Layer5 | LFX mentee & mentor | KubeCon Speaker | Cloud native | Go | React

    KubeCon 🇬🇧 is now just around the corner, and we at DigitalOcean can't wait to meet so many amazing people and the wider cloud-native community. This KubeCon I have three talks lined up, and I'm already getting my hands dirty on the slides. See you all there!

  • DigitalOcean reposted this

    View profile for Darpan Dinker

    VP of AI/ML and PaaS @ DigitalOcean; ex-AWS and entrepreneur

    DigitalOcean GenAI Platform is built on three pillars: simplicity, scalability, and approachability. One way we are improving *approachability* is by allowing our customers to ingest a wide variety of file types. Check out the comparison of file types supported out-of-the-box for a RAG solution, and let us know what other file types are of interest to you!

    • [Image: comparison chart of file types supported out-of-the-box for RAG ingestion]
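
    As a rough illustration of the idea, here is a minimal Python sketch of multi-format ingestion for RAG; the extractor table and chunk sizes are assumptions for illustration, not DigitalOcean's GenAI Platform API:

        # Hypothetical sketch: normalize each file to plain text, then split
        # it into overlapping chunks ready for embedding and retrieval.
        from pathlib import Path

        def read_plain(path: Path) -> str:
            return path.read_text(encoding="utf-8", errors="replace")

        # Suffix -> extractor. A real pipeline would plug in pypdf for .pdf,
        # python-docx for .docx, an HTML tag stripper for .html, and so on.
        EXTRACTORS = {
            ".txt": read_plain,
            ".md": read_plain,
            ".csv": read_plain,
        }

        def chunk(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
            # Overlapping fixed-size chunks, a common default for RAG indexing.
            step = size - overlap
            return [text[i:i + size] for i in range(0, len(text), step)] or [""]

        def ingest(path: Path) -> list[str]:
            extractor = EXTRACTORS.get(path.suffix.lower())
            if extractor is None:
                raise ValueError(f"Unsupported file type: {path.suffix}")
            return chunk(extractor(path))
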
  • DigitalOcean reposted this

    View profile for Wade Wegner

    Chief Ecosystem and Growth Officer at DigitalOcean

    This great article on Byte Latent Transformers from the DigitalOcean team sent me down a fascinating technical rabbit hole that I think many developers will find interesting, even if you're not deep into ML architecture.

    At their core, BLTs solve a fundamental problem with today's large language models. Traditional LLMs like Llama 3 and the GPT models rely on tokenization - breaking text into predefined vocabulary units before processing. This approach has worked well enough, but it creates inherent biases, struggles with unusual text patterns, and poses serious challenges for multilingual processing.

    What makes BLTs revolutionary is their approach of working directly with raw bytes instead of tokens. The system uses a concept called "entropy patching" (essentially measuring uncertainty in byte prediction) to dynamically group bytes where it makes sense. High-entropy sections (where the next byte is unpredictable) create patch boundaries, while low-entropy sections get efficiently grouped together. Think of it like compression algorithms that adapt to data complexity - BLTs allocate more computational resources to complex, unpredictable parts of text and less to simple, predictable parts (see the toy sketch after this post). This means the model can process any language or character set without the limitations of a fixed vocabulary.

    The three-part architecture is elegant: a main Global Transformer handles the heavy lifting on patch representations (not individual bytes), while lightweight Local Encoder/Decoder components convert between bytes and patches. This design allows for computational efficiency without sacrificing the model's ability to understand nuanced patterns in text.

    For developers working on multilingual applications, this approach is particularly valuable - no more wrestling with language-specific tokenization rules or watching your model struggle with languages it wasn't specifically trained to handle. The same architecture works universally across languages and scripts because it operates at the byte level.

    While optimization challenges remain (current ML frameworks are heavily optimized for token-based approaches), early experiments show that BLTs can match or exceed traditional models at scale. The research also suggests promising results when "byte-ifying" existing models like Llama 3 without full retraining.

    As we push toward more flexible, robust AI systems, this type of foundational architecture innovation may prove more significant than incremental improvements to existing approaches. It's a reminder that questioning fundamental assumptions in system design can sometimes lead to the most interesting breakthroughs.

    Lastly, if it sounds like I know what I'm talking about, I give all credit to the great writers at DigitalOcean who created this comprehensive technical guide: https://lnkd.in/gNsuzRQ5
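
    Here is a minimal Python sketch of the entropy-patching idea, using a toy bigram model built from the input itself as a stand-in for BLT's learned byte-level language model; both the model and the 2-bit threshold are illustrative assumptions, not the paper's actual method:

        import math
        from collections import Counter, defaultdict

        def entropy_patches(data: bytes, threshold: float = 2.0) -> list[bytes]:
            # Toy stand-in for BLT's small byte-level LM: bigram counts
            # estimated from the input itself (an illustrative assumption).
            bigrams = defaultdict(Counter)
            for prev, nxt in zip(data, data[1:]):
                bigrams[prev][nxt] += 1

            def entropy_after(byte: int) -> float:
                # Shannon entropy (bits) of the estimated next-byte distribution.
                counts = bigrams[byte]
                total = sum(counts.values())
                return -sum((c / total) * math.log2(c / total)
                            for c in counts.values())

            patches, current = [], bytearray([data[0]])
            for prev, nxt in zip(data, data[1:]):
                if entropy_after(prev) > threshold:
                    # High uncertainty about the next byte: cut a patch boundary.
                    patches.append(bytes(current))
                    current = bytearray()
                current.append(nxt)
            patches.append(bytes(current))
            return patches

        text = "Byte Latent Transformers group bytes by entropy.".encode()
        print(entropy_patches(text))

    In the real architecture the entropy signal comes from a trained byte-level model rather than bigram counts, and the resulting patches feed the Global Transformer while the Local Encoder/Decoder map between bytes and patch representations.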

Funding

DigitalOcean: 21 total rounds

Last round: Post-IPO equity, US$34.9M

See more info on Crunchbase