Announcing ✨ Interrupt: The AI Agent Conference by LangChain ✨ – the largest gathering of builders pushing the boundaries of agentic applications. 🦜 Join us in San Francisco this May for an event packed with technical talks and hands-on workshops from the industry’s leading engineers and researchers. Meet other members of our community and exchange ideas! 🚀 Hear from industry leaders like Michele Catasta (President of Replit), Adam D’Angelo (Founder & CEO of Quora), and other AI luminaries. Capacity will be limited. Sign up for the ticket drop: https://lnkd.in/gzXMaRig Apply to speak: https://lnkd.in/g4XwzZbm
About us
We're on a mission to make it easy to build the LLM apps of tomorrow, today. We build products that enable developers to go from an idea to working code in an afternoon, and into the hands of users in days or weeks. We're humbled to support over 50k companies that choose to build with LangChain. And we built LangSmith to support all stages of the AI engineering lifecycle and get applications into production faster.
- Website: langchain.com
- Industry: Technology, Information and Internet
- Company size: 11-50 employees
- Type: Privately Held
Updates
🤖 How to Evaluate Document Extraction 📃

Document extraction is a common use case for LLMs: transforming unstructured text into structured data. It's important to evaluate your extraction pipeline's performance, especially for large-scale or high-stakes applications. In this video, we walk through how to build a document extraction pipeline and evaluate its performance in generating structured output.

You'll learn how to:
- Compare the performance of two different models based on latency, cost, and accuracy
- Use an LLM judge to evaluate outputs against a ground truth dataset
- Run multiple judge repetitions to validate results

Watch the video ➡️ https://lnkd.in/gZmHNyhS
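As a rough illustration of what "evaluating against a ground truth dataset" means, here is a minimal sketch of field-level exact-match scoring. The field names and example records are hypothetical, and the video uses an LLM judge in LangSmith rather than this simple baseline:

```python
# Hedged sketch: exact-match scoring of an extraction against labeled data.
# A real pipeline would use an LLM judge for fuzzy matches; this is the
# simplest possible baseline for structured-output accuracy.

def field_accuracy(predicted: dict, ground_truth: dict) -> float:
    """Fraction of ground-truth fields the extraction got exactly right."""
    if not ground_truth:
        return 1.0
    correct = sum(
        1 for key, value in ground_truth.items() if predicted.get(key) == value
    )
    return correct / len(ground_truth)

# Hypothetical example: an invoice extraction vs. its labeled ground truth.
ground_truth = {"vendor": "Acme Corp", "total": 1250.00, "currency": "USD"}
predicted = {"vendor": "Acme Corp", "total": 1250.00, "currency": "EUR"}

score = field_accuracy(predicted, ground_truth)
print(f"exact-match accuracy: {score:.2f}")  # 2 of 3 fields match -> 0.67
```

Averaging this score over a dataset gives the accuracy axis; latency and cost come from the model calls themselves.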
🚀 New in LangSmith: Bulk View for Annotation Queues

When working with large datasets for model training, managing thousands of annotations can be overwhelming. In LangSmith, you can now:
• View multiple annotation runs at once for a high-level overview of the queue
• Quickly delete specific runs or clear the entire queue with a few clicks

Streamline your workflow today: smith.langchain.com
You can now visualize traces as a waterfall graph in LangSmith for deeper insights into your app's latency.

Use the waterfall graph to:
• Spot bottlenecks at a glance
• Understand parallel vs. sequential execution
• Optimize response times with precision

Try it out today: smith.langchain.com
🧱 Structured outputs in Mistral

Mistral AI has released a dedicated structured output feature that ensures generations adhere to your custom schema. langchain-mistralai 0.2.5 now supports it, letting you declare schemas with TypedDict, Pydantic models, JSON schema, or OpenAI format. Call `llm.with_structured_output(schema, method="json_schema")` to try it out!

Structured output docs here: https://lnkd.in/gurUfVv8
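A minimal sketch of the call above, using a TypedDict schema. The `Ticket` schema and the example message are made up for illustration, and the API call only runs when langchain-mistralai is installed and a `MISTRAL_API_KEY` is set:

```python
# Hedged sketch of structured output with langchain-mistralai >= 0.2.5.
# The Ticket schema is a hypothetical example; the model call is guarded
# so the snippet runs without credentials.
import os
from typing import TypedDict

class Ticket(TypedDict):
    """Hypothetical schema: classify a support message."""
    category: str   # e.g. "billing", "bug", "feature-request"
    urgent: bool

if os.environ.get("MISTRAL_API_KEY"):
    from langchain_mistralai import ChatMistralAI

    llm = ChatMistralAI(model="mistral-large-latest")
    structured_llm = llm.with_structured_output(Ticket, method="json_schema")
    result = structured_llm.invoke("My invoice is wrong; I need it fixed today!")
    print(result)  # a dict matching the Ticket schema
else:
    print("Set MISTRAL_API_KEY to run the extraction call.")
```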
🔐 Secure your LangChain RAG agent with Okta FGA

Learn how to use fine-grained authorization to control access, safeguard sensitive data, and enhance security, all while leveraging LangGraph on Node.js.

Read more from Auth0 by Okta: https://lnkd.in/ePAswf99
⭐️ Introducing the LangGraph Functional API

We spent a lot of time making LangGraph's state management (short/long-term memory 🧠, human-in-the-loop 🧑🤝🧑, time travel ⏲️) production ready. With the release of the functional API, you can now use these features with any framework (or none at all!), **without having to explicitly define a graph**.

The functional API lets you leverage LangGraph features using a more traditional programming paradigm, making it easier to build AI workflows that incorporate human-in-the-loop interactions, short-term and long-term memory, and streaming. It is complementary to the Graph API (StateGraph): both APIs share the same underlying runtime, so you can mix and match the two paradigms to create complex workflows that leverage the best of both worlds.

Blog: https://lnkd.in/gH78myUD
Video: https://lnkd.in/geFucwqJ
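To make "without explicitly defining a graph" concrete, here is a minimal sketch in the functional style the announcement describes: plain functions wrapped as tasks and composed inside an entrypoint. The `draft_summary` step is a made-up example, and the import is guarded since it assumes a langgraph version that ships the functional API:

```python
# Hedged sketch of the LangGraph Functional API: wrap ordinary functions
# with task() and compose them under an @entrypoint, no StateGraph needed.
def draft_summary(text: str) -> str:
    """Plain-Python step logic; in LangGraph this becomes a task."""
    return text.strip()[:80]

try:
    from langgraph.func import entrypoint, task

    summarize = task(draft_summary)  # each task runs as a checkpointed step

    @entrypoint()
    def workflow(text: str) -> str:
        # Tasks return futures; .result() waits for the step to finish.
        return summarize(text).result()
except ImportError:
    workflow = None  # langgraph not installed; the step logic above still runs
```

Because the runtime is shared with StateGraph, checkpointing, memory, and human-in-the-loop interrupts apply to workflows written this way too.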
Trying out DeepSeek-R1? Put it to the test in the LangSmith Playground. 🛝 Run side-by-side comparisons between DeepSeek-R1 and other models, iterate on your prompts, and see how they stack up.

Go play ➡️ https://lnkd.in/gfn5J8-A
🔥 Learn how Decagon built their AI Agent Engine 🔥

Join us for an evening meetup on February 4th in San Francisco, hosted at Pear VC, featuring a fireside chat with Harrison Chase (CEO of LangChain) and Bihan Jiang (Product Lead at Decagon). Bihan and Harrison will take you behind the scenes of Decagon's comprehensive system for deploying production-ready AI agents. Trusted by industry leaders like Duolingo, Notion, Rippling, and Eventbrite, Decagon's customer support agent handles end-to-end customer interactions.

Enjoy snacks, drinks, and insightful discussions! Spots are limited, so RSVP now ➡️ https://lu.ma/w7y0bqwr
📀 Exploring prompt optimization

We created 5 different datasets and 5 different algorithms for prompt optimization. Here's what we learned:
🥇 Claude Sonnet performs best (> o1)
🧠 Prompt optimization behaves like memory: it is most effective on tasks where the model lacks domain knowledge

Read the full blog here: https://lnkd.in/g8zExYAt

Next steps:
- Explore memory more directly
- Explore more prompt optimization algorithms
- Incorporate into LangSmith

If you are interested in beta testing prompt optimization algorithms, sign up here: https://lnkd.in/g8zExYAt
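For readers new to the idea, the core loop behind any prompt optimization algorithm can be sketched in a few lines: score candidate prompts on a labeled dataset and keep the best. The dataset, prompts, and toy scorer below are all hypothetical; the algorithms in the blog use an LLM to propose and judge candidates instead:

```python
# Hedged sketch of the search loop at the heart of prompt optimization:
# evaluate each candidate prompt against a dataset and select the winner.
dataset = [("2+2", "4"), ("3+5", "8"), ("10-7", "3")]

def toy_model(prompt: str, question: str) -> str:
    """Stand-in for an LLM call: obeys the prompt's formatting instruction."""
    answer = str(eval(question))  # toy arithmetic; a real model replaces this
    if "digits only" in prompt:
        return answer
    return "the answer is " + answer

def score(prompt: str) -> float:
    """Exact-match accuracy of the prompt over the dataset."""
    hits = sum(toy_model(prompt, q) == a for q, a in dataset)
    return hits / len(dataset)

candidates = [
    "Answer the question.",
    "Answer with digits only.",
    "Explain step by step.",
]
best = max(candidates, key=score)
print(best, score(best))  # the digits-only prompt matches the labels exactly
```

Real algorithms differ in how new candidates are proposed (mutation, gradient-like critiques, few-shot selection), but all share this evaluate-and-select skeleton.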