Summary
We’ve taken a quick and broad tour of the LangChain toolbox in this chapter. First, we created a basic application that accepts user input for predefined fields and deployed it on a web server. Next, we used LangGraph to build an open-ended chat, adding a memory thread so the model could recall prior information from our interaction with it.
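The memory-thread idea can be sketched without any framework: a checkpointer keyed by a thread ID stores the running message history, so each turn sees the full prior conversation. The class and function names below are illustrative stand-ins, not LangGraph's actual API.

```python
from collections import defaultdict

class ThreadMemory:
    """Toy checkpointer: keeps a per-thread message history (illustrative only)."""
    def __init__(self):
        self._threads = defaultdict(list)

    def append(self, thread_id, role, content):
        self._threads[thread_id].append({"role": role, "content": content})

    def history(self, thread_id):
        return list(self._threads[thread_id])

def chat_turn(memory, thread_id, user_message, model):
    """One conversational turn: load history, call the model, persist both messages."""
    memory.append(thread_id, "user", user_message)
    reply = model(memory.history(thread_id))
    memory.append(thread_id, "assistant", reply)
    return reply

# A stub "model" that reports how many messages it was shown.
echo_model = lambda msgs: f"seen {len(msgs)} message(s)"

mem = ThreadMemory()
chat_turn(mem, "thread-1", "hello", echo_model)       # → 'seen 1 message(s)'
chat_turn(mem, "thread-1", "remember me?", echo_model)  # → 'seen 3 message(s)'
```

Because the second turn replays the first turn's user and assistant messages before appending the new one, the model sees three messages, which is exactly the "recall prior information" behavior a checkpointed thread provides.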
We then extended the open-ended chat with a RAG lookup: we downloaded documents from the LangChain codebase, stored them in a vector database, and used similarity search to pull relevant passages into the prompt. Finally, we enhanced the RAG application with conditionally activated tool nodes, letting the LLM decide when to request human-in-the-loop input or trigger an automated web search. We deployed these applications on a FastAPI server, laying the foundation for building interactive, LLM-powered applications on the web.
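The core of the RAG step, embedding documents and retrieving the most similar one for the prompt, can be illustrated with a minimal framework-free sketch. The bag-of-words "embedding" and in-memory index below are deliberate simplifications; a real application would use a learned embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (a real app would use a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = [
    "LangGraph builds stateful agent graphs",
    "FastAPI serves web applications",
    "vector databases support similarity lookup",
]
index = [(d, embed(d)) for d in docs]  # stands in for the vector database

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

retrieve("similarity lookup in a vector database")
# → ['vector databases support similarity lookup']
```

The retrieved text would then be prepended to the user's question to form the augmented prompt, which is all "RAG lookup" means at this level of abstraction.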