Building a simple LLM application
To begin building with LangChain, we’ll start by installing the library and needed dependencies:
pip install -U langchain langchain-mistralai fastapi langserve sse-starlette nest-asyncio pyngrok uvicorn
For this example, we're going to use one of the models from Mistral AI. We'll need an API key to use in the rest of our code, which you can create on the page shown in Figure 8.3 at https://console.mistral.ai/api-keys/:
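Once you have a key, a common pattern is to expose it to the code through an environment variable rather than hard-coding it. Here is a minimal sketch; `MISTRAL_API_KEY` is the variable that the `langchain-mistralai` integration reads by default, and the placeholder value is ours to be replaced:

```python
import os

# Replace the placeholder with the key created on the Mistral console.
# langchain-mistralai looks for the key in the MISTRAL_API_KEY variable,
# so model classes can be instantiated without passing the key explicitly.
os.environ.setdefault("MISTRAL_API_KEY", "your-api-key-here")
```

Using `setdefault` means a key already exported in your shell takes precedence over the placeholder.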

Figure 8.3: Mistral AI API key creation
Finally, we’ll want to be able to view the results of our calculations from a Python server, so we’ll use the ngrok platform to expose our locally hosted LLM application. You can create an account on ngrok at https://dashboard.ngrok.com/get-started/your-authtoken (Figure 8.4); you’ll need the token shown there later to serve your application.

Figure 8.4: ngrok token creation
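The ngrok authtoken can be handled the same way as the API key, kept in an environment variable instead of the source. A sketch, assuming the token copied from the dashboard page above; the registration call `ngrok.set_auth_token(...)` comes from the `pyngrok` package installed earlier:

```python
import os

# Replace the placeholder with the token from
# https://dashboard.ngrok.com/get-started/your-authtoken
os.environ.setdefault("NGROK_AUTHTOKEN", "your-ngrok-authtoken-here")

# With pyngrok available, you would then register the token once
# before opening a tunnel:
#   from pyngrok import ngrok
#   ngrok.set_auth_token(os.environ["NGROK_AUTHTOKEN"])
```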
Now that we’ve gotten all the tokens we need, let’s set...