First, download Ollama for your operating system. If you're using Windows or macOS, get Ollama from the official website. If you're using Linux, install it with the following command:
curl -fsSL https://ollama.com/install.sh | sh
Next, start Ollama so it can download and serve LLM models.
There are several popular LLM models listed on Ollama, such as Llama 2, Mistral, and Gemma. You can browse all the models here. In this example, I will use Llama 3. Let's get the model.

Llama 3 comes in several variants (for example, the 8B and 70B parameter builds). We'll pull the latest tag for this demo.
ollama pull llama3:latest
Once the pull finishes, you can list the downloaded models with the following command:
ollama list
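Before wiring up Streamlit, you can sanity-check the model from Python. Below is a minimal sketch that talks to the Ollama server's REST API on its default local port 11434 via the /api/generate endpoint; the function names here are my own, and the script assumes the Ollama server is running and the llama3 model has been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running Ollama server; prints the model's one-line answer.
    print(generate("Why is the sky blue? Answer in one sentence."))
```

If this prints a sensible answer, the model is installed and serving correctly.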
Now we can use Llama 3 with Streamlit locally.
Before running app.py, install the required packages:
pip install -r requirements.txt
Next, run app.py with the streamlit command:
streamlit run app.py

You'll see an interactive textbox for chatting with Llama 3.