SarahWei0804/Web-app-LLM

Run an LLM locally with Streamlit and Ollama.
Download Ollama

First, download Ollama for your operating system. On Windows or macOS, get the installer from the official website (https://ollama.com). On Linux, install it with the following command:

curl -fsSL https://ollama.com/install.sh | sh
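If the script completes successfully, you can confirm the install by printing the CLI version:

ollama --version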

Next, start Ollama; it needs to be running before you can pull and serve models.
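On Linux the install script usually registers Ollama as a background service; if it isn't already running (or you're on another system), you can start the server manually:

ollama serve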

Get the LLM models

There are several popular LLM models listed on Ollama, e.g. Llama 2, Mistral, Gemma, etc. You can browse the full model library at https://ollama.com/library. In this example, I will use Llama 3. Let's get the model.


Llama 3 comes in several variants and tags. We'll pull the latest tag for this demo.

ollama pull llama3:latest
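If you'd rather pin a specific size, pull it by tag, e.g. the 8B variant:

ollama pull llama3:8b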

After the pull finishes, we can list the downloaded models with the following command:

ollama list
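Before wiring the model into Streamlit, you can sanity-check it from the terminal; ollama run sends a one-off prompt when one is passed as an argument:

ollama run llama3 "Say hello in one sentence."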

Now we can use Llama 3 from a local Streamlit app.

Python: app.py
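The chat logic lives in app.py. Its exact contents aren't reproduced here, but a minimal sketch of this kind of app, assuming the ollama Python package and Streamlit's chat widgets, looks roughly like this:

import ollama
import streamlit as st

st.title("Chat with Llama 3")

# Keep the conversation across Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the chat history on each rerun
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# Read a prompt, send the full history to the local model, show the reply
if prompt := st.chat_input("Ask Llama 3 anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    reply = ollama.chat(model="llama3", messages=st.session_state.messages)
    answer = reply["message"]["content"]
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)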

Before running app.py, install the required packages:

pip install -r requirements.txt
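For a setup like this, requirements.txt plausibly contains little more than the two libraries used above (an assumption on my part; check the repo's actual file):

streamlit
ollama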

Next, launch app.py with the streamlit command:

streamlit run app.py

Streamlit will open the app in your browser (by default at http://localhost:8501), where you'll see an interactive chat box backed by Llama 3.
