# Ollama

Ollama is an excellent open-source tool that lets you easily run and manage various large language models (LLMs) locally. Cherry Studio now supports Ollama integration, allowing you to interact directly with locally deployed LLMs in a familiar interface without relying on cloud services!

## What is Ollama?

Ollama is a tool that simplifies the deployment and use of large language models (LLMs). It has the following features:

* **Local execution:** The model runs entirely on your local computer, without needing an internet connection, protecting your privacy and data security.
* **Easy to use:** With simple command-line instructions, you can download, run, and manage various LLMs.
* **Rich model selection:** Supports many popular open-source models such as Llama 2, DeepSeek, Mistral, and Gemma.
* **Cross-platform:** Supports macOS, Windows, and Linux.
* **Open API**: Supports OpenAI-compatible interfaces and can be integrated with other tools.
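
For instance, because the API is OpenAI-compatible, any OpenAI-style client can talk to a local model. A minimal sketch with `curl`, assuming Ollama is running on its default port (11434) and the `llama3.2` model has already been downloaded:

```sh
# Send one chat message to a local model through Ollama's OpenAI-compatible endpoint.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```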

## Why use Ollama in Cherry Studio?

* **No cloud service needed:** No longer limited by cloud API quotas and costs—fully experience the power of local LLMs.
* **Data privacy:** All your conversation data stays local, so you don't have to worry about privacy leaks.
* **Available offline:** You can continue interacting with the LLM even without an internet connection.
* **Customization:** Choose and configure the LLM that best fits your needs.

## Configure Ollama in Cherry Studio

### **1. Install and run Ollama**

First, you need to install and run Ollama on your computer. Please follow these steps:

* **Download Ollama:** Visit the Ollama official website (<https://ollama.com/>), and download the appropriate installer for your operating system.\
  On Linux, you can install Ollama directly by running the command:

  ```sh
  curl -fsSL https://ollama.com/install.sh | sh
  ```
* **Install Ollama:** Follow the installer’s instructions to complete the installation.
* **Download a model:** Open a terminal (or command prompt) and use the `ollama run` command to download the model you want to use. For example, to download the Llama 3.2 model, you can run:

  ```sh
  ollama run llama3.2
  ```

  Ollama will automatically download and run the model.
* **Keep Ollama running:** While you are using Cherry Studio to interact with the Ollama model, make sure Ollama stays running.
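
To confirm the service is up before moving on, you can query its default local address. A minimal check, assuming the default port 11434 has not been changed (the exact response text may vary between versions):

```sh
# A running Ollama service answers on its default port; it typically replies "Ollama is running".
curl http://localhost:11434
```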

### **2. Add Ollama as a provider in Cherry Studio**

Next, add Ollama as a custom AI provider in Cherry Studio:

* **Open Settings:** In the left sidebar of the Cherry Studio interface, click “Settings” (gear icon).
* **Go to Model Services:** On the settings page, select the “Model Services” tab.
* **Add provider:** Click Ollama in the list.

<figure><img src="/files/447bac05fcc903e381d25e79e57ce7e54fc31ee9" alt=""><figcaption></figcaption></figure>

### **3. Configure the Ollama provider**

Find the Ollama provider you just added in the list and configure it:

1. **Enabled status:**
   * Make sure the switch on the far right of the Ollama provider is turned on, indicating it is enabled.
2. **API key:**
   * Ollama does not require an **API key** by default. You can leave this field blank or enter any content.
3. **API address:**
   * Enter the local API address provided by Ollama. Usually, the address is:

     ```
     http://localhost:11434/
     ```

     If you changed the port, please modify it accordingly.
4. **Keep alive time:** This option sets the session retention time, in minutes. If there are no new conversations within the set time, Cherry Studio will automatically disconnect from Ollama and release resources.
5. **Model management:**
   * Click the “+ Add” button to manually add the names of models you have already downloaded in Ollama.
   * For example, if you have already downloaded the `llama3.2` model via `ollama run llama3.2`, enter `llama3.2` here. A quick way to confirm the exact model names is sketched after this list.
   * Click the “Manage” button to edit or delete added models.
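
If you are unsure which model names to enter, Ollama can report them itself. A small sketch, assuming the service is running on the default address configured above:

```sh
# List locally downloaded models with their exact names (e.g. "llama3.2:latest").
curl http://localhost:11434/api/tags

# The same information from the command line:
ollama list
```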

## Get started

After completing the above configuration, you can select the Ollama provider and your downloaded model in Cherry Studio’s chat interface and start chatting with the local LLM!

## Tips and notes

* **First run of a model:** The first time you run a model, Ollama needs to download the model files, which may take a while depending on the model size and your connection. To avoid this wait mid-conversation, you can download models in advance (see the sketch after this list).
* **View available models:** Run the `ollama list` command in the terminal to view the list of Ollama models you have downloaded.
* **Hardware requirements:** Running large language models requires certain computing resources (CPU, memory, GPU). Please make sure your computer configuration meets the model’s requirements.
* **Ollama documentation**: You can click the `View Ollama documentation and models` link on the configuration page to jump straight to the official Ollama documentation.
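
To download a model ahead of time rather than on first use, you can pull it explicitly. A short sketch, assuming `llama3.2` is the model you want (any name from the Ollama model library works the same way):

```sh
# Download the model files without starting an interactive session,
# then confirm the model appears in the local list.
ollama pull llama3.2
ollama list
```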

