Ollama
Ollama is an excellent open-source tool that lets you easily run and manage various large language models (LLMs) locally. Cherry Studio now supports Ollama integration, allowing you to interact directly with locally deployed LLMs within a familiar interface without relying on cloud services!
What is Ollama?
Ollama is a tool that simplifies the deployment and use of large language models (LLMs). It has the following features:
Runs locally: Models run entirely on your local computer without needing an internet connection, protecting your privacy and data security.
Simple and easy to use: With simple command-line instructions, you can download, run, and manage various LLMs.
Rich model selection: Supports many popular open-source models such as Llama 2, DeepSeek, Mistral, Gemma, and more.
Cross-platform: Supports macOS, Windows, and Linux systems.
Open API: Supports an OpenAI-compatible interface and can be integrated with other tools.
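Because the interface is OpenAI-compatible, any OpenAI-style client can talk to a local Ollama server directly. As a minimal sketch (assuming Ollama is running on its default port 11434 and that a model named llama3.2 has already been downloaded; substitute your own model name), a chat request from the command line might look like this:

# Send a chat request to Ollama's OpenAI-compatible endpoint (default port 11434)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello, Ollama!"}]
  }'

The response comes back in the standard OpenAI chat-completion JSON format, which is what lets tools such as Cherry Studio integrate with Ollama without a dedicated adapter.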
Why use Ollama in Cherry Studio?
No cloud services required: No longer constrained by cloud API quotas and fees—fully enjoy the power of local LLMs.
Data privacy: All your conversation data stays local, so you don't need to worry about privacy leaks.
Available offline: You can continue interacting with the LLM even without a network connection.
Customization: You can choose and configure the LLM that best fits your needs.
Configuring Ollama in Cherry Studio
1. Install and run Ollama
First, you need to install and run Ollama on your computer. Follow these steps:
Download Ollama: Visit the official Ollama website (https://ollama.com/) and download the installer for your operating system. On Linux, you can install Ollama directly with the official install script:
curl -fsSL https://ollama.com/install.sh | sh
Install Ollama: Complete the installation following the installer’s instructions.
Download a model: Open a terminal (or command prompt) and use the ollama run command to download the model you want to use. For example, to download the Llama 2 model, run:
ollama run llama2
Ollama will automatically download and run the model.
Keep Ollama running: While you use Cherry Studio to interact with Ollama models, make sure Ollama stays running.
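Before moving on to Cherry Studio, it can help to confirm that the installation succeeded and that the local server is reachable. A quick sanity check, assuming the default port of 11434, might look like this (version numbers and model names will differ on your machine):

# Check that the ollama CLI is installed
ollama --version

# List the models that have already been downloaded
ollama list

# Confirm the local server is responding (it typically replies "Ollama is running")
curl http://localhost:11434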
2. Add Ollama as a provider in Cherry Studio
Next, add Ollama as a custom AI provider in Cherry Studio:
Open Settings: In the left navigation bar of the Cherry Studio interface, click "Settings" (the gear icon).
Go to Model Services: On the settings page, select the "Model Services" tab.
Add provider: Click Ollama in the list.

3. Configure the Ollama provider
Find the Ollama provider you just added in the provider list and configure it in detail:
Enabled status:
Ensure the switch on the far right of the Ollama provider is turned on, indicating it is enabled.
API key:
Ollama does not require an API key by default. You can leave this field blank or enter any value.
API address:
Fill in the local API address provided by Ollama. By default, the address is:
http://localhost:11434
If you changed the port, update the address accordingly (one way to change the port is sketched after this list).
Keep-alive time: This option sets the session keep-alive time in minutes. If there is no new conversation within the set time, Cherry Studio will automatically disconnect from Ollama to free resources.
Model management:
Click the "+ Add" button to manually add the names of the models you have already downloaded in Ollama.
For example, if you have already downloaded the llama3.2 model with ollama run llama3.2, you can enter llama3.2 here.
Click the "Manage" button to edit or delete the models you have added.
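A note on the API address mentioned above: Ollama listens on port 11434 by default. If you need a different address or port, one common approach is to set the OLLAMA_HOST environment variable before starting the server yourself, then enter the matching address in Cherry Studio. The sketch below uses port 11435 purely as an illustrative choice; if Ollama already runs as a background service on your system, you would set the variable in that service's configuration instead:

# Start the Ollama server on a non-default address/port (illustrative values)
OLLAMA_HOST=127.0.0.1:11435 ollama serve

# The API address to enter in Cherry Studio would then be:
#   http://localhost:11435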
Get started
After completing the above configuration, you can select the Ollama provider and the model you downloaded in Cherry Studio’s chat interface and start conversing with the local LLM!
Tips and tricks
First run of a model: The first time you run a model, Ollama needs to download the model files, which may take a long time—please be patient.
View available models: Run the ollama list command in the terminal to see which Ollama models you have downloaded (an illustrative example is shown after these tips).
Hardware requirements: Running large language models requires sufficient computing resources (CPU, memory, GPU); make sure your computer meets the requirements of the model you choose.
Ollama documentation: You can click the "View Ollama documentation and models" link on the configuration page to jump straight to the official Ollama documentation.
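As a companion to the tips above, you can pre-download a model and check its exact name before adding it in Cherry Studio. The model name and the sample output below are purely illustrative; your models, IDs, and sizes will differ:

# Pre-download a model so the first chat in Cherry Studio is not delayed
ollama pull llama3.2

# List downloaded models; the NAME column is what you enter in Cherry Studio
ollama list
#   NAME               ID              SIZE      MODIFIED
#   llama3.2:latest    a1b2c3d4e5f6    2.0 GB    2 days ago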