Ollama
Ollama is an excellent open-source tool that allows you to easily run and manage various large language models (LLMs) locally. Cherry Studio now supports Ollama integration, enabling you to interact directly with locally deployed LLMs within a familiar interface, without relying on cloud services!
What is Ollama?
Ollama is a tool that simplifies the deployment and use of large language models (LLMs). It has the following features:
Local Execution: Models run entirely on your local computer, without needing an internet connection, protecting your privacy and data security.
Easy to Use: Download, run, and manage various LLMs with simple command-line instructions.
Rich Model Support: Supports many popular open-source models such as Llama 3.2, DeepSeek, Mistral, Gemma, and more.
Cross-Platform: Supports macOS, Windows, and Linux systems.
Open API: Supports OpenAI-compatible interfaces for integration with other tools.
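To make the "OpenAI-compatible interface" point concrete, here is a minimal sketch that talks to a local Ollama server with the official openai Python package. The base URL is Ollama's default address, and the model name llama3.2 is only an example of a model you have already pulled locally:

```python
# Minimal sketch: call a local Ollama server through its OpenAI-compatible
# endpoint (assumes the `openai` package is installed and a model named
# "llama3.2" has already been pulled).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Say hello from my local LLM."}],
)
print(response.choices[0].message.content)
```

Cherry Studio talks to the same local server; the rest of this page shows how to point it there through the UI.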
Why Use Ollama in Cherry Studio?
No Cloud Services Needed: No longer limited by cloud API quotas and costs; enjoy the full power of local LLMs.
Data Privacy: All your conversation data remains local, eliminating concerns about privacy breaches.
Offline Availability: Continue interacting with LLMs even without an internet connection.
Customization: Choose and configure the LLM that best suits your needs.
Configuring Ollama in Cherry Studio
1. Install and Run Ollama
First, you need to install and run Ollama on your computer. Please follow these steps:
Download Ollama: Visit the Ollama official website (https://ollama.com/) and download the installation package for your operating system. On Linux, you can install Ollama directly by running:
curl -fsSL https://ollama.com/install.sh | sh
Install Ollama: Follow the installer's instructions to complete the installation.
Download Models: Open your terminal (or command prompt) and use the ollama run command to download the model you want to use. For example, to download the Llama 3.2 model, run:
ollama run llama3.2
Ollama will automatically download and run the model.
Keep Ollama Running: Make sure Ollama stays running while you interact with Ollama models from Cherry Studio; a quick way to check that the local server is reachable is sketched below.
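Before moving on to Cherry Studio, you can confirm that the local server is actually responding. The sketch below assumes the requests package is installed and that Ollama listens on its default port; it queries Ollama's /api/tags endpoint, which lists the models you have downloaded:

```python
# Minimal sketch: verify the local Ollama server is reachable and list
# downloaded models (assumes `requests` is installed and the default port 11434).
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "llama3.2:latest"
```

If this fails with a connection error, Ollama is not running (or is listening on a different port).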
2. Add Ollama Provider in Cherry Studio
Next, add Ollama as a custom AI provider in Cherry Studio:
Open Settings: In the Cherry Studio interface, click "Settings" (gear icon) in the left navigation bar.
Go to Model Services: On the settings page, select the "Model Services" tab.
Add Provider: Click Ollama in the list.

3. Configure the Ollama Provider
Find the newly added Ollama entry in the provider list and configure it:
Enabled Status:
Ensure the toggle switch on the far right of the Ollama provider is turned on, indicating it is enabled.
API Key:
Ollama does not require an API key by default. You can leave this field blank or enter any content.
API Address:
Enter the local API address provided by Ollama. Typically, the address is:
http://localhost:11434/
If you have changed the port, adjust the address accordingly.
Keep Alive Time: This option sets the session's keep-alive duration, in minutes. If there are no new conversations within the set time, Cherry Studio automatically disconnects from Ollama and releases resources. A sketch at the end of this step shows how the address and the keep-alive idea map onto Ollama's own API.
Model Management:
Click the "+ Add" button to manually add the name of the model you have already downloaded in Ollama.
For example, if you have downloaded the llama3.2 model via ollama run llama3.2, you can enter llama3.2 here.
Click the "Manage" button to edit or delete the models you have added.
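To make the API address and keep-alive settings more concrete, here is a hedged sketch of a direct request to Ollama's native chat API at the same address. The keep_alive field is Ollama's own request parameter for how long a model stays loaded after a call; Cherry Studio's keep-alive option is configured in the UI instead, so treat this only as an illustration:

```python
# Minimal sketch: a direct call to Ollama's native /api/chat endpoint,
# using the same base address configured above (assumes `requests` is
# installed and a locally pulled "llama3.2" model).
import requests

payload = {
    "model": "llama3.2",   # must match a name shown by `ollama list`
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "stream": False,       # return one complete JSON response
    "keep_alive": "5m",    # keep the model loaded for 5 minutes after this request
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```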
Get Started
After completing the above configurations, you can select the Ollama provider and your downloaded model in Cherry Studio's chat interface to start conversing with your local LLM!
Tips and Tricks
First Time Running a Model: The first time you run a model, Ollama needs to download the model files, which may take a considerable amount of time. Please be patient.
View Available Models: Run the ollama list command in the terminal to see the Ollama models you have downloaded.
Hardware Requirements: Running large language models requires sufficient computing resources (CPU, memory, and ideally a GPU). Make sure your computer meets the requirements of the model you want to run.
Ollama Documentation: You can click the 'View Ollama Documentation and Models' link on the configuration page to quickly jump to the official Ollama documentation.