Introduction

CherryStudio: Your All-in-One AI Assistant Platform

Follow us on social media: Twitter (X), Rednote, Weibo, Bilibili, Douyin

Join our communities: QQ Group (575014769), Telegram, Discord


CherryStudio is a versatile AI assistant platform integrating multi-model dialogue, knowledge base management, AI art generation, translation, and more. With its highly customizable design, powerful extensibility, and user-friendly experience, CherryStudio is the ideal choice for both professional users and AI enthusiasts. Whether you are a beginner or a developer, you can find suitable AI functionalities in CherryStudio to enhance your work efficiency and creativity.


Features and Highlights

1. Basic Dialogue Functionality

  • One Question, Multiple Answers: Supports generating responses from multiple models simultaneously for the same question, allowing users to easily compare the performance of different models.

  • Automatic Grouping: Dialogue history for each assistant is automatically grouped for easy and quick access to past conversations.

  • Dialogue Export: Supports exporting complete dialogues in various formats (e.g., Markdown, PDF, etc.) for convenient storage and sharing.

  • Highly Customizable Parameters: In addition to basic parameter adjustments, users can also fill in custom parameters to meet personalized needs.

  • Assistant Marketplace: Built-in marketplace with over a thousand industry-specific assistants, covering fields such as translation, programming, and writing, while also supporting user-defined assistants.

  • Multiple Format Rendering: Supports Markdown rendering, formula rendering, HTML real-time preview, and other features to enhance content presentation.

2. Integration of Diverse Feature Functions

  • AI Art Generation: Provides a dedicated drawing panel, allowing users to generate high-quality images through natural language descriptions.

  • AI Mini-Programs: Integrates various free web-based AI tools, allowing direct use without switching browsers.

  • Translation Function: Supports dedicated translation panels, dialogue translation, prompt translation, and other translation scenarios.

  • File Management: Unified and categorized management of files from dialogues, art generation, and knowledge bases, avoiding tedious searching.

  • Global Search: Supports quickly locating historical records and knowledge base content, improving work efficiency.

3. Unified Management Mechanism for Multiple Service Providers

  • Service Provider Model Aggregation: Supports unified calling of models from mainstream service providers such as OpenAI, Gemini, Anthropic, Azure, etc.

  • Automatic Model Acquisition: One-click acquisition of the complete model list, eliminating the need for manual configuration.

  • Multi-Key Rotation: Supports the use of multiple API keys in rotation to avoid rate limit issues.

  • Precise Avatar Matching: Automatically matches exclusive avatars for each model, enhancing recognition.

  • Custom Service Providers: Supports integrating third-party service providers that comply with the OpenAI, Gemini, Anthropic, and other API specifications, ensuring strong compatibility.

4. Highly Customizable Interface and Layout

  • Custom CSS: Supports global style customization to create an exclusive interface style.

  • Custom Dialogue Layout: Supports list or bubble style layouts, and customizable message styles (such as code snippet styles).

  • Custom Avatars: Supports setting personalized avatars for both the software and assistants.

  • Custom Sidebar Menu: Users can hide or sort sidebar functions according to their needs, optimizing the user experience.

5. Local Knowledge Base System

  • Multiple Format Support: Supports importing various file formats such as PDF, DOCX, PPTX, XLSX, TXT, MD, etc.

  • Multiple Data Source Support: Supports local files, URLs, sitemaps, and even manual input as knowledge base sources.

  • Knowledge Base Export: Supports exporting processed knowledge bases and sharing them with others.

  • Search and Check Support: After importing the knowledge base, users can perform real-time search tests to view processing results and segmentation effects.

6. Featured Focus Functions

  • Quick Q&A: Invoke a quick assistant in any scenario (such as WeChat, browser) to quickly obtain answers.

  • Quick Translation: Supports quickly translating words or text in other scenarios.

  • Content Summarization: Quickly summarize lengthy text content to improve information extraction efficiency.

  • Explanation: No need for complex prompts, one-click explanation of questions you don't understand.

7. Data Security

  • Multiple Backup Solutions: Supports local backup, WebDAV backup, and scheduled backups to ensure data security.

  • Data Security: Supports full local usage scenarios, combined with local large models, to avoid data leakage risks.


Project Advantages

  1. Beginner-Friendly: CherryStudio is committed to lowering the technical barrier so that users with no prior experience can get started quickly and focus on their work, study, or creation.

  2. Comprehensive Documentation: Provides detailed user documentation and troubleshooting manuals to help users quickly solve problems.

  3. Continuous Iteration: The project team actively responds to user feedback and continuously optimizes features to ensure the healthy development of the project.

  4. Open Source & Extensibility: Supports users to customize and extend through open-source code to meet personalized needs.


Applicable Scenarios

  • Knowledge Management and Query: Quickly build and query exclusive knowledge bases through the local knowledge base function, suitable for research, education, and other fields.

  • Multi-Model Dialogue and Creation: Supports simultaneous dialogue with multiple models, helping users quickly obtain information or generate content.

  • Translation and Office Automation: Built-in translation assistant and file processing functions, suitable for users who need cross-language communication or document processing.

  • AI Art Generation and Design: Generate images through natural language descriptions, meeting creative design needs.

Star History

Follow Our Social Media Accounts

  • Rednote

  • Bilibili

  • Weibo

  • Douyin

  • Twitter (X)

Download

Direct Download Links

Windows Version:

Note: Windows 7 system is not supported for installing CherryStudio.

  • Windows Installer (Setup)

Main Download Link (GitHub):

【Download】

Alternative Download Links:

【Link 1】 【Link 2】 【Link 3】

  • Windows Portable Version (Portable)

Main Download Link (GitHub):

【Download】

Alternative Download Links:

【Link 1】 【Link 2】 【Link 3】


macOS Version:

  • macOS Intel Version (x64)

Main Download Link (GitHub):

【Download】

Alternative Download Links:

【Link 1】 【Link 2】 【Link 3】

  • macOS Apple Silicon Version (ARM64-M Series Chip)

Main Download Link (GitHub):

【Download】

Alternative Download Links:

【Link 1】 【Link 2】 【Link 3】


Linux Version:

  • Linux x86_64 Version

Main Download Link (GitHub):

【Download】

Alternative Download Links:

【Link 1】 【Link 2】 【Link 3】

  • Linux ARM64 Version

Main Download Link (GitHub):

【Download】

Alternative Download Links:

【Link 1】 【Link 2】 【Link 3】


Cloud Drive Download

This may not be the latest version; the direct download links above are recommended.

Quark Drive: https://pan.quark.cn/s/c8533a1ec63e#/list/share

Planning

Todo List

Feature Introduction

Chat Interface

Assistants and Topics

Assistants

Assistants are used to personalize the settings of a selected model, such as preset prompts and parameter presets. These settings allow the chosen model to better meet your expected workflow.

The default assistant comes with a relatively general set of parameters (and no prompt). You can use it directly, or find the presets you need on the Agents page.

Topics

An assistant is the parent of its topics. Multiple topics (i.e., conversations) can be created under a single assistant, and all of them share the assistant's parameter settings, prompt, and other model settings.

Chat Settings

Model Settings

Model settings are synchronized with the Settings parameters in the assistant settings. See the Assistant Settings section of the documentation for details.

In dialogue settings, only the model settings apply to the current assistant; all other settings apply globally. For example, if you set the message style to bubble, it applies to every topic of every assistant.

Message Settings

Message Separator Line:

Use a separator line to separate the message body from the operation bar.

Use Serif Font:

Font style switching. You can now also change the font through Custom CSS.

Show Line Numbers for Code:

Displays line numbers for code blocks when the model outputs code snippets.

Collapsible Code Blocks:

When enabled, long code outputs within code snippets will be automatically collapsed.

Message Style:

Allows switching the dialogue interface to bubble style or list style.

Code Style:

Allows switching the display style of code snippets.

Mathematical Formula Engine:

  • KaTeX renders faster because it is specifically designed for performance optimization.

  • MathJax renders slower but is more feature-rich and supports more mathematical symbols and commands.

Message Font Size:

Adjust the font size of the dialogue interface.

Input Settings

Show Estimated Token Count:

Displays the estimated number of tokens consumed by the input text in the input box (not the actual tokens consumed by the context, for reference only).

Paste Long Text as File:

When long text copied from elsewhere is pasted into the input box, it is automatically displayed as a file attachment to reduce clutter while you type the rest of your message.

Markdown Render Input Messages:

When turned off, only the model's replies are rendered as Markdown; the messages you send are shown as plain text.

Translate with 3 Quick Spacebar Taps:

After entering a message in the dialogue interface input box, quickly tapping the spacebar three times can translate the entered content into English.

Note: This operation will overwrite the original text.


Assistant Settings

In the assistant list, right-click the assistant you want to configure and choose the corresponding option from the context menu.

Edit Assistant

Assistant settings apply to all topics under this assistant.

Prompt Settings

Name:

You can customize an assistant name for easy identification.

Prompt:

This is the system prompt. You can refer to the prompts on the Agents page when writing your own.

Model Settings

Default Model:

You can pin a default model for this assistant. When the assistant is added from the Agents page or copied, its initial model will be this one. If this item is not set, the initial model will be the global default model (i.e., the Default Assistant Model).

There are two kinds of default models: the global default conversation model and the assistant's own default model. The assistant's default model takes priority over the global one; when it is not set, the assistant falls back to the global default conversation model.
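
A minimal sketch of this priority rule (illustrative pseudologic with hypothetical names, not CherryStudio's actual source):

// Illustrative only: the assistant's own default model wins when set;
// otherwise the global default conversation model is used.
function effectiveModel(assistantDefault: string | undefined, globalDefault: string): string {
  return assistantDefault ?? globalDefault;
}

console.log(effectiveModel(undefined, "gpt-4o-mini")); // falls back to the global default
console.log(effectiveModel("gpt-4o", "gpt-4o-mini"));  // assistant default takes priority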

Auto Reset Model:

When enabled: if you switch to another model while chatting under this assistant, newly created topics reset to the assistant's default model. When disabled: a new topic keeps the model used in the previous topic.

For example, if the default model of the assistant is gpt-3.5-turbo, and I create topic 1 under this assistant, and switch to gpt-4o during the dialogue in topic 1, then:

If auto-reset is turned on: When creating a new topic 2, the default model selected for topic 2 is gpt-3.5-turbo;

If auto-reset is turned off: When creating a new topic 2, the default model selected for topic 2 is gpt-4o.

Temperature:

The temperature parameter controls the randomness and creativity of the text generated by the model (default value is 0.7). Specifically:

  • Low temperature value (0-0.3):

    • Output is more deterministic and focused

    • Suitable for code generation, data analysis and other scenarios requiring accuracy

    • Tends to choose the most probable words for output

  • Medium temperature value (0.4-0.7):

    • Balances creativity and coherence

    • Suitable for daily conversations, general writing

    • Recommended for chatbot dialogues (around 0.5)

  • High temperature value (0.8-1.0):

    • Produces more creative and diverse output

    • Suitable for creative writing, brainstorming and other scenarios

    • But may reduce the coherence of the text

Top P (Nucleus Sampling):

The default value is 1. Smaller values make the generated content more monotone and easier to understand; larger values give the model a wider range of vocabulary to choose from, producing more diverse output.

Nucleus sampling affects the output by controlling the probability threshold for vocabulary selection:

  • Smaller value (0.1-0.3):

    • Only consider the highest probability vocabulary

    • Output is more conservative and controllable

    • Suitable for code comments, technical documentation and other scenarios

  • Medium value (0.4-0.6):

    • Balances vocabulary diversity and accuracy

    • Suitable for general conversation and writing tasks

  • Larger value (0.7-1.0):

    • Consider a wider range of vocabulary choices

    • Produces richer and more diverse content

    • Suitable for creative writing and other scenarios requiring diversified expression

  • These two parameters can be used independently or in combination.

  • Choose appropriate parameter values according to the specific task type.

  • It is recommended to find the parameter combination that best suits specific application scenarios through experiments.

  • The above content is for reference and conceptual understanding only, and the given parameter range may not be suitable for all models. Please refer to the parameter recommendations given in the relevant model documentation.
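
For reference, in OpenAI-compatible APIs both parameters are plain fields in the request body; a minimal, purely illustrative example (the model name and values are placeholders):

// Illustrative OpenAI-compatible request body; values are examples only.
const chatRequest = {
  model: "gpt-4o", // placeholder model name
  messages: [{ role: "user", content: "Write a short product description." }],
  temperature: 0.5, // medium value: balances creativity and coherence
  top_p: 0.9,       // fairly large value: wider vocabulary, more diverse output
};

console.log(JSON.stringify(chatRequest, null, 2));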

Context Window

The number of messages to keep in the context. The larger the value, the longer the context and the more tokens consumed:

  • 5-10: Suitable for normal conversations

  • >10: Complex tasks requiring longer memory (e.g., tasks that generate long articles step-by-step according to a writing outline, which requires ensuring coherent context logic)

  • Note: The more messages, the more tokens are consumed.
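
Conceptually, the context window simply limits how many recent messages are sent with each request; a hedged sketch (hypothetical types, not the actual implementation):

// Illustrative only: keep the most recent N messages as context.
type Message = { role: "user" | "assistant"; content: string };

function buildContext(history: Message[], contextWindow: number): Message[] {
  return history.slice(-contextWindow); // older messages are dropped and cost no tokens
}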

Enable Message Length Limit (MaxToken)

The maximum number of tokens for a single response. In large language models, max tokens is a key parameter that directly affects the length and completeness of generated responses. Set it according to your needs; the suggestions below may also help.

For example: when testing whether a model is reachable after entering a key in CherryStudio, you only need to know whether the model returns anything at all, not specific content; in that case, set MaxToken to 1.

Most models have a MaxToken limit of 4k tokens, though some support 2k, 16k, or even more; check the model's documentation page for details.

Suggestions:

  • General chat: 500-800

  • Short article generation: 800-2000

  • Code generation: 2000-3600

  • Long article generation: 4000 and above (requires model support)

In general, responses are limited to the MaxToken range. This can lead to truncation (for example, when generating long code) or incomplete answers; in such cases, adjust the value flexibly according to the actual situation.
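
As a hedged sketch of the connectivity-test tip above, a minimal request with max_tokens set to 1 only confirms that the provider responds (URL, key, and model are placeholders):

// Illustrative connectivity test: we only care whether the model answers,
// so max_tokens is 1. baseUrl, apiKey, and model are placeholders.
async function checkConnectivity(baseUrl: string, apiKey: string, model: string): Promise<boolean> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages: [{ role: "user", content: "hi" }], max_tokens: 1 }),
  });
  return res.ok; // true if the provider returned any successful response
}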

Stream Output

Stream output is a data processing method that allows data to be transmitted and processed in a continuous stream form, instead of sending all data at once. This method allows data to be processed and output immediately after it is generated, greatly improving real-time performance and efficiency.

In CherryStudio client and similar environments, it is simply a typewriter effect.

When turned off (non-stream): The model outputs the entire paragraph of information at once after generation (imagine the feeling of receiving a message on WeChat);

When turned on: Output word by word. It can be understood that the large model sends you each word as soon as it generates it, until all words are sent.

Some models do not support stream output and require this switch to be turned off, such as o1-mini, which initially supported only non-stream output.
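
A simplified sketch of the difference on the client side: with stream: true the response body is read chunk by chunk instead of in one piece (SSE parsing of the "data: ..." lines is omitted; URL, key, and model are placeholders):

// Simplified illustration of reading a streamed response incrementally.
// Real clients additionally parse the SSE "data: ..." lines; omitted here.
async function streamChat(baseUrl: string, apiKey: string, model: string, prompt: string) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }], stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;                    // the stream has ended
    console.log(decoder.decode(value)); // each chunk arrives as soon as it is generated
  }
}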

Custom Parameters

Adds extra parameters to the request body, such as presence_penalty. Most users will not need this.

How to fill in: Parameter Name - Parameter Type (Text, Number, etc.) - Value. Reference documentation: Click to go

Parameters mentioned above, such as top_p, max_tokens, and stream, are examples of such request parameters.

Custom parameters have higher priority than built-in parameters. That is, if custom parameters are duplicated with built-in parameters, the custom parameters will override the built-in parameters.

For example: After setting model to gpt-4o in custom parameters, no matter which model is selected in the dialogue, the gpt-4o model is used.
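
Conceptually, the custom parameters are merged into the request body after the built-in ones, so duplicate names are overridden; a minimal sketch (illustrative only, not the actual implementation):

// Illustrative only: custom parameters are applied last,
// so they override built-in parameters with the same name.
const builtinParams = { model: "gpt-3.5-turbo", temperature: 0.7, stream: true };
const customParams  = { model: "gpt-4o", presence_penalty: 0.5 };

const requestBody = { ...builtinParams, ...customParams };
console.log(requestBody);
// -> { model: "gpt-4o", temperature: 0.7, stream: true, presence_penalty: 0.5 }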

Agents

The Agents page is an assistant marketplace where you can select or search for your desired model presets. Clicking on an agent card will add the assistant to your assistant list in the chat page.

You can also edit and create your own assistants on this page.

  • Click on Agents and then click the create agent card to start creating your own assistant.

The button in the upper right corner of the prompt input box is the AI optimize prompt button. Clicking it will overwrite the original text. The model used is the Global Default Assistant Model.

Drawing

The drawing feature currently only supports drawing models from Silicon Flow. You can register an account at Silicon Flow and add it to Providers to use this feature.

More providers will be added in the future, so stay tuned.

If you have questions about the parameters, you can hover your mouse over the ? icon in the corresponding area to view the description.

Translation

Cherry Studio's Translation feature provides you with fast and accurate text translation services, supporting mutual translation between multiple languages.

Interface Overview

The translation interface mainly consists of the following parts:

  1. Source Language Selection Area:

    • Automatic Detection: Cherry Studio will automatically detect the source language and perform the translation.

  2. Target Language Selection Area:

    • Drop-down Menu: Select the language you want to translate the text into.

  3. Configuration Button

    • Allows you to set the default model for translation.

  4. Text Input Box (Left):

    • Enter or paste the text you need to translate.

  5. Translation Result Box (Right):

    • Displays the translated text.

    • Copy Button: Click the button to copy the translation result to the clipboard.

  6. Translate Button:

    • Click this button to start translation.

Usage Steps

  1. Select Languages:

    • In the Source Language Selection Area (1), select the original language (or choose "Automatic Detection").

    • In the Target Language Selection Area (2), select the language you want to translate to.

  2. Enter Text:

    • Enter or paste the text you want to translate in the text input box (4) on the left.

  3. Start Translation:

    • Click the "Translate" button (6).

  4. View and Copy Results:

    • The translation result will be displayed in the Translation Result Box (5) on the right.

    • Click the copy button to copy the translation result to the clipboard.

Advanced Features & Tips

  • Automatic Detection: Make full use of the "Automatic Detection" feature to avoid the hassle of manually selecting the source language.

  • Long Text Translation: Cherry Studio's translation function supports long text translation. If the text is too long, you may need to wait a moment.

  • Settings: You can select a model for the translation function via the settings button.

Frequently Asked Questions (FAQ)

  • Q: What if the translation is inaccurate?

    • A: Although AI translation is powerful, it is not perfect. For texts in professional fields or complex contexts, manual proofreading is recommended. You can also try switching to different models.

  • Q: Which languages are supported?

    • A: Cherry Studio's translation function supports multiple mainstream languages. For a list of supported languages, please refer to the official Cherry Studio website or the in-app instructions.

  • Q: Can I translate an entire file?

    • A: The current interface is mainly for text translation. For file translation, you may need to go to the chat page of Cherry Studio and add the file for translation.

  • Q: What if the translation speed is slow?

    • A: Translation speed may be affected by factors such as network connection, text length, and server load. Please ensure your network connection is stable and wait patiently.

Mini Programs

On the Mini Programs page, you can use the web versions of AI-related programs from various service providers within the client. Currently, custom adding and deleting are not supported.

Knowledge Base

For how to use the Knowledge Base, please refer to the Knowledge Base Tutorial in the advanced tutorials.

Files

The Files interface displays all files related to conversations, drawings, knowledge bases, etc. You can centrally manage and view them on this page.

Quick Assistant

Quick Assistant is a convenient tool provided by Cherry Studio that allows you to quickly access AI functions in any application, enabling instant operations such as asking questions, translating, summarizing, and explaining.

Enable Quick Assistant

  1. Open Settings: Navigate to Settings -> Quick Assistant.

  2. Turn on the switch: Find and click the "Quick Assistant" enable button.

  3. Set Shortcut Key (Optional):

    • The default shortcut key is Ctrl + E.

    • You can customize the shortcut key here to avoid conflicts or to better fit your usage habits.

Using Quick Assistant

  1. Activate: In any application, press your set shortcut key (default Ctrl + E) to open Quick Assistant.

  2. Interact: In the Quick Assistant window, you can directly perform the following operations:

    • Ask a Question: Ask the AI any question.

    • Text Translation: Enter the text to be translated.

    • Summarize Content: Enter long text to summarize.

    • Explanation: Enter the concept or term to be explained.

  3. Close: Press the ESC key or click anywhere outside the Quick Assistant window to close it.

Set Default Model

You can specify a default AI model for Quick Assistant to get a more consistent and personalized experience.

  1. Open Settings: Navigate to Settings -> Default Model -> Default Assistant Model.

  2. Select Model: Select the model you want to use from the dropdown list.

Tips and Tricks

  • Shortcut Key Conflicts: If the default shortcut key conflicts with other applications, please change the shortcut key.

  • Explore More Features: In addition to the features mentioned in the documentation, Quick Assistant may also support other operations, such as code generation, style transfer, etc. We encourage you to explore further as you use it.

  • Feedback and Improvement: If you encounter any problems or have suggestions during use, please send feedback to the Cherry Studio team.

Settings

Provider Settings

This page is only an introduction to the interface functions. For configuration tutorials, please refer to the Provider Configuration tutorial in the basic tutorials.

  • When using built-in providers, you only need to fill in the corresponding API keys.

  • Different providers may have different names for API keys, such as Secret Key, Key, API Key, Token, etc., but they all refer to the same thing.

API Key

In CherryStudio, a single provider supports multiple API keys used in rotation. Keys are cycled through in order, from the first in the list to the last.

  • Separate multiple keys with English commas when adding them. See the example below:

sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4
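
A minimal sketch of how such rotation can work (illustrative only, not CherryStudio's actual implementation):

// Illustrative round-robin rotation over the configured keys.
const apiKeys = ["sk-xxxx1", "sk-xxxx2", "sk-xxxx3", "sk-xxxx4"];
let cursor = 0;

function nextApiKey(): string {
  const key = apiKeys[cursor];
  cursor = (cursor + 1) % apiKeys.length; // wrap around to the first key
  return key;
}
// Each request then uses `Authorization: Bearer ${nextApiKey()}`.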

API Address

When using built-in providers, you generally do not need to fill in the API address. If you need to modify it, please strictly follow the address provided in the official documentation.

If the address provided by the provider is in the format of https://xxx.xxx.com/v1/chat/completions, you only need to fill in the root address part (https://xxx.xxx.com).

The CherryStudio client will automatically append the remaining path (/v1/chat/completions). Failure to fill in as required may result in the inability to use it properly.

Note: Most providers use a unified route for large language models, so the following is usually unnecessary. If the provider's API path uses a different version (such as /v2 or /v3/chat/completions), manually enter the corresponding version in the address bar, ending with "/". When the provider's request route is not the conventional /v1/chat/completions, use the complete address provided by the provider, ending with "#".

That is:

  • When the API address ends with "/", only "chat/completions" is appended.

  • When the API address ends with "#", no appending operation is performed, and only the entered address is used.
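
A sketch of these address rules (hypothetical helper, for illustration only; not CherryStudio's actual source code):

// Illustrative sketch of the address rules described above.
function resolveChatCompletionsUrl(apiAddress: string): string {
  if (apiAddress.endsWith("#")) {
    // "#" means: use the entered address as-is, with nothing appended.
    return apiAddress.slice(0, -1);
  }
  if (apiAddress.endsWith("/")) {
    // "/" means: only "chat/completions" is appended.
    return apiAddress + "chat/completions";
  }
  // Default: the client appends the standard "/v1/chat/completions" path.
  return apiAddress + "/v1/chat/completions";
}

// Examples:
// resolveChatCompletionsUrl("https://xxx.xxx.com")           -> https://xxx.xxx.com/v1/chat/completions
// resolveChatCompletionsUrl("https://xxx.xxx.com/v2/")       -> https://xxx.xxx.com/v2/chat/completions
// resolveChatCompletionsUrl("https://xxx.xxx.com/api/chat#") -> https://xxx.xxx.com/api/chat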

Add Model

Generally, clicking the Manage button in the lower left corner of the provider configuration page will automatically retrieve all models supported by the provider. Click the "+" sign in the retrieved list to add them to the model list.

Note: The models shown in the pop-up list when you click Manage must be added to the provider's model list (by clicking the "+" sign after each model) before they can appear in the model selection list.

Connectivity Check

Click the check button after the API Key input box to test if the configuration is successful.

The model check defaults to using the last dialogue model added in the model list. If there are failures during the check, please check if there are incorrect or unsupported models in the model list.

After successful configuration, be sure to turn on the switch in the upper right corner; otherwise, the provider will still be in an unenabled state, and the corresponding models cannot be found in the model list.

Default Models

Default Assistant Model

When the assistant does not have a default assistant model set, the model selected by default in its new conversations will be the model set here.

Additionally, the model used for optimizing prompts when creating a new assistant is also the model set here.

Topic Naming Model

After each conversation, a model will be called to generate a topic name for the conversation. The model set here is the model used for naming.

Translation Model

The translation function in the input boxes for conversations, paintings, etc., and the translation model in the translation interface all use the model set here.

General Settings

On this page, you can set the interface language of the software and configure proxy settings, etc.

Display Settings

On this page, you can set the software's color theme, page layout, or customize CSS for personalized settings.

Theme Selection

You can set the default interface color mode here (Light Mode, Dark Mode, or Follow System).

Topic Settings

This setting is for the layout of the conversation interface.

Topic Layout

Automatically Switch to Topic

When this setting is enabled, clicking on the assistant name will automatically switch to the corresponding topic page.

Display Topic Time

When enabled, it will display the topic creation time below the topic title.

Custom CSS

Through this setting, you can flexibly make personalized changes and settings to the interface. For specific methods, refer to the Personalization Settings in the advanced tutorials.

Shortcut Settings

On this page, you can enable/disable and configure keyboard shortcuts for various functions. Please follow the instructions on the interface to set them up.

Data Settings

On this page, you can perform cloud and local backups of client data, query the local data directory, and clear cache, among other operations.

Data Backup

Currently, data backup only supports the WebDAV method. You can choose a service that supports WebDAV for cloud backups.

You can also synchronize data across devices with the following workflow: back up on device A → restore from WebDAV on device B.

Example using Nutstore

  1. Log in to Nutstore, click on your username in the top right corner, and select “Account Info”:

  2. Select “Security Options” and click “Add Application”:

  3. Enter the application name and generate a random password:

  4. Copy and record the password:

  5. Obtain the server address, account, and password:

  6. In CherryStudio Settings - Data Settings, fill in the WebDAV information:

  7. Choose to back up or restore data; you can also set an automatic backup interval.

Cloud storage services are generally the WebDAV options with the lowest barrier to entry:

  • Nutstore

  • 123Pan (Requires membership)

  • Aliyun Drive (Requires purchase)

  • Box (Free storage capacity is 10GB, single file size limit is 250MB.)

  • Dropbox (Dropbox offers 2GB for free, and you can expand to 16GB by inviting friends.)

  • TeraCloud (Free space is 10GB, and another 5GB of extra space can be obtained through invitations.)

  • Yandex Disk (Free users get 10GB of storage.)

Other services require self-deployment:

  • Alist

  • Cloudreve

  • sharelist

Basic Tutorial

Installation

Windows

Windows Installation Tutorial

Visit the Official Website

Note: Windows 7 is not supported for installing CherryStudio.

Click "Download" to choose the appropriate version.


Wait for the Download to Complete

If your browser displays a warning that the file is not trusted, choose to keep it.

Select "Keep" → Trust "Cherry-Studio"

Open File

Installation

MacOS

MacOS Installation Tutorial

First, go to the homepage and click to download the Mac version, or click below to go directly:

Cherry Studio - The All-in-One AI Assistant

After the download completes, open the file:

Drag and drop the icon to install:

Installation complete:

How to confirm which version to download?

  • Click the Apple logo in the top left corner of your screen.

  • Click "About This Mac" in the expanded menu.

  • View processor information in the pop-up window.

If it is an Intel chip, download the Intel version installer.

If it is an Apple M* chip, download the Apple silicon installer.

Provider Configuration

OpenAI

Get API Key

  • On the official API Key page, click + Create new secret key

  • Copy the generated key and open CherryStudio's Provider Settings page.

  • Find the OpenAI provider and fill in the key you just obtained.

  • Click "Manage" or "Add" at the bottom to add supported models, and then turn on the provider switch in the upper right corner to start using it.

  • OpenAI services are not directly accessible in China (excluding Taiwan) and some other regions. You need to resolve proxy issues yourself.

  • An account with a balance is required.

Google Gemini

Get API Key

  • Before getting a Gemini API key, you need to have a Google Cloud project (if you already have one, you can skip this step).

  • Go to Google Cloud to create a project, fill in the project name and click "Create Project".

  • On the official API Key page, click Create API key

  • Copy the generated key and open CherryStudio's Provider Settings page.

  • Find the Gemini provider and fill in the key you just obtained.

  • Click "Manage" or "Add" at the bottom to add supported models, and then turn on the provider switch in the upper right corner to start using it.

  • Google Gemini services are not directly accessible in China (excluding Taiwan) and some other regions. You need to resolve proxy issues yourself.

Knowledge Base Tutorial

Knowledge Base Tutorial

In version 0.9.1, CherryStudio introduces the long-awaited knowledge base feature.

Below, we will present a step-by-step guide on how to use CherryStudio's knowledge base in detail.

Adding Embedding Models

  1. Find models in the model management service. You can quickly filter by clicking "Embedding";

  2. Find the desired model and add it to My Models.

Creating a Knowledge Base

  1. Knowledge Base Entry: In the CherryStudio toolbar on the left, click the knowledge base icon to enter the management page;

  2. Add Knowledge Base: Click "Add" to start creating a knowledge base;

  3. Naming: Enter the name of the knowledge base and add an embedding model. Taking bge-m3 as an example, you can complete the creation.

Adding Files and Vectorizing

  1. Add Files: Click the "Add Files" button to open the file selection dialog;

  2. Select Files: Choose supported file formats such as pdf, docx, pptx, xlsx, txt, md, mdx, etc., and open them;

  3. Vectorization: The system will automatically perform vectorization. When "Completed" (green ✓) is displayed, it indicates that vectorization is complete.

Adding Data from Multiple Sources

CherryStudio supports multiple ways to add data:

  1. Folder Directory: You can add an entire folder directory. Files in supported formats under this directory will be automatically vectorized;

  2. Website Link: Supports website URLs, such as https://docs.siliconflow.cn/introduction;

  3. Sitemap: Supports sitemap in xml format, such as https://docs.siliconflow.cn/sitemap.xml;

  4. Plain Text Notes: Supports entering custom content in plain text.

Hints:

  1. Images in imported knowledge base documents cannot yet be converted to vectors and need to be converted to text manually;

  2. Using a website URL as a knowledge base source may not always be successful. Some websites have strict anti-scraping mechanisms (or require login, authorization, etc.), so this method may not be able to obtain accurate content. It is recommended to test search after creation.

  3. Generally, websites provide sitemaps. For example, CherryStudio's sitemap. In most cases, you can get relevant information by adding /sitemap.xml after the root address of the website (i.e., URL). For example, aaa.com/sitemap.xml.

  4. If the website does not provide a sitemap or the URLs are complex, you can create a sitemap XML file yourself. For now, the file must be referenced via a direct link that is publicly accessible; local file links will not be recognized.

  1. You can let AI generate the sitemap file, or have AI write a small sitemap generator tool (see the sketch after this list);

  2. Direct links can be generated using OSS direct links or cloud drive direct links, etc. If there is no ready-made tool, you can also go to the ocoolAI official website, log in and use the free file upload tool in the website top bar to generate direct links.
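
A minimal sketch of such a generator, mentioned in point 1 above (the URLs are placeholders):

// Minimal sitemap generator: turns a list of publicly accessible URLs
// into sitemap XML (URLs below are placeholders).
function buildSitemap(urls: string[]): string {
  const entries = urls.map((u) => `  <url><loc>${u}</loc></url>`).join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${entries}\n</urlset>`;
}

console.log(buildSitemap([
  "https://example.com/doc1.pdf",
  "https://example.com/doc2.pdf",
]));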

Searching the Knowledge Base

Once files and other materials are vectorized, you can perform queries:

  1. Click the "Search Knowledge Base" button at the bottom of the page;

  2. Enter the content to be queried;

  3. The search results are presented;

  4. Each result is shown with its matching score.

Generating Replies by Referencing the Knowledge Base in Conversations

  1. Create a new topic. In the conversation toolbar, click "Knowledge Base". The list of created knowledge bases will expand. Select the knowledge base you need to reference;

  2. Enter and send a question. The model will return an answer generated through retrieval results;

  3. At the same time, the data source of the reference will be attached below the answer, allowing you to quickly view the source file.

Knowledge Base Data

All data added to the Knowledge Base in Cherry Studio is stored locally. During the adding process, a copy of the document will be placed in the Cherry Studio data storage directory.

Vector Database: https://turso.tech/libsql

After a document is added to the Cherry Studio Knowledge Base, the file will be split into several fragments. These fragments will then be processed by the embedding model.

When using a large model for question answering, text fragments related to the question will be queried and passed to the large language model for processing together.

If you have data privacy requirements, it is recommended to use a local embedding database and a local large language model.

Embedding Model Information

To prevent errors, the max input values listed for some models in this document are slightly below their absolute limits. For example, when the official maximum input is stated as 8k without an exact number, the reference value given here is 8191 or 8000, etc. (If this is unclear, simply use the reference values provided in this document.)

Volcano Engine - Doubao

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| Doubao-embedding | 4095 |
| Doubao-embedding-vision | 8191 |
| Doubao-embedding-large | 4095 |

Alibaba Cloud

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| text-embedding-v3 | 8192 |
| text-embedding-v2 | 2048 |
| text-embedding-v1 | 2048 |
| text-embedding-async-v2 | 2048 |
| text-embedding-async-v1 | 2048 |

OpenAI

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| text-embedding-3-small | 8191 |
| text-embedding-3-large | 8191 |
| text-embedding-ada-002 | 8191 |

Baidu

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| Embedding-V1 | 384 |
| tao-8k | 8192 |

Zhipu AI

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| embedding-2 | 1024 |
| embedding-3 | 2048 |

Hunyuan

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| hunyuan-embedding | 1024 |

Baichuan

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| Baichuan-Text-Embedding | 512 |

Together AI

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| M2-BERT-80M-2K-Retrieval | 2048 |
| M2-BERT-80M-8K-Retrieval | 8192 |
| M2-BERT-80M-32K-Retrieval | 32768 |
| UAE-Large-v1 | 512 |
| BGE-Large-EN-v1.5 | 512 |
| BGE-Base-EN-v1.5 | 512 |

Jina AI

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| jina-embedding-b-en-v1 | 512 |
| jina-embeddings-v2-base-en | 8191 |
| jina-embeddings-v2-base-zh | 8191 |
| jina-embeddings-v2-base-de | 8191 |
| jina-embeddings-v2-base-code | 8191 |
| jina-embeddings-v2-base-es | 8191 |
| jina-colbert-v1-en | 8191 |
| jina-reranker-v1-base-en | 8191 |
| jina-reranker-v1-turbo-en | 8191 |
| jina-reranker-v1-tiny-en | 8191 |
| jina-clip-v1 | 8191 |
| jina-reranker-v2-base-multilingual | 8191 |
| reader-lm-1.5b | 256000 |
| reader-lm-0.5b | 256000 |
| jina-colbert-v2 | 8191 |
| jina-embeddings-v3 | 8191 |

Silicon Flow

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| BAAI/bge-m3 | 8191 |
| netease-youdao/bce-embedding-base_v1 | 512 |
| BAAI/bge-large-zh-v1.5 | 512 |
| BAAI/bge-large-en-v1.5 | 512 |
| Pro/BAAI/bge-m3 | 8191 |

Gemini

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| text-embedding-004 | 2048 |

Nomic AI

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| nomic-embed-text-v1 | 8192 |
| nomic-embed-text-v1.5 | 8192 |
| gte-multilingual-base | 8192 |

Upstage

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| embedding-query | 4000 |
| embedding-passage | 4000 |

Cohere

Official Model Information Reference

| Name | Max Input |
| --- | --- |
| embed-english-v3.0 | 512 |
| embed-english-light-v3.0 | 512 |
| embed-multilingual-v3.0 | 512 |
| embed-multilingual-light-v3.0 | 512 |
| embed-english-v2.0 | 512 |
| embed-english-light-v2.0 | 512 |
| embed-multilingual-v2.0 | 256 |

Advanced Tutorial

Personalization

Custom CSS

Customizing CSS allows you to modify the software's appearance to better suit your preferences, like this:

Custom CSS
:root {
  --color-background: #1a462788;
  --color-background-soft: #1a4627aa;
  --color-background-mute: #1a462766;
  --navbar-background: #1a4627;
  --chat-background: #1a4627;
  --chat-background-user: #28b561;
  --chat-background-assistant: #1a462722;
}

#content-container {
  background-color: #2e5d3a !important;
}

Built-In Variables

See: https://github.com/CherryHQ/cherry-studio/tree/main/src/renderer/src/assets/styles

Related Recommendations

Some Chinese-style Cherry Studio themes to share: https://linux.do/t/topic/325119/129

Font Recommendations


Monaspace

English Font Commercial License

GitHub has launched an open-source font family named Monaspace, offering five styles: Neon (modern), Argon (humanist), Xenon (serif), Radon (handwritten), and Krypton (mechanical).


MiSans Global

Multilingual Commercial License

MiSans Global is a global language font customization project led by Xiaomi, in collaboration with Monotype and Hanyi Font Library.

It's a vast font family covering over 20 writing systems and supporting more than 600 languages.

Modifying Storage Location

Cherry Studio data storage follows system conventions, and data is automatically placed in the user directory. The specific directory locations are as follows:

macOS: /Users/username/Library/Application Support/CherryStudioDev

Windows: C:\Users\username\AppData\Roaming\CherryStudio

If you wish to modify the storage location, you can do so by creating a symbolic link (symlink). Exit the application, move the data to your desired location, and then create a link in the original location pointing to the moved location.

For detailed steps, please refer to: https://github.com/CherryHQ/cherry-studio/issues/621#issuecomment-2588652880

Networking Models

How to Use Networking Models in Cherry Studio

Notion Configuration

Cherry Studio supports importing topics into Notion's database.

Preparation

First, you need to create a Notion database and a Notion Integration, and connect the Integration to that database, as shown in the figure below.

Note: The database must have a field with the same name as the "Page Title Field Name" in the settings, the default is Name, otherwise the import will fail.

Then you need to configure the Notion database ID and Notion key in Cherry Studio:

If your Notion database URL looks something like this:

https://www.notion.so/<long_hash_1>?v=<long_hash_2>

then the Notion database ID is the <long_hash_1> part.

Note that the "page title field name" here needs to be the same as the field name in the Notion database, otherwise it will cause import failure.

Using

Right-click the topic and select [Import to Notion].

Obsidian Configuration

To connect Cherry Studio with Obsidian

Cherry Studio supports linking with Obsidian to export complete conversations or individual conversations to the Obsidian library.

No additional Obsidian plugins are needed. However, since Cherry Studio's export to Obsidian works on the same principle as the Obsidian Web Clipper, it is recommended to upgrade Obsidian to the latest version (at least 1.7.2) to avoid import failures when a conversation is very long.

Preparation of Obsidian

Open your Obsidian vault and create a folder to save the exported conversations (the "Cherry Studio" folder is used as an example in the image):

Pay attention to and remember the text framed in the bottom left corner; this is your vault name.

Configuration of Cherry Studio

In Cherry Studio's Settings → Data Settings → Obsidian Configuration menu, enter the vault name and folder name you obtained in the first step:

The global tags are optional and can be set for all dialogues exported to Obsidian. Fill in as needed.

Exporting a Conversation

Exporting Complete Conversation

Go back to the Cherry Studio conversation interface, right-click on the conversation, select export, and click export to Obsidian.

Export Complete Conversation

A window will then pop up for adjusting the properties of the note exported to Obsidian, as well as the processing method used for the export. There are three processing methods to choose from:

  • Create new (overwrite if exists): Create a new conversation note in the folder filled in during step two, overwriting the old note if a note with the same name exists.

  • Prepend: When a note with the same name already exists, export the selected conversation content and add it to the beginning of that note.

  • Append: When a note with the same name already exists, export the selected conversation content and add it to the end of that note.

Configure Note Properties

Only the first method includes Properties; the latter two do not.

Exporting a Single Conversation

To export a single conversation, click the three-line menu below the conversation, select "Export," and then click "Export to Obsidian."

Exporting a Single Conversation

After that, the same window as when exporting the complete conversation will pop up, asking you to configure the note properties and how to handle the notes. Just follow the tutorial above to complete it.

Export Success

🎉 Congratulations! You have now completed the full Cherry Studio and Obsidian configuration and walked through the entire export process. Enjoy!

Export Success to Obsidian
View Your Notes

Questions & Feedback

FAQ

Problems

Common Error Codes

  • 4xx (client error status codes): Generally indicate that the request cannot be completed due to a request syntax error, authentication failure, or lack of authorization.

  • 5xx (server error status codes): Generally server-side problems, such as server downtime or request processing timeouts.

| Error Status Code | Possible Scenario | Solution |
| --- | --- | --- |
| 400 | Malformed request, etc. | Check the error returned in the dialog or in the console and follow the prompts. Common case 1: for Gemini models, you may need to bind a card. Common case 2: the data volume exceeds the limit; this is common with vision models when the image size exceeds the upstream per-request limit. Common case 3: an unsupported parameter was added or a parameter was filled in incorrectly; try a new, clean assistant to test whether it works. Common case 4: the context exceeds the limit; clear the context, start a new conversation, or reduce the number of context messages. |
| 401 | Authentication failed: the model is not supported or the server account is banned, etc. | Contact the provider or check the status of your account with them. |
| 403 | Operation not authorized | Act according to the error message returned in the dialog or in the console. |
| 404 | Resource not found | Check the request path, etc. |
| 429 | Request rate limit reached | The request rate (TPM or RPM) has reached the limit; try again after a while. |
| 500 | Internal server error; the request could not be completed | Contact the upstream provider if the error persists. |
| 501 | The server does not support the requested function and cannot complete the request | |
| 502 | A server acting as a gateway or proxy received an invalid response from the remote server while executing the request | |
| 503 | The server is temporarily unable to process the request due to overload or maintenance; the delay may be indicated in the server's Retry-After header | |
| 504 | A server acting as a gateway or proxy did not receive a response from the remote server in time | |


How to View Console Errors

  • Click on the CherryStudio client window and press the shortcut Ctrl+Shift+I (Mac: Command+Option+I).

  • The currently active window must be a CherryStudio client window to bring up the console.

  • Open the console first, then perform the request (for example, click Check or send a message) so that the request information is captured.

In the pop-up console window, click Network → Click to view the last "completions" (for errors encountered in dialogue, translation, model connectivity checks, etc.) or "generations" (for errors encountered in drawing) marked with a red × at ② → Click Response to view the complete return content (the area in ④ in the image).

If you can't determine the cause of the error, please send a screenshot of this screen to the Official Communication Group for help.

This checking method can be used to obtain error information not only during conversations, but also during model testing, knowledge base creation, drawing, and so on. In every case, open the debugging window first and then perform the request operation to capture the request information.

The name in the Name (② above) column will be different in different scenarios.

Dialog, translation, model checking: completions

Painting: generations

Knowledge base creation: embeddings


Formula Not Rendered / Formula Rendering Error

  • If the formula code is displayed as plain text instead of being rendered, check whether it has delimiters.

Delimiter usage

Inline formula

  • Use a single dollar sign: $formula$

  • Or use \( and \): \(formula\)

Independent formula block

  • Use double dollar symbols: $$formula$$

  • Or use \[formula\]

  • Example: $$\sum_{i=1}^n x_i$$

Formula rendering errors or garbled output commonly occur when the formula contains Chinese (CJK) content; try switching the formula engine to KaTeX.


Failed to Create Knowledge Base / Failed to Get Embedding Dimensions

  1. Model state unavailable

  2. Non-embedding model is used

Attention:

  1. Embedding models, dialogue models, drawing models, etc. each have their own function; their request methods, return contents, and structures differ, so do not force other types of models to be used as embedding models;

  2. CherryStudio automatically categorizes the embedding models in the model list (as shown in the figure above). If a model is confirmed to be an embedding model but has not been categorized correctly, go to the model list, click the settings button next to the corresponding model, and check the Embedding option;

  3. If you cannot confirm which models are embedding models, you can check the model information from the corresponding service provider.


Model Cannot Recognize Images / Unable to Upload or Select Images

First, you need to confirm whether the model supports image recognition. CherryStudio categorizes popular models, and those with a small eye icon after the model name support image recognition.

Image recognition models will support uploading image files. If the model function is not correctly matched, you can find the model in the model list of the corresponding service provider, click the settings button after its name, and check the image option.

You can find the specific information of the model from the corresponding service provider. Like embedding models, models that do not support vision do not need to enable the image function, and selecting the image option will have no effect.

How to Ask Questions Effectively

CherryStudio is a free and open-source project. As the project grows, the team's workload increases day by day. To reduce communication costs and to solve your problems quickly and efficiently, we ask that, before posting a question, you work through the steps below as far as possible. This leaves the team more time for maintaining and developing the project. Thank you for your cooperation!

1. Document Browsing & Searching

Most of the basics can be solved by consulting the documentation.

  • The features and usage issues of the software can be checked in the feature introduction document.

  • Frequently asked questions will be included on the FAQ page. You can first check the FAQ page to see if there is a solution.

  • For more complex problems, try searching the documentation directly or asking in the search box;

  • Be sure to carefully read the content of the hint boxes in each document, which can help you avoid many problems.

  • Check or search for similar issues and solutions on GitHub's Issue page.

2. Search on Web & Ask AI

For issues unrelated to client functionality (such as model errors, unexpected responses, parameter settings, etc.), it is recommended to first search for relevant solutions online, or describe the error content and problem to an AI to find solutions.

3. Ask Questions in the Official Community & Submit Issues on Github

If the above steps one and two do not provide an answer or solve your problem, you can go to the official Telegram channel, Discord channel, or QQ group (one-click access) to describe the problem in detail and seek help.

  1. If the model reports an error, please provide a complete screenshot of the interface and the error information from the console. Sensitive information can be redacted, but the model name, parameter settings, and error content must be retained in the screenshot. Click here to see how to view the console error information.

  2. If it's a software bug, please provide a specific error description and detailed steps to reproduce it, so that developers can debug and fix it. If it's an occasional problem that cannot be reproduced, please describe the relevant scenarios, background, and configuration parameters as detailed as possible when the problem occurred.

    In addition to this, you also need to include platform information (Windows, Mac, or Linux), software version number, and other information in the problem description.

Requesting documentation or providing documentation suggestions

You can contact the Telegram channel @Wangmouuu or QQ (1355873789), or you can send an email to: sunrise@cherry-ai.com


General Knowledge

What Are Tokens?

Tokens are the basic units that AI models use to process text, and can be understood as the smallest units that the model "thinks" with. They are not exactly the same as the characters or words we understand, but rather a special way the model itself divides text.

1. Tokenization of Chinese Text

  • One Chinese character is usually encoded as 1-2 tokens

  • For example: "你好" ≈ 2-4 tokens

2. Tokenization of English Text

  • Common words are usually 1 token

  • Longer or less common words will be broken down into multiple tokens

  • For example:

    • "hello" = 1 token

    • "indescribable" = 4 tokens

3. Special characters

  • Spaces and punctuation also consume tokens

  • A line break is usually 1 token

Tokenizers differ across service providers, and even across different models from the same provider. This information is only for clarifying the concept of a token.
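
As a very rough illustration of why such estimates are only approximate, a naive heuristic estimator (purely illustrative; real tokenizers such as BPE work quite differently):

// Purely illustrative heuristic: ~4 characters per token for English text,
// ~1.5 tokens per CJK character. Real tokenizers (BPE, WordPiece, ...) differ.
function estimateTokens(text: string): number {
  const cjk = (text.match(/[\u4e00-\u9fff]/g) ?? []).length;
  const rest = text.length - cjk;
  return Math.ceil(cjk * 1.5 + rest / 4);
}

console.log(estimateTokens("hello")); // ~2 (actual GPT tokenizers count 1)
console.log(estimateTokens("你好"));   // ~3 (actual counts are typically 2-4)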


What's a Tokenizer?

A tokenizer is a tool that AI models use to convert text into tokens. It determines how to split input text into the smallest units that the model can understand.

Why Do Different Models Have Different Tokenizers?

1. Different Training Data

  • Different corpora lead to different optimization directions

  • Varying degrees of multilingual support

  • Specialized optimization for specific domains (medical, legal, etc.)

2. Differences in Tokenization Algorithms

  • BPE (Byte Pair Encoding) - OpenAI GPT series

  • WordPiece - Google BERT

  • SentencePiece - Suitable for multilingual scenarios

3. Different Optimization Goals

  • Some prioritize compression efficiency

  • Some prioritize semantic preservation

  • Some prioritize processing speed

Practical Effect

The same text may have different token counts in different models:

Input: "Hello, world!"
GPT-3: 4 tokens
BERT: 3 tokens
Claude: 3 tokens

What's an Embedding Model?

Basic concept: Embedding models are a technology that transforms high-dimensional discrete data (text, images, etc.) into low-dimensional continuous vectors. This transformation allows machines to better understand and process complex data. Imagine it as simplifying a complex jigsaw puzzle into a simple coordinate point, but this point still retains the key features of the puzzle. In the large model ecosystem, it acts as a "translator," converting human-understandable information into a numerical form that AI can compute.

Working principle: Taking natural language processing as an example, embedding models can map words to specific locations in a vector space. In this space, words with similar semantics will automatically cluster together. For example:

  • The vectors for "king" and "queen" will be very close

  • Pet-related words like "cat" and "dog" will be close together

  • Semantically unrelated words like "car" and "bread" will be farther away

Main Application Scenarios:

  • Text analytics: Document categorization, sentiment analysis

  • Recommender system: Personalized content recommendation

  • Image processing: Similar image retrieval

  • Search engine: Semantic search optimization

Core Advantages

  1. Dimensionality reduction: Simplifies complex data into easily processed vector forms

  2. Semantic preservation: Retains the key semantic information of the original data

  3. Computational efficiency: Significantly improves the training and inference efficiency of machine learning models

Technical Value: Embedding models are a fundamental component of modern AI systems, providing high-quality data representations for machine learning tasks, and are a key technology driving progress in natural language processing, computer vision, and other fields.
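
To make the clustering idea concrete, here is a minimal sketch using the sentence-transformers library and cosine similarity. The model name all-MiniLM-L6-v2 is an illustrative choice, not something CherryStudio depends on, and the exact similarity values will vary by model.

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a, b):
    # Cosine similarity: close to 1.0 means very similar, close to 0 means unrelated
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

king, queen, bread = model.encode(["king", "queen", "bread"])
print("king vs queen:", round(cosine(king, queen), 3))  # expected to be relatively high
print("king vs bread:", round(cosine(king, bread), 3))  # expected to be relatively low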


How Embedding Models Work in Knowledge Retrieval

Basic workflow:

  1. Knowledge Base Preprocessing Phase

  • Split the document into chunks of appropriate size

  • Convert each chunk into a vector using an embedding model

  • Store the vectors and original text in a vector database

  2. Query Processing Phase

  • Convert user questions into vectors

  • Retrieve similar content in the vector database

  • Provide the retrieved relevant content as context to the LLM
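
The sketch below walks through both phases with an in-memory list standing in for the vector database. The embedding model is again an illustrative choice; a real knowledge base would persist the vectors and retrieve the top-k chunks rather than a single best match.

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Knowledge base preprocessing: split the document into chunks, embed, and store
chunks = [
    "CherryStudio supports dialogue with models from multiple providers.",
    "The knowledge base retrieves relevant chunks and passes them to the LLM as context.",
    "The drawing panel generates images from natural-language prompts.",
]
chunk_vectors = model.encode(chunks)  # one vector per chunk, stored alongside the text

# 2. Query processing: embed the question and find the most similar chunk
query = "How does the knowledge base find relevant content?"
query_vector = model.encode([query])[0]

similarities = chunk_vectors @ query_vector / (
    np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(query_vector)
)
best = int(np.argmax(similarities))
print("Most relevant chunk:", chunks[best])
# The retrieved chunk(s) would then be supplied to the LLM as context for its answer.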

Feedback & Suggestions

Telegram Group

Members of our group will share their experiences and help you solve problems

Join the Telegram discussion group for help: https://t.me/CherryStudioAI

QQ Group

Members of the QQ group can help each other and share download links

QQ Group (1025067911)

Github Issues

Best suited for recording issues so that developers don't forget them, or for joining the discussion there

GitHub Issue: https://github.com/CherryHQ/cherry-studio/issues/new/choose

Email

If no other feedback channels are available, you can contact the developer for assistance

Email the developer: support@cherry-ai.com

Contact Us

Business Cooperation

Contact: Mr. Wang

📮: yinsenho@cherry-ai.com

📱: 18954281942 (business inquiries only; this is NOT a customer-service number)

Please email us at support@cherry-ai.com

Or raise an issue: https://github.com/CherryHQ/cherry-studio/issues

For commercial use, please note our licensing terms.

Free Commercial Use (Unmodified Code Only):

We hereby grant you a non-exclusive, worldwide, non-transferable, royalty-free license to use, reproduce, copy, and distribute the unmodified materials, including for commercial purposes, subject to the terms and conditions of this Agreement, based on the intellectual property or other rights we own or embody in the materials.

Commercial License:

You must contact us and obtain explicit written commercial authorization prior to continuing the use of Cherry Studio materials under any of the following circumstances:

  1. Modification and Derivatives: You modify or create derivative works based on Cherry Studio materials (including but not limited to changing the application name, logo, code, functionality, interface, etc.).

  2. Enterprise Services: You utilize Cherry Studio internally within your enterprise, or offer services based on Cherry Studio to enterprise customers, and such services support cumulative usage by 10 or more users.

  3. Hardware Bundling Sales: You pre-install or integrate Cherry Studio into hardware devices or products for bundled sales.

  4. Large-scale Procurement by Government or Educational Institutions: Your usage scenario involves large-scale procurement projects by government or educational institutions, particularly when security, data privacy, or other sensitive requirements are involved.

  5. Public-facing Cloud Services: You provide cloud services based on Cherry Studio that are publicly accessible.

Details of Authorization: https://docs.cherry-ai.com/en-us/contact-us/business-cooperation/cherry-studio-license-agreement

Cherry Studio License Agreement

By using or distributing any part or element of Cherry Studio materials, you acknowledge that you have read, understood, and agreed to the terms of this Agreement, which shall become effective immediately upon such use.

1. Definitions

This Cherry Studio License Agreement (the “Agreement”) defines the terms and conditions governing the use, reproduction, distribution, and modification of the Materials.

• “We” (or “us”) means Shanghai Qianhui Technology Co., Ltd.

• “You” refers to any natural person or legal entity exercising rights granted by this Agreement, and/or using the Materials for any purpose and within any field of use.

• “Third Party” means any individual or legal entity not under common control with either You or us.

• “Cherry Studio” refers to the software suite, including but not limited to [e.g., core libraries, editors, plugins, example projects], and its source code, documentation, sample code, and other elements distributed by us. (Please specify based on the actual composition of CherryStudio.)

• “Materials” collectively refers to the proprietary Cherry Studio software and documentation (or any part thereof) provided by Shanghai Qianhui Technology Co., Ltd. under this Agreement.

• “Source” form means the preferred form for modifications, including but not limited to source code, documentation source files, and configuration files.

• “Object” form means any form resulting from mechanical transformation or translation of Source form, including but not limited to compiled object code, generated documentation, or forms converted into other media types.

• “Commercial Use” means use for direct or indirect commercial gain or advantage, including but not limited to sales, licensing, subscriptions, advertising, marketing, training, consulting services, etc.

• “Modification” refers to any alteration, adaptation, derivative, or secondary development of Materials in Source form, including but not limited to changing application names, logos, code, functions, and interfaces.

2. Grant of Rights

Free Commercial Use (without modified code):

We hereby grant you a non-exclusive, worldwide, non-transferable, royalty-free license, under our intellectual property or other rights embodied in the Materials, to use, copy, distribute, and redistribute the unmodified Materials, including for commercial use, subject to compliance with the terms and conditions herein.

Commercial Authorization (when required):

You must obtain express written commercial authorization from us to exercise rights under this Agreement when the conditions stated in Section 3 (“Commercial Authorization”) are met.

3. Commercial Authorization

Under any of the following conditions, you must contact us and obtain explicit written commercial authorization before proceeding with the use of Cherry Studio Materials:

• Modification and Derivatives: You modify or develop derivative works based on Cherry Studio Materials, including but not limited to changing the application name, logo, code, functionality, or interface.

• Enterprise Services: You use Cherry Studio internally within your enterprise, or offer Cherry Studio-based services to enterprise clients, supporting cumulative usage by 10 or more users.

• Hardware Bundling Sales: You pre-install or integrate Cherry Studio into hardware devices or products for bundled sales.

• Large-scale Procurement by Government or Educational Institutions: Your usage scenario involves large-scale procurement projects by government or educational institutions, particularly involving sensitive requirements such as security or data privacy.

• Public-facing Cloud Services: You provide publicly accessible cloud services based on Cherry Studio.

4. Redistribution

You may distribute unmodified copies of the Materials, or provide them as part of a product or service containing unmodified Materials, in Source or Object form, subject to the following conditions:

• You must include a copy of this Agreement with all copies of the Materials distributed.

• You must retain the following attribution statement within any copy distributed, included in a “NOTICE” or similar text file distributed as part of such copies:

"Cherry Studio is licensed under the Cherry Studio LICENSE AGREEMENT, Copyright (c) Shanghai Qianhui Technology Co., Ltd. All Rights Reserved."

5. Usage Rules

Materials may be subject to export control or restrictions. You must comply with applicable laws and regulations when using the Materials.

If you use the Materials or their outputs or results to create, train, fine-tune, or improve software or models to be distributed or made available, we encourage prominently marking your relevant product documentation with phrases such as “Built with Cherry Studio” or “Powered by Cherry Studio”.

6. Intellectual Property Rights

We retain all intellectual property rights in and to the Materials and derivative works created by or for us. Subject to the terms of this Agreement, ownership of intellectual property rights in modifications and derivative works created by you will be governed by specific commercial authorization agreements. Without obtaining commercial authorization, you shall not acquire ownership rights in modifications and derivative works, and all intellectual property rights remain vested with us.

No license to use our trade names, trademarks, service marks, or product names is granted unless necessary to fulfill obligations under this Agreement or reasonably customary to describe and redistribute the Materials.

If you institute any legal proceeding against us or any entity (including counterclaims or countersuits), alleging that the Materials or their outputs infringe upon any intellectual property or other rights owned or licensable by you, all licenses granted under this Agreement shall terminate immediately upon initiation of such proceedings.

7. Disclaimer and Limitation of Liability

We are not obligated to support, update, provide training for, or develop further versions of Cherry Studio Materials, nor to grant any related licenses.

Materials are provided on an “as-is” basis without warranties of any kind, express or implied, including warranties of merchantability, non-infringement, or fitness for a particular purpose. We make no warranties or assurances concerning the security or stability of Materials or their outputs.

In no event shall we be liable to you for any damages arising from your use or inability to use the Materials or any outputs thereof, including but not limited to direct, indirect, special, or consequential damages, regardless of the cause.

You agree to defend, indemnify, and hold us harmless against any third-party claims arising from or related to your use or distribution of the Materials.

8. Term and Termination

The term of this Agreement begins upon your acceptance or access to the Materials and remains effective until terminated in accordance with its terms.

We may terminate this Agreement upon your breach of any terms or conditions. Upon termination, you must cease using the Materials. Sections 7, 9, and “2. Contributor Agreement” will survive termination.

9. Governing Law and Jurisdiction

This Agreement and any disputes arising out of or related to it shall be governed by the laws of the People’s Republic of China.

The Shanghai People’s Court shall have exclusive jurisdiction over any disputes arising from this Agreement.

ABOUT

Privacy Policy

Welcome to Cherry Studio (“this software” or “we”). We highly value your privacy protection. This Privacy Policy outlines how we handle and protect your personal information and data. Please carefully read and understand this policy before using the software:

I. Scope of Information Collection

To optimize user experience and improve software quality, we may anonymously collect the following non-personal information only:

• Software version information;

• Activity and usage frequency of software functions;

• Anonymous crash reports and error logs;

The above information is fully anonymized, does not involve any personally identifiable data, and cannot be associated with your personal information.

II. Information We Do NOT Collect

To maximize the protection of your privacy and security, we explicitly promise that we will:

• NOT collect, store, transmit, or process any API Key information for model services that you input into this software;

• NOT collect, store, transmit, or process any conversational data generated while using this software, including but not limited to chat content, instruction data, knowledge base data, vector data, and other customized content;

• NOT collect, store, transmit, or process any personally identifiable sensitive information.

III. Data Interaction Description

This software uses the third-party model service provider’s API Key that you apply for and configure independently, to implement related model calls and conversation functionalities. The model services (such as large models, API interfaces, etc.) you use are provided by the third-party providers you choose and are entirely their responsibility. Cherry Studio acts solely as a local tool offering the interface to call third-party model services.

Therefore:

• All conversational data generated by your interaction with the large model services is independent of Cherry Studio. We neither participate in data storage nor perform any form of data transmission or relay;

• You are responsible for reviewing and accepting the privacy policy and related policies of the corresponding third-party model service providers. Privacy policies of these services are available on each provider’s official website.

IV. Third-party Model Service Providers’ Privacy Policy Statement

You shall independently bear privacy risks potentially involved with third-party model service providers. Specific privacy policies, data security measures, and relevant responsibilities can be found on the official websites of the selected model service providers. We do not assume any liability in this regard.

V. Agreement Updates and Amendments

This policy may be adjusted appropriately according to software version updates. Please check it regularly. In the event of substantial changes to this policy, we will notify you in an appropriate manner.

VI. Contact Us

If you have any questions regarding this policy or Cherry Studio’s privacy protection measures, please feel free to contact us at any time.

Thank you for choosing and trusting Cherry Studio. We will continue providing you with a safe and reliable product experience.