Follow us on social media: Twitter (X), Rednote, Weibo, Bilibili, Douyin
Join our communities: QQ Group (575014769), Telegram, Discord
CherryStudio is a versatile AI assistant platform integrating multi-model dialogue, knowledge base management, AI art generation, translation, and more. With its highly customizable design, powerful extensibility, and user-friendly experience, CherryStudio is the ideal choice for both professional users and AI enthusiasts. Whether you are a beginner or a developer, you can find suitable AI functionalities in CherryStudio to enhance your work efficiency and creativity.
One Question, Multiple Answers: Supports generating responses from multiple models simultaneously for the same question, allowing users to easily compare the performance of different models.
Automatic Grouping: Dialogue history for each assistant is automatically grouped for easy and quick access to past conversations.
Dialogue Export: Supports exporting complete dialogues in various formats (e.g., Markdown, PDF, etc.) for convenient storage and sharing.
Highly Customizable Parameters: In addition to basic parameter adjustments, users can also fill in custom parameters to meet personalized needs.
Assistant Marketplace: Built-in marketplace with over a thousand industry-specific assistants, covering fields such as translation, programming, and writing, while also supporting user-defined assistants.
Multiple Format Rendering: Supports Markdown rendering, formula rendering, HTML real-time preview, and other features to enhance content presentation.
AI Art Generation: Provides a dedicated drawing panel, allowing users to generate high-quality images through natural language descriptions.
AI Mini-Programs: Integrates various free web-based AI tools, allowing direct use without switching browsers.
Translation Function: Supports dedicated translation panels, dialogue translation, prompt translation, and other translation scenarios.
File Management: Unified and categorized management of files from dialogues, art generation, and knowledge bases, avoiding tedious searching.
Global Search: Supports quickly locating historical records and knowledge base content, improving work efficiency.
Service Provider Model Aggregation: Supports unified calling of models from mainstream service providers such as OpenAI, Gemini, Anthropic, Azure, etc.
Automatic Model Acquisition: One-click acquisition of the complete model list, eliminating the need for manual configuration.
Multi-Key Rotation: Supports the use of multiple API keys in rotation to avoid rate limit issues.
Precise Avatar Matching: Automatically matches exclusive avatars for each model, enhancing recognition.
Custom Service Providers: Supports integrating third-party service providers that comply with the OpenAI, Gemini, Anthropic, and other API specifications, ensuring strong compatibility.
Custom CSS: Supports global style customization to create an exclusive interface style.
Custom Dialogue Layout: Supports list or bubble style layouts, and customizable message styles (such as code snippet styles).
Custom Avatars: Supports setting personalized avatars for both the software and assistants.
Custom Sidebar Menu: Users can hide or sort sidebar functions according to their needs, optimizing the user experience.
Multiple Format Support: Supports importing various file formats such as PDF, DOCX, PPTX, XLSX, TXT, MD, etc.
Multiple Data Source Support: Supports local files, URLs, sitemaps, and even manual input as knowledge base sources.
Knowledge Base Export: Supports exporting processed knowledge bases and sharing them with others.
Search and Check Support: After importing the knowledge base, users can perform real-time search tests to view processing results and segmentation effects.
Quick Q&A: Invoke a quick assistant in any scenario (such as WeChat, browser) to quickly obtain answers.
Quick Translation: Supports quickly translating words or text in other scenarios.
Content Summarization: Quickly summarize lengthy text content to improve information extraction efficiency.
Explanation: No need for complex prompts, one-click explanation of questions you don't understand.
Multiple Backup Solutions: Supports local backup, WebDAV backup, and scheduled backups to ensure data security.
Data Security: Supports full local usage scenarios, combined with local large models, to avoid data leakage risks.
Beginner-Friendly: CherryStudio is committed to lowering the technical threshold, allowing users with no prior experience to get started quickly, so users can focus on work, study, or creation.
Comprehensive Documentation: Provides detailed user documentation and troubleshooting manuals to help users quickly solve problems.
Continuous Iteration: The project team actively responds to user feedback and continuously optimizes features to ensure the healthy development of the project.
Open Source & Extensibility: Supports users to customize and extend through open-source code to meet personalized needs.
Knowledge Management and Query: Quickly build and query exclusive knowledge bases through the local knowledge base function, suitable for research, education, and other fields.
Multi-Model Dialogue and Creation: Supports simultaneous dialogue with multiple models, helping users quickly obtain information or generate content.
Translation and Office Automation: Built-in translation assistant and file processing functions, suitable for users who need cross-language communication or document processing.
AI Art Generation and Design: Generate images through natural language descriptions, meeting creative design needs.
Windows Installer (Setup)
Main Download Link (GitHub):
【Download】
Alternative Download Links:
Windows Portable Version (Portable)
Main Download Link (GitHub):
【Download】
Alternative Download Links:
macOS Intel Version (x64)
Main Download Link (GitHub):
【Download】
Alternative Download Links:
macOS Apple Silicon Version (ARM64-M Series Chip)
Main Download Link (GitHub):
【Download】
Alternative Download Links:
Linux x86_64 Version
Main Download Link (GitHub):
【Download】
Alternative Download Links:
Linux ARM64 Version
Main Download Link (GitHub):
【Download】
Alternative Download Links:
These mirrors may not carry the latest version; it is recommended to use the links below.
Quark Drive: https://pan.quark.cn/s/c8533a1ec63e#/list/share
Assistants are used to personalize the settings of a selected model, such as preset prompts and parameter presets. These settings allow the chosen model to better meet your expected workflow.
The Default System Assistant presets a relatively general parameter set (no prompt). You can use it directly, or find the presets you need on the Agents page.
Assistants are parent sets of topics. Multiple topics (i.e., dialogues) can be created under a single assistant, and all topics share the assistant's parameter settings, preset prompts, and other model settings.
Model settings are synchronized with the Settings parameters in the assistant settings. See the Assistant Settings section of the documentation for details.
Message Separator Line: Uses a separator line to separate the message body from the operation bar.
Use Serif Font: Switches the font style. You can also change the font through Custom CSS.
Show Line Numbers for Code: Displays line numbers for code blocks when the model outputs code snippets.
Collapsible Code Blocks: When enabled, long code outputs within code snippets are automatically collapsed.
Message Style: Switches the dialogue interface between bubble style and list style.
Code Style: Switches the display style of code snippets.
Mathematical Formula Engine: KaTeX renders faster because it is specifically designed for performance; MathJax renders more slowly but is more feature-rich and supports more mathematical symbols and commands.
Message Font Size: Adjusts the font size of the dialogue interface.
Show Estimated Token Count: Displays the estimated number of tokens the input text will consume (not the actual tokens consumed by the context; for reference only).
Paste Long Text as File: When long text is copied from elsewhere and pasted into the input box, it is automatically displayed as a file to reduce interference when entering subsequent content.
Markdown Render Input Messages: When turned off, only the model's replies are rendered; your sent messages are not.
Translate with 3 Quick Spacebar Taps: After entering a message in the dialogue input box, quickly tapping the spacebar three times translates the entered content into English.
In the assistant interface, right-click the name of the assistant you want to configure and choose the corresponding setting from the menu.
Prompt Settings
Name: You can customize the assistant's name for easy identification.
Prompt: The prompt text. You can refer to the prompt writing on the Agents page when editing the content.
Default Model: You can pin a default model for this assistant. When adding the assistant from the Agents page or copying it, the initial model will be this one. If this item is not set, the initial model will be the global default (i.e., the Default Assistant Model).
Auto Reset Model: When turned on, if you switch to another model while using a topic, creating a new topic resets the new topic to the assistant's default model. When turned off, the model of a new topic follows the model used in the previous topic.
For example, if the default model of the assistant is gpt-3.5-turbo, and I create topic 1 under this assistant, and switch to gpt-4o during the dialogue in topic 1, then:
If auto-reset is turned on: When creating a new topic 2, the default model selected for topic 2 is gpt-3.5-turbo;
If auto-reset is turned off: When creating a new topic 2, the default model selected for topic 2 is gpt-4o.
Temperature: Controls the randomness and creativity of the text generated by the model (default value 0.7). Specifically:
Low temperature value (0-0.3):
Output is more deterministic and focused
Suitable for code generation, data analysis and other scenarios requiring accuracy
Tends to choose the most probable words for output
Medium temperature value (0.4-0.7):
Balances creativity and coherence
Suitable for daily conversations, general writing
Recommended for chatbot dialogues (around 0.5)
High temperature value (0.8-1.0):
Produces more creative and diverse output
Suitable for creative writing, brainstorming and other scenarios
But may reduce the coherence of the text
Top P (Nucleus Sampling): The default value is 1. Smaller values make the AI's output more monotonous and predictable; larger values widen the range of vocabulary the AI replies with, producing more diverse output.
Nucleus sampling affects the output by controlling the probability threshold for vocabulary selection:
Smaller value (0.1-0.3):
Only consider the highest probability vocabulary
Output is more conservative and controllable
Suitable for code comments, technical documentation and other scenarios
Medium value (0.4-0.6):
Balances vocabulary diversity and accuracy
Suitable for general conversation and writing tasks
Larger value (0.7-1.0):
Consider a wider range of vocabulary choices
Produces richer and more diverse content
Suitable for creative writing and other scenarios requiring diversified expression
Context Window
The number of messages to keep in the context. The larger the value, the longer the context and the more tokens consumed:
5-10: Suitable for normal conversations
>10: Complex tasks requiring longer memory (e.g., tasks that generate long articles step-by-step according to a writing outline, which requires ensuring coherent context logic)
Note: The more messages, the more tokens are consumed.
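The context window can be pictured as keeping only the most recent N messages. The sketch below is an illustrative trimming rule (a simple list slice), not CherryStudio's exact implementation:

```python
# Illustrative sketch of a context window: only the most recent
# `window` messages are kept and sent to the model; older messages
# are dropped and no longer consume tokens.

def trim_context(messages, window):
    """Return the last `window` messages of the conversation history."""
    return messages[-window:] if window > 0 else []

history = [f"message {i}" for i in range(1, 16)]  # 15 past messages
context = trim_context(history, 10)               # window size of 10
```

With a window of 10, messages 1-5 are forgotten and only messages 6-15 are passed along, which is why larger windows consume more tokens.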
Enable Message Length Limit (MaxToken)
Maximum number of tokens for a single response. In large language models, max tokens is a key parameter that directly affects the quality and length of generated responses. The right setting depends on your needs; you can also refer to the suggestions below.
For example: When testing whether the model is connected after filling in the key in CherryStudio, you only need to know whether the model returns messages correctly without specific content. In this case, set MaxToken to 1.
The MaxToken limit for most models is 4k Tokens, and of course there are also 2k, 16k or even more. You need to check the corresponding introduction page for details.
Suggestions:
General chat: 500-800
Short article generation: 800-2000
Code generation: 2000-3600
Long article generation: 4000 and above (requires model support)
In general, the model's responses will be limited to the MaxToken range. Output may be truncated (for example, when writing long code) or end mid-sentence; in such cases, adjust the value flexibly according to the actual situation.
Stream Output
Stream output is a data processing method that allows data to be transmitted and processed in a continuous stream form, instead of sending all data at once. This method allows data to be processed and output immediately after it is generated, greatly improving real-time performance and efficiency.
In CherryStudio client and similar environments, it is simply a typewriter effect.
When turned off (non-stream): The model outputs the entire paragraph of information at once after generation (imagine the feeling of receiving a message on WeChat);
When turned on: Output word by word. It can be understood that the large model sends you each word as soon as it generates it, until all words are sent.
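The difference between the two modes can be sketched with a plain generator; the word-by-word chunking below is a toy stand-in for tokens arriving from a model, not an actual API call:

```python
# Toy contrast between streamed and non-streamed output.

def generate_tokens(answer):
    """Yield the answer one word at a time, like a streaming API would."""
    for word in answer.split():
        yield word

answer = "stream output arrives piece by piece"

# Non-stream: wait until everything is generated, then show it at once.
non_stream_result = " ".join(generate_tokens(answer))

# Stream: consume each piece as soon as it is produced (typewriter effect).
streamed_pieces = []
for token in generate_tokens(answer):
    streamed_pieces.append(token)  # in a UI, each piece would render immediately
```

Both modes end with the same text; streaming only changes when the pieces become visible.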
Custom Parameters
Adds extra request parameters to the request body, such as fields like presence_penalty. Most people will not need this.
How to fill in: parameter name – parameter type (text, number, etc.) – value. Reference documentation: Click to go
Parameters such as top_p, max_tokens, and stream described above are examples of such request-body parameters.
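The parameters discussed above all end up as fields in an OpenAI-style request body. The sketch below assumes the OpenAI chat-completions field names (temperature, top_p, max_tokens, stream); presence_penalty stands in for a custom parameter of number type. It only illustrates where these fields live, not CherryStudio's internal request construction:

```python
import json

# Hedged sketch of an OpenAI-compatible chat request body.
request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,       # randomness / creativity (default 0.7)
    "top_p": 1,               # nucleus sampling threshold (default 1)
    "max_tokens": 800,        # single-response length limit
    "stream": True,           # typewriter-style output
    "presence_penalty": 0.2,  # example custom parameter (number type)
}

payload = json.dumps(request_body)  # what is actually sent over the wire
```

Adding a custom parameter in the settings simply adds another key-value pair like presence_penalty to this body.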
The Agents page is an assistant marketplace where you can select or search for your desired model presets. Clicking on an agent card will add the assistant to your assistant list in the chat page.
You can also edit and create your own assistants on this page.
Click Agents, then click the Create Agent card to start creating your own assistant.
The drawing feature currently only supports drawing models from Silicon Flow. You can register an account at Silicon Flow and add it to Providers to use this feature.
If you have questions about the parameters, hover your mouse over the ? icon in the corresponding area to view the description.
Cherry Studio's Translation feature provides you with fast and accurate text translation services, supporting mutual translation between multiple languages.
The translation interface mainly consists of the following parts:
Source Language Selection Area:
Automatic Detection: Cherry Studio will automatically detect the source language and perform the translation.
Target Language Selection Area:
Drop-down Menu: Select the language you want to translate the text into.
Configuration Button: Allows you to set the default model for translation.
Text Input Box (Left):
Enter or paste the text you need to translate.
Translation Result Box (Right):
Displays the translated text.
Copy Button: Click the button to copy the translation result to the clipboard.
Translate Button:
Click this button to start translation.
Select Languages:
In the Source Language Selection Area (1), select the original language (or choose "Automatic Detection").
In the Target Language Selection Area (2), select the language you want to translate to.
Enter Text:
Enter or paste the text you want to translate in the text input box (4) on the left.
Start Translation:
Click the "Translate" button (6).
View and Copy Results:
The translation result will be displayed in the Translation Result Box (5) on the right.
Click the copy button to copy the translation result to the clipboard.
Automatic Detection: Make full use of the "Automatic Detection" feature to avoid the hassle of manually selecting the source language.
Long Text Translation: Cherry Studio's translation function supports long text translation. If the text is too long, you may need to wait a moment.
Settings: You can select a model for the translation function via the settings button.
Q: What if the translation is inaccurate?
A: Although AI translation is powerful, it is not perfect. For texts in professional fields or complex contexts, manual proofreading is recommended. You can also try switching to different models.
Q: Which languages are supported?
A: Cherry Studio's translation function supports multiple mainstream languages. For a list of supported languages, please refer to the official Cherry Studio website or the in-app instructions.
Q: Can I translate an entire file?
A: The current interface is mainly for text translation. For file translation, you may need to go to the chat page of Cherry Studio and add the file for translation.
Q: What if the translation speed is slow?
A: Translation speed may be affected by factors such as network connection, text length, and server load. Please ensure your network connection is stable and wait patiently.
On the Mini Programs page, you can use the web versions of AI-related programs from various service providers within the client. Currently, custom adding and deleting are not supported.
For how to use the Knowledge Base, please refer to the Knowledge Base Tutorial in the advanced tutorials.
The Files interface displays all files related to conversations, drawings, knowledge bases, etc. You can centrally manage and view them on this page.
Quick Assistant is a convenient tool provided by Cherry Studio that allows you to quickly access AI functions in any application, enabling instant operations such as asking questions, translating, summarizing, and explaining.
Open Settings: Navigate to Settings → Quick Assistant.
Turn on the switch: Find and click the "Quick Assistant" enable button.
Set Shortcut Key (Optional):
The default shortcut key is Ctrl + E.
You can customize the shortcut key here to avoid conflicts or to better fit your usage habits.
Activate: In any application, press your set shortcut key (default Ctrl + E) to open Quick Assistant.
Interact: In the Quick Assistant window, you can directly perform the following operations:
Ask a Question: Ask the AI any question.
Text Translation: Enter the text to be translated.
Summarize Content: Enter long text to summarize.
Explanation: Enter the concept or term to be explained.
Close: Press the ESC key or click anywhere outside the Quick Assistant window to close it.
You can specify a default AI model for Quick Assistant to get a more consistent and personalized experience.
Open Settings: Navigate to Settings → Default Model → Default Assistant Model.
Select Model: Select the model you want to use from the dropdown list.
Shortcut Key Conflicts: If the default shortcut key conflicts with other applications, please change the shortcut key.
Explore More Features: In addition to the features mentioned in the documentation, Quick Assistant may also support other operations, such as code generation, style transfer, etc. We encourage you to explore further as you use it.
Feedback and Improvement: If you encounter any problems or have suggestions for improvement during use, please provide feedback to the Cherry Studio team.
This page is only an introduction to the interface functions. For configuration tutorials, please refer to the Provider Configuration tutorial in the basic tutorials.
In CherryStudio, a single provider supports multiple API Keys for polling usage. The polling method is a cyclic approach from the front to the back of the list.
Separate multiple keys with English commas when adding them. See the example below:
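The rotation can be pictured as a simple front-to-back cycle over the comma-separated list. This is an illustrative sketch with hypothetical key values, not CherryStudio's internal rotation code:

```python
from itertools import cycle

# Keys are entered as one comma-separated string (hypothetical values).
raw = "sk-key-one,sk-key-two,sk-key-three"
keys = [k.strip() for k in raw.split(",") if k.strip()]

# Round-robin polling: front to back, then wrap around to the front.
rotation = cycle(keys)
picked = [next(rotation) for _ in range(5)]  # keys used for 5 requests
```

Each request takes the next key in order, spreading traffic across keys and reducing the chance of hitting a single key's rate limit.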
When using built-in providers, you generally do not need to fill in the API address. If you need to modify it, please strictly follow the address provided in the official documentation.
If the address provided by the provider is in the format of https://xxx.xxx.com/v1/chat/completions, you only need to fill in the root address part (https://xxx.xxx.com).
The CherryStudio client will automatically append the remaining path (/v1/chat/completions). Failure to fill in as required may result in the inability to use it properly.
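The root-address rule above can be sketched as a small join function; the path /v1/chat/completions comes from the example in the text:

```python
# Sketch of the root-address rule: the client appends the remaining
# path to whatever base URL you enter, so only the root part should
# be filled in.

def build_endpoint(base_url):
    """Join the configured root address with the chat-completions path."""
    return base_url.rstrip("/") + "/v1/chat/completions"

endpoint = build_endpoint("https://xxx.xxx.com")
```

Entering the full https://xxx.xxx.com/v1/chat/completions address as the base URL would make the client append the path a second time, which is why only the root should be filled in.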
Generally, clicking the Manage button in the lower left corner of the provider configuration page will automatically retrieve all models supported by the provider. Click the "+" sign in the retrieved list to add them to the model list.
Note: The models in the pop-up list shown when you click the Manage button must be added to the provider's model list by clicking the "+" sign after each model before they will appear in the model selection list.
Click the check button after the API Key input box to test if the configuration is successful.
After successful configuration, be sure to turn on the switch in the upper right corner; otherwise, the provider will still be in an unenabled state, and the corresponding models cannot be found in the model list.
When the assistant does not have a default assistant model set, the model selected by default in its new conversations will be the model set here.
Additionally, the model used for optimizing prompts when creating a new assistant is also the model set here.
After each conversation, a model will be called to generate a topic name for the conversation. The model set here is the model used for naming.
The translation function in the input boxes for conversations, paintings, etc., and the translation model in the translation interface all use the model set here.
On this page, you can set the interface language of the software and configure proxy settings, etc.
On this page, you can set the software's color theme, page layout, or customize CSS for personalized settings.
You can set the default interface color mode here (Light Mode, Dark Mode, or Follow System).
This setting is for the layout of the conversation interface.
Topic Layout
When this setting is enabled, clicking on the assistant name will automatically switch to the corresponding topic page.
When enabled, it will display the topic creation time below the topic title.
Through this setting, you can flexibly make personalized changes and settings to the interface. For specific methods, refer to the Personalization Settings in the advanced tutorials.
On this page, you can enable/disable and configure keyboard shortcuts for various functions. Please follow the instructions on the interface to set them up.
On this page, you can perform cloud and local backups of client data, query the local data directory, and clear cache, among other operations.
Currently, data backup only supports the WebDAV method. You can choose a service that supports WebDAV for cloud backups.
You can also achieve multi-device data synchronization using the method: Device A → Backup → WebDAV → Restore → Device B.
Log in to Nutstore, click on your username in the top right corner, and select “Account Info”:
Select “Security Options” and click “Add Application”:
Enter the application name and generate a random password:
Copy and record the password:
Obtain the server address, account, and password:
In CherryStudio Settings - Data Settings, fill in the WebDAV information:
Select to back up or restore data, and you can set the automatic backup time cycle.
WebDAV services with lower barriers to entry are generally cloud storage services:
123Pan (Requires membership)
Aliyun Drive (Requires purchase)
Box (Free storage capacity is 10GB, single file size limit is 250MB.)
Dropbox (Dropbox offers 2GB for free, and you can expand to 16GB by inviting friends.)
TeraCloud (Free space is 10GB, and another 5GB of extra space can be obtained through invitations.)
Yandex Disk (Free users get 10GB of storage.)
Other services require self-deployment:
Windows Installation Tutorial
Click "Download" to choose the appropriate version.
If your browser displays a warning that the file is not trusted, choose to keep it.
Select "Keep"
→Trust "Cherry-Studio"
MacOS Installation Tutorial
First, go to the homepage and click to download the Mac version, or click below to go directly:
After the download completes, click here:
Drag and drop the icon to install:
Installation complete:
Click the Apple logo in the top left corner of your screen.
Click "About This Mac" in the expanded menu.
View processor information in the pop-up window.
If it is an Intel chip, download the Intel version installer.
If it is an Apple M* chip, download the Apple silicon installer.
On the official API Key page, click + Create new secret key
Copy the generated key and open CherryStudio's Provider Settings page.
Find the OpenAI provider and fill in the key you just obtained.
Click "Manage" or "Add" at the bottom to add supported models, and then turn on the provider switch in the upper right corner to start using it.
Before getting a Gemini API key, you need to have a Google Cloud project (if you already have one, you can skip this step).
Go to Google Cloud to create a project, fill in the project name and click "Create Project".
On the official API Key page, click Create API key
Copy the generated key and open CherryStudio's Provider Settings page.
Find the Gemini provider and fill in the key you just obtained.
Click "Manage" or "Add" at the bottom to add supported models, and then turn on the provider switch in the upper right corner to start using it.
In version 0.9.1, CherryStudio introduces the long-awaited knowledge base feature.
Below, we will present a step-by-step guide on how to use CherryStudio's knowledge base in detail.
Find models in the model management service. You can quickly filter by clicking "Embedding";
Find the desired model and add it to My Models.
Knowledge Base Entry: In the CherryStudio toolbar on the left, click the knowledge base icon to enter the management page;
Add Knowledge Base: Click "Add" to start creating a knowledge base;
Naming: Enter the name of the knowledge base and add an embedding model. Taking bge-m3 as an example, you can complete the creation.
Add Files: Click the "Add Files" button to open the file selection dialog;
Select Files: Choose supported file formats such as pdf, docx, pptx, xlsx, txt, md, mdx, etc., and open them;
Vectorization: The system will automatically perform vectorization. When "Completed" (green ✓) is displayed, it indicates that vectorization is complete.
CherryStudio supports multiple ways to add data:
Folder Directory: You can add an entire folder directory. Files in supported formats under this directory will be automatically vectorized;
Website Link: Supports website URLs, such as https://docs.siliconflow.cn/introduction;
Sitemap: Supports sitemap in xml format, such as https://docs.siliconflow.cn/sitemap.xml;
Plain Text Notes: Supports entering custom content in plain text.
Once files and other materials are vectorized, you can perform queries:
Click the "Search Knowledge Base" button at the bottom of the page;
Enter the content to be queried;
The search results are presented along with each result's matching score.
Create a new topic. In the conversation toolbar, click "Knowledge Base". The list of created knowledge bases will expand. Select the knowledge base you need to reference;
Enter and send a question. The model will return an answer generated through retrieval results;
At the same time, the data source of the reference will be attached below the answer, allowing you to quickly view the source file.
All data added to the Knowledge Base in Cherry Studio is stored locally. During the adding process, a copy of the document will be placed in the Cherry Studio data storage directory.
Vector Database: https://turso.tech/libsql
After a document is added to the Cherry Studio Knowledge Base, the file will be split into several fragments. These fragments will then be processed by the embedding model.
When using a large model for question answering, text fragments related to the question will be queried and passed to the large language model for processing together.
If you have data privacy requirements, it is recommended to use a local embedding database and a local large language model.
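The fragment-embed-retrieve flow described above can be sketched end to end. The letter-count "embedding" below is a toy stand-in for a real embedding model such as bge-m3; only the shape of the pipeline (embed fragments, embed the question, rank by cosine similarity) matches the description:

```python
import math

def embed(text):
    """Toy embedding: counts of the 26 lowercase letters (a real model
    would produce a dense semantic vector instead)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Documents are split into fragments; each fragment is vectorized once.
fragments = [
    "cherry studio supports knowledge bases",
    "zebra quartz jazz fizz",
]
index = [(frag, embed(frag)) for frag in fragments]

# At question time, the question is embedded and the closest fragment
# is retrieved and handed to the large language model.
question = "knowledge base support"
qvec = embed(question)
best = max(index, key=lambda item: cosine(qvec, item[1]))[0]
```

The retrieved fragment, not the whole document, is what gets passed to the model together with the question.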
To prevent errors, the max input values listed in this document for some models are not their absolute limits. For example, when the official maximum input is 8k (with no exact number given), this document gives a reference value such as 8191 or 8000. (If this is unclear, simply use the reference values provided here.)
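Staying under the limit amounts to clamping the input before embedding. Measuring length in characters below is a crude stand-in for real token counting, which is why the conservative reference values in the tables exist:

```python
# Hedged sketch: clamp text to a conservative limit before embedding
# so the request never exceeds the model's max input.

def clamp_input(text, max_units):
    """Truncate `text` to at most `max_units` length units."""
    return text[:max_units]

sample = "x" * 9000
clamped = clamp_input(sample, 8000)  # a conservative value from the tables below
```

Text shorter than the limit passes through unchanged; only over-long input is cut.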
Official Model Information Reference
Doubao-embedding: 4095
Doubao-embedding-vision: 8191
Doubao-embedding-large: 4095
Official Model Information Reference
text-embedding-v3: 8192
text-embedding-v2: 2048
text-embedding-v1: 2048
text-embedding-async-v2: 2048
text-embedding-async-v1: 2048
Official Model Information Reference
text-embedding-3-small: 8191
text-embedding-3-large: 8191
text-embedding-ada-002: 8191
Official Model Information Reference
Embedding-V1: 384
tao-8k: 8192
Official Model Information Reference
embedding-2: 1024
embedding-3: 2048
Official Model Information Reference
hunyuan-embedding: 1024
Official Model Information Reference
Baichuan-Text-Embedding: 512
Official Model Information Reference
M2-BERT-80M-2K-Retrieval: 2048
M2-BERT-80M-8K-Retrieval: 8192
M2-BERT-80M-32K-Retrieval: 32768
UAE-Large-v1: 512
BGE-Large-EN-v1.5: 512
BGE-Base-EN-v1.5: 512
Official Model Information Reference
jina-embedding-b-en-v1: 512
jina-embeddings-v2-base-en: 8191
jina-embeddings-v2-base-zh: 8191
jina-embeddings-v2-base-de: 8191
jina-embeddings-v2-base-code: 8191
jina-embeddings-v2-base-es: 8191
jina-colbert-v1-en: 8191
jina-reranker-v1-base-en: 8191
jina-reranker-v1-turbo-en: 8191
jina-reranker-v1-tiny-en: 8191
jina-clip-v1: 8191
jina-reranker-v2-base-multilingual: 8191
reader-lm-1.5b: 256000
reader-lm-0.5b: 256000
jina-colbert-v2: 8191
jina-embeddings-v3: 8191
Official Model Information Reference
BAAI/bge-m3: 8191
netease-youdao/bce-embedding-base_v1: 512
BAAI/bge-large-zh-v1.5: 512
BAAI/bge-large-en-v1.5: 512
Pro/BAAI/bge-m3: 8191
Official Model Information Reference
text-embedding-004: 2048
Official Model Information Reference
nomic-embed-text-v1: 8192
nomic-embed-text-v1.5: 8192
gte-multilingual-base: 8192
Official Model Information Reference
embedding-query: 4000
embedding-passage: 4000
Official Model Information Reference
embed-english-v3.0: 512
embed-english-light-v3.0: 512
embed-multilingual-v3.0: 512
embed-multilingual-light-v3.0: 512
embed-english-v2.0: 512
embed-english-light-v2.0: 512
embed-multilingual-v2.0: 256
Customizing CSS allows you to modify the software's appearance to better suit your preferences, like this:
See: https://github.com/CherryHQ/cherry-studio/tree/main/src/renderer/src/assets/styles
Some Chinese-style Cherry Studio themes to share: https://linux.do/t/topic/325119/129
Monaspace (English font; commercial use permitted)
GitHub has launched an open-source font family named Monaspace, offering five styles: Neon (modern), Argon (humanist), Xenon (serif), Radon (handwritten), and Krypton (mechanical).
MiSans Global (multilingual; commercial use permitted)
MiSans Global is a global language font customization project led by Xiaomi, in collaboration with Monotype and Hanyi Font Library.
It's a vast font family covering over 20 writing systems and supporting more than 600 languages.
Cherry Studio data storage follows system conventions, and data is automatically placed in the user directory. The specific directory locations are as follows:
macOS: /Users/username/Library/Application Support/CherryStudioDev
Windows: C:\Users\username\AppData\Roaming\CherryStudio
If you wish to modify the storage location, you can do so by creating a symbolic link (symlink). Exit the application, move the data to your desired location, and then create a link in the original location pointing to the moved location.
For detailed steps, please refer to: https://github.com/CherryHQ/cherry-studio/issues/621#issuecomment-2588652880
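The move-and-link steps above can be sketched in Python; the paths in the commented example are hypothetical, so adjust them to your own system, and remember to quit the application first:

```python
import os
import shutil

def relocate_with_symlink(old_path: str, new_path: str) -> None:
    """Move the data directory to new_path, then leave a symlink at
    old_path so the application still finds its data there."""
    shutil.move(old_path, new_path)
    os.symlink(new_path, old_path)

# Hypothetical macOS example; run only after exiting Cherry Studio:
# relocate_with_symlink(
#     os.path.expanduser("~/Library/Application Support/CherryStudioDev"),
#     "/Volumes/Data/CherryStudio",
# )
```

On Windows, creating a symlink may require administrator privileges or Developer Mode.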
How to Use Networking Models in Cherry Studio
Cherry Studio supports importing topics into Notion's database.
First you need to create a Notion database and Notion Integration and connect the Integration to the Notion database as shown in the following figure.
Then you need to configure the Notion database ID and Notion key in Cherry Studio:
If your Notion database URL looks something like this:
https://www.notion.so/<long_hash_1>?v=<long_hash_2>
then the Notion database ID is the <long_hash_1> part.
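For instance, the database ID can be pulled out of the URL programmatically; this is a sketch, and the URL and hash below are made-up examples:

```python
from urllib.parse import urlparse

def notion_database_id(url: str) -> str:
    """Return the last path segment of a Notion database URL,
    i.e. the part before '?v=...'."""
    path = urlparse(url).path  # the query string ('?v=...') is dropped here
    return path.rstrip("/").split("/")[-1]

# Made-up example URL:
url = "https://www.notion.so/8a2b1c3d4e5f60718293a4b5c6d7e8f9?v=0011223344556677"
print(notion_database_id(url))  # prints: 8a2b1c3d4e5f60718293a4b5c6d7e8f9
```

Taking the last path segment also handles URLs that include a workspace name before the hash.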
Note that the "page title field name" here needs to be the same as the field name in the Notion database, otherwise it will cause import failure.
Right-click the topic and select [Import to Notion].
To connect Cherry Studio with Obsidian
Cherry Studio supports linking with Obsidian to export complete conversations or individual conversations to the Obsidian library.
No additional Obsidian plugins need to be installed for this process. However, since Cherry Studio's import to Obsidian works on the same principle as the Obsidian Web Clipper, it is recommended that you upgrade Obsidian to the latest version (at least 1.7.2) to avoid import failures when a dialogue is very long.
Open your Obsidian vault and create a folder to save the exported conversations (the "Cherry Studio" folder is used as an example in the image).
Note and remember the text framed in the bottom left corner; this is your vault name.
In Cherry Studio's Settings → Data Settings → Obsidian Configuration menu, enter the vault name and folder name that you obtained in the first step.
The global tags are optional; they can be applied to all dialogues exported to Obsidian. Fill them in as needed.
Go back to the Cherry Studio conversation interface, right-click on the conversation, select export, and click export to Obsidian.
A window will then pop up for adjusting the Properties of the dialogue note exported to Obsidian, as well as the processing method for the export. There are three processing methods to choose from:
Create new (overwrite if exists): Create a new conversation note in the folder specified in step two, overwriting any existing note with the same name.
Prepend: If a note with the same name already exists, export the selected conversation content and add it to the beginning of that note.
Append: If a note with the same name already exists, export the selected conversation content and add it to the end of that note.
To export a single conversation, click the three-line menu below the conversation, select "Export," and then click "Export to Obsidian."
After that, the same window as when exporting the complete conversation will pop up, asking you to configure the note properties and how to handle the notes. Just follow the tutorial above to complete it.
🎉 Congratulations, you have now completed all the configuration for linking Cherry Studio with Obsidian and walked through the export process in full. Enjoy!
Problems
4xx (client error status codes): Generally indicate that the request cannot be completed because of a request syntax error, failed authentication, or failed authorization.
5xx (server error status codes): Generally server-side errors, such as server downtime or request processing timeouts.
400
Wrong request format, etc.
Check the error content returned in the dialog or the console to see what is being reported, and follow the prompts. Common case 1: with a Gemini model, you may need to bind a payment card. Common case 2: the data volume exceeds the limit; this is common with vision models, where an image exceeding the upstream size limit for a single request will return this error code. Common case 3: unsupported parameters were added or parameters were filled in incorrectly; try creating a new, clean assistant to test whether it works normally. Common case 4: the context exceeds the limit; clear the context, create a new dialog, or reduce the number of context entries.
401
Authentication failed: Model not supported or the server account is banned, etc.
Contact the corresponding service provider, or check the status of your account with them.
403
Operation not authorized
Act according to the error messages returned by the dialog or the console error message prompts.
404
Resource not found
Check the request path, etc.
429
Request rate limit reached
Request rate (TPM or RPM) has reached the limit; please try again after a while.
500
Internal server error, unable to complete request
Contact upstream service provider if persistent
501
The server does not support the requested function and cannot complete the request
502
A server working as a gateway or proxy receives an invalid response from a remote server when it tries to execute a request
503
The server is temporarily unable to process client requests due to overload or system maintenance. The length of the delay can be included in the server's Retry-After header information
504
A server acting as a gateway or proxy did not receive a timely response from the remote server.
Click on the CherryStudio client window and press the shortcut Ctrl+Shift+I (Mac: Command+Option+I).
In the pop-up console window, click Network, then click the last "completions" entry marked with a red × (for errors encountered in dialogue, translation, or model connectivity checks) or the last "generations" entry (for errors encountered in drawing) at ②, then click Response to view the complete returned content (the area at ④ in the image).
If you can't determine the cause of the error, please send a screenshot of this screen to the Official Communication Group for help.
This method can be used to obtain error information not only during dialogues, but also during model testing, when adding knowledge bases, when drawing, and so on. In every case, open the debugging window first and then perform the request operation, so that the request information is captured.
If formula code is displayed as plain text instead of being rendered, check whether the formula has delimiters.
Delimiter usage
Inline formula: use single dollar signs, $formula$, or use \( and \): \(formula\)
Standalone formula block: use double dollar signs, $$formula$$, or use \[formula\]
Example: $$\sum_{i=1}^n x_i$$
Formula rendering errors or garbled output commonly occur when the formula contains Chinese (CJK) content; try switching the formula engine to KaTeX.
Model state unavailable
Non-embedding model is used
Attention:
Embedding models, dialogue models, drawing models, and so on each have their own functions; their request methods and the content and structure of their responses differ, so please do not force other types of models to be used as embedding models.
CherryStudio automatically categorizes the embedding models in the embedding model list (as shown in the figure above). If a model is confirmed to be an embedding model but has not been categorized correctly, you can go to the model list, click the Settings button after the corresponding model, and check the embedding option.
If you cannot confirm which models are embedding models, you can check the model information from the corresponding service provider.
First, you need to confirm whether the model supports image recognition. CherryStudio categorizes popular models, and those with a small eye icon after the model name support image recognition.
Image recognition models will support uploading image files. If the model function is not correctly matched, you can find the model in the model list of the corresponding service provider, click the settings button after its name, and check the image option.
You can find the specific information of the model from the corresponding service provider. Like embedding models, models that do not support vision do not need to enable the image function, and selecting the image option will have no effect.
CherryStudio is a free and open-source project. As the project grows, the project team's workload also increases day by day. To reduce communication costs and resolve your problems quickly and efficiently, we hope that before asking a question you will, as far as possible, work through the steps below, leaving the team more time for project maintenance and development. Thank you for your cooperation!
Most of the basics can be solved by consulting the documentation.
The features and usage issues of the software can be checked in the feature introduction document.
Frequently asked questions will be included on the FAQ page. You can first check the FAQ page to see if there is a solution.
For complex problems, search the documentation directly or ask a question in the search box.
Be sure to carefully read the content of the hint boxes in each document, which can help you avoid many problems.
Check or search for similar issues and solutions on GitHub's Issue page.
For issues unrelated to client functionality (such as model errors, unexpected responses, parameter settings, etc.), it is recommended to first search for relevant solutions online, or describe the error content and problem to an AI to find solutions.
If the above steps one and two do not provide an answer or solve your problem, you can go to the official Telegram channel, Discord channel, or QQ group (one-click access) to describe the problem in detail and seek help.
If the model reports an error, please provide a complete screenshot of the interface and the error information from the console. Sensitive information can be redacted, but the model name, parameter settings, and error content must be retained in the screenshot. Click here to see how to view the console error information.
If it's a software bug, please provide a specific error description and detailed steps to reproduce it, so that developers can debug and fix it. If it's an occasional problem that cannot be reproduced, please describe the relevant scenarios, background, and configuration parameters as detailed as possible when the problem occurred.
In addition to this, you also need to include platform information (Windows, Mac, or Linux), software version number, and other information in the problem description.
To request documentation or offer documentation suggestions:
You can contact @Wangmouuu on Telegram or via QQ (1355873789), or send an email to sunrise@cherry-ai.com.
Tokens are the basic units that AI models use to process text, and can be understood as the smallest units that the model "thinks" with. They are not exactly the same as the characters or words we understand, but rather a special way the model itself divides text.
One Chinese character is usually encoded as 1-2 tokens
For example: "你好" ≈ 2-4 tokens
Common words are usually 1 token
Longer or less common words will be broken down into multiple tokens
For example: "hello" = 1 token; "indescribable" = 4 tokens
Spaces and punctuation also consume tokens
A line break is usually 1 token
A tokenizer is a tool that AI models use to convert text into tokens. It determines how to split input text into the smallest units that the model can understand.
Different corpora lead to different optimization directions
Varying degrees of multilingual support
Specialized optimization for specific domains (medical, legal, etc.)
BPE (Byte Pair Encoding) - OpenAI GPT series
WordPiece - Google BERT
SentencePiece - Suitable for multilingual scenarios
Some prioritize compression efficiency
Some prioritize semantic preservation
Some prioritize processing speed
The same text may have different token counts in different models:
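As a toy illustration (not any real model's tokenizer), two extreme splitting strategies already give very different counts for the same text:

```python
def word_tokens(text: str) -> list:
    # Word-level splitting: one token per whitespace-separated word.
    return text.split()

def char_tokens(text: str) -> list:
    # Character-level splitting: one token per character.
    return list(text)

text = "Cherry Studio supports many models"
print(len(word_tokens(text)))  # 5 tokens
print(len(char_tokens(text)))  # 34 tokens
```

Real tokenizers such as BPE sit between these extremes, merging frequent character sequences into single tokens, which is why counts vary from model to model.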
Basic concept: Embedding models are a technology that transforms high-dimensional discrete data (text, images, etc.) into low-dimensional continuous vectors. This transformation allows machines to better understand and process complex data. Imagine it as simplifying a complex jigsaw puzzle into a simple coordinate point, but this point still retains the key features of the puzzle. In the large model ecosystem, it acts as a "translator," converting human-understandable information into a numerical form that AI can compute.
Working principle: Taking natural language processing as an example, embedding models can map words to specific locations in a vector space. In this space, words with similar semantics will automatically cluster together. For example:
The vectors for "king" and "queen" will be very close
Words for pets, like "cat" and "dog", will be close together
Semantically unrelated words like "car" and "bread" will be farther away
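This closeness is typically measured with cosine similarity. Below is a minimal sketch using made-up 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings for illustration only:
vectors = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.12],
    "car":   [0.10, 0.05, 0.95],
}

print(cosine(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine(vectors["king"], vectors["car"]))    # much lower
```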
Text analytics: Document categorization, sentiment analysis
Recommender system: Personalized content recommendation
Image processing: Similar image retrieval
Search engine: Semantic search optimization
Dimensionality reduction effect: Simplifies complex data into easily processed vector forms
Semantic preservation: Retaining key semantic information from the original data
Computational efficiency: Significantly improves the training and inference efficiency of machine learning models.
Technical value: Embedding models are a fundamental component of modern AI systems, providing high-quality data representation for machine learning tasks, and are a key technology driving the development of natural language processing, computer vision, and other fields.
Basic workflow:
Knowledge Base Preprocessing Phase
Split the document into chunks of appropriate size
Convert each chunk into a vector using an embedding model
Store the vectors and original text in a vector database
Query Processing Phase
Convert user questions into vectors
Retrieve similar content in the vector database
Provide the retrieved relevant content as context to the LLM
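The workflow above can be sketched end to end. In this illustration a bag-of-words count stands in for a real embedding model, an in-memory list stands in for a vector database, and the chunk texts are invented for the example:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # Real systems call a neural embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# 1. Preprocessing: split documents into chunks and "embed" each one.
chunks = [
    "Cherry Studio supports exporting dialogues to Obsidian",
    "Embedding models convert text into vectors",
    "The assistant marketplace covers translation and programming",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Query: embed the question and retrieve the most similar chunk.
question = "how do embedding models turn text into vectors"
qvec = embed(question)
best = max(index, key=lambda item: cosine(qvec, item[1]))[0]
print(best)  # the retrieved chunk is then passed to the LLM as context
```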
Join the Telegram discussion group for help: https://t.me/CherryStudioAI
GitHub Issue: https://github.com/CherryHQ/cherry-studio/issues/new/choose
Email the developer: support@cherry-ai.com
Contact: Mr. Wang
📮: yinsenho@cherry-ai.com
📱: 18954281942 (This is NOT a service number)
Please email us at support@cherry-ai.com
Or raise an issue: https://github.com/CherryHQ/cherry-studio/issues
For commercial use, please note our licensing terms.
Free Commercial Use (Unmodified Code Only):
We hereby grant you a non-exclusive, worldwide, non-transferable, royalty-free license to use, reproduce, distribute, copy, and distribute the unmodified materials, including for commercial purposes, subject to the terms and conditions of this Agreement, based on the intellectual property or other rights we own or embody in the materials.
Commercial License:
You must contact us and obtain explicit written commercial authorization prior to continuing the use of Cherry Studio materials under any of the following circumstances:
Modification and Derivatives: You modify or create derivative works based on Cherry Studio materials (including but not limited to changing the application name, logo, code, functionality, interface, etc.).
Enterprise Services: You utilize Cherry Studio internally within your enterprise, or offer services based on Cherry Studio to enterprise customers, and such services support cumulative usage by 10 or more users.
Hardware Bundling Sales: You pre-install or integrate Cherry Studio into hardware devices or products for bundled sales.
Large-scale Procurement by Government or Educational Institutions: Your usage scenario involves large-scale procurement projects by government or educational institutions, particularly when security, data privacy, or other sensitive requirements are involved.
Public-facing Cloud Services: You provide cloud services based on Cherry Studio that are publicly accessible.
Details of Authorization: https://docs.cherry-ai.com/en-us/contact-us/business-cooperation/cherry-studio-license-agreement
By using or distributing any part or element of Cherry Studio materials, you acknowledge that you have read, understood, and agreed to the terms of this Agreement, which shall become effective immediately upon such use.
This Cherry Studio License Agreement (the “Agreement”) defines the terms and conditions governing the use, reproduction, distribution, and modification of the Materials.
• “We” (or “us”) means Shanghai Qianhui Technology Co., Ltd.
• “You” refers to any natural person or legal entity exercising rights granted by this Agreement, and/or using the Materials for any purpose and within any field of use.
• “Third Party” means any individual or legal entity not under common control with either You or us.
• “Cherry Studio” refers to the software suite, including but not limited to [e.g., core libraries, editors, plugins, example projects], and its source code, documentation, sample code, and other elements distributed by us. (Please specify based on the actual composition of CherryStudio.)
• “Materials” collectively refers to the proprietary Cherry Studio software and documentation (or any part thereof) provided by Shanghai Qianhui Technology Co., Ltd. under this Agreement.
• “Source” form means the preferred form for modifications, including but not limited to source code, documentation source files, and configuration files.
• “Object” form means any form resulting from mechanical transformation or translation of Source form, including but not limited to compiled object code, generated documentation, or forms converted into other media types.
• “Commercial Use” means use for direct or indirect commercial gain or advantage, including but not limited to sales, licensing, subscriptions, advertising, marketing, training, consulting services, etc.
• “Modification” refers to any alteration, adaptation, derivative, or secondary development of Materials in Source form, including but not limited to changing application names, logos, code, functions, and interfaces.
Free Commercial Use (without modified code):
We hereby grant you a non-exclusive, worldwide, non-transferable, royalty-free license, under our intellectual property or other rights embodied in the Materials, to use, copy, distribute, and redistribute the unmodified Materials, including for commercial use, subject to compliance with the terms and conditions herein.
Commercial Authorization (when required):
You must obtain express written commercial authorization from us to exercise rights under this Agreement when the conditions stated in Section 3 (“Commercial Authorization”) are met.
Under any of the following conditions, you must contact us and obtain explicit written commercial authorization before proceeding with the use of Cherry Studio Materials:
• Modification and Derivatives: You modify or develop derivative works based on Cherry Studio Materials, including but not limited to changing the application name, logo, code, functionality, or interface.
• Enterprise Services: You use Cherry Studio internally within your enterprise, or offer Cherry Studio-based services to enterprise clients, supporting cumulative usage by 10 or more users.
• Hardware Bundling Sales: You pre-install or integrate Cherry Studio into hardware devices or products for bundled sales.
• Large-scale Procurement by Government or Educational Institutions: Your usage scenario involves large-scale procurement projects by government or educational institutions, particularly involving sensitive requirements such as security or data privacy.
• Public-facing Cloud Services: You provide publicly accessible cloud services based on Cherry Studio.
You may distribute unmodified copies of the Materials, or provide them as part of a product or service containing unmodified Materials, in Source or Object form, subject to the following conditions:
• You must include a copy of this Agreement with all copies of the Materials distributed.
• You must retain the following attribution statement within any copy distributed, included in a “NOTICE” or similar text file distributed as part of such copies:
"Cherry Studio is licensed under the Cherry Studio LICENSE AGREEMENT, Copyright (c) Shanghai Qianhui Technology Co., Ltd. All Rights Reserved."
Materials may be subject to export control or restrictions. You must comply with applicable laws and regulations when using the Materials.
If you use the Materials or their outputs or results to create, train, fine-tune, or improve software or models to be distributed or made available, we encourage prominently marking your relevant product documentation with phrases such as “Built with Cherry Studio” or “Powered by Cherry Studio”.
We retain all intellectual property rights in and to the Materials and derivative works created by or for us. Subject to the terms of this Agreement, ownership of intellectual property rights in modifications and derivative works created by you will be governed by specific commercial authorization agreements. Without obtaining commercial authorization, you shall not acquire ownership rights in modifications and derivative works, and all intellectual property rights remain vested with us.
No license to use our trade names, trademarks, service marks, or product names is granted unless necessary to fulfill obligations under this Agreement or reasonably customary to describe and redistribute the Materials.
If you institute any legal proceeding against us or any entity (including counterclaims or countersuits), alleging that the Materials or their outputs infringe upon any intellectual property or other rights owned or licensable by you, all licenses granted under this Agreement shall terminate immediately upon initiation of such proceedings.
We are not obligated to support, update, provide training for, or develop further versions of Cherry Studio Materials, nor to grant any related licenses.
Materials are provided on an “as-is” basis without warranties of any kind, express or implied, including warranties of merchantability, non-infringement, or fitness for a particular purpose. We make no warranties or assurances concerning the security or stability of Materials or their outputs.
In no event shall we be liable to you for any damages arising from your use or inability to use the Materials or any outputs thereof, including but not limited to direct, indirect, special, or consequential damages, regardless of the cause.
You agree to defend, indemnify, and hold us harmless against any third-party claims arising from or related to your use or distribution of the Materials.
The term of this Agreement begins upon your acceptance or access to the Materials and remains effective until terminated in accordance with its terms.
We may terminate this Agreement upon your breach of any terms or conditions. Upon termination, you must cease using the Materials. Sections 7, 9, and “2. Contributor Agreement” will survive termination.
This Agreement and any disputes arising out of or related to it shall be governed by the laws of the People’s Republic of China.
The Shanghai People’s Court shall have exclusive jurisdiction over any disputes arising from this Agreement.
Welcome to Cherry Studio (“this software” or “we”). We highly value your privacy protection. This Privacy Policy outlines how we handle and protect your personal information and data. Please carefully read and understand this policy before using the software:
To optimize user experience and improve software quality, we may anonymously collect the following non-personal information only:
• Software version information;
• Activity and usage frequency of software functions;
• Anonymous crash reports and error logs;
The above information is fully anonymized, does not involve any personally identifiable data, and cannot be associated with your personal information.
To maximize the protection of your privacy and security, we explicitly promise that we will:
• NOT collect, store, transmit, or process any API Key information for model services that you input into this software;
• NOT collect, store, transmit, or process any conversational data generated while using this software, including but not limited to chat content, instruction data, knowledge base data, vector data, and other customized content;
• NOT collect, store, transmit, or process any personally identifiable sensitive information.
This software uses the third-party model service provider’s API Key that you apply for and configure independently, to implement related model calls and conversation functionalities. The model services (such as large models, API interfaces, etc.) you use are provided by the third-party providers you choose and are entirely their responsibility. Cherry Studio acts solely as a local tool offering the interface to call third-party model services.
• All conversational data generated by your interaction with the large model services is independent of Cherry Studio. We neither participate in data storage nor perform any form of data transmission or relay;
• You are responsible for reviewing and accepting the privacy policy and related policies of the corresponding third-party model service providers. Privacy policies of these services are available on each provider’s official website.
You shall independently bear privacy risks potentially involved with third-party model service providers. Specific privacy policies, data security measures, and relevant responsibilities can be found on the official websites of the selected model service providers. We do not assume any liability in this regard.
This policy may be adjusted appropriately according to software version updates. Please check it regularly. In the event of substantial changes to this policy, we will notify you in an appropriate manner.
If you have any questions regarding this policy or Cherry Studio’s privacy protection measures, please feel free to contact us at any time.
Thank you for choosing and trusting Cherry Studio. We will continue providing you with a safe and reliable product experience.