This page is only an introduction to the interface functions. For configuration tutorials, please refer to the basic tutorials.
When using built-in providers, you only need to fill in the corresponding API keys.
Different providers may have different names for API keys, such as Secret Key, Key, API Key, Token, etc., but they all refer to the same thing.
In CherryStudio, a single provider supports multiple API keys used in rotation: the keys are polled cyclically from the first in the list to the last.
Separate multiple keys with English commas when adding them. See the example below:
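For illustration, three placeholder keys entered together would look like this (note that there are no spaces around the commas): sk-xxxx1,sk-xxxx2,sk-xxxx3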
When using built-in providers, you generally do not need to fill in the API address. If you need to modify it, please strictly follow the address provided in the official documentation.
If the address provided by the provider is in the format of https://xxx.xxx.com/v1/chat/completions, you only need to fill in the root address part (https://xxx.xxx.com).
The CherryStudio client will automatically append the remaining path (/v1/chat/completions). Failure to fill in as required may result in the inability to use it properly.
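The following is a minimal sketch, not CherryStudio's actual code, of how a client could resolve the final request URL from the root address, as well as under the "/" and "#" rules described in the note below:

```typescript
// Sketch only: illustrates the address rules described on this page;
// it is not CherryStudio's real implementation.
function resolveChatCompletionsUrl(apiAddress: string): string {
  if (apiAddress.endsWith("#")) {
    // "#" means: use the entered address exactly as-is (the marker itself is dropped).
    return apiAddress.slice(0, -1);
  }
  if (apiAddress.endsWith("/")) {
    // "/" means: only "chat/completions" is appended.
    return apiAddress + "chat/completions";
  }
  // Root address: the default "/v1/chat/completions" path is appended.
  return apiAddress + "/v1/chat/completions";
}

// resolveChatCompletionsUrl("https://xxx.xxx.com")           -> https://xxx.xxx.com/v1/chat/completions
// resolveChatCompletionsUrl("https://xxx.xxx.com/v2/")       -> https://xxx.xxx.com/v2/chat/completions
// resolveChatCompletionsUrl("https://xxx.xxx.com/api/chat#") -> https://xxx.xxx.com/api/chat
```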
Note: Most providers use a unified route for large language models, so you generally do not need the following operations. If the provider's API path is another version, such as v2/chat/completions or v3/chat/completions, you can manually enter the corresponding version ending with "/" in the address field. When the provider's request route is not the conventional /v1/chat/completions at all, use the complete address provided by the provider and end it with "#".
That is:
When the API address ends with "/", only "chat/completions" is appended.
When the API address ends with "#", nothing is appended; the entered address is used exactly as given.
Generally, clicking the Manage button in the lower left corner of the provider configuration page will automatically retrieve all models supported by the provider. Click the "+" sign in the retrieved list to add them to the model list.
Note: The models shown in the pop-up list when you click the Manage button must be added to the model list on the provider configuration page by clicking the "+" sign next to each model before they can appear in the model selection list.
Click the check button next to the API Key input box to test whether the configuration works.
The model check defaults to the most recently added dialogue model in the model list. If the check fails, verify that the model list does not contain incorrect or unsupported models.
After successful configuration, be sure to turn on the switch in the upper right corner; otherwise, the provider will remain disabled and its models will not appear in the model selection list.
The drawing feature currently only supports drawing models from Silicon Flow. You can register an account at Silicon Flow and add it to Providers to use this feature.
More providers will be added in the future, so stay tuned.
If you have questions about the parameters, you can hover your mouse over the "?" icon in the corresponding area to view its description.
The Agents page is an assistant marketplace where you can select or search for your desired model presets. Clicking on an agent card will add the assistant to your assistant list in the chat page.
You can also edit and create your own assistants on this page.
Click on Agents and then click the create agent card to start creating your own assistant.
The button in the upper right corner of the prompt input box is the AI optimize prompt button. Clicking it will overwrite the original text. The model used is the Global Default Assistant Model.
Cherry Studio's Translation feature provides you with fast and accurate text translation services, supporting mutual translation between multiple languages.
The translation interface mainly consists of the following parts:
1. Source Language Selection Area: supports Automatic Detection, in which case Cherry Studio automatically detects the source language and performs the translation.
2. Target Language Selection Area: a drop-down menu for selecting the language you want to translate the text into.
3. Configuration Button: allows you to set the default model for translation.
4. Text Input Box (Left): enter or paste the text you need to translate.
5. Translation Result Box (Right): displays the translated text. Copy Button: click it to copy the translation result to the clipboard.
6. Translate Button: click this button to start translation.
Select Languages:
In the Source Language Selection Area (1), select the original language (or choose "Automatic Detection").
In the Target Language Selection Area (2), select the language you want to translate to.
Enter Text:
Enter or paste the text you want to translate in the text input box (4) on the left.
Start Translation:
Click the "Translate" button (6).
View and Copy Results:
The translation result will be displayed in the Translation Result Box (5) on the right.
Click the copy button to copy the translation result to the clipboard.
Automatic Detection: Make full use of the "Automatic Detection" feature to avoid the hassle of manually selecting the source language.
Long Text Translation: Cherry Studio's translation function supports long text translation. If the text is too long, you may need to wait a moment.
Settings: You can select a model for the translation function via the settings button.
Q: What if the translation is inaccurate?
A: Although AI translation is powerful, it is not perfect. For texts in professional fields or complex contexts, manual proofreading is recommended. You can also try switching to different models.
Q: Which languages are supported?
A: Cherry Studio's translation function supports multiple mainstream languages. For a list of supported languages, please refer to the official Cherry Studio website or the in-app instructions.
Q: Can I translate an entire file?
A: The current interface is mainly for text translation. For file translation, you may need to go to the chat page of Cherry Studio and add the file for translation.
Q: What if the translation speed is slow?
A: Translation speed may be affected by factors such as network connection, text length, and server load. Please ensure your network connection is stable and wait patiently.
The Files interface displays all files related to conversations, drawings, knowledge bases, etc. You can centrally manage and view them on this page.
For how to use the Knowledge Base, please refer to the Knowledge Base Tutorial in the advanced tutorials.
When the assistant does not have a default assistant model set, the model selected by default in its new conversations will be the model set here.
Additionally, the model used for optimizing prompts when creating a new assistant is also the model set here.
After each conversation, a model will be called to generate a topic name for the conversation. The model set here is the model used for naming.
The translation function in the input boxes for conversations, paintings, etc., and the translation model in the translation interface all use the model set here.
On the Mini Programs page, you can use the web versions of AI-related programs from various service providers within the client. Currently, custom adding and deleting are not supported.
Assistants are used to personalize the settings of a selected model, such as preset prompts and parameter presets. These settings allow the chosen model to better meet your expected workflow.
The Default System Assistant presets a relatively general set of parameters (no prompt). You can use it directly or find the presets you need on the Agents page.
Assistants are the parent set of topics: multiple topics (i.e., dialogues) can be created under a single assistant, and all topics share the assistant's parameter settings, preset words (prompts), and other model settings.
Model settings are synchronized with the Settings parameters in the assistant settings. See the Assistant Settings section of the documentation for details.
In dialogue settings, only the model settings apply to the current assistant, while other settings apply globally. For example, if you set the message style to bubble, it will be the bubble style under any topic of any assistant.
Message Separator Line: Uses a separator line to separate the message body from the action bar.
Use Serif Font: Switches the font style. You can now also change the font through Custom CSS.
Show Line Numbers for Code: Displays line numbers for code blocks when the model outputs code snippets.
Collapsible Code Blocks: When enabled, long code outputs within code snippets are automatically collapsed.
Message Style: Switches the dialogue interface between bubble style and list style.
Code Style: Switches the display style of code snippets.
Mathematical Formula Engine: KaTeX renders faster because it is specifically designed for performance; MathJax renders more slowly but is more feature-rich and supports more mathematical symbols and commands.
Message Font Size: Adjusts the font size of the dialogue interface.
Show Estimated Token Count: Displays the estimated number of tokens consumed by the text in the input box (not the actual tokens consumed by the context; for reference only).
Paste Long Text as File: When long text is copied from elsewhere and pasted into the input box, it is automatically displayed as a file to reduce interference when entering subsequent content.
Markdown Render Input Messages: When turned off, only the messages replied by the model are rendered; the messages you send are not rendered.
Translate with 3 Quick Spacebar Taps: After entering a message in the dialogue input box, quickly tapping the spacebar three times translates the content into English. Note: this operation overwrites the original text.
In the assistant interface, select the assistant you want to configure, then choose the corresponding setting from the right-click menu.
Assistant settings apply to all topics under this assistant.
Prompt Settings
Name: You can customize an assistant name for easy identification.
Prompt: The prompt content. You can refer to the prompt writing on the Agents page to edit it.
Default Model: You can fix a default model for this assistant. When the assistant is added from the Agents page or copied, its initial model will be this model. If this item is not set, the initial model will be the global default model (i.e., the Default Assistant Model).
There are two types of default models for assistants: one is the Global Default Dialogue Model, and the other is the assistant's default model. The assistant's default model has a higher priority than the global default dialogue model. When the assistant's default model is not set, the assistant's default model = global default dialogue model.
Auto Reset Model: When turned on, if you switch to another model while using a topic, creating a new topic resets the new topic to the assistant's default model. When turned off, the model of a new topic follows the model used in the previous topic.
For example, if the default model of the assistant is gpt-3.5-turbo, and I create topic 1 under this assistant, and switch to gpt-4o during the dialogue in topic 1, then:
If auto-reset is turned on: When creating a new topic 2, the default model selected for topic 2 is gpt-3.5-turbo;
If auto-reset is turned off: When creating a new topic 2, the default model selected for topic 2 is gpt-4o.
Temperature: The temperature parameter controls the randomness and creativity of the text generated by the model (default value is 0.7). Specifically:
Low temperature value (0-0.3):
Output is more deterministic and focused
Suitable for code generation, data analysis and other scenarios requiring accuracy
Tends to choose the most probable words for output
Medium temperature value (0.4-0.7):
Balances creativity and coherence
Suitable for daily conversations, general writing
Recommended for chatbot dialogues (around 0.5)
High temperature value (0.8-1.0):
Produces more creative and diverse output
Suitable for creative writing, brainstorming and other scenarios
But may reduce the coherence of the text
Top P (Nucleus Sampling): The default value is 1. The smaller the value, the more monotonous and easier to understand the AI-generated content; the larger the value, the wider and more diverse the vocabulary in the AI's replies.
Nucleus sampling affects the output by controlling the probability threshold for vocabulary selection:
Smaller value (0.1-0.3):
Only consider the highest probability vocabulary
Output is more conservative and controllable
Suitable for code comments, technical documentation and other scenarios
Medium value (0.4-0.6):
Balances vocabulary diversity and accuracy
Suitable for general conversation and writing tasks
Larger value (0.7-1.0):
Consider a wider range of vocabulary choices
Produces richer and more diverse content
Suitable for creative writing and other scenarios requiring diversified expression
These two parameters can be used independently or in combination.
Choose appropriate parameter values according to the specific task type.
It is recommended to find the parameter combination that best suits specific application scenarios through experiments.
The above content is for reference and conceptual understanding only, and the given parameter range may not be suitable for all models. Please refer to the parameter recommendations given in the relevant model documentation.
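As an illustration, in an OpenAI-compatible chat completions request both values are simply fields of the request body. The sketch below uses placeholder endpoint, key, and model names and is not tied to any particular provider:

```typescript
// Sketch only: an OpenAI-compatible request body carrying temperature and top_p.
// The endpoint, API key, and model name are placeholders, not real values.
const body = {
  model: "your-model-id",
  messages: [{ role: "user", content: "Write a short product description." }],
  temperature: 0.7, // randomness/creativity (see the ranges above)
  top_p: 1,         // nucleus sampling threshold; 1 means no restriction
};

const response = await fetch("https://xxx.xxx.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer sk-your-api-key",
  },
  body: JSON.stringify(body),
});
console.log((await response.json()).choices[0].message.content);
```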
Context Window
The number of messages to keep in the context. The larger the value, the longer the context and the more tokens consumed:
5-10: Suitable for normal conversations
>10: Complex tasks requiring longer memory (e.g., tasks that generate long articles step-by-step according to a writing outline, which requires ensuring coherent context logic)
Note: The more messages, the more tokens are consumed.
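Conceptually, the context window simply caps how many of the most recent messages are sent along with each request. A minimal sketch, assuming a plain message array and not reflecting CherryStudio's actual logic:

```typescript
// Sketch only: keep the last N messages as the context sent to the model.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildContext(history: Message[], contextWindow: number): Message[] {
  // A larger contextWindow keeps more history and therefore consumes more tokens.
  return history.slice(-contextWindow);
}
```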
Enable Message Length Limit (MaxToken)
Maximum number of Tokens for a single response. In large language models, max token (maximum number of tokens) is a key parameter that directly affects the quality and length of the model's generated responses. The specific setting depends on your needs, and you can also refer to the following suggestions.
For example: When testing whether the model is connected after filling in the key in CherryStudio, you only need to know whether the model returns messages correctly without specific content. In this case, set MaxToken to 1.
The MaxToken limit for most models is 4k Tokens, and of course there are also 2k, 16k or even more. You need to check the corresponding introduction page for details.
Suggestions:
General chat: 500-800
Short article generation: 800-2000
Code generation: 2000-3600
Long article generation: 4000 and above (requires model support)
In general, the model's generated responses will be limited to the MaxToken range. Of course, there may also be truncation (such as when writing long code) or incomplete expressions. In special cases, you also need to flexibly adjust according to the actual situation.
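For example, the connectivity check mentioned above can be expressed as a request whose max_tokens is 1, so the reply content does not matter and almost no tokens are consumed. The endpoint, key, and model below are placeholders:

```typescript
// Sketch only: an "is the model reachable?" probe with max_tokens set to 1.
const probe = await fetch("https://xxx.xxx.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer sk-your-api-key",
  },
  body: JSON.stringify({
    model: "your-model-id",
    messages: [{ role: "user", content: "ping" }],
    max_tokens: 1, // we only care whether a valid reply comes back
  }),
});
console.log(probe.ok ? "Model reachable" : `Check failed: HTTP ${probe.status}`);
```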
Stream Output
Stream output is a data processing method that allows data to be transmitted and processed in a continuous stream form, instead of sending all data at once. This method allows data to be processed and output immediately after it is generated, greatly improving real-time performance and efficiency.
In the CherryStudio client and similar environments, it simply appears as a typewriter effect.
When turned off (non-stream): The model outputs the entire paragraph of information at once after generation (imagine the feeling of receiving a message on WeChat);
When turned on: Output word by word. It can be understood that the large model sends you each word as soon as it generates it, until all words are sent.
If some special models do not support stream output, you need to turn off this switch, such as o1-mini, which initially only supported non-stream output.
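In OpenAI-compatible APIs this switch corresponds to the stream field of the request body. The sketch below (placeholder endpoint, key, and model) shows a streamed request being printed chunk by chunk as it arrives:

```typescript
// Sketch only: with stream: true the reply arrives as Server-Sent Events chunks,
// which is what produces the typewriter effect in the client.
const response = await fetch("https://xxx.xxx.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer sk-your-api-key",
  },
  body: JSON.stringify({
    model: "your-model-id",
    messages: [{ role: "user", content: "Tell me a short story." }],
    stream: true, // set to false to receive the whole reply in one response
  }),
});

// Read the streamed chunks as they arrive and print them piece by piece.
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value)); // raw SSE lines ("data: {...}")
}
```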
Custom Parameters
Adds extra request parameters to the request body, such as presence_penalty. Generally, most people will not need this.
How to fill in: parameter name - parameter type (text, number, etc.) - value. See the reference documentation for details.
Parameters such as top_p, max_tokens, and stream mentioned above are examples of such parameters.
Custom parameters have higher priority than built-in parameters. That is, if custom parameters are duplicated with built-in parameters, the custom parameters will override the built-in parameters.
For example, after setting model to gpt-4o in custom parameters, the gpt-4o model will be used no matter which model is selected in the dialogue.
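A minimal sketch of this override behaviour, with made-up parameter values and not CherryStudio's actual code:

```typescript
// Sketch only: custom parameters are merged into the request body last,
// so they override built-in parameters that share the same name.
const builtInParams = { model: "selected-model", temperature: 0.7, stream: true };
const customParams = { model: "gpt-4o", presence_penalty: 0.5 }; // user-defined

const requestBody = { ...builtInParams, ...customParams };
// requestBody.model === "gpt-4o": the custom value wins over the selected model.
```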
Quick Assistant is a convenient tool provided by Cherry Studio that allows you to quickly access AI functions in any application, enabling instant operations such as asking questions, translating, summarizing, and explaining.
Open Settings: Navigate to Settings -> Quick Assistant.
Turn on the switch: Find and click the "Quick Assistant" enable button.
Set Shortcut Key (Optional):
The default shortcut key is Ctrl + E.
You can customize the shortcut key here to avoid conflicts or to better fit your usage habits.
Activate: In any application, press your set shortcut key (default Ctrl + E) to open Quick Assistant.
Interact: In the Quick Assistant window, you can directly perform the following operations:
Ask a Question: Ask the AI any question.
Text Translation: Enter the text to be translated.
Summarize Content: Enter long text to summarize.
Explanation: Enter the concept or term to be explained.
Close: Press the ESC key or click anywhere outside the Quick Assistant window to close it.
You can specify a default AI model for Quick Assistant to get a more consistent and personalized experience.
Open Settings: Navigate to Settings -> Default Model -> Default Assistant Model.
Select Model: Select the model you want to use from the dropdown list.
Shortcut Key Conflicts: If the default shortcut key conflicts with other applications, please change the shortcut key.
Explore More Features: In addition to the features mentioned in the documentation, Quick Assistant may also support other operations, such as code generation, style transfer, etc. We encourage you to explore further as you use it.
Feedback and Improvement: If you encounter any problems or have any suggestions for improvement during use, please provide feedback to the Cherry Studio team.
On this page, you can set the interface language of the software and configure proxy settings, etc.
On this page, you can enable/disable and configure keyboard shortcuts for various functions. Please follow the instructions on the interface to set them up.
On this page, you can perform cloud and local backups of client data, query the local data directory, and clear cache, among other operations.
Currently, data backup only supports the WebDAV method. You can choose a service that supports WebDAV for cloud backups.
You can also achieve multi-device data synchronization using the method: Device A → Backup → WebDAV → Restore → Device B.
Log in to Nutstore, click on your username in the top right corner, and select “Account Info”:
Select “Security Options” and click “Add Application”:
Enter the application name and generate a random password:
Copy and record the password:
Obtain the server address, account, and password:
In CherryStudio Settings - Data Settings, fill in the WebDAV information:
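For reference, Nutstore's published WebDAV server address is https://dav.jianguoyun.com/dav/; the account is your Nutstore login email, and the password is the application password generated in the previous step (not your login password).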
Select to back up or restore data, and you can set the automatic backup time cycle.
WebDAV services with lower barriers to entry are generally cloud storage services:
123Pan (Requires membership)
Aliyun Drive (Requires purchase)
Box (Free storage capacity is 10GB, single file size limit is 250MB.)
Dropbox (Dropbox offers 2GB for free, and you can expand to 16GB by inviting friends.)
TeraCloud (Free space is 10GB, and another 5GB of extra space can be obtained through invitations.)
Yandex Disk (Free users get 10GB of storage.)
In addition, some services require self-deployment.
On this page, you can set the software's color theme, page layout, or customize CSS for personalized settings.
You can set the default interface color mode here (Light Mode, Dark Mode, or Follow System).
This setting is for the layout of the conversation interface.
Topic Layout
When this setting is enabled, clicking on the assistant name will automatically switch to the corresponding topic page.
When enabled, it will display the topic creation time below the topic title.
Through this setting, you can flexibly make personalized changes and settings to the interface. For specific methods, refer to the Personalization Settings in the advanced tutorials.