This document was translated from Chinese by AI and has not yet been reviewed.
An Assistant is a personalized configuration for the selected model, such as preset prompts and parameters. These settings help the model better match your intended workflow.
The System Default Assistant comes with fairly universal parameters and no preset prompt. You can use it directly or find the presets you need on the Agents page.
An Assistant is the parent set of Topics: a single assistant can hold multiple topics (i.e., conversations), and all topics share the assistant's parameter settings and preset prompt.
New Topic
Creates a new topic under the current assistant.
Upload Image or Document
Image upload requires a model with vision support. Uploaded documents are automatically parsed into text and provided to the model as context.
Web Search
Requires configuration of web search information in settings. Search results are provided as context to the LLM. See Web Search Mode.
Knowledge Base
Enables the knowledge base. See Knowledge Base Tutorial.
MCP Server
Enables MCP server functionality. See MCP Usage Tutorial.
Generate Image
Hidden by default. For models that support image generation (e.g., Gemini), manually enable this button to generate images.
Select Model
Switches to the specified model for subsequent conversations while preserving context.
Quick Phrases
Requires common phrases to be predefined in settings; you can invoke them here for quick input. Variables are supported.
Clear Messages
Deletes all content in this topic.
Expand
Enlarges the dialog box for long text input.
Clear Context
Truncates the context available to the model without deleting content—the model "forgets" previous conversation content.
Estimated Token Count: Shows estimated token usage as Current Context, Max Context (∞ indicates unlimited context), Current Message Word Count, and Estimated Tokens.
Translate
Translates the current input box content into English.
Model settings synchronize with the Model Settings parameters in assistant settings. See Edit Assistant.
Message Separator: Uses a divider to separate the message body from the action bar.
Use Serif Font: Switches the font style. You can also change fonts via Custom CSS.
Show Line Numbers for Code: Displays line numbers in code blocks generated by the model.
Collapsible Code Blocks: Automatically collapses code blocks when code snippets are long.
Wrap Lines in Code Blocks: Enables automatic line wrapping when a single line of code exceeds the window width.
Auto-Collapse Reasoning Content: Automatically collapses the reasoning process for models that support step-by-step thinking.
Message Style: Switches the chat interface between bubble style and list style.
Code Style: Changes the display style of code snippets.
Math Formula Engine:
KaTeX: Faster rendering, optimized for performance
MathJax: Slower rendering, but more comprehensive symbol and command support
Message Font Size: Adjusts the font size of the chat interface.
Show Estimated Token Count: Displays the estimated token count for the text in the input box (for reference only, not the actual context token count).
Paste Long Text as File: Long text pasted into the input box is shown as a file, keeping the input area uncluttered.
Render Input Messages with Markdown: When disabled, only model responses are rendered as Markdown, not the messages you send.
Triple-Space Translation: Tapping the spacebar three times after typing a message translates the input box content into English.
Note: This action overwrites the original text.
Target Language: Sets the target language for both the translation button and triple-space translation.
In the assistant interface, right-click the assistant name and choose the corresponding setting from the menu.
Prompt Settings
Name: A customizable assistant name for easy identification.
Prompt: The prompt text. Edit it following the prompt-writing examples on the Agents page.
Model Settings
Default Model: Sets a fixed default model for this assistant. When the assistant is added from the Agents page or copied, it starts with this model. If unset, the initial model is the global default model (see Default Assistant Model).
Auto-Reset Model: When enabled, creating a new topic resets to the assistant's default model even if you switched models during an earlier conversation. When disabled, a new topic inherits the model of the previous topic.
Example: Assistant default model = gpt-3.5-turbo. Create Topic 1 → switch to gpt-4o during conversation.
Enabled Auto-Reset: Topic 2 uses gpt-3.5-turbo
Disabled Auto-Reset: Topic 2 uses gpt-4o
Temperature: Controls the randomness/creativity of the generated text (default = 0.7). A request sketch showing how this and Top P are passed follows the Top P section:
Low (0-0.3): More deterministic output. Ideal for code generation and data analysis
Medium (0.4-0.7): Balanced creativity and coherence. Recommended for chatbots (~0.5)
High (0.8-1.0): High creativity and diversity. Ideal for creative writing, but reduces coherence
Top P (Nucleus Sampling): Default = 1. Lower values give more focused, predictable responses; higher values allow more diverse vocabulary. Nucleus sampling controls the cumulative probability threshold for token selection:
Low (0.1-0.3): Conservative output. Ideal for code comments and technical docs
Medium (0.4-0.6): Balanced diversity and accuracy. General dialogue and writing
High (0.7-1.0): Diverse expression. Creative writing scenarios
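If you call the same OpenAI-compatible API outside Cherry Studio, the following is a minimal sketch of how the two sampling parameters above are passed in a request. It assumes the openai Python SDK with placeholder endpoint, key, and model names; inside Cherry Studio these values are set for you through the UI.

```python
from openai import OpenAI

# Placeholder endpoint, key, and model name -- substitute your provider's values.
client = OpenAI(api_key="sk-xxxx", base_url="https://api.example.com/v1")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short product description."}],
    temperature=0.7,  # 0-0.3 deterministic, 0.4-0.7 balanced, 0.8-1.0 creative
    top_p=1.0,        # lower values restrict sampling to the most likely tokens
)
print(response.choices[0].message.content)
```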
Context Window
Number of messages to retain in context. Higher values → longer context → higher token usage:
5-10: Normal conversations
>10: Complex tasks requiring longer memory (e.g., multi-step content generation)
Note: More messages = higher token consumption
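Conceptually, the context window behaves like keeping only the most recent N messages when the request is built. The following is a simplified illustration of that idea, not Cherry Studio's actual code:

```python
def build_context(history: list[dict], context_window: int) -> list[dict]:
    """Keep only the most recent `context_window` messages.

    `history` is a list of {"role": ..., "content": ...} dicts; older
    messages are dropped, which is why a larger window costs more tokens.
    """
    if context_window <= 0:
        return []
    return history[-context_window:]

# With a window of 5, only the last 5 of 20 messages are sent to the model.
history = [{"role": "user", "content": f"message {i}"} for i in range(20)]
trimmed = build_context(history, context_window=5)
print(len(trimmed))  # 5
```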
Enable Message Length Limit (MaxToken)
Sets maximum tokens per response. Directly impacts answer quality/length.
Example: when testing model connectivity, you only need to confirm that the model responds, not obtain specific content, so setting MaxToken to 1 is enough.
Most models support up to 32k tokens (some support 64k or more; check the model documentation).
Suggestions:
Normal chat: 500-800
Short text gen: 800-2000
Code gen: 2000-3600
Long text gen: 4000+ (requires model support)
Responses are hard-truncated at the MaxToken limit, so long answers (e.g., long code) may be cut off mid-expression; adjust the limit as needed.
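A hedged sketch of setting this limit and detecting truncation against an OpenAI-compatible API (openai Python SDK assumed, placeholder endpoint and model; finish_reason is "length" when the reply was cut off at the limit):

```python
from openai import OpenAI

client = OpenAI(api_key="sk-xxxx", base_url="https://api.example.com/v1")  # placeholders

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain recursion with a code example."}],
    max_tokens=2000,  # code generation: roughly 2000-3600, per the suggestions above
)
choice = response.choices[0]
if choice.finish_reason == "length":
    # The answer hit the MaxToken limit and may be incomplete; raise the limit if needed.
    print("[warning] response was truncated at the token limit")
print(choice.message.content)
```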
Stream Output
Enables continuous data stream processing instead of batch transmission, providing real-time response generation (a typing effect) in clients like Cherry Studio.
Disabled: Full response delivered at once (like WeChat messages)
Enabled: Character-by-character output (generates → transmits each token immediately)
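A minimal streaming sketch against an OpenAI-compatible API (openai Python SDK assumed, placeholder endpoint and model), showing how tokens are printed as they arrive:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-xxxx", base_url="https://api.example.com/v1")  # placeholders

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,  # tokens are delivered as they are generated
)
for chunk in stream:
    if not chunk.choices:
        continue  # some providers send keep-alive chunks without choices
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # produces the typing effect
```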
Custom Parameters
Adds extra parameters to the request body (e.g., presence_penalty). Generally not needed for regular use.
Parameters such as top_p, max_tokens, and stream described above are examples of these request-body parameters.
Format: parameter name, parameter type (text, number, etc.), and value. See the documentation: Click here
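For reference, this is roughly what adding a custom body parameter amounts to when calling an OpenAI-compatible API directly (openai Python SDK assumed; extra_body merges extra fields into the JSON request body):

```python
from openai import OpenAI

client = OpenAI(api_key="sk-xxxx", base_url="https://api.example.com/v1")  # placeholders

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    # Extra fields are merged into the JSON request body, which is what the Custom
    # Parameters setting does for you: name=presence_penalty, type=number, value=0.6.
    extra_body={"presence_penalty": 0.6},
)
print(response.choices[0].message.content)
```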
The Agents page is an assistant marketplace where you can select or search for desired model presets. Click on a card to add the assistant to your conversation page's assistant list.
You can also edit and create your own assistants on this page.
Click My, then Create Agent, to start building your own assistant.
The drawing feature currently supports painting models from DMXAPI, TokenFlux, AiHubMix, and SiliconFlow. You can register an account at SiliconFlow and add it as a provider to use this feature.
For questions about parameters, hover your mouse over the ? icon in the corresponding area to view its description.
Note: Gemini image generation must be used in the chat interface because Gemini performs multi-modal interactive image generation and does not support parameter adjustment.
Cherry Studio's translation feature provides you with fast and accurate text translation services, supporting mutual translation between multiple languages.
The translation interface mainly consists of the following components:
Source Language Selection Area:
Any Language: Cherry Studio will automatically identify the source language and perform translation.
Target Language Selection Area:
Dropdown Menu: Select the language you wish to translate the text into.
Settings Button:
Clicking will jump to Default Model Settings.
Scroll Synchronization:
Toggle to enable scroll sync (scrolling in either side will synchronize the other).
Text Input Box (Left):
Input or paste the text you need to translate.
Translation Result Box (Right):
Displays the translated text.
Copy Button: Click to copy the translation result to clipboard.
Translate Button:
Click this button to start translation.
Translation History (Top Left):
Click to view translation history records.
Select Target Language:
Choose your desired translation language in the Target Language Selection Area.
Input or Paste Text:
Enter or paste the text to be translated in the left text input box.
Start Translation:
Click the Translate button.
View and Copy Results:
Translation results will appear in the right result box.
Click the copy button to save the result to clipboard.
Q: What to do about inaccurate translations?
A: While AI translation is powerful, it's not perfect. For professional fields or complex contexts, manual proofreading is recommended. You may also try switching different models.
Q: Which languages are supported?
A: Cherry Studio translation supports multiple major languages. Refer to Cherry Studio's official website or in-app instructions for the specific supported languages list.
Q: Can entire files be translated?
A: The current interface primarily handles text translation. For document translation, please use Cherry Studio's conversation page to add files for translation.
Q: How to handle slow translation speeds?
A: Translation speed may be affected by network connection, text length, or server load. Ensure stable network connectivity and be patient.
For knowledge base usage, please refer to the Knowledge Base Tutorial in the advanced tutorials.
Quick Assistant is a convenient tool provided by Cherry Studio that allows you to quickly access AI functions in any application, enabling instant operations like asking questions, translation, summarization, and explanations.
Open Settings: Navigate to Settings -> Shortcuts -> Quick Assistant.
Enable Switch: Find and toggle on the Quick Assistant button.
Set Shortcut Key (Optional):
Default shortcut for Windows: Ctrl + E
Default shortcut for macOS: ⌘ + E
Customize your shortcut here to avoid conflicts or match your usage habits.
Activate: Press your configured shortcut key (or default shortcut) in any application to open Quick Assistant.
Interact: Within the Quick Assistant window, you can directly perform:
Quick Questions: Ask any question to the AI.
Text Translation: Input text to be translated.
Content Summarization: Input long text for summarization.
Explanation: Input concepts or terms requiring explanation.
Close: Press ESC or click anywhere outside the Quick Assistant window.
Shortcut Conflicts: Modify shortcuts if defaults conflict with other applications.
Explore More Functions: Beyond the documented features, Quick Assistant may support operations such as code generation and style transfer; keep exploring as you use it.
Feedback & Improvements: Report issues or suggestions to the Cherry Studio team via feedback.
This page only introduces the interface features. For configuration tutorials, please refer to the Provider Configuration tutorial in the Basic Tutorials.
In Cherry Studio, a single provider supports multiple keys used in rotation; the keys are polled sequentially from first to last, as sketched below.
Add multiple keys separated by English commas. For example:
sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4
You must use English (half-width) commas.
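The following is a simplified illustration of sequential round-robin over a comma-separated key list; it is an assumption-level sketch of the behavior described above, not Cherry Studio's actual implementation:

```python
from itertools import cycle

raw_keys = "sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4"  # half-width commas, no spaces
key_pool = cycle(raw_keys.split(","))             # front-to-back, then wrap around

def next_key() -> str:
    """Return the next key in order; after the last key, start over from the first."""
    return next(key_pool)

print(next_key())  # sk-xxxx1
print(next_key())  # sk-xxxx2
```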
When using built-in providers, it's generally not necessary to fill in the API address. If modification is needed, strictly follow the address provided in the official documentation.
If the provider gives an address in the format https://xxx.xxx.com/v1/chat/completions, only fill in the root address part (https://xxx.xxx.com).
Cherry Studio automatically appends the remaining path (/v1/chat/completions). Entering the address in any other form may prevent requests from working properly.
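In other words, the full request URL is formed by joining the root address you enter with the fixed path (a plain illustration of the rule above):

```python
base_url = "https://xxx.xxx.com"              # what you enter in Cherry Studio
full_url = base_url + "/v1/chat/completions"  # path appended automatically by Cherry Studio
print(full_url)                               # https://xxx.xxx.com/v1/chat/completions
```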
Generally, clicking the Manage button at the bottom left of the provider configuration page will automatically fetch all supported models. Click + in the fetched list to add models to your model list.
Click the check button next to the API Key input box to test whether the configuration is successful.
After successful configuration, be sure to turn on the switch in the upper right corner. Otherwise the provider will remain disabled and you won't find corresponding models in the model list.
When an assistant has no default model configured, new conversations use the model set here by default. The model used for prompt optimization and the word-selection assistant is also configured in this section.
After each conversation, a model is called to generate a topic name for the dialog. The model set here is the one used for naming.
The translation feature in input boxes for conversations, painting, etc., and the translation model in the translation interface all use the model set here.
The model used for the quick assistant feature. For details, see Quick Assistant.
On this page, you can set the software's color theme, page layout, or use Custom CSS for personalized configurations.
Here you can set the default interface color mode (Light mode, Dark mode, or Follow System).
These settings apply to the layout of the conversation interface.
Conversation Panel Position
Auto-Switch to Conversation
When enabled, clicking an assistant name will automatically switch to the corresponding conversation.
Show Conversation Time
When enabled, displays the conversation creation time below the conversation.
This setting allows flexible customization of the interface. Refer to the advanced tutorial on Custom CSS for specific methods.
This interface allows for local and cloud data backup and recovery, local data directory inquiry and cache clearing, export settings, and third-party connections.
Currently, data backup supports three methods: local backup, WebDAV backup, and S3-compatible storage (object storage) backup. For specific introductions and tutorials, please refer to the following documents:
Export settings allow you to configure the export options displayed in the export menu, as well as set the default path for Markdown exports, display styles, and more.
Third-party connections allow you to configure Cherry Studio's connection with third-party applications for quickly exporting conversation content to your familiar knowledge management applications. Currently supported applications include: Notion, Obsidian, SiYuan Note, Yuque, Joplin. For specific configuration tutorials, please refer to the following documents: