Feature Introduction

This document was translated from Chinese by AI and has not yet been reviewed.

Feature Overview

Chat Interface

Assistants and Topics

Assistants

An Assistant allows you to personalize the selected model with settings such as prompt presets and parameter presets, making the model work more in line with your expectations.

The System Default Assistant provides a general parameter preset (no prompt). You can use it directly or find the preset you need on the Agents Page.

Topics

An Assistant is the parent container for Topics. Multiple topics (i.e., conversations) can be created under a single assistant, and all topics share the assistant's model settings, such as parameter presets and prompt presets.

Buttons in the Chatbox

New Topic creates a new topic within the current assistant.

Upload Image or Document requires a model that supports image input for image uploads. Uploaded documents are automatically parsed into text and provided to the model as context.

Web Search requires web search to be configured in the settings first. Search results are returned to the model as context. For details, see Web Search Mode.

Knowledge Base enables the knowledge base feature. For details, see Knowledge Base Tutorial.

MCP Server enables the MCP server feature. For details, see MCP Usage Tutorial.

Generate Image is not displayed by default. For models that support image generation (e.g., Gemini), it needs to be manually enabled before images can be generated.

Due to technical reasons, you must manually enable the button to generate images. This button will be removed after this feature is optimized.

Select Model switches to the specified model for subsequent conversations while preserving the context.

Quick Phrases inserts common phrases that you have preset in the settings directly into the input box; variables are supported.

Clear Messages deletes all content under this topic.

Expand makes the chatbox larger for entering longer texts.

Clear Context truncates the context available to the model without deleting content, meaning the model will "forget" previous conversation content.

Estimated Token Count displays four values: the current context message count, the maximum context count (∞ indicates unlimited context), the character count of the message currently in the input box, and the estimated token count.

This function is only for estimating the token count. The actual token count varies for each model, so please refer to the data provided by the model provider.

Translate translates the content in the current input box into English.

Conversation Settings

Model Settings

Model settings are synchronized with the Model Settings parameters in the assistant settings. For details, see Assistant Settings.

In the conversation settings, only the model settings apply to the current assistant; other settings apply globally. For example, if you set the message style to bubbles, it will be in bubble style for any topic of any assistant.

Message Settings

Message Separator:

Uses a separator to divide the message body from the action bar.

Use Serif Font:

Font style toggle. You can now also change fonts via custom CSS.

Display Line Numbers for Code:

Displays line numbers for code blocks when the model outputs code snippets.

Collapsible Code Blocks:

When enabled, code blocks will automatically collapse if the code snippet is long.

Code Block Word Wrap:

When enabled, long single lines of code within a code snippet will automatically wrap when they exceed the window width.

Auto-collapse Thinking Content:

When enabled, models that support thinking will automatically collapse the thinking process after completion.

Message Style:

Allows switching the conversation interface to bubble style or list style.

Code Style:

Allows switching the display style of code snippets.

Mathematical Formula Engine:

  • KaTeX renders faster because it is specifically designed for performance optimization.

  • MathJax renders slower but is more comprehensive, supporting more mathematical symbols and commands.

Message Font Size:

Adjusts the font size of the conversation interface.

Input Settings

Display Estimated Token Count:

Displays the estimated token count consumed by the input text in the input box (not the actual context token consumption, for reference only).

Paste Long Text as File:

When you copy and paste a long block of text from elsewhere into the input box, it is automatically displayed as a file, so it does not get in the way of subsequent input.

Markdown Render Input Messages:

When disabled, only the model's replies are rendered as Markdown; the messages you send are not.

Triple Space for Translation:

After entering a message in the conversation interface input box, pressing the spacebar three times consecutively will translate the input content into English.

Note: This operation will overwrite the original text.

Target Language:

Sets the target language for the input box translation button and the triple space translation feature.

Assistant Settings

In the assistant list, right-click the assistant you want to configure and choose the corresponding option from the context menu.

Edit Assistant

Assistant settings apply to all topics under that assistant.

Prompt Settings

Name:

Customizable assistant name for easy identification.

Prompt:

The prompt content. You can refer to how prompts are written on the Agents page when editing your own.

Model Settings

Default Model:

You can pin a default model for this assistant. When the assistant is added from the Agents page or copied, its initial model will be this one. If this item is not set, the initial model will be the global default (i.e., the Default Assistant Model).

There are two kinds of default model for an assistant: the global default conversation model and the assistant's own default model. The assistant's default model has higher priority; if it is not set, the global default conversation model is used.
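
As a rough sketch of this priority rule (the function and variable names are illustrative, not Cherry Studio's actual code):

def initial_model(assistant_default_model, global_default_model):
    # The assistant's own default wins; otherwise fall back to the global default.
    return assistant_default_model or global_default_model

print(initial_model(None, "gpt-4o"))             # falls back to the global default -> gpt-4o
print(initial_model("gpt-3.5-turbo", "gpt-4o"))  # the assistant's default wins -> gpt-3.5-turbo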

Auto Reset Model:

When enabled, if you switch to another model during a conversation, creating a new topic will reset the new topic's model to the assistant's default model. When disabled, the model for a new topic follows the model used in the previous topic.

For example, if the assistant's default model is gpt-3.5-turbo, and I create Topic 1 under this assistant, then switch to gpt-4o during the conversation in Topic 1:

If auto-reset is enabled: When creating Topic 2, the default model selected for Topic 2 will be gpt-3.5-turbo.

If auto-reset is not enabled: When creating Topic 2, the default model selected for Topic 2 will be gpt-4o.

Temperature:

The temperature parameter controls the degree of randomness and creativity in the text generated by the model (default value is 0.7). Specifically:

  • Low temperature values (0-0.3):

    • Output is more deterministic and focused.

    • Suitable for scenarios requiring accuracy, such as code generation and data analysis.

    • Tends to select the most probable words for output.

  • Medium temperature values (0.4-0.7):

    • Balances creativity and coherence.

    • Suitable for daily conversations and general writing.

    • Recommended for chatbot conversations (around 0.5).

  • High temperature values (0.8-1.0):

    • Produces more creative and diverse output.

    • Suitable for creative writing, brainstorming, etc.

    • May reduce text coherence.

Top P (Nucleus Sampling):

The default value is 1. A smaller value makes the AI-generated content more monotonous and easier to understand; a larger value gives the AI a wider and more diverse range of vocabulary for replies.

Nucleus sampling affects output by controlling the probability threshold for vocabulary selection:

  • Smaller values (0.1-0.3):

    • Only consider words with the highest probability.

    • Output is more conservative and controllable.

    • Suitable for scenarios like code comments and technical documentation.

  • Medium values (0.4-0.6):

    • Balances vocabulary diversity and accuracy.

    • Suitable for general conversations and writing tasks.

  • Larger values (0.7-1.0):

    • Considers a wider range of vocabulary.

    • Produces richer and more diverse content.

    • Suitable for creative writing and other scenarios requiring diverse expression.

  • These two parameters can be used independently or in combination.

  • Choose appropriate parameter values based on the specific task type.

  • It is recommended to experiment to find the parameter combination best suited for a particular application scenario.

  • The above content is for reference and conceptual understanding only; the given parameter ranges may not be suitable for all models. Please refer to the parameter recommendations in the model's documentation.
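
For readers who call models through an API directly, the sketch below shows where these two sampling parameters sit in an OpenAI-compatible chat request. It is only an illustration: the endpoint, key, and model name are placeholders, and the values shown are the defaults mentioned above.

import requests

# Placeholder endpoint, key, and model; temperature and top_p are the sampling parameters above.
resp = requests.post(
    "https://xxx.xxx.com/v1/chat/completions",
    headers={"Authorization": "Bearer sk-xxxx"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Write a short poem about autumn."}],
        "temperature": 0.7,  # randomness/creativity; lower is more deterministic
        "top_p": 1,          # nucleus sampling threshold; lower narrows the vocabulary
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])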

Context Window

The number of messages to keep in the context. A larger value means a longer context and consumes more tokens:

  • 5-10: Suitable for general conversations.

  • More than 10: For complex tasks requiring longer memory (e.g., generating long texts step-by-step according to an outline, where the generated context must stay logically coherent).

  • Note: More messages consume more tokens.
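
Conceptually, the context window simply limits how many of the most recent messages are sent with each request. A minimal sketch of the idea (the message history here is purely illustrative):

context_window = 5
history = [{"role": "user", "content": f"message {i}"} for i in range(12)]  # full conversation
messages = history[-context_window:]  # only the most recent messages are sent as context
print(len(messages))  # 5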

Enable Message Length Limit (MaxToken)

The maximum Token count for a single response. In large language models, max token (maximum token count) is a crucial parameter that directly affects the quality and length of the model's generated responses.

For example, when testing if a model is connected after filling in the key in CherryStudio, if you only need to know if the model returns a message correctly without specific content, you can set MaxToken to 1.

The MaxToken limit for most models is 32k Tokens, but some support 64k or even more; you need to check the relevant introduction page for details.

The specific setting depends on your needs, but you can also refer to the following suggestions.

Suggestions:

  • General Chat: 500-800

  • Short Text Generation: 800-2000

  • Code Generation: 2000-3600

  • Long Text Generation: 4000 and above (requires model support)

Generally, the model's response will stay within the MaxToken limit. However, it may be truncated (e.g., when writing long code) or left incomplete if the limit is too low, so in special cases adjust it flexibly according to the actual situation.
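
As an illustration of the connectivity trick mentioned above, the sketch below sends a request with max_tokens set to 1 against an OpenAI-compatible endpoint. The URL, key, and model name are placeholders.

import requests

# max_tokens=1 keeps the reply (and the cost) minimal; we only care whether the call succeeds.
resp = requests.post(
    "https://xxx.xxx.com/v1/chat/completions",
    headers={"Authorization": "Bearer sk-xxxx"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "hi"}],
        "max_tokens": 1,
    },
    timeout=30,
)
print(resp.status_code)  # 200 with a well-formed body means the key and model are reachable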

Streaming Output (Stream)

Streaming output is a data processing method that allows data to be transmitted and processed in a continuous stream, rather than sending all data at once. This method allows data to be processed and output immediately after it is generated, greatly improving real-time performance and efficiency.

In environments like the CherryStudio client, it basically means a "typewriter effect."

When disabled (non-streaming): The model generates information and outputs the entire segment at once (imagine receiving a message on WeChat).

When enabled: Outputs character by character. This can be understood as the large model sending you each character as soon as it generates it, until all characters are sent.

If a model does not support streaming output, this switch needs to be turned off; for example, o1-mini initially supported only non-streaming output.
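
Outside the client, this switch corresponds to the stream flag of an OpenAI-compatible API. A minimal sketch using the openai Python package (the endpoint, key, and model name are placeholders):

from openai import OpenAI

client = OpenAI(base_url="https://xxx.xxx.com/v1", api_key="sk-xxxx")

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,  # set to False to receive the whole reply in one piece
)
for chunk in stream:
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="", flush=True)  # "typewriter effect"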

Custom Parameters

Adds additional request parameters to the request body, such as presence_penalty and other fields. Generally, most users will not need this.

Parameters such as top_p, max_tokens (MaxToken), and stream mentioned above are examples of these request parameters.

Input format: Parameter Name — Parameter Type (text, number, etc.) — Value. Reference documentation: Click to go.

Each model provider may have its own unique parameters. You need to consult the provider's documentation for usage methods.

  • Custom parameters have higher priority than built-in parameters. That is, if a custom parameter conflicts with a built-in parameter, the custom parameter will override the built-in parameter.

For example: if model is set to gpt-4o in custom parameters, then gpt-4o will be used in the conversation regardless of which model is selected.

  • Settings like ParameterName:undefined can be used to exclude parameters.
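
The override and exclusion rules can be pictured as a simple dictionary merge. This is only an assumed illustration of the behavior described above, not Cherry Studio's actual implementation:

built_in = {"model": "gpt-3.5-turbo", "temperature": 0.7, "stream": True}
custom = {"model": "gpt-4o", "presence_penalty": 0.5, "temperature": None}  # None ~ "undefined"

body = {**built_in, **custom}  # custom parameters override built-in ones with the same name
body = {k: v for k, v in body.items() if v is not None}  # "undefined" removes the parameter
print(body)  # {'model': 'gpt-4o', 'stream': True, 'presence_penalty': 0.5}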

Agents

The Agents page is an assistant marketplace where you can select or search for desired model presets. Click on a card to add the assistant to your conversation page's assistant list.

You can also edit and create your own assistants on this page.

  • Click My, then Create Agent to start building your own assistant.

The button in the upper right corner of the prompt input box optimizes prompts using AI. Clicking it will overwrite the original text. This feature uses the Global Default Assistant Model.

Drawing

The drawing feature currently supports painting models from DMXAPI, TokenFlux, AiHubMix, and SiliconFlow. You can register an account at SiliconFlow and add it as a provider to use this feature.

For questions about parameters, hover your mouse over the ? icon in corresponding areas to view descriptions.

More providers will be added in the future. Stay tuned.

Note: Gemini image generation must be used in the chat interface because Gemini performs multi-modal interactive image generation and does not support parameter adjustment.

Translation

Cherry Studio's translation feature provides you with fast and accurate text translation services, supporting mutual translation between multiple languages.

Interface Overview

The translation interface mainly consists of the following components:

  1. Source Language Selection Area:

    • Any Language: Cherry Studio will automatically identify the source language and perform translation.

  2. Target Language Selection Area:

    • Dropdown Menu: Select the language you wish to translate the text into.

  3. Settings Button:

    • Clicking will jump to Default Model Settings.

  4. Scroll Synchronization:

    • Toggle to enable scroll synchronization (scrolling on either side scrolls the other in sync).

  5. Text Input Box (Left):

    • Input or paste the text you need to translate.

  6. Translation Result Box (Right):

    • Displays the translated text.

    • Copy Button: Click to copy the translation result to clipboard.

  7. Translate Button:

    • Click this button to start translation.

  8. Translation History (Top Left):

    • Click to view translation history records.

Usage Steps

  1. Select Target Language:

    • Choose your desired translation language in the Target Language Selection Area.

  2. Input or Paste Text:

    • Enter or paste the text to be translated in the left text input box.

  3. Start Translation:

    • Click the Translate button.

  4. View and Copy Results:

    • Translation results will appear in the right result box.

    • Click the copy button to save the result to clipboard.

Frequently Asked Questions (FAQ)

  • Q: What to do about inaccurate translations?

    • A: While AI translation is powerful, it's not perfect. For professional fields or complex contexts, manual proofreading is recommended. You may also try switching different models.

  • Q: Which languages are supported?

    • A: Cherry Studio translation supports multiple major languages. Refer to Cherry Studio's official website or in-app instructions for the specific supported languages list.

  • Q: Can entire files be translated?

    • A: The current interface primarily handles text translation. For document translation, please use Cherry Studio's conversation page to add files for translation.

  • Q: How to handle slow translation speeds?

    • A: Translation speed may be affected by network connection, text length, or server load. Ensure stable network connectivity and be patient.

Mini-Apps

On the Mini-Apps page, you can use the web versions of AI applications from major providers inside the client. Custom addition and removal are not currently supported.

Knowledge Base

For knowledge base usage, please refer to the Knowledge Base Tutorial in the advanced tutorials.

Files

The Files interface displays all files related to conversations, paintings, knowledge bases, etc. You can centrally manage and view these files on this page.

Quick Assistant

Quick Assistant is a convenient tool provided by Cherry Studio that allows you to quickly access AI functions in any application, enabling instant operations like asking questions, translation, summarization, and explanations.

Enable Quick Assistant

  1. Open Settings: Navigate to Settings -> Shortcuts -> Quick Assistant.

  2. Enable Switch: Find and toggle on the Quick Assistant button.

Schematic diagram for enabling Quick Assistant

  3. Set Shortcut Key (Optional):

    • Default shortcut for Windows: Ctrl + E

    • Default shortcut for macOS: ⌘ + E

    • Customize your shortcut here to avoid conflicts or match your usage habits.

Using Quick Assistant

  1. Activate: Press your configured shortcut key (or default shortcut) in any application to open Quick Assistant.

  2. Interact: Within the Quick Assistant window, you can directly perform:

    • Quick Questions: Ask any question to the AI.

    • Text Translation: Input text to be translated.

    • Content Summarization: Input long text for summarization.

    • Explanation: Input concepts or terms requiring explanation.

      Schematic diagram of Quick Assistant interface
  3. Close: Press ESC or click anywhere outside the Quick Assistant window.

Quick Assistant uses the Global Default Conversation Model.

Tips & Tricks

  • Shortcut Conflicts: Modify shortcuts if defaults conflict with other applications.

  • Explore More Functions: Beyond the documented features, Quick Assistant may support operations like code generation and style transfer; keep exploring as you use it.

  • Feedback & Improvements: Report issues or suggestions to the Cherry Studio team via feedback.

Settings

Model Service Settings

This page only introduces the interface functions. For configuration tutorials, please refer to the Provider Configuration tutorial in the basic tutorials.

  • When using built-in service providers, you only need to fill in the corresponding secret key.

  • Different service providers may use different names for the secret key, such as Secret Key, Key, API Key, or Token; they all refer to the same thing.

API Key

In Cherry Studio, a single service provider supports multi-key round-robin usage, where the keys are rotated from first to last in a list.

  • Add multiple keys separated by English commas, as shown in the example below:

sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4

You must use an English comma.
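
The rotation behaves like a simple round-robin over the comma-separated list. A sketch of the idea (illustrative only, not Cherry Studio's actual code):

from itertools import cycle

keys = "sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4".split(",")
key_pool = cycle(keys)

for _ in range(6):
    print(next(key_pool))  # keys are used first to last, then the rotation wraps around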

API Address

When using built-in service providers, you generally do not need to fill in the API address. If you need to modify it, please strictly follow the address provided in the official documentation.

If the address provided by the service provider is in the format https://xxx.xxx.com/v1/chat/completions, you only need to fill in the root address part (https://xxx.xxx.com).

Cherry Studio will automatically append the remaining path (/v1/chat/completions). If you do not fill it in as required, it may not work properly.

Note: Most service providers use the same large language model routes, so the following operations are generally unnecessary. If a provider's API path uses a different version, such as /v2/chat/completions or /v3/chat/completions, manually enter the address up to and including that version segment and end it with "/"; if the provider's request route is not the conventional /v1/chat/completions at all, enter the complete address provided by the provider and end it with "#".

That is:

  • When the API address ends with /, only "chat/completions" is concatenated.

  • When the API address ends with #, no concatenation operation is performed; only the entered address is used.
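
Put together, the rules above can be sketched as follows (an illustration of the described behavior, not the app's actual code):

def build_endpoint(address: str) -> str:
    if address.endswith("#"):
        return address[:-1]  # "#": use the entered address exactly as-is
    if address.endswith("/"):
        return address + "chat/completions"  # "/": only "chat/completions" is appended
    return address + "/v1/chat/completions"  # default: the full path is appended

print(build_endpoint("https://xxx.xxx.com"))         # https://xxx.xxx.com/v1/chat/completions
print(build_endpoint("https://xxx.xxx.com/v2/"))     # https://xxx.xxx.com/v2/chat/completions
print(build_endpoint("https://xxx.xxx.com/api/x#"))  # https://xxx.xxx.com/api/x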

Add Model

Typically, clicking the Manage button in the bottom left corner of the service provider configuration page will automatically fetch all models supported by that service provider. You can then click the + sign from the fetched list to add them to the model list.

Clicking the Manage button does not add every model in the pop-up list. You need to click the + sign to the right of a model to add it to the model list on the provider configuration page before it appears in the model selection list.

Connectivity Check

Click the check button next to the API Key input box to test if the configuration is successful.

During the model check, the last conversation model added to the model list is used by default. If the check fails, please verify if there are any incorrect or unsupported models in the model list.

After successful configuration, make sure to turn on the switch in the upper right corner; otherwise, the provider will remain disabled and its models will not appear in the model selection list.

Default Model Settings

Default Assistant Model

If an assistant does not have its own default model configured, new conversations will use the model set here by default. The models used for prompt optimization and the word-selection assistant are also configured in this section.

Topic Naming Model

After each conversation, a model is called to generate a topic name for the dialog. The model set here is the one used for naming.

Translation Model

The translation feature in the input boxes for conversations, drawing, and so on, as well as the translation interface, all use the model set here.

Quick Assistant Model

The model used for the quick assistant feature. For details, see Quick Assistant.

General Settings

On this page, you can configure the software's interface language, proxy settings, and other options.

Display Settings

On this page, you can set the software's color theme, page layout, or customize CSS for personalized settings.

Theme Selection

Here you can set the default interface color mode (light mode, dark mode, or follow system).

Topic Settings

These settings are for the layout of the chat interface.

Topic Position

Automatically Switch to Topic

When this setting is enabled, clicking on the assistant's name will automatically switch to the corresponding topic page.

Display Topic Time

When enabled, the creation time of the topic will be displayed below it.

Custom CSS

This setting allows flexible customization of the interface. For specific methods, please refer to Custom CSS in the advanced tutorial.

Hotkey Settings

This interface allows you to enable/disable and configure shortcut keys for certain functions. Please refer to the on-screen instructions for specific setup.

Data Settings

This interface covers local and cloud data backup and recovery, viewing the local data directory, clearing the cache, export settings, and third-party connections.

Data Backup

Currently, data backup supports three methods: local backup, WebDAV backup, and S3-compatible storage (object storage) backup. For specific introductions and tutorials, please refer to the following documents:

  • WebDAV Backup Tutorial

  • S3-Compatible Storage Backup

Export Settings

Export settings allow you to configure the export options displayed in the export menu, as well as set the default path for Markdown exports, display styles, and more.

Third-Party Connections

Third-party connections allow you to configure Cherry Studio's connection with third-party applications for quickly exporting conversation content to your familiar knowledge management applications. Currently supported applications include: Notion, Obsidian, SiYuan Note, Yuque, Joplin. For specific configuration tutorials, please refer to the following documents:

  • Notion Configuration Tutorial

  • Obsidian Configuration Tutorial

  • SiYuan Note Configuration Tutorial