Chat Interface

Assistants and Topics

Assistants

An Assistant is a personalized configuration for the selected model, such as preset prompts and parameters. These settings help the model better align with your expected workflow.

The System Default Assistant comes with fairly general-purpose parameters (no preset prompt). You can use it directly or find the presets you need on the Agents page.

Topics

An Assistant is the parent of its Topics: a single assistant can hold multiple topics (i.e., conversations), and all topics share the assistant's parameter settings and preset prompt.

Chat Buttons

New Topic: Creates a new topic under the current assistant.

Upload Image or Document: Image upload requires a model with image support. Uploaded documents are automatically parsed into text and provided to the model as context.

Web Search: Requires web search to be configured in settings. Search results are provided to the LLM as context. See Web Search Mode.

Knowledge Base: Enables the knowledge base. See the Knowledge Base Tutorial.

MCP Server: Enables MCP server functionality. See the MCP Usage Tutorial.

Generate Image: Hidden by default. For models that support image generation (e.g., Gemini), enable this button manually to generate images.

For technical reasons, the button must currently be enabled by hand; it will be removed once this behavior is optimized.

Select Model: Switches subsequent conversation to the specified model while preserving context.

Quick Phrases: Requires common phrases to be predefined in settings. Invoke them here for direct input; variables are supported.

Clear Messages: Deletes all content in the current topic.

Expand: Enlarges the input box for long text entry.

Clear Context: Truncates the context available to the model without deleting any content, so the model "forgets" the previous conversation.

Estimated Token Count: Shows estimated token usage: Current Context, Max Context (∞ means unlimited context), Current Message Word Count, and Estimated Tokens.

This figure is an estimate only. Actual token counts vary by model; refer to your model provider's data.
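Since the display is only an estimate, a rough client-side heuristic is enough to show how such a figure can be produced. The 4-characters-per-token ratio below is a common rule of thumb for English text, not Cherry Studio's actual method:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English.

    Real tokenizers (e.g., tiktoken) split differently per model,
    which is why the displayed count is a reference only.
    """
    return max(1, len(text) // 4)
```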

Translate: Translates the current input box content into English.

Chat Settings

Model Settings

Model settings synchronize with the Model Settings parameters in assistant settings. See Edit Assistant.

In chat settings, only the model settings apply to the current assistant; all other settings apply globally. For example, setting the message style to bubbles applies to all topics under all assistants.

Message Settings

Message Separator:

Uses a divider to separate message content from the action bar.

Use Serif Font:

Switches font style. You can also change fonts via Custom CSS.

Show Line Numbers for Code:

Displays line numbers in code blocks generated by the model.

Collapsible Code Blocks:

Automatically collapses code blocks when code snippets are long.

Wrap Lines in Code Blocks:

Enables automatic line wrapping when single-line code exceeds window width.

Auto-Collapse Reasoning Content:

Automatically collapses reasoning processes for models that support step-by-step thinking.

Message Style:

Switches chat interface to bubble style or list style.

Code Style:

Changes display style for code snippets.

Math Formula Engine:

  • KaTeX: Faster rendering with performance optimization

  • MathJax: Slower rendering with comprehensive symbol and command support

Message Font Size:

Adjusts font size in the chat interface.

Input Settings

Show Estimated Token Count:

Displays estimated token consumption for input text in the input box (reference only, not actual context tokens).

Paste Long Text as File:

Long text pasted into the input box is automatically attached as a file, reducing clutter in the input area.

Render Input Messages with Markdown:

When disabled, only model responses are rendered as Markdown, not the messages you send.

Triple-Space Translation:

After typing a message, tap the spacebar three times to translate the input content into the target language.

Target Language:

Sets target language for both translation button and triple-space translation.

Assistant Settings

In the assistant list, right-click the assistant's name and choose the corresponding setting from the context menu.

Edit Assistant

Assistant settings apply to all topics under that assistant.

Prompt Settings

Name:

Customizable assistant name for easy identification.

Prompt:

The system prompt for the assistant. Write it following the prompt examples on the Agents page.

Model Settings

Default Model:

Sets a fixed default model for the assistant. When adding the assistant from the Agents page or copying it, the initial model uses this setting. If unset, the initial model is the global default model (see Default Assistant Model).

Two default models exist: the Global Default Chat Model and the Assistant Default Model. The assistant's own setting takes priority; when unset, the Assistant Default Model equals the Global Default Chat Model.

Auto-Reset Model:

When enabled: After switching models during conversation, creating a new topic resets to assistant's default model. When disabled: New topics inherit the previous topic's model.

Example: Assistant default model = gpt-3.5-turbo. Create Topic 1 → switch to gpt-4o during conversation.

  • Enabled Auto-Reset: Topic 2 uses gpt-3.5-turbo

  • Disabled Auto-Reset: Topic 2 uses gpt-4o

Temperature:

Controls the randomness/creativity of text generation (default = 0.7):

  • Low (0-0.3): More deterministic output. Ideal for code generation, data analysis

  • Medium (0.4-0.7): Balanced creativity/coherence. Recommended for chatbots (~0.5)

  • High (0.8-1.0): High creativity/diversity. Ideal for creative writing, but reduces coherence
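Conceptually, temperature rescales the model's output logits before sampling: dividing by a small temperature sharpens the distribution toward the most likely token, while a large temperature flattens it. A minimal sketch of temperature-scaled sampling (illustrative only, not Cherry Studio's implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.7):
    """Sample a token index from raw logits after temperature scaling.

    Low temperature -> near-deterministic (argmax dominates);
    high temperature -> flatter distribution, more random picks.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```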

Top P (Nucleus Sampling):

Default = 1. Lower values produce more focused, predictable responses; higher values allow more diverse vocabulary.

Nucleus sampling restricts generation to the smallest set of tokens whose cumulative probability reaches the threshold:

  • Low (0.1-0.3): Conservative output. Ideal for code comments/tech docs

  • Medium (0.4-0.6): Balanced diversity/accuracy. General dialogue/writing

  • High (0.7-1.0): Diverse expression. Creative writing scenarios

  • Parameters work independently or combined

  • Choose values based on task type

  • Experiment to find optimal combinations

  • Ranges are illustrative—consult model documentation for specifics
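The nucleus-sampling idea behind Top P can be sketched in a few lines: sort tokens by probability, keep the smallest prefix whose cumulative probability reaches `top_p`, and renormalize. This is a conceptual illustration, not Cherry Studio's code:

```python
def nucleus_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; return renormalized (index, probability) pairs."""
    ranked = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
    kept, cum = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        cum += p
        if cum >= top_p:
            break  # the "nucleus" is complete
    total = sum(p for _, p in kept)
    return [(idx, p / total) for idx, p in kept]
```

With a low `top_p`, only the top one or two tokens survive, which is why low values read as conservative output.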

Context Window

The number of messages to retain as context. Higher values mean a longer context and higher token usage:

  • 5-10: Normal conversations

  • >10: Complex tasks requiring longer memory (e.g., multi-step content generation)

  • Note: More messages = higher token consumption
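Mechanically, a context window of N just means only the last N messages are sent with each request. A minimal sketch of this trimming (the dict shape and the always-retained system prompt are assumptions for illustration):

```python
def build_context(messages, window=10, system_prompt=None):
    """Keep only the last `window` messages as context.

    The system prompt, if present, is always prepended so the
    assistant's preset instructions survive the trimming.
    """
    context = messages[-window:] if window else list(messages)
    if system_prompt:
        context = [{"role": "system", "content": system_prompt}] + context
    return context
```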

Enable Message Length Limit (MaxToken)

Sets the maximum number of tokens per response, which directly affects the length and completeness of answers.

Example: When testing model connectivity, set MaxToken=1 to confirm response without specific content.

Most models support up to 32k tokens (some 64k or more; check the model documentation).

Stream Output

Enables continuous data stream processing instead of batch transmission. Provides real-time response generation (typing effect) in clients like CherryStudio.

  • Disabled: Full response delivered at once (like WeChat messages)

  • Enabled: Character-by-character output (generates → transmits each token immediately)

Disable for models without streaming support (e.g., initial o1-mini versions).
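The difference between the two modes can be sketched with a generator standing in for the model: streaming renders each token as it arrives, while batch mode waits for the full reply. The final text is identical either way; only the delivery differs. (Illustrative sketch, not the client's code.)

```python
def generate_tokens(reply):
    """Stand-in for a model emitting one token at a time."""
    for token in reply.split():
        yield token + " "

def stream_response(reply):
    """Streaming: render each token as soon as it arrives (typing effect)."""
    shown = ""
    for token in generate_tokens(reply):
        shown += token  # a real client would repaint the UI here
    return shown

def batch_response(reply):
    """Non-streaming: wait for the complete response, then show it once."""
    return "".join(generate_tokens(reply))
```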

Custom Parameters

Adds extra request parameters to the body (e.g., presence_penalty). Generally not needed for regular use.

Parameters like top-p, max_tokens, and stream belong to this category.

Format: parameter name, parameter type (text/number/etc.), value. See documentation: Click here

Model providers often have unique parameters—consult their documentation.

  • Custom parameters override built-in parameters when names conflict.

Example: Setting model: gpt-4o forces all conversations to use gpt-4o regardless of selection.

  • Use parameter_name: undefined to exclude parameters.
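The merge behavior described above (custom values overriding built-ins on a name clash, and `undefined` excluding a parameter) can be sketched as a simple dictionary merge. Here `None` stands in for `undefined`; the built-in defaults are assumptions for illustration:

```python
def build_request_body(model, messages, custom_params=None):
    """Merge custom parameters into the request body.

    On a name clash the custom value overrides the built-in one;
    a value of None (standing in for `undefined`) removes the
    parameter from the body entirely.
    """
    body = {"model": model, "messages": messages,
            "temperature": 0.7, "stream": True}  # assumed built-ins
    for key, value in (custom_params or {}).items():
        if value is None:
            body.pop(key, None)   # parameter_name: undefined -> excluded
        else:
            body[key] = value     # override or add
    return body
```

This is why setting `model: gpt-4o` as a custom parameter forces every conversation onto gpt-4o: the custom entry overwrites whatever model was selected in the UI.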
