Cherry Studio

This document was translated from Chinese by AI and has not yet been reviewed.

Settings

Feature Overview

Files

The Files interface displays all files related to conversations, paintings, knowledge bases, etc. You can centrally manage and view these files on this page.

Default Model Settings

Default Assistant Model

If an assistant has no default model configured, new conversations use the model set here. The models used for prompt optimization and the word-selection assistant are also configured in this section.

Topic Naming Model

After each conversation, a model is called to generate a topic name for the dialog. The model set here is the one used for naming.

Translation Model

The translation feature in input boxes for conversations, painting, etc., and the translation model in the translation interface all use the model set here.

Quick Assistant Model

The model used for the quick assistant feature. For details, see Quick Assistant.

Mini Programs

On the Mini Programs page, you can use web versions of AI-related programs from major service providers within the client. Currently, custom addition and removal are not supported.

General Settings

On this page, you can configure the software's interface language, proxy settings, and other options.

Installation Tutorial

Drawing

The drawing feature currently supports painting models from DMXAPI, TokenFlux, AiHubMix, and SiliconFlow. You can register an account at SiliconFlow and add it as a provider to use this feature.

For questions about parameters, hover your mouse over the ? icon in corresponding areas to view descriptions.

More providers will be added in the future. Stay tuned.

Knowledge Base

For knowledge base usage, please refer to the knowledge base tutorial in the advanced tutorials.

Quick Assistant

Quick Assistant is a convenient tool provided by Cherry Studio that allows you to quickly access AI functions in any application, enabling instant operations like asking questions, translation, summarization, and explanations.

Enable Quick Assistant

  1. Open Settings: Navigate to Settings -> Shortcuts -> Quick Assistant.

  2. Enable Switch: Find and toggle on the Quick Assistant button.

  3. Set Shortcut Key (Optional):

    • Default shortcut for Windows: Ctrl + E

    • Default shortcut for macOS: ⌘ + E

    • Customize your shortcut here to avoid conflicts or match your usage habits.

Using Quick Assistant

  1. Activate: Press your configured shortcut key (or default shortcut) in any application to open Quick Assistant.

  2. Interact: Within the Quick Assistant window, you can directly perform:

    • Quick Questions: Ask any question to the AI.

    • Text Translation: Input text to be translated.

    • Content Summarization: Input long text for summarization.

    • Explanation: Input concepts or terms requiring explanation.

  3. Close: Press ESC or click anywhere outside the Quick Assistant window.

Quick Assistant uses the Quick Assistant model configured in Default Model Settings.

Tips & Tricks

  • Shortcut Conflicts: Modify shortcuts if defaults conflict with other applications.

  • Explore More Functions: Beyond the documented features, Quick Assistant may support operations like code generation and style transfer; keep exploring as you use it.

  • Feedback & Improvements: Report issues or suggestions to the Cherry Studio team through the official feedback channels.

macOS

  1. First, visit the official download page to download the Mac version, or click the direct link below

    Please download the chip-specific version matching your Mac

If unsure which chip version to use for your Mac:

  • Click the  in the top-left menu bar

  • Select "About This Mac" in the expanded menu

  • Check the processor information in the pop-up window

  • If using an Intel chip → download the Intel version installer

  • If using an Apple M-series chip → download the Apple chip installer

  2. After downloading, open the downloaded file

  3. Drag the icon into the Applications folder to install

Find the Cherry Studio icon in Launchpad and click it. If the Cherry Studio main interface opens, the installation is successful.
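If you prefer the terminal, the chip check described above can also be done with a single command (`uname` ships with macOS; this is simply an alternative to the "About This Mac" steps):

```shell
# Prints the Mac's CPU architecture:
#   arm64  -> Apple Silicon (M-series): download the Apple chip installer
#   x86_64 -> Intel: download the Intel version installer
uname -m
```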

Data Settings

This interface provides local and cloud data backup and recovery, viewing the local data directory, clearing the cache, export settings, and third-party connections.

Data Backup

Currently, data backup supports three methods: local backup, WebDAV backup, and S3-compatible storage (object storage) backup. For specific introductions and tutorials, please refer to the WebDAV Backup Tutorial and S3-Compatible Storage Backup documents.

Export Settings

Export settings allow you to configure the export options displayed in the export menu, as well as set the default path for Markdown exports, display styles, and more.

Third-Party Connections

Third-party connections allow you to configure Cherry Studio's connection with third-party applications for quickly exporting conversation content to your familiar knowledge management applications. Currently supported applications include: Notion, Obsidian, SiYuan Note, Yuque, Joplin. For specific configuration tutorials, please refer to the Notion, Obsidian, and SiYuan Note configuration tutorials.

Agents

The Agents page is an assistant marketplace where you can select or search for desired model presets. Click on a card to add the assistant to your conversation page's assistant list.

You can also edit and create your own assistants on this page.

  • Click My, then Create Agent to start building your own assistant.

The button in the upper right corner of the prompt input box optimizes prompts using AI. Clicking it will overwrite the original text. This feature uses the model configured under Default Model Settings.

Windows

Visit the Official Website

Note: Cherry Studio does not support installation on Windows 7.

Click to download and select the appropriate version

Wait for the Download to Complete

If the browser warns that the file is not trusted, choose Keep → Trust Cherry-Studio

Open the File

Install


Client Download

Current latest official version: v1.4.8

Direct Download

Windows Version

Note: Windows 7 is not supported for installing Cherry Studio.

Installer (Setup)

x64 Version

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】

ARM64 Version

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】

Portable Version

x64 Version

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】

ARM64 Version

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】


macOS Version

Intel Chip Version (x64)

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】

Apple Silicon Version (ARM64, M-series chips)

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】


Linux Version

x86_64 Version

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】

ARM64 Version

Main Line:

【Cherry Studio Official Website】 【GitHub】

Alternative Lines:

【Line 1】 【Line 2】 【Line 3】


Cloud Drive Download

Quark

Project Planning

To-Do List



Model Service Configuration

OneAPI and its Fork Projects


Provider Settings

This page only introduces the interface features. For configuration tutorials, please refer to the Provider Configuration tutorial in the Basic Tutorials.

  • When using built-in providers, only fill in the corresponding key.

  • Different providers may have different names for the key, such as Secret Key, Key, API Key, Token, etc., all referring to the same thing.

API Key

In Cherry Studio, a single provider supports multiple keys used in rotation; keys are polled sequentially from first to last.

  • Add multiple keys separated by half-width (English) commas. For example:

sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4

Half-width (English) commas must be used.
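The sequential rotation behaves like the following sketch (placeholder keys; an illustration of the described behavior, not Cherry Studio's source code):

```python
from itertools import cycle

# Placeholder keys, exactly as they would be pasted into the key field.
raw = "sk-xxxx1,sk-xxxx2,sk-xxxx3,sk-xxxx4"
keys = [k.strip() for k in raw.split(",") if k.strip()]

rotation = cycle(keys)  # front to back, then wrap around
first_five = [next(rotation) for _ in range(5)]
print(first_five)       # the 5th request wraps back to the first key
```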

API Address

When using built-in providers, it's generally not necessary to fill in the API address. If modification is needed, strictly follow the address provided in the official documentation.

If the provider gives an address in the format https://xxx.xxx.com/v1/chat/completions, only fill in the root address part (https://xxx.xxx.com).

Cherry Studio will automatically append the remaining path (/v1/chat/completions). If the address is not filled in this way, requests may fail.

Note: Most providers use a unified route for large language model requests, so the adjustments below are usually unnecessary. If the provider's API path uses another version, such as v2 or v3/chat/completions, manually enter the corresponding version and end the address with /. If the provider's request route is not the conventional /v1/chat/completions, use the full address provided by the provider and end it with #.

That is:

  • When the API address ends with /, only "chat/completions" will be appended

  • When the API address ends with #, no appending will be performed, only the filled-in address will be used
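The two rules above can be summarized in a small sketch (illustrative only; `resolve_chat_url` is this sketch's own name, not a Cherry Studio API):

```python
def resolve_chat_url(api_address: str) -> str:
    """Illustrative sketch of the address rules described above."""
    if api_address.endswith("#"):
        # '#' -> use the filled-in address verbatim, nothing appended
        return api_address[:-1]
    if api_address.endswith("/"):
        # '/' -> only 'chat/completions' is appended
        return api_address + "chat/completions"
    # plain root address -> the default '/v1/chat/completions' is appended
    return api_address + "/v1/chat/completions"

print(resolve_chat_url("https://xxx.xxx.com"))
```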

Adding Models

Generally, clicking the Manage button at the bottom left of the provider configuration page will automatically fetch all supported models. Click + in the fetched list to add models to the model list.

Models in the popup list won't be automatically added. You must click + next to the model to add it to the provider configuration's model list before it appears in the model selection list.

Connectivity Check

Click the check button next to the API Key input box to test whether the configuration is successful.

By default, the connectivity check uses the last conversation model in the added model list. If the check fails, please check if there are any incorrect or unsupported models in the model list.

After successful configuration, be sure to turn on the switch in the upper right corner. Otherwise the provider will remain disabled and you won't find corresponding models in the model list.

Translation

Cherry Studio's translation feature provides you with fast and accurate text translation services, supporting mutual translation between multiple languages.

Interface Overview

The translation interface mainly consists of the following components:

  1. Source Language Selection Area:

    • Any Language: Cherry Studio will automatically identify the source language and perform translation.

  2. Target Language Selection Area:

    • Dropdown Menu: Select the language you wish to translate the text into.

  3. Settings Button:

    • Clicking it will jump to the translation model settings.

  4. Scroll Synchronization:

    • Toggle to enable scroll sync (scrolling in either side will synchronize the other).

  5. Text Input Box (Left):

    • Input or paste the text you need to translate.

  6. Translation Result Box (Right):

    • Displays the translated text.

    • Copy Button: Click to copy the translation result to clipboard.

  7. Translate Button:

    • Click this button to start translation.

  8. Translation History (Top Left):

    • Click to view translation history records.

Usage Steps

  1. Select Target Language:

    • Choose your desired translation language in the Target Language Selection Area.

  2. Input or Paste Text:

    • Enter or paste the text to be translated in the left text input box.

  3. Start Translation:

    • Click the Translate button.

  4. View and Copy Results:

    • Translation results will appear in the right result box.

    • Click the copy button to save the result to clipboard.

Frequently Asked Questions (FAQ)

  • Q: What to do about inaccurate translations?

    • A: While AI translation is powerful, it's not perfect. For professional fields or complex contexts, manual proofreading is recommended. You may also try switching different models.

  • Q: Which languages are supported?

    • A: Cherry Studio translation supports multiple major languages. Refer to Cherry Studio's official website or in-app instructions for the specific supported languages list.

  • Q: Can entire files be translated?

    • A: The current interface primarily handles text translation. For document translation, please use Cherry Studio's conversation page to add files for translation.

  • Q: How to handle slow translation speeds?

    • A: Translation speed may be affected by network connection, text length, or server load. Ensure stable network connectivity and be patient.

MCP Usage Tutorial

Personalization Settings

Contributing Documentation

Email [email protected] to obtain editing privileges

Subject: Request for Cherry Studio Docs Editing Privileges

Body: State your reason for applying

Data Settings

Free Internet Mode

Note: Gemini image generation must be used in the chat interface because Gemini performs multi-modal interactive image generation and does not support parameter adjustment.

Project Introduction

Follow our social accounts: Twitter (X), Xiaohongshu, Weibo, Bilibili, Douyin

Join our community: QQ Group (575014769), Telegram, Discord, WeChat Group (Click to view)


Cherry Studio is an all-in-one AI assistant platform integrating multi-model conversations, knowledge base management, AI painting, translation, and more. Cherry Studio's highly customizable design, powerful extensibility, and user-friendly experience make it an ideal choice for both professional users and AI enthusiasts. Whether you are a beginner or a developer, you can find suitable AI features in Cherry Studio to improve work efficiency and creativity.


Core Features and Highlights

1. Basic Conversation Features

  • Multi-model Responses: Supports generating responses to the same question simultaneously from multiple models, allowing users to compare the performance of different models. For details, see Chat Interface.

  • Automatic Grouping: Conversation records for each assistant are automatically grouped for easy access to historical conversations.

  • Conversation Export: Supports exporting full or partial conversations to various formats (e.g., Markdown, Word), convenient for storage and sharing.

  • Highly Customizable Parameters: In addition to basic parameter adjustments, it also supports users filling in custom parameters to meet personalized needs.

  • Assistant Marketplace: Built-in over a thousand industry-specific assistants covering translation, programming, writing, and more, also supporting user-defined assistants.

  • Multi-format Rendering: Supports Markdown rendering, formula rendering, real-time HTML preview, and other features to enhance content display.

2. Integrated Unique Features

  • AI Painting: Provides a dedicated painting panel, allowing users to generate high-quality images through natural language descriptions.

  • AI Mini-Apps: Integrates various free web-based AI tools, allowing direct use without switching browsers.

  • Translation Function: Supports various translation scenarios, including dedicated translation panels, conversation translation, and prompt translation.

  • File Management: Files in conversations, paintings, and knowledge bases are uniformly categorized and managed to avoid tedious searching.

  • Global Search: Supports quickly locating historical records and knowledge base content, improving work efficiency.

3. Unified Multi-Provider Management Mechanism

  • Provider Model Aggregation: Supports unified invocation of models from mainstream providers such as OpenAI, Gemini, Anthropic, Azure.

  • Automatic Model Retrieval: One-click retrieval of the complete model list, no manual configuration required.

  • Multi-key Rotation: Supports rotating multiple API keys to avoid rate limit issues.

  • Accurate Avatar Matching: Automatically matches exclusive avatars for each model to improve recognition.

  • Custom Providers: Supports the integration of third-party providers that comply with OpenAI, Gemini, Anthropic, and other specifications, offering strong compatibility.

4. Highly Customizable Interface and Layout

  • Custom CSS: Supports global style customization to create a unique interface style.

  • Custom Conversation Layout: Supports list or bubble style layouts, and allows custom message styles (e.g., code snippet styles).

  • Custom Avatars: Supports setting personalized avatars for software and assistants.

  • Custom Sidebar Menu: Users can hide or sort sidebar features according to their needs to optimize the user experience.

5. Local Knowledge Base System

  • Multi-format Support: Supports importing various file formats such as PDF, DOCX, PPTX, XLSX, TXT, MD.

  • Multiple Data Source Support: Supports local files, URLs, sitemaps, and even manually entered content as knowledge base sources.

  • Knowledge Base Export: Supports exporting processed knowledge bases for sharing with others.

  • Search Inspection Support: After importing the knowledge base, users can perform real-time retrieval tests to check processing results and segmentation effects.

6. Highlighted Focus Features

  • Quick Q&A: Invoke a quick assistant in any scenario (e.g., WeChat, browser) to quickly get answers.

  • Quick Translate: Supports quick translation of words or text in other scenarios.

  • Content Summarization: Quickly summarizes long text content to improve information extraction efficiency.

  • Explanation: Explains unclear questions with one click, no complex prompts needed.

7. Data Security

  • Multiple Backup Solutions: Supports local backup, WebDAV backup, and timed backup to ensure data security.

  • Data Security: Supports full local usage scenarios, combined with local large models, to avoid data leakage risks.


Project Advantages

  1. Beginner-Friendly: Cherry Studio is committed to lowering technical barriers, allowing beginners to quickly get started and focus on work, study, or creation.

  2. Comprehensive Documentation: Provides detailed user documentation and a common issues handbook to help users quickly resolve problems.

  3. Continuous Iteration: The project team actively responds to user feedback and continuously optimizes features to ensure healthy project development.

  4. Open Source and Extensibility: Supports users in customizing and extending through open-source code to meet personalized needs.


Applicable Scenarios

  • Knowledge Management and Query: Quickly build and query exclusive knowledge bases through the local knowledge base feature, suitable for research, education, and other fields.

  • Multi-model Conversation and Creation: Supports simultaneous multi-model conversations, helping users quickly obtain information or generate content.

  • Translation and Office Automation: Built-in translation assistant and file processing features, suitable for users needing cross-language communication or document processing.

  • AI Painting and Design: Generates images through natural language descriptions, meeting creative design needs.

Star History

Follow Our Social Accounts

Alibaba Cloud Bailian

  1. Log in to Alibaba Cloud Bailian. If you don't have an Alibaba Cloud account, you'll need to register.

  2. Click the Create My API-KEY button in the upper-right corner.

    Create API Key in Alibaba Cloud Bailian
  3. In the popup window, select the default workspace (or customize it if desired). You can optionally add a description.

    API Key Creation Popup in Alibaba Cloud Bailian
  4. Click the Confirm button in the lower-right corner.

  5. You should now see a new entry in the list. Click the View button on the right.

    View API Key in Alibaba Cloud Bailian
  6. Click the Copy button.

    Copy API Key in Alibaba Cloud Bailian
  7. Go to Cherry Studio, navigate to Settings → Model Services → Alibaba Cloud Bailian, and paste the copied API key into the API Key field.

    Paste API Key in Alibaba Cloud Bailian
  8. You can adjust related settings as described in Model Services, then start using the service.

If Alibaba Cloud Bailian models don't appear in the model list, ensure you've added models according to the instructions in Model Services and enabled this provider.

OpenAI

Obtaining an API Key

  • On the official API Key page, click + Create new secret key

  • Copy the generated key, then open CherryStudio's Vendor Settings

  • Find the OpenAI vendor and enter the key you just obtained

  • Click Manage or Add at the bottom, add supported models, and toggle the vendor switch at the top right to start using.

  • OpenAI services cannot be directly accessed in mainland China (except Taiwan); users must resolve proxy issues themselves.

  • Account balance is required.

S3 Compatible Storage Backup

Cherry Studio data backup supports backup via S3 compatible storage (object storage). Common S3 compatible storage services include: AWS S3, Cloudflare R2, Alibaba Cloud OSS, Tencent Cloud COS, and MinIO.

Based on S3-compatible storage, multi-device data synchronization can be achieved via: Computer A → (Backup) → S3 Storage → (Restore) → Computer B.

Configure S3 Compatible Storage

  1. Create an object storage bucket and record the bucket name. It is strongly recommended to set the bucket to private read/write to prevent backup data from leaking!

  2. Refer to the documentation, go to the cloud service console to obtain information such as Access Key ID, Secret Access Key, Endpoint, Bucket, Region for S3 compatible storage.

    • Endpoint: The access address for S3 compatible storage, usually in the form of https://<bucket-name>.<region>.amazonaws.com or https://<ACCOUNT_ID>.r2.cloudflarestorage.com.

    • Region: The region where the bucket is located, such as us-west-1, ap-southeast-1, etc. For Cloudflare R2, please fill in auto.

    • Bucket: The bucket name.

    • Access Key ID and Secret Access Key: Credentials used for authentication.

    • Root Path: Optional, specifies the root path when backing up to the bucket, default is empty.

    • Related Documentation

      • AWS S3: Obtain Access Key ID and Secret Access Key

      • Cloudflare R2: Obtain Access Key ID and Secret Access Key

      • Alibaba Cloud OSS: Obtain Access Key ID and Access Key Secret

      • Tencent Cloud COS: Obtain SecretId and SecretKey

  3. Fill in the above information in the S3 backup settings, click the backup button to perform backup, and click the manage button to view and manage the list of backup files.
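Before pasting the values into the S3 backup settings, a quick sanity check like the following sketch can catch missing fields (the field names and `check_s3_settings` are this example's own; Cherry Studio's UI labels may differ, and the credential values below are placeholders):

```python
def check_s3_settings(cfg: dict) -> list:
    """Return a list of problems found in the S3 settings (empty = basics look complete)."""
    problems = []
    if not cfg.get("endpoint", "").startswith("https://"):
        problems.append("Endpoint should be an https:// URL")
    if not cfg.get("region"):
        problems.append("Region is required (use 'auto' for Cloudflare R2)")
    for field in ("bucket", "access_key_id", "secret_access_key"):
        if not cfg.get(field):
            problems.append("missing " + field)
    return problems

cfg = {
    "endpoint": "https://example-account.r2.cloudflarestorage.com",
    "region": "auto",                  # Cloudflare R2 uses 'auto'
    "bucket": "cherry-backup",
    "access_key_id": "AKIA...",        # placeholder
    "secret_access_key": "...",        # placeholder
}
print(check_s3_settings(cfg))  # [] means nothing obviously missing
```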

Knowledge Base Data

Data Storage Instructions

Data added to the Cherry Studio knowledge base is entirely stored locally. During the addition process, a copy of the document will be placed in the Cherry Studio data storage directory.

Knowledge Base Processing Flowchart

Vector Database: https://turso.tech/libsql

After documents are added to the Cherry Studio knowledge base, they are segmented into multiple fragments. These fragments are then processed by the embedding model.

When using large models for Q&A, relevant text fragments matching the query will be retrieved and processed together by the large language model.

If you have data privacy requirements, it is recommended to use local embedding databases and local large language models.
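The flow above (segment → embed → retrieve) can be sketched with toy bag-of-words vectors standing in for a real embedding model (illustrative only; all function names here are this sketch's own):

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list:
    # Fixed-size splitting; real splitters are smarter about sentence boundaries.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(fragment: str) -> Counter:
    # Toy bag-of-words vector standing in for a real embedding model.
    return Counter(fragment.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, fragments: list, top_k: int = 2) -> list:
    # Rank stored fragments by similarity to the query, as in the Q&A step above.
    q = embed(query)
    return sorted(fragments, key=lambda f: cosine(q, embed(f)), reverse=True)[:top_k]

fragments = ["cats like fish", "backups go to S3", "dogs bark loudly"]
print(retrieve("what do cats eat", fragments, top_k=1))
```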

Google Gemini

Obtaining API Key

  • Before obtaining the Gemini API key, you need a Google Cloud project (skip this step if you already have one).

  • Go to the Google Cloud console to create a project, fill in the project name, and click "Create Project".

  • On the official API key page, click Create API Key.

  • Copy the generated key, then open CherryStudio's provider settings.

  • Find the Gemini service provider and enter the key you just obtained.

  • Click "Manage" or "Add" at the bottom, add the supported models, then toggle the service provider switch at the top right to start using.

  • Google Gemini service is not directly accessible in China except for Taiwan, requiring users to resolve proxy issues independently.

Vertex AI

Claude models are not supported yet.

Tutorial Overview

1. Obtaining API Key

  • Before obtaining the Gemini API Key, you need to have a Google Cloud project (if you already have one, this process can be skipped)

  • Go to the Google Cloud console to create a project, fill in the project name, and click Create Project

  • Access the Vertex AI console

  • Enable the Vertex AI API in the created project

2. Setting API Access Permissions

  • Open the permissions page and create a service account

  • On the service account management page, find the service account you just created, click Keys and create a new JSON key

  • After successful creation, the key file will be automatically saved to your computer in JSON format. Please store it securely

3. Configuring Vertex AI in Cherry Studio

  • Select Vertex AI as the service provider

  • Fill in the corresponding fields from the JSON file

Once everything is filled in, you can start using it!

Clear CSS Settings

Use this method to clear CSS settings when incorrect CSS has been applied, or when you cannot access the settings interface after applying CSS.

  • Open the console by clicking on the CherryStudio window and pressing Ctrl+Shift+I (MacOS: command+option+I).

  • In the opened console window, click Console

  • Then manually enter document.getElementById('user-defined-custom-css').remove(). Copying and pasting will most likely not work.

  • After entering, press Enter to confirm and clear the CSS settings. Then, go back to CherryStudio's display settings and remove the problematic CSS code.

GitHub Copilot

To use GitHub Copilot, you must first have a GitHub account and subscribe to the GitHub Copilot service. The free subscription tier is acceptable, but note that it does not support the latest Claude 3.7 model. For details, refer to the official GitHub Copilot documentation.

Obtain Device Code

Click "Log in with GitHub" to generate and copy your Device Code.

Enter Device Code in Browser and Authorize

After obtaining your Device Code, click the link to open your browser. Log in to your GitHub account, enter the Device Code, and grant authorization.

After successful authorization, return to Cherry Studio and click "Connect GitHub". Your GitHub username and avatar will appear upon successful connection.

Click "Manage" to Get Model List

Click the "Manage" button below, which will automatically fetch the currently supported models list online.

Common Issues

Failed to Obtain Device Code, Please Retry

First, ensure your network connection is stable to prevent Device Code retrieval failures. The current implementation uses Axios for network requests, and Axios does not support SOCKS proxies; use a system proxy or an HTTP proxy, or set a global proxy instead of configuring a proxy inside CherryStudio.

Knowledge Base Document Pre-processing

Knowledge base document preprocessing requires upgrading Cherry Studio to v1.4.8 or higher.

Configure OCR Provider

Click 'Get API KEY' to open the application page in your browser. Click 'Apply Now', fill out the form to obtain the API KEY, then paste it into the API KEY field.

Configure Knowledge Base Document Preprocessing

Apply the configuration shown above in the created knowledge base to complete the document preprocessing setup.

Upload Documents

You can check knowledge base results by searching in the top right corner.

Usage in Conversation

Knowledge Base Usage Tips: When using more capable models, you can change the knowledge base search mode to intent recognition. Intent recognition can describe your questions more accurately and broadly.

Enable Knowledge Base Intent Recognition

Silicon Cloud

This document was translated from Chinese by AI and has not yet been reviewed.

SiliconFlow

1. Configuring SiliconCloud Model Service

1.1 Click Settings in the bottom-left corner and select 【SiliconFlow】 in Model Services

1.2 Click the link to get a SiliconCloud API key

  1. Log in to SiliconCloud (if you have not registered, the first login will automatically create an account)

  2. Visit the API Keys page to create a new key or copy an existing one

1.3 Click Manage to add models

2. Using the Model Service

  1. Click the "Chat" button in the left menu bar

  2. Enter text in the input box to start chatting

  3. You can switch models by selecting the model name in the top menu

MCP Environment Installation

This document was translated from Chinese by AI and has not yet been reviewed.

MCP Environment Installation

MCP (Model Context Protocol) is an open-source protocol designed to provide context to large language models (LLMs) in a standardized way. For more details about MCP, see the official MCP documentation.

Using MCP in Cherry Studio

The following demonstrates how to use MCP in Cherry Studio, using the fetch feature as an example. Detailed information can be found in the documentation.

Prerequisites: Installing uv and bun

Cherry Studio currently uses only its built-in uv and bun, and will not reuse uv or bun already installed on your system.

In Settings > MCP Server, click the Install button to download and install them automatically. Downloads come directly from GitHub, so they may be slow and can fail. Verify that installation succeeded by checking for the files in the directories listed below.

Executable Installation Directories:

Windows: C:\Users\username\.cherrystudio\bin

macOS/Linux: ~/.cherrystudio/bin

If unable to install normally:

You can create symlinks from the system's corresponding commands to these locations. If the directory doesn't exist, create it manually. Alternatively, manually download executables and place them in this directory:

Bun: https://github.com/oven-sh/bun/releases
UV: https://github.com/astral-sh/uv/releases
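The symlink approach described above can be scripted. This is a sketch for macOS/Linux that assumes uv and bun are already installed somewhere on your PATH; it links them into the directory Cherry Studio expects:

```shell
# Create the directory Cherry Studio expects (macOS/Linux path from above).
mkdir -p "$HOME/.cherrystudio/bin"

# Link system-wide installs into place, if they exist on PATH.
for tool in uv bun; do
  src="$(command -v "$tool" || true)"
  if [ -n "$src" ]; then
    ln -sf "$src" "$HOME/.cherrystudio/bin/$tool"
  fi
done

# Confirm what ended up in the directory.
ls -l "$HOME/.cherrystudio/bin"
```

On Windows, create the equivalent links (or copy the .exe files) into C:\Users\username\.cherrystudio\bin instead.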

Font Recommendations

This document was translated from Chinese by AI and has not yet been reviewed.

Font Recommendations

Common Issues

This document was translated from Chinese by AI and has not yet been reviewed.

Frequently Asked Questions

1. mcp-server-time

Solution

Enter the following in the "Parameters" field, one item per line:

mcp-server-time
--local-timezone
<your standard timezone, e.g., Asia/Shanghai>
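For orientation, the same server expressed in the common MCP JSON configuration shape looks roughly like this (a sketch: the uvx command mirrors the fetch example elsewhere in this guide and may differ from your setup):

```json
{
  "mcpServers": {
    "mcp-server-time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone", "Asia/Shanghai"]
    }
  }
}
```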

Configure and Use MCP

This document was translated from Chinese by AI and has not yet been reviewed.

Configuring and Using MCP

  1. Open Cherry Studio settings.

  2. Find the MCP Server option.

  3. Click Add Server.

  4. Fill in the parameters for the MCP Server. This may include:

    • Name: Custom name, e.g., fetch-server

    • Type: Select STDIO

    • Command: Enter uvx

    • Arguments: Enter mcp-server-fetch

    • (Additional parameters may be required depending on the specific Server)

  5. Click Save.

After completing the above configuration, Cherry Studio will automatically download the required MCP server (mcp-server-fetch). Once the download finishes, you can start using it. Note: if mcp-server-fetch fails to configure, try restarting your computer.
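The settings in the steps above map onto the standard MCP JSON configuration shape roughly as follows (a sketch for orientation; Cherry Studio's own import format may differ):

```json
{
  "mcpServers": {
    "fetch-server": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```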

Enabling MCP Service in the Chatbox

  • After successfully adding the MCP server in the MCP Server settings, enable it in the chat input toolbar before sending your message.

Demonstration of Usage Effects

As shown above, after integrating MCP's fetch functionality, Cherry Studio can better understand user query intentions. It retrieves relevant information from the web to provide more accurate and comprehensive responses.

Business Cooperation

This document was translated from Chinese by AI and has not yet been reviewed.

Business Cooperation

Contact Person: Mr. Wang 📮: [email protected] 📱: 18954281942 (Not a customer service hotline)

For usage inquiries: • Join the user community linked at the bottom of the official website homepage • Email [email protected] • Or submit an issue: https://github.com/CherryHQ/cherry-studio/issues

For additional guidance, join our Knowledge Planet community.

Commercial license details: https://docs.cherry-ai.com/contact-us/questions/cherrystudio-xu-ke-xie-yi


Monaspace

English Font Commercial Use

GitHub has launched an open-source font family called Monaspace, featuring five styles: Neon (modern style), Argon (humanist style), Xenon (serif style), Radon (handwritten style), and Krypton (mechanical style).

MiSans Global

Multilingual Commercial Use

MiSans Global is a global language font customization project led by Xiaomi, created in collaboration with Monotype and Hanyi.

This comprehensive font family covers over 20 writing systems and supports more than 600 languages.


Built-in MCP Configuration

This document was translated from Chinese by AI and has not yet been reviewed.

Built-in MCP Configurations

@cherry/mcp-auto-install

Automatically install MCP service (Beta)

@cherry/memory

A persistent memory implementation based on a local knowledge graph. This enables models to remember user-related information across different conversations.

MEMORY_FILE_PATH=/path/to/your/file.json
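As a sketch, the environment variable above sits in the server's env block when written as standard MCP JSON configuration (the surrounding shape is an assumption; Cherry Studio may expose it as a simple environment-variable field instead):

```json
{
  "mcpServers": {
    "@cherry/memory": {
      "env": {
        "MEMORY_FILE_PATH": "/path/to/your/file.json"
      }
    }
  }
}
```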

@cherry/sequentialthinking

An MCP server implementation providing tools for dynamic and reflective problem-solving through structured thought processes.

@cherry/brave-search

An MCP server implementation integrated with Brave Search API, offering dual functionality for both web and local searches.

BRAVE_API_KEY=YOUR_API_KEY

@cherry/fetch

MCP server for retrieving web content from URLs.

@cherry/filesystem

Node.js server implementing the Model Context Protocol (MCP) for file system operations.

Feedback & Suggestions

This document was translated from Chinese by AI and has not yet been reviewed.

Feedback & Suggestions

Telegram Discussion Group

Discussion group members share their usage experiences to help you solve problems.

Join the Telegram discussion group for assistance: https://t.me/CherryStudioAI

QQ Group

QQ group members can mutually assist each other and share download links.

QQ Group (1025067911)

GitHub Issues

Suitable for recording issues so developers don't forget them, or for participating in discussions there.

GitHub Issues: https://github.com/CherryHQ/cherry-studio/issues/new/choose

Email

If other feedback channels aren't accessible, contact the developers for help.

Email the developers: [email protected]

Contribute Code

This document was translated from Chinese by AI and has not yet been reviewed.

Contribute Code

We welcome contributions to Cherry Studio! You can contribute in the following ways:

  1. Contribute code: Develop new features or optimize existing code.

  2. Fix bugs: Submit bug fixes you discover.

  3. Maintain issues: Help manage GitHub issues.

  4. Product design: Participate in design discussions.

  5. Write documentation: Improve user manuals and guides.

  6. Community engagement: Join discussions and assist users.

  7. Promote usage: Spread the word about Cherry Studio.

How to Participate

Email [email protected]

Email subject: Application to Become a Developer Email content: Reason for Application

Common Model Reference Information

This document was translated from Chinese by AI and has not yet been reviewed.

Common Model Reference Information

  • The following information is for reference only. Please contact us to correct any errors. Note that model specifications such as context window size may vary across different service providers.

  • When inputting data in the client, "k" should be converted to actual numerical values (theoretically 1k = 1024 tokens; 1m = 1024k tokens). For example, 8k equals 8×1024 = 8192 tokens. To prevent errors during practical use, we recommend multiplying by 1000 instead (e.g., 8k ≈ 8×1000 = 8000, 1m ≈ 1×1000000 = 1000000).

  • Models marked with "-" under "Max Output" indicate that no explicit maximum output information was found in official documentation.

| Model Name | Max Input | Max Output | Function Calling | Capabilities | Provider | Description |
| --- | --- | --- | --- | --- | --- | --- |
| 360gpt-pro | 8k | - | Not Supported | Dialogue | 360AI_360gpt | 360's flagship trillion-parameter large model, widely applicable to complex tasks across various domains. |
| glm-4v-flash | 2k | 1k | Not Supported | Dialogue, Visual Understanding | Zhipu_glm | Free model with robust image comprehension capabilities. |

Privacy Policy

This document was translated from Chinese by AI and has not yet been reviewed.

Privacy Policy

Welcome to Cherry Studio (hereinafter referred to as "this software" or "we"). We highly value your privacy protection, and this Privacy Policy explains how we handle and protect your personal information and data. Please read and understand this agreement carefully before using this software:

1. Scope of Information We Collect

To optimize user experience and improve software quality, we collect only the following anonymous, non-personal information: • Software version information; • Feature activity and usage frequency; • Anonymous crash and error log information.

The above information is completely anonymous, does not involve any personally identifiable data, and cannot be associated with your personal information.

2. Information We Do Not Collect

To maximize the protection of your privacy, we explicitly commit: • Will not collect, store, transmit, or process the model service API Key information you input into this software; • Will not collect, store, transmit, or process any conversation data generated during your use of this software, including but not limited to chat content, instruction information, knowledge base information, vector data, and other custom content; • Will not collect, store, transmit, or process any personally identifiable sensitive information.

3. Data Interaction Description

This software uses the API Key of third-party model service providers that you apply for and configure yourself to complete model invocation and conversation functions. The model services you use (such as large language models, API interfaces, etc.) are provided and fully managed by the third-party provider you choose, and Cherry Studio only serves as a local tool providing interface invocation functionality with third-party model services.

Therefore: • All conversation data generated between you and the model service is unrelated to Cherry Studio. We neither participate in data storage nor perform any form of data transmission or relay; • You need to independently review and accept the privacy policies and relevant regulations of the corresponding third-party model service providers. These services' privacy policies can be viewed on each provider's official website.

4. Third-Party Model Service Provider Privacy Policy Statement

You shall bear any privacy risks that may arise from using third-party model service providers. For specific privacy policies, data security measures, and related responsibilities, please refer to the relevant content on the official websites of your chosen model service providers. We assume no responsibility for these matters.

5. Agreement Updates and Modifications

This agreement may be appropriately adjusted with software version updates. Please check it periodically. When substantive changes occur to the agreement, we will notify you through appropriate means.

6. Contact Us

If you have any questions regarding this agreement or Cherry Studio's privacy protection measures, please feel free to contact us at any time.

Thank you for choosing and trusting Cherry Studio. We will continue to provide you with a secure and reliable product experience.

Hotkey Settings

This document was translated from Chinese by AI and has not yet been reviewed.

Shortcut Keys Settings

This interface allows you to enable/disable and configure shortcut keys for certain functions. Please refer to the on-screen instructions for specific setup.


Infini-AI Cloud

This document was translated from Chinese by AI and has not yet been reviewed.

Infini-AI Cloud

Have you ever saved 26 insightful articles in WeChat that you never opened again, stored a dozen scattered files in a "Study Materials" folder on your computer, or tried to recall a theory you read six months ago only to remember fragmented keywords? When daily information intake exceeds the brain's processing limit, 90% of valuable knowledge is forgotten within 72 hours. Now, by combining the Infini-AI large model service platform API with Cherry Studio to build a personal knowledge base, you can turn dust-collecting WeChat articles and fragmented course content into structured knowledge for precise retrieval.

I. Building a Personal Knowledge Base

1. Infini-AI API Service: The Stable "Thinking Hub" of Knowledge Bases

Serving as the "thinking hub" of knowledge bases, the Infini-AI large model service platform offers robust API services including full-capacity DeepSeek R1 and other model versions. Currently free to use without barriers after registration. Supports mainstream embedding models (bge, jina) for knowledge base construction. The platform continuously updates with the latest, most powerful open-source models, covering multiple modalities like images, videos, and audio.

2. Cherry Studio: Zero-Code Knowledge Base Setup

Compared to RAG knowledge base development requiring 1-2 months deployment time, Cherry Studio offers a significant advantage: zero-code operation. Instantly import multiple formats like Markdown/PDF/web pages – parsing 40MB files in 1 minute. Easily add local folders, saved WeChat articles, and course notes.

II. 3 Steps to Build Your Personal Knowledge Manager

Step 1: Basic Preparation

  1. Download the suitable version from Cherry Studio official site (https://cherry-ai.com/)

  2. Register account: Log in to Infini-AI platform (https://cloud.infini-ai.com/genstudio/model?cherrystudio)

  • Get API key: Select "deepseek-r1" in Model Square, create and copy the API key + model name

Step 2: CherryStudio Settings

Go to model services → Select Infini-AI → Enter API key → Activate Infini-AI service

After setup, select the large model during interaction to use Infini-AI API in CherryStudio. Optional: Set as default model for convenience

Step 3: Add Knowledge Base

Choose any version of bge-series or jina-series embedding models from Infini-AI platform

III. Real User Scenario Test

  • After importing study materials, query: "Outline core formula derivations in Chapter 3 of 'Machine Learning'"

Result Demonstration

Notion Configuration Tutorial

This document was translated from Chinese by AI and has not yet been reviewed.

Notion Configuration Tutorial

Cherry Studio supports importing topics into Notion databases.

Step One

Visit Notion Integrations to create an application

Click the plus sign to create an application

Step Two

Create an application

Fill in application information

Name: Cherry Studio
Type: Select the first option
Icon: You can save this image

Step Three

Copy the secret key and enter it in Cherry Studio settings

Click to copy the secret key
Enter the secret key in data settings

Step Four

Open Notion website and create a new page. Select database type below, name it Cherry Studio, and connect as shown

Create a new page and select database type
Enter the page name and select "Connect to APP"

Step Five

Copy the database ID

If your Notion database URL looks like this: https://www.notion.so/<long_hash_1>?v=<long_hash_2> Then the Notion database ID is the part <long_hash_1>
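Given the URL pattern above, the database ID can be cut out with plain shell parameter expansion. The URL below is a made-up placeholder:

```shell
# Extract the Notion database ID: the path segment before "?v=".
url="https://www.notion.so/0123456789abcdef0123456789abcdef?v=aaaabbbbccccdddd"
db_id="${url##*/}"     # drop everything through the last "/"
db_id="${db_id%%\?*}"  # drop the "?v=..." query part
echo "$db_id"          # prints 0123456789abcdef0123456789abcdef
```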

Enter database ID and click Check

Step Six

Fill in the Page Title Field Name: if your interface is in English, enter Name; if your interface is in Chinese, enter 名称

Enter page title field name

Step Seven

Congratulations! Notion configuration is complete ✅ You can now export Cherry Studio content to your Notion database

Export to Notion
View export results

Tavily Web Login & Registration Tutorial

How do I register for Tavily?

This document was translated from Chinese by AI and has not yet been reviewed.

Tavily Internet Registration Tutorial

1. Tavily Official Website

https://app.tavily.com/home

Access may be slow for some users. If available, using a proxy is recommended.

2. Detailed Tavily Registration Steps

Visit the official website above, or go to Cherry Studio > Settings > Web Search > click "Get API Key" to directly access Tavily's login/registration page.

First-time users must register (Sign up) before logging in (Log in). The default page is the login interface.

  1. Click "Sign up" and enter your email (or use Google/GitHub account) followed by your password.

Register account
  2. 🚨🚨🚨[Critical Step] After registration, a dynamic verification code is required. Scan the QR code to generate a one-time code.

Many users get stuck here - don't panic!

Two solutions:

  1. Download Microsoft Authenticator app (slightly complex)

  2. Use WeChat Mini Program: Tencent Authenticator (recommended, as simple as it gets).

  Search "Tencent Authenticator" in WeChat Mini Programs:

Search and open in WeChat Mini Programs
Scan the QR code from Tavily
Obtain the verification code
Enter the code on Tavily
Backup your recovery code as prompted

3. 🎉 Registration Successful 🎉

After completing these steps, you'll see the dashboard. Copy the API key to Cherry Studio to start using Tavily!

ByteDance (Doubao)

This document was translated from Chinese by AI and has not yet been reviewed.

ByteDance (Doubao)

  • Log in to Volcano Engine

  • Or click here to go directly

Obtaining the API Key

  • Click API Key Management in the sidebar

  • Create an API Key

  • After creation, click the eye icon next to the created API Key to reveal and copy it

  • Paste the copied API Key into Cherry Studio and then toggle the provider switch to ON

Enabling and Adding Models

  • In the Ark console, open Enablement Management at the bottom of the sidebar and enable the models you need. You can enable the Doubao series, DeepSeek, and other models as required

  • In the Model List Documentation, find the model ID corresponding to the desired model

  • Open Cherry Studio's Model Services settings and locate Volcano Engine

  • Click Add, then paste the previously obtained model ID into the model ID text field

  • Follow this process to add models one by one

API Address

There are two ways to write the API address:

  • First, the default in the client: https://ark.cn-beijing.volces.com/api/v3/

  • Second: https://ark.cn-beijing.volces.com/api/v3/chat/completions#

There is no practical difference between the two formats; you can keep the default without modification.

For the difference between addresses ending with / and #, refer to the API Address section in the provider settings documentation.

Official Documentation cURL Example
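A minimal sketch of such a request against the default address from this section looks like the following. The API key, model ID, and the helper function name are placeholders/assumptions, not taken from the official docs:

```shell
# Sketch: call the Ark chat-completions endpoint (OpenAI-compatible request shape).
# ARK_API_KEY and the model ID passed as $1 are placeholders you must supply.
ark_chat() {
  curl -s "https://ark.cn-beijing.volces.com/api/v3/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $ARK_API_KEY" \
    -d '{"model": "'"$1"'", "messages": [{"role": "user", "content": "Hello"}]}'
}
# Usage (requires a real key): ARK_API_KEY=xxx ark_chat <model-id>
```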

SiYuan Note Configuration Tutorial

This document was translated from Chinese by AI and has not yet been reviewed.

SiYuan Note Configuration Tutorial

Supports exporting topics and messages to SiYuan Note.

Step 1

Open SiYuan Note and create a notebook

Click to create a new notebook

Step 2

Open notebook settings and copy the Notebook ID

Open notebook settings
Click the copy notebook ID button

Step 3

Paste the copied notebook ID into Cherry Studio settings

Fill the notebook ID in the data settings

Step 4

Fill in the SiYuan Note address

  • Local: typically http://127.0.0.1:6806

  • Self-hosted: your domain, e.g., http://note.domain.com

Fill in your SiYuan Note address

Step 5

Copy the SiYuan Note API Token

Copy SiYuan Note token

Paste it into Cherry Studio settings and click Check

Fill in the database ID and click Check
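If the check fails, you can verify the address and token outside Cherry Studio by calling SiYuan's kernel API, which authenticates with an Authorization: Token header. A sketch, assuming the local address from Step 4:

```shell
# Sketch: list notebooks via SiYuan's kernel API to verify address + token.
siyuan_check() {
  curl -s -X POST "http://127.0.0.1:6806/api/notebook/lsNotebooks" \
    -H "Authorization: Token $SIYUAN_TOKEN"
}
# Usage: SIYUAN_TOKEN=your-token siyuan_check
```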

Step 6

Congratulations, the configuration for SiYuan Note is complete ✅ Now you can export content from Cherry Studio to your SiYuan Note

Export to SiYuan Note
View the export results

Huawei Cloud

This document was translated from Chinese by AI and has not yet been reviewed.

Huawei Cloud

  1. Create an account and log in at Huawei Cloud

  2. Click this link to enter the MaaS control panel

  3. Authorization

Authorization steps (skip if already authorized)
  1. After entering the link from step 2, follow the prompts to the authorization page (click IAM sub-account → Add Authorization → General User)

  2. After clicking create, return to the link from step 2

  3. If prompted "insufficient access permissions", click "click here" in the prompt

  4. Append the existing authorization and confirm. Note: this method is suitable for beginners; you don't need to read through everything, just follow the prompts. If you can authorize successfully your own way, proceed accordingly.

  4. Click Authentication Management in the sidebar, create an API Key (secret key), and copy it

    Then create a new service provider in CherryStudio

    After creation, enter the secret key

  5. Click Model Deployment in the sidebar and claim all offerings

  6. Click Call

    Copy the address from ①, paste it into CherryStudio's service provider address field, and add a "#" at the end. For why the "#" is added, see https://docs.cherry-ai.com/cherrystudio/preview/settings/providers#api-di-zhi. You can skip that explanation and simply follow this tutorial. Alternatively, you can fill in the address with v1/chat/completions removed; if you know how, feel free to use your own method, but if not, follow this tutorial strictly.

    Then copy the model name from ②, and click the "+ Add" button in CherryStudio to create a new model

    Enter the model name exactly as shown - do not add or remove anything, and don't include quotes. Copy exactly as in the example.

    Click the Add Model button to complete.

In Huawei Cloud, since each model has a different endpoint, you'll need to create a new service provider for each model. Repeat the above steps accordingly.

Configure Dify Knowledge Base

This document was translated from Chinese by AI and has not yet been reviewed.

Configuring Dify Knowledge Base

Dify Knowledge Base MCP requires upgrading Cherry Studio to v1.2.9 or higher.

Adding Dify Knowledge Base MCP Server

  1. Open Search MCP.

  2. Add the dify-knowledge server.

Configuring Dify Knowledge Base

Requires configuring parameters and environment variables

  1. Dify Knowledge Base key can be obtained in the following way:

Using Dify Knowledge Base MCP

OneAPI

This document was translated from Chinese by AI and has not yet been reviewed.

OneAPI

  • Log in and navigate to the tokens page

  • Create a new token (you can also directly use the default token ↑)

  • Copy the token

  • Open CherryStudio's service provider settings and click Add at the bottom of the provider list

  • Enter a note name, select OpenAI as the provider, and click OK

  • Paste the key you just copied

  • Return to the API Key page, copy the root address from the browser's address bar, for example:

Only copy https://xxx.xxx.com - do not include content after "/"
  • When the address is IP + port, fill in http://IP:port, e.g., http://127.0.0.1:3000

  • Strictly distinguish between http and https - don't use https if SSL isn't enabled

  • Add models (click Manage to auto-fetch or enter manually) and toggle the switch in the top right corner to start using.

The interface may differ in other OneAPI themes, but the addition method follows the same workflow as above.

WebDAV Backup

This document was translated from Chinese by AI and has not yet been reviewed.

Cherry Studio supports backing up data via WebDAV. You can choose a suitable WebDAV service for cloud backup.

Based on WebDAV, you can achieve multi-device data synchronization: back up from Computer A to the WebDAV service, then restore from it on Computer B.

Example with Jianguoyun (Nutstore)

  1. Log in to Jianguoyun, click your username in the top right corner, and select "Account Info";

  2. Select "Security Options" and click "Add Application";

  3. Enter the application name and generate a random password;

  4. Copy and record the password;

  5. Get the server address, account, and password;

  6. In Cherry Studio Settings -> Data Settings, fill in the WebDAV information;

  7. Choose to back up or restore data, and set the automatic backup interval if desired.
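If backups fail, you can verify the WebDAV credentials outside Cherry Studio with a PROPFIND request. This sketch assumes Jianguoyun's commonly documented endpoint (dav.jianguoyun.com); substitute your own server address, account email, and the application password generated above:

```shell
# Sketch: verify WebDAV credentials with a PROPFIND request.
webdav_check() {
  curl -s -o /dev/null -w "%{http_code}\n" \
    -u "[email protected]:your-app-password" \
    -X PROPFIND -H "Depth: 0" \
    "https://dav.jianguoyun.com/dav/"
}
# Usage: webdav_check   # a 207 Multi-Status response means the credentials work
```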

WebDAV services with a lower barrier to entry are generally cloud drives:

  • 123pan (requires membership)

  • Aliyundrive (requires purchase)

  • Box (free storage capacity is 10GB; single file size limit is 250MB)

  • Dropbox (offers 2GB free; expandable by 16GB by inviting friends)

  • TeraCloud (free space is 10GB; an additional 5GB can be obtained through invitation)

  • Yandex Disk (free users get 10GB capacity)

There are also services that require self-deployment: Alist, Cloudreve, and sharelist.

Knowledge Base Tutorial

This document was translated from Chinese by AI and has not yet been reviewed.

Knowledge Base Tutorial

In version 0.9.1, Cherry Studio introduces the long-awaited knowledge base feature. Below is the step-by-step guide to using Cherry Studio's knowledge base.

Add Embedding Models

  1. Find models in the Model Management Service. You can quickly filter by clicking "Embedding Models".

  2. Locate the required model and add it to My Models.

Create Knowledge Base

  1. Access: Click the Knowledge Base icon in Cherry Studio's left toolbar to enter the management page.

  2. Add Knowledge Base: Click "Add" to start creating.

  3. Naming: Enter a name for the knowledge base and add an embedding model (e.g., bge-m3) to complete creation.

Add Files and Vectorize

  1. Add Files: Click the "Add Files" button to open file selection.

  2. Select Files: Choose supported file formats (pdf, docx, pptx, xlsx, txt, md, mdx, etc.) and open.

  3. Vectorization: The system automatically vectorizes files. A green checkmark (✓) indicates completion.

Add Data from Multiple Sources

Cherry Studio supports adding data through:

  1. Folder Directories: Entire directories where supported format files are auto-vectorized.

  2. URL Links: e.g., https://docs.siliconflow.cn/introduction

  3. Sitemaps: XML format sitemaps, e.g., https://docs.siliconflow.cn/sitemap.xml

  4. Plain Text Notes: Custom content input.

Tips:

  1. Illustrations in documents cannot currently be vectorized automatically - convert to text manually.

  2. URL-based imports may fail due to anti-scraping mechanisms (login requirements, etc.). Always test after creation.

  3. Most websites provide sitemaps (e.g., Cherry Studio's sitemap). Try appending /sitemap.xml to the root address (e.g., example.com/sitemap.xml).

  4. For custom sitemaps, use publicly accessible direct links (local files unsupported):

    1. Generate sitemaps using AI tools;

    2. Use OSS direct links or upload via ocoolAI's tool.
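For reference, a minimal valid sitemap looks like this (URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/page-1</loc></url>
  <url><loc>https://example.com/page-2</loc></url>
</urlset>
```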

Search Knowledge Base

After vectorization:

  1. Click "Search Knowledge Base" at bottom of page.

  2. Enter query content.

  3. View search results.

  4. Matching scores are displayed per result.

Reference Knowledge Base in Conversations

  1. Create new topic > Click "Knowledge Base" in toolbar > Select target knowledge base.

  2. Enter question > Model generates answer using retrieved knowledge.

  3. Data sources appear below answer for quick reference.

How to Ask Efficiently

This document was translated from Chinese by AI and has not yet been reviewed.

Effective Questioning Methods

Cherry Studio is a free and open-source project. As the project grows, the workload of the team increases significantly. To reduce communication costs and efficiently resolve your issues, we encourage everyone to follow the steps below when reporting problems. This will allow our team to dedicate more time to project maintenance and development. Thank you for your cooperation!

1. Documentation Review and Search

Most basic problems can be resolved by carefully reviewing the documentation:

  • Functionality and usage questions can be answered in the documentation;

  • High-frequency issues are compiled on the FAQ page; check there first for solutions;

  • Complex questions can often be resolved through search or using the search bar;

  • Carefully read all hint boxes in documents to avoid many common issues;

  • Check for existing solutions in GitHub Issues.

2. Web Search or Consult AI

For model-related issues unrelated to client functionality (e.g., model errors, unexpected responses, parameter settings):

  • Search online for solutions first;

  • Provide error messages to AI assistants for resolution suggestions.

3. Ask in Official Communities or File GitHub Issues

If steps 1-2 don't solve your problem:

  • Seek help in our official Telegram group, Discord, or QQ group. When reporting:

  1. For model errors:

    • Provide full screenshots with error messages visible

    • Include console errors

    • Sensitive information can be redacted, but keep model names, parameters, and error details

  2. For software bugs:

    • Give detailed error descriptions

    • Provide precise reproduction steps

    • Include OS (Windows/Mac/Linux) and software version number

    • For intermittent issues, describe scenarios and configurations comprehensively

To request documentation or suggest improvements, contact us via Telegram @Wangmouuu, QQ (1355873789), or email [email protected].

Add ModelScope MCP Server

This document was translated from Chinese by AI and has not yet been reviewed.

Adding ModelScope MCP Servers

Requires upgrading Cherry Studio to v1.2.9 or higher.

In v1.2.9, Cherry Studio has officially partnered with ModelScope to simplify the process of adding MCP servers, avoiding configuration errors while enabling access to a vast number of MCP servers in the ModelScope community. Follow these steps to synchronize ModelScope's MCP servers in Cherry Studio.

Steps

Access Sync Feature

Go to MCP Server Settings in Settings and select Sync Servers

Discover MCP Services

Select ModelScope and browse available MCP services

View MCP Server Details

Register/login to ModelScope and view MCP service details

Connect Server

In the MCP service details, choose "Connect Service"

Obtain and Apply API Token

Click "Get API Token" in Cherry Studio to open ModelScope's official website, copy the API token, then paste it back into Cherry Studio.

Successful Sync

ModelScope-connected MCP services will appear in Cherry Studio's MCP server list and become available for conversations.

Incremental Updates

For future MCP servers connected via ModelScope website, simply click Sync Servers to add them incrementally.

With these steps, you've mastered how to efficiently synchronize ModelScope's MCP servers in Cherry Studio. The simplified configuration process eliminates manual setup complexities while unlocking access to ModelScope's extensive MCP server resources.

Start exploring these powerful MCP services to enhance your Cherry Studio experience!


Custom Provider

This document was translated from Chinese by AI and has not yet been reviewed.

Custom Providers

Cherry Studio not only integrates mainstream AI model services but also empowers you with powerful customization capabilities. Through the Custom AI Providers feature, you can easily integrate any AI model you require.

Why Do You Need Custom AI Providers?

  • Flexibility: Break free from predefined provider lists and freely choose the AI models that best suit your needs.

  • Diversity: Experiment with various AI models from different platforms to discover their unique advantages.

  • Controllability: Directly manage your API keys and access addresses to ensure security and privacy.

  • Customization: Integrate privately deployed models to meet the demands of specific business scenarios.

How to Add a Custom AI Provider?

Add your custom AI provider to Cherry Studio in just a few simple steps:

  1. Open Settings: Click the "Settings" (gear icon) in the left navigation bar of the Cherry Studio interface.

  2. Enter Model Services: Select the "Model Services" tab in the settings page.

  3. Add Provider: On the "Model Services" page, you'll see existing providers. Click the "+ Add" button below the list to open the "Add Provider" pop-up.

  4. Fill in Information: In the pop-up, provide the following details:

    • Provider Name: Give your custom provider a recognizable name (e.g., MyCustomOpenAI).

    • Provider Type: Select your provider type from the dropdown menu. Currently supports:

      • OpenAI

      • Gemini

      • Anthropic

      • Azure OpenAI

  5. Save Configuration: After filling in the details, click the "Add" button to save your configuration.

Configuring Custom AI Providers

After adding, locate your newly added provider in the list and configure it:

  1. Activation Status: Toggle the activation switch on the far right of the list to enable this custom service.

  2. API Key:

    • Enter the API key provided by your AI provider.

    • Click the "Test" button to verify the key's validity.

  3. API Address:

    • Enter the base URL to access the AI service.

    • Always refer to your AI provider's official documentation for the correct API address.

  4. Model Management:

    • Click the "+ Add" button to manually add model IDs you want to use under this provider (e.g., gpt-3.5-turbo, gemini-pro).

    • If unsure about specific model names, consult your AI provider's official documentation.

    • Click the "Manage" button to edit or delete added models.
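A frequent cause of a failed "Test" click is a malformed API address. As a quick sanity check outside Cherry Studio, the sketch below queries a provider's model list using only the Python standard library. It assumes an OpenAI-compatible provider; the base URL and key shown are placeholders, and Gemini/Anthropic/Azure endpoints use different paths:

```python
import json
import urllib.request

def models_url(base_url: str) -> str:
    """Join a provider base URL with the OpenAI-compatible /v1/models
    path, tolerating a trailing slash or an existing /v1 suffix."""
    base = base_url.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return base + "/models"

def list_models(base_url: str, api_key: str) -> dict:
    """Fetch the model list from an OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        models_url(base_url),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a reachable provider; placeholders only):
# list_models("https://api.example.com", "sk-your-key")
```

If this call succeeds in a terminal but the "Test" button fails, the problem is usually the trailing `/v1` handling in the address you entered.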

Getting Started

After completing the above configurations, you can select your custom AI provider and model in Cherry Studio's chat interface and start conversing with AI!

Using vLLM as a Custom AI Provider

vLLM is a fast and easy-to-use LLM inference library similar to Ollama. Here's how to integrate vLLM into Cherry Studio:

  1. Install vLLM: Follow vLLM's official documentation (https://docs.vllm.ai/en/latest/getting_started/quickstart.html) to install vLLM.

    pip install vllm # if using pip
    uv pip install vllm # if using uv
  2. Launch vLLM Service: Start the service using vLLM's OpenAI-compatible interface via two main methods:

    • Using vllm.entrypoints.openai.api_server

    python -m vllm.entrypoints.openai.api_server --model gpt2
    • Using the vllm CLI

    vllm serve gpt2 --served-model-name gpt2

Ensure the service launches successfully, listening on the default port 8000. You can also specify a different port using the --port parameter.

  3. Add vLLM Provider in Cherry Studio:

    • Follow the steps above to add a new custom AI provider.

    • Provider Name: vLLM

    • Provider Type: Select OpenAI.

  4. Configure vLLM Provider:

    • API Key: Leave this field blank or enter any value since vLLM doesn't require an API key.

    • API Address: Enter vLLM's API address (default: http://localhost:8000/, adjust if using a different port).

    • Model Management: Add the model name loaded in vLLM (e.g., gpt2 for the command python -m vllm.entrypoints.openai.api_server --model gpt2).

  5. Start Chatting: Now select the vLLM provider and the gpt2 model in Cherry Studio to chat with the vLLM-powered LLM!
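You can also verify the vLLM endpoint outside Cherry Studio. The standard-library sketch below sends a chat completion to vLLM's OpenAI-compatible API, assuming the default port 8000 and the gpt2 model from the commands above (note that a bare base model may lack a chat template, in which case the /v1/completions endpoint is the fallback):

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, model: str, prompt: str) -> str:
    """POST the payload to vLLM's OpenAI-compatible endpoint and
    return the first choice's message content."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running vLLM server):
# chat("http://localhost:8000", "gpt2", "Say hello")
```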

Tips and Tricks

  • Read Documentation Carefully: Before adding custom providers, thoroughly review your AI provider's official documentation for API keys, addresses, model names, etc.

  • Test API Keys: Use the "Test" button to quickly verify API key validity.

  • Verify API Addresses: Different providers and models may have varying API addresses—ensure correctness.

  • Add Models Judiciously: Only add models you'll actually use to avoid cluttering.

Ollama

This document was translated from Chinese by AI and has not yet been reviewed.

Ollama

Ollama is an excellent open-source tool that allows you to easily run and manage various large language models (LLMs) locally. Cherry Studio now supports Ollama integration, enabling you to interact directly with locally deployed LLMs through the familiar interface without relying on cloud services!

What is Ollama?

Ollama is a tool that simplifies the deployment and use of large language models (LLMs). It has the following features:

  • Local Operation: Models run entirely on your local computer without requiring internet connectivity, protecting your privacy and data security.

  • Simple and User-Friendly: Download, run, and manage various LLMs through simple command-line instructions.

  • Rich Model Selection: Supports popular open-source models like Llama 2, Deepseek, Mistral, Gemma, and more.

  • Cross-Platform: Compatible with macOS, Windows, and Linux systems.

  • API Support: Offers an OpenAI-compatible interface for integration with other tools.

Why Use Ollama with Cherry Studio?

  • No Cloud Service Needed: Break free from cloud API quotas and fees, and fully leverage the power of local LLMs.

  • Data Privacy: All your conversation data remains locally stored with no privacy concerns.

  • Offline Availability: Continue interacting with LLMs even without an internet connection.

  • Customization: Choose and configure the most suitable LLMs according to your needs.

Configuring Ollama in Cherry Studio

1. Install and Run Ollama

First, you need to install and run Ollama on your computer. Follow these steps:

  • Download Ollama: Visit the Ollama official website and download the installation package for your operating system. For Linux systems, you can directly install Ollama by running:

    curl -fsSL https://ollama.com/install.sh | sh
  • Install Ollama: Complete the installation by following the installer instructions.

  • Download Models: Open a terminal (or command prompt) and use the ollama run command to download your desired model. For example, to download the Llama 3.2 model, run:

    ollama run llama3.2

    Ollama will automatically download and run the model.

  • Keep Ollama Running: Ensure Ollama remains running while using Cherry Studio to interact with its models.

2. Add Ollama as a Provider in Cherry Studio

Next, add Ollama as a custom AI provider in Cherry Studio:

  • Open Settings: Click on "Settings" (gear icon) in the left navigation bar of Cherry Studio.

  • Access Model Services: Select the "Model Services" tab in the settings page.

  • Add Provider: Click "Ollama" in the provider list.

3. Configure Ollama Provider

Locate the newly added Ollama provider in the list and configure it in detail:

  1. Enable Status:

    • Ensure the switch on the far right of the Ollama provider is turned on (indicating enabled status).

  2. API Key:

    • Ollama typically requires no API key. Leave this field blank or enter any content.

  3. API Address:

    • Enter Ollama's local API address. Normally, this is:

      http://localhost:11434/

      Adjust if you've modified the default port.

  4. Keep-Alive Time: Sets the session retention time in minutes. Cherry Studio will automatically disconnect from Ollama if no new interactions occur within this period to free up resources.

  5. Model Management:

    • Click "+ Add" to manually add the names of models you've downloaded in Ollama.

    • For example, if you downloaded llama3.2 via ollama run llama3.2, enter llama3.2 here.

    • Click "Manage" to edit or delete added models.

Getting Started

After completing the above configurations, select Ollama as the provider and choose your downloaded model in Cherry Studio's chat interface to start conversations with local LLMs!
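Before pointing Cherry Studio at Ollama, you can confirm the service is reachable with a short standard-library script against Ollama's native /api/generate endpoint (assuming the default port 11434 and the llama3.2 model from the example above):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Build a payload for Ollama's /api/generate endpoint.
    stream=False returns the whole answer in a single JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    """Send the prompt to a local Ollama instance and return its reply."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_generate_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires Ollama running locally with llama3.2 pulled):
# generate("llama3.2", "Why is the sky blue?")
```

If this script fails to connect, Cherry Studio will too; check that Ollama is running and that the port matches the API address you configured.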

Tips and Tricks

  • First-Time Model Execution: Ollama needs to download model files during initial runs, which may take considerable time. Please be patient.

  • View Available Models: Run ollama list in the terminal to see your downloaded Ollama models.

  • Hardware Requirements: Running large language models requires substantial computing resources (CPU, RAM, GPU). Ensure your computer meets the model's requirements.

  • Ollama Documentation: Click the View Ollama Documentation and Models link in the configuration page to quickly access Ollama's official documentation.

Web Mode

How to use internet mode in Cherry Studio

This document was translated from Chinese by AI and has not yet been reviewed.

Internet Mode

Examples of scenarios that require internet access:

  • Time-sensitive information: for example, gold futures prices for today, this week, or right now.

  • Real-time data: such as weather, exchange rates, and other dynamic values.

  • Emerging knowledge: such as new things, new concepts, new technologies, etc.

How to Enable Internet Access

In the Cherry Studio question window, click the Globe icon to enable internet access.

Click the globe icon - Enable internet
Indicates - Internet function is enabled

Important Note: Two Modes for Internet Access

Mode 1: Models with built-in internet function from providers

When using such models, enabling internet access requires no extra steps - it's straightforward.

Quickly identify internet-enabled models by checking for a small globe icon next to the model name above the chat interface.

This method also helps quickly distinguish internet-enabled models in the Model Management page.

Cherry Studio currently supports internet-enabled models from:

  • Google Gemini

  • OpenRouter (all models support internet)

  • Tencent Hunyuan

  • Zhipu AI

  • Alibaba Bailian, etc.

Important note:

In special cases, a model may be able to access the internet even without the globe icon, as explained in the tutorial below.


Mode 2: Models without internet function use Tavily service

When using models without built-in internet (no globe icon), use Tavily search service to process real-time information.

First-time Tavily setup triggers a setup prompt - simply follow the instructions!

Popup window, click: Go to settings
Click to get API key

After clicking, you'll be redirected to Tavily's website to register/login. Create and copy your API key back to Cherry Studio.

Registration guide available in Tavily tutorial within this documentation directory.

Tavily registration reference:

The following interface confirms successful registration.

Copy API key
Paste key - setup complete!

Test again for results: shows normal internet search with default result count (5).

Note: Tavily has monthly free tier limits; exceeding them requires payment.

PS: Please report any issues you encounter.

Display Settings

This document was translated from Chinese by AI and has not yet been reviewed.

Display Settings

On this page, you can set the software's color theme, page layout, or use Custom CSS for personalized configurations.

Theme Selection

Here you can set the default interface color mode (Light mode, Dark mode, or Follow System).

Conversation Settings

These settings apply to the layout of the conversation interface.

Conversation Panel Position

Auto-Switch to Conversation

When enabled, clicking an assistant name will automatically switch to the corresponding conversation.

Show Conversation Time

When enabled, displays the conversation creation time below the conversation.

Custom CSS

This setting allows flexible customization of the interface. Refer to the advanced tutorial on Custom CSS for specific methods.

Automatic MCP Installation

This document was translated from Chinese by AI and has not yet been reviewed.

Automatic Installation of MCP

Automatic installation of MCP requires upgrading Cherry Studio to v1.1.18 or higher.

Feature Introduction

In addition to manual installation, Cherry Studio has a built-in @mcpmarket/mcp-auto-install tool that provides a more convenient way to install MCP servers. You simply need to enter specific commands in conversations with large models that support MCP services.

Beta Phase Reminder:

  • @mcpmarket/mcp-auto-install is currently in beta phase

  • Effectiveness depends on the "intelligence" of the large model - some configurations are automatically added, while others still require manual parameter adjustments in MCP settings

  • Current search sources are from @modelcontextprotocol, but this can be customized (explained below)

Usage Instructions

For example, you can input:

Help me install a filesystem mcp server
Installing MCP server via command input
MCP server configuration interface

The system will automatically recognize your requirements and complete the installation via @mcpmarket/mcp-auto-install. This tool supports various types of MCP servers, including but not limited to:

  • filesystem (file system)

  • fetch (network requests)

  • sqlite (database)

  • etc.

The MCP_PACKAGE_SCOPES variable allows customization of MCP service search sources. Default value: @modelcontextprotocol.

Introduction to the @mcpmarket/mcp-auto-install Library

Default Configuration Reference:

// `axun-uUpaWEdMEMU8C61K` is the service ID - customizable
"axun-uUpaWEdMEMU8C61K": {
  "name": "mcp-auto-install",
  "description": "Automatically install MCP services (Beta version)",
  "isActive": false,
  "registryUrl": "https://registry.npmmirror.com",
  "command": "npx",
  "args": [
    "-y",
    "@mcpmarket/mcp-auto-install",
    "connect",
    "--json"
  ],
  "env": {
    "MCP_REGISTRY_PATH": "Details at https://www.npmjs.com/package/@mcpmarket/mcp-auto-install"
  },
  "disabledTools": []
}

@mcpmarket/mcp-auto-install is an open-source npm package. You can view its details and documentation in the npm registry. @mcpmarket is the official MCP service collection for Cherry Studio.

Obsidian Configuration Tutorial

Data Settings → Obsidian Configuration

This document was translated from Chinese by AI and has not yet been reviewed.

Obsidian Configuration Tutorial

Cherry Studio supports integration with Obsidian, allowing you to export complete conversations or individual messages to your Obsidian vault.

This process does not require installing additional Obsidian plugins. However, since the mechanism Cherry Studio uses to import into Obsidian is similar to the Obsidian Web Clipper, it is recommended to upgrade Obsidian to the latest version (at least 1.7.2) to prevent import failures when conversations are too long.

Latest Tutorial

Compared to the old export to Obsidian feature, the new version can automatically select the vault path, eliminating the need to manually enter the vault name and folder name.

Step 1: Configure Cherry Studio

Open Cherry Studio's Settings → Data Settings → Obsidian Configuration menu. The drop-down box will automatically display the names of Obsidian vaults that have been opened on this machine. Select your target Obsidian vault:

Step 2: Export Conversations

Exporting a Complete Conversation

Return to Cherry Studio's conversation interface, right-click on the conversation, select Export, and click Export to Obsidian:

A window will pop up allowing you to configure the Properties of the note exported to Obsidian, its folder location, and the processing method:

  • Vault: Click the drop-down menu to select another Obsidian vault

  • Path: Click the drop-down menu to select the folder for storing the exported conversation note

  • As Obsidian note properties:

    • Tags

    • Created time

    • Source

  • Three available processing methods:

    • New (overwrite if exists): Create a new conversation note in the folder specified at Path. If a note with the same name exists, it will overwrite the old note

    • Prepend: When a note with the same name already exists, export the selected conversation content and add it to the beginning of that note

    • Append: When a note with the same name already exists, export the selected conversation content and add it to the end of that note

Only the first method includes Properties. The latter two methods do not include Properties.

After selecting all options, click OK to export the complete conversation to the specified folder in the corresponding Obsidian vault.

Exporting a Single Message

To export a single message, click the three-dash menu below the message, select Export, and click Export to Obsidian:

The same window as when exporting a complete conversation will appear, requiring you to configure the note properties and processing method. Complete the configuration following the same steps as in Exporting a Complete Conversation.

Successful Export

🎉 Congratulations! You've completed all configurations for Cherry Studio's integration with Obsidian and finished the entire export process. Enjoy yourselves!


Old Tutorial (for Cherry Studio < v1.1.13)

Step 1: Prepare Obsidian

Open your Obsidian vault and create a folder to store exported conversations (shown as "Cherry Studio" folder in the example):

Note the text highlighted in the bottom-left corner - this is your vault name.

Step 2: Configure Cherry Studio

In Cherry Studio's Settings → Data Settings → Obsidian Configuration menu, enter the vault name and folder name obtained in Step 1:

Global Tags is optional and can be used to set tags for all exported conversations in Obsidian. Fill in as needed.

Step 3: Export Conversations

Exporting a Complete Conversation

Return to Cherry Studio's conversation interface, right-click on the conversation, select Export, and click Export to Obsidian:

A window will pop up allowing you to adjust the Properties of the note exported to Obsidian and select a processing method. Three processing methods are available:

  • New (overwrite if exists): Create a new conversation note in the folder specified in Step 2. If a note with the same name exists, it will overwrite the old note

  • Prepend: When a note with the same name already exists, export the selected conversation content and add it to the beginning of that note

  • Append: When a note with the same name already exists, export the selected conversation content and add it to the end of that note

Only the first method includes Properties. The latter two methods do not include Properties.

Exporting a Single Message

To export a single message, click the three-dash menu below the message, select Export, and click Export to Obsidian:

The same window as when exporting a complete conversation will appear. Complete the configuration by following the same steps as in Exporting a Complete Conversation.

Successful Export

🎉 Congratulations! You've completed all configurations for Cherry Studio's integration with Obsidian and finished the entire export process. Enjoy yourselves!

Custom CSS

This document was translated from Chinese by AI and has not yet been reviewed.

Custom CSS

By customizing CSS, you can modify the software's appearance to better suit your preferences, like this:

Built-in Variables

For more theme variables, refer to the source code:

Related Recommendations

Cherry Studio Theme Library:

Share some Chinese-style Cherry Studio theme skins:

Knowledge Popularization

This document was translated from Chinese by AI and has not yet been reviewed.

Knowledge Popularization

What are Tokens?

Tokens are the basic units of text processed by AI models and can be understood as the smallest units the model "thinks" in. They are not exactly the characters or words as we understand them, but rather the model's own special way of segmenting text.

1. Chinese Tokenization

  • A Chinese character is typically encoded as 1-2 tokens

  • For example: "你好" ≈ 2-4 tokens

2. English Tokenization

  • Common words are typically 1 token

  • Longer or uncommon words are decomposed into multiple tokens

  • For example:

    • "hello" = 1 token

    • "indescribable" = 4 tokens

3. Special Characters

  • Spaces, punctuation, etc. also consume tokens

  • Line breaks are typically 1 token

Tokenizers vary across different service providers, and even different models from the same provider may have different tokenizers. This knowledge is solely for clarifying the concept of tokens.


What is a Tokenizer?

A Tokenizer is the tool that converts text into tokens for AI models. It determines how to split input text into the smallest units that models can understand.

Why do different models have different Tokenizers?

1. Different Training Data

  • Different corpora lead to different optimization directions

  • Variations in multilingual support levels

  • Specialized optimization for specific domains (medical, legal, etc.)

2. Different Tokenization Algorithms

  • BPE (Byte Pair Encoding) - OpenAI GPT series

  • WordPiece - Google BERT

  • SentencePiece - Suitable for multilingual scenarios

3. Different Optimization Goals

  • Some focus on compression efficiency

  • Others on semantic preservation

  • Others on processing speed
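To make the BPE idea mentioned above concrete, here is a toy byte-pair-merge sketch. It is purely illustrative and is not the tokenizer of any real model:

```python
from collections import Counter

def bpe_merge_once(tokens: list[str]) -> list[str]:
    """Merge the single most frequent adjacent pair of symbols."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(a + b)  # fuse the pair into one symbol
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def toy_tokenize(text: str, merges: int = 3) -> list[str]:
    """Start from individual characters and apply a few BPE-style merges."""
    tokens = list(text)
    for _ in range(merges):
        tokens = bpe_merge_once(tokens)
    return tokens

# Frequent adjacent pairs get fused first, shrinking the token sequence.
print(toy_tokenize("aaabab", merges=2))
```

Real tokenizers learn thousands of such merges from their training corpus, which is exactly why different training data produces different tokenizers.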

Practical Impact

The same text may have different token counts across models:

Input: "Hello, world!"

  • GPT-3: 4 tokens

  • BERT: 3 tokens

  • Claude: 3 tokens

What is an Embedding Model?

Basic Concept: An embedding model is a technique that converts high-dimensional discrete data (text, images, etc.) into low-dimensional continuous vectors. This transformation allows machines to better understand and process complex data. Imagine it as simplifying a complex puzzle into a simple coordinate point that still retains the puzzle's key features. In the large model ecosystem, it serves as a "translator," converting human-understandable information into AI-computable numerical forms.

Working Principle: Taking natural language processing as an example, an embedding model maps words to specific positions in vector space. In this space:

  • Vectors for "King" and "Queen" will be very close

  • Pet-related words like "cat" and "dog" will cluster together

  • Semantically unrelated words like "car" and "bread" will be distant
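The "nearby vectors" intuition can be made concrete with cosine similarity over toy 2-D vectors. The numbers below are hand-picked for illustration; real embeddings have hundreds of dimensions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy 2-D "embeddings" (illustrative values only).
vectors = {
    "king":  [0.90, 0.80],
    "queen": [0.85, 0.82],
    "bread": [0.10, -0.70],
}

# Related words point in nearly the same direction; unrelated ones do not.
assert cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["bread"])
```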

Main Application Scenarios:

  • Text analysis: Document classification, sentiment analysis

  • Recommendation systems: Personalized content recommendations

  • Image processing: Similar image retrieval

  • Search engines: Semantic search optimization

Core Advantages:

  1. Dimensionality reduction: Simplifies complex data into manageable vector forms

  2. Semantic preservation: Retains key semantic information from original data

  3. Computational efficiency: Significantly improves training and inference efficiency

Technical Value: Embedding models are fundamental components of modern AI systems, providing high-quality data representations for machine learning tasks, and are key technologies driving advances in natural language processing, computer vision, and other fields.


How Embedding Models Work in Knowledge Retrieval

Basic Workflow:

  1. Knowledge Base Preprocessing Stage

    • Split documents into appropriately sized chunks

    • Use embedding models to convert each chunk into vectors

    • Store vectors and original text in a vector database

  2. Query Processing Stage

    • Convert user questions into vectors

    • Retrieve similar content from the vector database

    • Provide retrieved context to the LLM
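The two stages above can be sketched end to end with a toy bag-of-words "embedding" standing in for a real embedding model (a real knowledge base would call an embedding API and store vectors in a vector database):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector.
    A real system would call an embedding model here."""
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Stage 1: preprocessing — split documents into chunks and "embed" them.
chunks = [
    "Ollama runs large language models locally",
    "Embedding models map text to vectors",
    "Tokens are the basic units of text",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Stage 2: query — embed the question and retrieve the closest chunk.
def retrieve(question: str) -> str:
    q = embed(question)
    return max(index, key=lambda item: similarity(q, item[1]))[0]

# The retrieved chunk is the context a real system hands to the LLM.
print(retrieve("what are tokens"))
```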


What is MCP (Model Context Protocol)?

MCP is an open-source protocol designed to provide context information to large language models (LLMs) in a standardized way.

  • Analogy: Think of MCP as a "USB drive" for AI. Just as USB drives can store various files and be plugged into computers for immediate use, MCP servers can "plug in" various context-providing "plugins". LLMs can request these plugins from MCP servers as needed to obtain richer context information and enhance their capabilities.

  • Comparison with Function Tools: Traditional function tools provide external capabilities to LLMs, but MCP is a higher-level abstraction. Function tools focus on specific tasks, while MCP provides a more universal, modular context acquisition mechanism.

Core Advantages of MCP

  1. Standardization: Provides unified interfaces and data formats for seamless collaboration between LLMs and context providers.

  2. Modularity: Allows developers to decompose context information into independent modules (plugins) for easy management and reuse.

  3. Flexibility: Enables LLMs to dynamically select required context plugins for smarter, more personalized interactions.

  4. Extensibility: Supports adding new types of context plugins in the future, providing unlimited possibilities for LLM capability expansion.


Web Search Blacklist Configuration

This document was translated from Chinese by AI and has not yet been reviewed.

Network Search Blacklist Configuration

Cherry Studio supports two methods for configuring blacklists: manual setup and adding subscription sources. Configuration rules follow the ublacklist format.

Manual Configuration

You can add rules to search results or click the toolbar icon to block specified websites. Rules can be specified using match patterns (example: *://*.example.com/*) or regular expressions (example: /example\.(net|org)/).
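To see how the two rule styles relate, here is a simplified sketch that converts a glob-style match pattern into a regular expression. ublacklist's real matching semantics are richer; this only handles the `*` wildcard:

```python
import re

def match_pattern_to_regex(pattern: str) -> re.Pattern:
    """Translate a match pattern such as *://*.example.com/* into a
    regular expression: literal characters are escaped and each *
    becomes .* (a simplification of ublacklist's semantics)."""
    escaped = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.compile("^" + escaped + "$")

blocked = match_pattern_to_regex("*://*.example.com/*")
print(bool(blocked.match("https://www.example.com/page")))  # matches the rule
print(bool(blocked.match("https://example.org/page")))      # does not match
```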

Subscription Sources Configuration

You can also subscribe to public rule sets. This website lists some subscriptions: https://iorate.github.io/ublacklist/subscriptions

Here are some recommended subscription source links:


Name: uBlacklist subscription compilation
Link: https://git.io/ublacklist
Type: Chinese

Name: uBlockOrigin-HUGE-AI-Blocklist
Link: https://raw.githubusercontent.com/laylavish/uBlockOrigin-HUGE-AI-Blocklist/main/list_uBlacklist.txt
Type: AI Generated


Cherry Studio Commercial License Agreement

This document was translated from Chinese by AI and has not yet been reviewed.

Cherry Studio License Agreement

By using or distributing any portion or element of Cherry Studio materials, you are deemed to have acknowledged and accepted the terms of this Agreement, which shall take effect immediately.

I. Definitions

  1. This Cherry Studio License Agreement (hereinafter referred to as the "Agreement") shall mean the terms and conditions governing the use, reproduction, distribution, and modification of the Materials as defined herein.

  2. "We" (or "Our") shall mean Shanghai WisdomAI Technology Co., Ltd.

  3. "You" (or "Your") shall mean a natural person or legal entity exercising rights granted under this Agreement and/or using the Materials for any purpose and in any field of use.

  4. "Third Party" shall mean an individual or legal entity not under common control with Us or You.

  5. "Cherry Studio" shall mean this software suite, including but not limited to [e.g., core libraries, editor, plugins, sample projects], as well as source code, documentation, sample code, and other elements of the foregoing distributed by Us. (Please elaborate based on Cherry Studio’s actual composition.)

  6. "Materials" shall collectively refer to Cherry Studio and documentation (and any portions thereof), proprietary to Shanghai WisdomAI Technology Co., Ltd., and provided under this Agreement.

  7. "Source" Form shall mean the preferred form for making modifications, including but not limited to source code, documentation source files, and configuration files.

  8. "Object" Form shall mean any form resulting from mechanical transformation or translation of a Source Form, including but not limited to compiled object code, generated documentation, and forms converted into other media types.

  9. "Commercial Use" shall mean use for the purpose of direct or indirect commercial benefit or commercial advantage, including but not limited to sale, licensing, subscription, advertising, marketing, training, consulting services, etc.

  10. "Modification" shall mean any change, adaptation, derivation, or secondary development to the Source Form of the Materials, including but not limited to modifying application names, logos, code, features, interfaces, etc.

II. Grant of Rights

  1. Free Commercial Use (Limited to Unmodified Code): Subject to the terms and conditions of this Agreement, We hereby grant You a non-exclusive, worldwide, non-transferable, royalty-free license to use, reproduce, distribute, copy, and disseminate unmodified Materials, including for Commercial Use, under the intellectual property or other rights We own or control embodied in the Materials.

  2. Commercial Authorization (When Necessary): Under the conditions described in Section III ("Commercial Authorization"), You must obtain explicit written commercial authorization from Us to exercise rights under this Agreement.

III. Commercial Authorization

In any of the following circumstances, You must contact Us and obtain explicit written commercial authorization before continuing to use Cherry Studio Materials:

  1. Modification and Derivation: You modify Cherry Studio Materials or develop derivatives based on them (including but not limited to modifying application names, logos, code, features, interfaces, etc.).

  2. Enterprise Services: You provide services based on Cherry Studio within Your enterprise or for enterprise clients, and such services support 10 or more cumulative users.

  3. Hardware Bundling: You pre-install or integrate Cherry Studio into hardware devices or products for bundled sales.

  4. Government or Educational Institutional Procurement: Your use case involves large-scale procurement projects by government or educational institutions, especially those involving sensitive requirements such as security or data privacy.

  5. Public Cloud Services: Providing public-facing cloud services based on Cherry Studio.

IV. Redistribution

You may distribute copies of unmodified Materials or make them available as part of a product or service containing unmodified Materials, distributed in Source or Object Form, provided You satisfy the following conditions:

  1. You must provide a copy of this Agreement to any other recipient of the Materials;

  2. You must retain the following attribution notice in all copies of the Materials You distribute, placing it within a "NOTICE" or similar text file distributed as part of such copies: "Cherry Studio is licensed under the Cherry Studio LICENSE AGREEMENT, Copyright (c) Shanghai WisdomAI Technology Co., Ltd. All Rights Reserved."

V. Usage Rules

  1. Materials may be subject to export controls or restrictions. You shall comply with applicable laws and regulations when using the Materials.

  2. If You use Materials or any output thereof to create, train, fine-tune, or improve software or models that will be distributed or provided, We encourage You to prominently mark "Built with Cherry Studio" or "Powered by Cherry Studio" in relevant product documentation.

VI. Intellectual Property

  1. We retain ownership of all intellectual property rights to the Materials and derivative works made by or for Us. Subject to compliance with the terms and conditions of this Agreement, intellectual property rights in modifications and derivative works of the Materials created by You shall be governed by the specific commercial authorization agreement. Without obtaining commercial authorization, You do not own such modifications or derivatives, and all intellectual property rights therein remain vested in Us.

  2. No trademark license is granted for the use of Our trade names, trademarks, service marks, or product names except as necessary to fulfill notice obligations under this Agreement or for reasonable and customary use in describing and redistributing the Materials.

  3. If You initiate litigation or other legal proceedings (including counterclaims in litigation) against Us or any entity alleging that the Materials, any output thereof, or any portion thereof infringes any intellectual property or other rights owned or licensable by You, all licenses granted to You under this Agreement shall terminate as of the commencement or filing date of such litigation or proceedings.

VII. Disclaimer and Limitation of Liability

  1. We have no obligation to support, update, provide training for, or develop any further versions of Cherry Studio Materials, nor to grant any related licenses.

  2. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTIES AND ASSUME NO LIABILITY REGARDING THE SECURITY OR STABILITY OF THE MATERIALS OR ANY OUTPUT THEREOF.

  3. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT THEREOF, INCLUDING BUT NOT LIMITED TO ANY DIRECT, INDIRECT, SPECIAL, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED.

  4. You shall defend, indemnify, and hold Us harmless from any claims by third parties arising from or related to Your use or distribution of the Materials.

VIII. Duration and Termination

  1. The term of this Agreement shall begin upon Your acceptance hereof or access to the Materials and shall continue until terminated in accordance with these terms and conditions.

  2. We may terminate this Agreement if You breach any term or condition herein. Upon termination, You must cease all use of the Materials. Sections VII, IX, and "II. Contributor Agreement" shall survive termination.

IX. Governing Law and Jurisdiction

  1. This Agreement and any disputes arising out of or in connection herewith shall be governed by the laws of China.

  2. The People's Court of Shanghai shall have exclusive jurisdiction over any disputes arising from this Agreement.

:root {
  --color-background: #1a462788;
  --color-background-soft: #1a4627aa;
  --color-background-mute: #1a462766;
  --navbar-background: #1a4627;
  --chat-background: #1a4627;
  --chat-background-user: #28b561;
  --chat-background-assistant: #1a462722;
}

#content-container {
  background-color: #2e5d3a !important;
}
:root {
  font-family: "汉仪唐美人" !important; /* Font */
}

/* Color of expanded deep thinking text */
.ant-collapse-content-box .markdown {
  color: red;
}

/* Theme variables */
:root {
  --color-black-soft: #2a2b2a; /* Dark background color */
  --color-white-soft: #f8f7f2; /* Light background color */
}

/* Dark theme */
body[theme-mode="dark"] {
  /* Colors */
  --color-background: #2b2b2b; /* Dark background color */
  --color-background-soft: #303030; /* Light background color */
  --color-background-mute: #282c34; /* Neutral background color */
  --navbar-background: var(--color-black-soft); /* Navigation bar background */
  --chat-background: var(--color-black-soft); /* Chat background */
  --chat-background-user: #323332; /* User chat background */
  --chat-background-assistant: #2d2e2d; /* Assistant chat background */
}

/* Dark theme specific styles */
body[theme-mode="dark"] {
  #content-container {
    background-color: var(--chat-background-assistant) !important; /* Content container background */
  }

  #content-container #messages {
    background-color: var(--chat-background-assistant); /* Messages background */
  }

  .inputbar-container {
    background-color: #3d3d3a; /* Input bar background */
    border: 1px solid #5e5d5940; /* Input bar border color */
    border-radius: 8px; /* Input bar border radius */
  }

  /* Code styles */
  code {
    background-color: #e5e5e20d; /* Code background */
    color: #ea928a; /* Code text color */
  }

  pre code {
    color: #abb2bf; /* Preformatted code text color */
  }
}

/* Light theme */
body[theme-mode="light"] {
  /* Colors */
  --color-white: #ffffff; /* White */
  --color-background: #ebe8e2; /* Light background */
  --color-background-soft: #cbc7be; /* Light background */
  --color-background-mute: #e4e1d7; /* Neutral background */
  --navbar-background: var(--color-white-soft); /* Navigation bar background */
  --chat-background: var(--color-white-soft); /* Chat background */
  --chat-background-user: #f8f7f2; /* User chat background */
  --chat-background-assistant: #f6f4ec; /* Assistant chat background */
}

/* Light theme specific styles */
body[theme-mode="light"] {
  #content-container {
    background-color: var(--chat-background-assistant) !important; /* Content container background */
  }

  #content-container #messages {
    background-color: var(--chat-background-assistant); /* Messages background */
  }

  .inputbar-container {
    background-color: #ffffff; /* Input bar background */
    border: 1px solid #87867f40; /* Input bar border color */
    border-radius: 8px; /* Input bar border radius */
  }

  /* Code styles */
  code {
    background-color: #3d39290d; /* Code background */
    color: #7c1b13; /* Code text color */
  }

  pre code {
    color: #000000; /* Preformatted code text color */
  }
}
https://github.com/CherryHQ/cherry-studio/tree/main/src/renderer/src/assets/styles
https://github.com/boilcy/cherrycss
https://linux.do/t/topic/325119/129
Custom CSS

Change Storage Location

This document was translated from Chinese by AI and has not yet been reviewed.

Default Storage Location

Cherry Studio data storage follows system specifications. Data is automatically placed in the user directory at the following locations:

macOS: /Users/username/Library/Application Support/CherryStudioDev

Windows: C:\Users\username\AppData\Roaming\CherryStudio

Linux: /home/username/.config/CherryStudio

You can also view it at:

Changing Storage Location (For Reference)

Method 1:

This can be achieved by creating a symbolic link. Exit the application, move the data to your desired location, then create a link at the original location pointing to the new path.

For detailed steps, refer to: https://github.com/CherryHQ/cherry-studio/issues/621#issuecomment-2588652880
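The move-then-link steps above can be sketched as follows for Linux/macOS, demonstrated here in a throwaway directory so all paths are placeholders (on Windows, `mklink /D link target` from an elevated prompt plays the role of `ln -s`):

```shell
# Demonstrate the move-then-symlink technique in a temporary directory.
# Substitute your real data path (e.g. ~/.config/CherryStudio) and new disk.
set -e
base="$(mktemp -d)"
old="$base/CherryStudio"           # stands in for the default data path
new="$base/bigdisk/CherryStudio"   # stands in for the new storage location
mkdir -p "$old" "$base/bigdisk"
echo '{}' > "$old/config.json"     # pretend this is existing app data

mv "$old" "$new"                   # 1. move the data (with the app closed)
ln -s "$new" "$old"                # 2. link the old path to the new one

ls -l "$old/config.json"           # the app still finds its data via the link
```

After relaunching, the application reads and writes through the link without noticing the data moved.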

Method 2: Based on Electron application characteristics, modify the storage location by configuring launch parameters.

--user-data-dir Example: Cherry-Studio-*-x64-portable.exe --user-data-dir="%user_data_dir%"

Example:

PS D:\CherryStudio> dir


    Directory: D:\CherryStudio


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----         2025/4/18     14:05                user-data-dir
-a----         2025/4/14     23:05       94987175 Cherry-Studio-1.2.4-x64-portable.exe
-a----         2025/4/18     14:05            701 init_cherry_studio.bat

init_cherry_studio.bat (encoding: ANSI)

@title CherryStudio Initialization
@echo off

set current_path_dir=%~dp0
@echo Current path: %current_path_dir%
set user_data_dir=%current_path_dir%user-data-dir
@echo CherryStudio data path: %user_data_dir%

@echo Searching for Cherry-Studio-*-portable.exe in current directory
setlocal enabledelayedexpansion

:: Compatible with GitHub and official releases; modify the pattern for other versions
for /f "delims=" %%F in ('dir /b /a-d "Cherry-Studio-*-portable*.exe" 2^>nul') do (
    set "target_file=!cd!\%%F"
    goto :break
)
:break
if defined target_file (
    echo Found file: %target_file%
) else (
    echo No matching files found. Exiting script
    pause
    exit
)

@echo Press any key to continue...
pause

@echo Launching CherryStudio
start "" "%target_file%" --user-data-dir="%user_data_dir%"

@echo Operation completed
@echo on
exit

Initial structure of user-data-dir:

PS D:\CherryStudio> dir .\user-data-dir\


    Directory: D:\CherryStudio\user-data-dir


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----         2025/4/18     14:29                blob_storage
d-----         2025/4/18     14:07                Cache
d-----         2025/4/18     14:07                Code Cache
d-----         2025/4/18     14:07                Data
d-----         2025/4/18     14:07                DawnGraphiteCache
d-----         2025/4/18     14:07                DawnWebGPUCache
d-----         2025/4/18     14:07                Dictionaries
d-----         2025/4/18     14:07                GPUCache
d-----         2025/4/18     14:07                IndexedDB
d-----         2025/4/18     14:07                Local Storage
d-----         2025/4/18     14:07                logs
d-----         2025/4/18     14:30                Network
d-----         2025/4/18     14:07                Partitions
d-----         2025/4/18     14:29                Session Storage
d-----         2025/4/18     14:07                Shared Dictionary
d-----         2025/4/18     14:07                WebStorage
-a----         2025/4/18     14:07             36 .updaterId
-a----         2025/4/18     14:29             20 config.json
-a----         2025/4/18     14:07            434 Local State
-a----         2025/4/18     14:29             57 Preferences
-a----         2025/4/18     14:09           4096 SharedStorage
-a----         2025/4/18     14:30            140 window-state.json

Common Issues

This document was translated from Chinese by AI and has not yet been reviewed.

Frequently Asked Questions

Common Error Codes

  • 4xx (Client Error Status Codes): Generally indicate requests that cannot be completed due to syntax errors, authentication/authorization failures, etc.

  • 5xx (Server Error Status Codes): Generally indicate server-side errors like server downtime, request timeouts, etc.
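The 4xx/5xx split above amounts to a simple range check, sketched here for reference:

```python
# Quick reference for the client-error / server-error split described above.
def error_class(status: int) -> str:
    if 400 <= status <= 499:
        return "client error"   # bad request, auth/permission failure, etc.
    if 500 <= status <= 599:
        return "server error"   # provider-side failure, timeout, overload
    return "other"
```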

| Error Code | Possible Scenarios | Solution |
| --- | --- | --- |
| 400 | Request body format error, etc. | Check the error message in the chat response, or use the console to view error details, then follow the prompts. Common cases: (1) for Gemini models, card binding may be required; (2) data size exceeds the limit, common with vision models when the image exceeds the provider's per-request limit; (3) unsupported parameters or incorrect parameter values were added, so test with a clean assistant; (4) the context exceeds the limit, so clear the context, start a new conversation, or reduce the number of context messages. |
| 401 | Authentication failure: unsupported model or provider account suspension | Contact the service provider or check your account status |
| 403 | No permission for the requested operation | Follow the instructions in the chat error message or the console error details |
| 404 | Requested resource not found | Check the request path, etc. |
| 422 | Semantically invalid request despite correct format | Common with JSON semantic errors (e.g., null values; numbers/booleans passed where strings are required) |
| 429 | Request rate limit reached | Request rate (TPM/RPM) is capped; try again later |
| 500 | Internal server error | Contact the service provider if it persists |
| 501 | Server doesn't support the requested functionality | |
| 502 | Bad Gateway: invalid response from upstream server | |
| 503 | Service Unavailable: server overloaded or under maintenance | |
| 504 | Gateway Timeout: proxy server didn't receive a timely response from upstream | |

Console Error Viewing Method

  • Press Ctrl + Shift + I when focused on Cherry Studio client window (Mac: Command + Option + I)

  • Cherry Studio client window must be active to open console

  • Console must be opened before making requests (test/dialogue) to collect request info

  • In the console window, click Network → locate the last entry marked with a red ×, labeled completions (for dialogue/translation/model check errors) or generations (for image generation errors) → click Response to view the full error message (area ④ in the image).

If unable to diagnose error, screenshot this interface and share in official community

This method works for dialogue, model testing, knowledge base creation, image generation, etc. Always open console before making requests.

Name (area ② in image) varies by scenario:

Dialogue/Translation/Model Check: completions
Image Generation: generations
Knowledge Base Creation: embeddings


Formula Not Rendering/Formula Rendering Error

  • If formula code displays instead of rendering, check delimiters:

Delimiter Usage Inline Formulas

  • Single dollar sign: $formula$

  • Or \(formula\)

Formula Blocks

  • Double dollar sign: $$formula$$

  • Or \[formula\]

  • Example: $$\sum_{i=1}^n x_i$$

  • For rendering errors or garbled text (common with Chinese characters in formulas), try switching the math formula engine (KaTeX/MathJax) in chat settings.


Unable to Create Knowledge Base/"Failed to Get Embedding Dimensions"

  1. Model unavailable

Verify the provider supports the model and check service status.

  2. A non-embedding model was used


Model Cannot Recognize Images/Unable to Upload/Select Images

First confirm if the model supports image recognition. Hot models in Cherry Studio are categorized - models with an eye icon after the name support image recognition.

Image-capable models support file uploads. If model features aren't matched correctly:

  • Go to provider's model list

  • Click settings icon next to model name

  • Enable image option

Check model details on provider's page. Vision-incompatible models won't benefit from enabling image options.

NewAPI

This document was translated from Chinese by AI and has not yet been reviewed.

NewAPI

  • Log in and open the token page

  • Click "Add Token"

  • Enter a token name and click Submit (configure other settings if needed)

  • Open CherryStudio's provider settings and click Add at the bottom of the provider list

  • Enter a remark name, select OpenAI as the provider, and click OK

  • Paste the key you just copied

  • Return to the API Key acquisition page and copy the root address from the browser's address bar, for example:

Only copy https://xxx.xxx.com; content after '/' is not needed
  • When the address is IP + port, enter http://IP:port (e.g., http://127.0.0.1:3000)

  • Strictly distinguish between http and https - do not enter https if SSL is not enabled

  • Add models (click Manage to automatically fetch or manually enter) and toggle the switch at the top right to use.


Chat Interface

This document was translated from Chinese by AI and has not yet been reviewed.

Chat Interface

Assistants and Topics

Assistants

An Assistant is a personalized configuration for the selected model, such as preset prompts and parameters. These settings help the model better align with your expected workflow.

The System Default Assistant comes with relatively universal parameters (without prompts). You can use it directly or find the presets you need on the Agents page.

Topics

The Assistant is the parent of Topics. A single assistant can create multiple topics (i.e., conversations). All topics share the assistant's parameter settings and preset prompt.

Chat Buttons

New Topic Creates a new topic under the current assistant.

Upload Image or Document Image upload requires model support. Uploading documents will automatically parse text as context for the model.

Web Search Requires configuration of web search information in settings. Search results are provided as context to the LLM. See Web Search Mode.

Knowledge Base Enables the knowledge base. See Knowledge Base Tutorial.

MCP Server Enables MCP server functionality. See MCP Usage Tutorial.

Generate Image Hidden by default. For models that support image generation (e.g., Gemini), manually enable this button to generate images.

Due to technical reasons, you must manually enable this button to generate images. This button will be removed after optimization.

Select Model Switches to the specified model for subsequent conversations while preserving context.

Quick Phrases Requires predefined common phrases in settings. Invoke them here for direct input, supporting variables.

Clear Messages Deletes all content in this topic.

Expand Enlarges the dialog box for long text input.

Clear Context Truncates the context available to the model without deleting content—the model "forgets" previous conversation content.

Estimated Token Count Shows estimated token usage: Current Context, Max Context (∞ indicates unlimited context), Current Message Word Count, and Estimated Tokens.

This function is for estimation only. Actual token counts vary by model—refer to model provider data.

Translate Translates the current input box content into the target language.

Chat Settings

Model Settings

Model settings synchronize with the Model Settings parameters in assistant settings. See Edit Assistant.

In chat settings, only the model settings apply to the current assistant. Other settings apply globally. For example, setting the message style to bubbles applies to all topics under all assistants.

Message Settings

Message Separator:

Uses a divider to separate message content from the action bar.

Use Serif Font:

Switches font style. You can also change fonts via Custom CSS.

Show Line Numbers for Code:

Displays line numbers in code blocks generated by the model.

Collapsible Code Blocks:

Automatically collapses code blocks when code snippets are long.

Wrap Lines in Code Blocks:

Enables automatic line wrapping when single-line code exceeds window width.

Auto-Collapse Reasoning Content:

Automatically collapses reasoning processes for models that support step-by-step thinking.

Message Style:

Switches chat interface to bubble style or list style.

Code Style:

Changes display style for code snippets.

Math Formula Engine:

  • KaTeX: Faster rendering with performance optimization

  • MathJax: Slower rendering with comprehensive symbol and command support

Message Font Size:

Adjusts font size in the chat interface.

Input Settings

Show Estimated Token Count:

Displays estimated token consumption for input text in the input box (reference only, not actual context tokens).

Paste Long Text as File:

Long text pasted into the input box automatically appears as files to reduce input interference.

Render Input Messages with Markdown:

When disabled, only renders model responses, not sent messages.

Triple-Space Translation:

Tap the spacebar three times after typing a message to translate the input content into the target language.

Note: This action overwrites the original text.

Target Language:

Sets target language for both translation button and triple-space translation.

Assistant Settings

In the assistant interface, select the assistant name → choose corresponding settings in the right-click menu

Edit Assistant

Assistant settings apply to all topics under that assistant.

Prompt Settings

Name:

Customizable assistant name for easy identification.

Prompt:

That is, the system prompt. Edit the content following the prompt-writing examples on the Agents page.

Model Settings

Default Model:

Sets a fixed default model for the assistant. When adding from Agents page or copying assistant, initial model uses this setting. If unset, initial model = global default model (see Default Assistant Model).

Two default models exist: Global Default Chat Model and Assistant Default Model. The assistant's model has higher priority. When unset: Assistant Default Model = Global Default Chat Model.

Auto-Reset Model:

When enabled: After switching models during conversation, creating a new topic resets to assistant's default model. When disabled: New topics inherit the previous topic's model.

Example: Assistant default model = gpt-3.5-turbo. Create Topic 1 → switch to gpt-4o during conversation.

  • Enabled Auto-Reset: Topic 2 uses gpt-3.5-turbo

  • Disabled Auto-Reset: Topic 2 uses gpt-4o

Temperature:

Controls randomness/creativity of text generation (default=0.7):

  • Low (0-0.3): More deterministic output. Ideal for code generation, data analysis

  • Medium (0.4-0.7): Balanced creativity/coherence. Recommended for chatbots (~0.5)

  • High (0.8-1.0): High creativity/diversity. Ideal for creative writing, but reduces coherence

Top P (Nucleus Sampling):

Default=1. Lower values → more focused/comprehensible responses. Higher values → wider vocabulary diversity.

Sampling controls token probability thresholds:

  • Low (0.1-0.3): Conservative output. Ideal for code comments/tech docs

  • Medium (0.4-0.6): Balanced diversity/accuracy. General dialogue/writing

  • High (0.7-1.0): Diverse expression. Creative writing scenarios

  • Parameters work independently or combined

  • Choose values based on task type

  • Experiment to find optimal combinations

  • Ranges are illustrative—consult model documentation for specifics
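As an illustrative sketch (the model name and values below are placeholders, not Cherry Studio internals), these two sampling parameters simply travel in the body of an OpenAI-compatible chat request:

```python
# Sketch of where temperature / top_p sit in an OpenAI-compatible request body.
# Model name and values are illustrative, not recommendations.
def build_chat_body(messages, temperature=0.7, top_p=1.0):
    return {
        "model": "gpt-4o-mini",      # placeholder model name
        "messages": messages,
        "temperature": temperature,  # 0-0.3 deterministic, 0.8-1.0 creative
        "top_p": top_p,              # nucleus-sampling probability threshold
    }
```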

Context Window

Number of messages to retain in context. Higher values → longer context → higher token usage:

  • 5-10: Normal conversations

  • >10: Complex tasks requiring longer memory (e.g., multi-step content generation)

  • Note: More messages = higher token consumption
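A minimal sketch of what a context window setting does, assuming a typical message list and assuming the system prompt is always kept (an illustration, not Cherry Studio's exact implementation):

```python
# Sketch: a context window of n keeps only the last n non-system messages.
def trim_context(messages, n):
    """Keep the system prompt plus the last n non-system messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-n:] if n > 0 else system
```

Raising `n` keeps more history in each request, which is why token consumption grows with the context window.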

Enable Message Length Limit (MaxToken)

Sets maximum tokens per response. Directly impacts answer quality/length.

Example: When testing model connectivity, set MaxToken=1 to confirm response without specific content.

Most models support up to 32k tokens (some 64k+—check model documentation).

Suggestions:

  • Normal chat: 500-800

  • Short text gen: 800-2000

  • Code gen: 2000-3600

  • Long text gen: 4000+ (requires model support)

Responses are truncated at MaxToken limit. Incomplete expressions or truncation (e.g., long code) may occur—adjust as needed.

Stream Output

Enables continuous data stream processing instead of batch transmission. Provides real-time response generation (typing effect) in clients like CherryStudio.

  • Disabled: Full response delivered at once (like WeChat messages)

  • Enabled: Character-by-character output (generates → transmits each token immediately)

Disable for models without streaming support (e.g., initial o1-mini versions).
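A toy illustration of the difference (not Cherry Studio's actual transport code): batch delivery returns one complete string, while streaming yields each token as soon as it exists.

```python
# Toy contrast between batch and streamed delivery of the same response.
def batch_reply(tokens):
    return "".join(tokens)   # arrives once, in full

def stream_reply(tokens):
    for t in tokens:
        yield t              # each token forwarded immediately (typing effect)
```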

Custom Parameters

Adds extra request parameters to the body (e.g., presence_penalty). Generally not needed for regular use.

Parameters like top-p, max_tokens, and stream belong to this category.

Format: Parameter name—Parameter type (text/number/etc.)—Value. See documentation: Click here

Model providers often have unique parameters—consult their documentation.

  • Custom parameters override built-in parameters when names conflict.

Example: Setting model: gpt-4o forces all conversations to use gpt-4o regardless of selection.

  • Use parameter_name: undefined to exclude parameters.
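A sketch of the merge semantics described above, with an assumed sentinel object standing in for `undefined` (the sentinel and function names are illustrative):

```python
# Sketch: custom parameters merge into the request body, overriding built-ins
# on name conflict; a value of "undefined" removes the parameter entirely.
UNDEFINED = object()  # hypothetical stand-in for the "undefined" value

def apply_custom_params(body, custom):
    merged = {**body, **{k: v for k, v in custom.items() if v is not UNDEFINED}}
    for k, v in custom.items():
        if v is UNDEFINED:
            merged.pop(k, None)   # exclude the parameter from the request
    return merged
```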

PPIO Paiou Cloud

This document was translated from Chinese by AI and has not yet been reviewed.

PPIO Paiou Cloud

Cherry Studio Integration with PPIO LLM API

Tutorial Overview

Cherry Studio is a multi-model desktop client currently supporting installation packages for Windows, Linux, and macOS systems. It integrates mainstream LLM models to provide multi-scenario assistance. Users can enhance work efficiency through smart conversation management, open-source customization, and multi-theme interfaces.

Cherry Studio is now deeply integrated with PPIO's High-Performance API Channel – leveraging enterprise-grade computing power to ensure high-speed response for DeepSeek-R1/V3 and 99.9% service availability, delivering a fast and smooth experience.

The tutorial below provides a complete integration solution (including API key configuration), enabling the advanced mode of Cherry Studio Intelligent Scheduling + PPIO High-Performance API within 3 minutes.

1. Enter Cherry Studio and Add "PPIO" as Model Provider

First download Cherry Studio from the official website: (If inaccessible, download your required version from Quark Netdisk: )

(1) Click the settings icon in the bottom left corner, set the provider name to PPIO, and click "OK"

(2) Visit , click 【User Avatar】→【API Key Management】 to enter console

Click 【+ Create】 to generate a new API key. Customize a key name. Generated keys are visible only at creation – immediately copy and save them to avoid affecting future usage

(3) In Cherry Studio settings, select 【PPIO Paiou Cloud】, enter the API key generated on the official website, then click 【Verify】

(4) Select a model, using deepseek/deepseek-r1/community as an example. Switch directly if you need other models.

DeepSeek R1 and V3 community versions are for trial use only. They are full-parameter models with identical stability and performance. For high-volume usage, top up and switch to non-community versions.

2. Model Usage Configuration

(1) After clicking 【Verify】 and seeing successful connection, it's ready for use

(2) Finally, click 【@】 and select the newly added DeepSeek R1 model under PPIO provider to start chatting~

【Partial material source: 】

3. PPIO×Cherry Studio Video Tutorial

For visual learners, we've prepared a Bilibili video tutorial. Follow step-by-step instructions to quickly master PPIO API + Cherry Studio configuration. Click the link to jump directly: →

【Video material source: sola】

https://cherry-ai.com/download
https://pan.quark.cn/s/c8533a1ec63e#/list/share
PPIO Computing Cloud API Key Management
Chen En
《Still going crazy over DeepSeek's endless spinning? PPIO Cloud + DeepSeek Full-Power Edition =? No more congestion, take off immediately》

SearXNG Local Deployment & Configuration

This document was translated from Chinese by AI and has not yet been reviewed.

SearXNG Deployment and Configuration

Cherry Studio supports web search via SearXNG, an open-source project deployable both locally and on servers. Its configuration differs slightly from other setups requiring API providers.

SearXNG Project Link: SearXNG

Advantages of SearXNG

  • Open-source and free, no API required

  • Relatively high privacy

  • Highly customizable

Local Deployment

1. Direct Deployment with Docker

SearXNG doesn't require complex environment configuration and can deploy using Docker with just a single available port.

1. Download, install, and configure docker

After installation, select an image storage path:

2. Search and pull the SearXNG image

Enter searxng in the search bar:

Pull the image:

3. Run the image

After successful pull, go to Images:

Select the pulled image and click run:

Open settings for configuration:

Take port 8085 as example:

After successful run, open SearXNG frontend via link:

This page indicates successful deployment:

Server Deployment

Since Docker installation on Windows can be cumbersome, users can deploy SearXNG on servers for sharing. Unfortunately, SearXNG currently lacks built-in authentication, making deployed instances vulnerable to scanning and abuse.

To address this, Cherry Studio supports HTTP Basic Authentication (RFC7617). If exposing SearXNG publicly, you must configure HTTP Basic Authentication via reverse proxies like Nginx. This tutorial assumes basic Linux operations knowledge.

Deploy SearXNG

Similarly deploy via Docker. Assuming Docker CE is installed per official guide, run these commands on fresh Debian systems:

sudo apt update
sudo apt install git -y

# Pull official repo
cd /opt
git clone https://github.com/searxng/searxng-docker.git
cd /opt/searxng-docker

# Set to false for limited server bandwidth
export IMAGE_PROXY=true

# Modify config
cat <<EOF > /opt/searxng-docker/searxng/settings.yml
# see https://docs.searxng.org/admin/settings/settings.html#settings-use-default-settings
use_default_settings: true
server:
  # base_url is defined in the SEARXNG_BASE_URL environment variable, see .env and docker-compose.yml
  secret_key: $(openssl rand -hex 32)
  limiter: false  # can be disabled for a private instance
  image_proxy: $IMAGE_PROXY
ui:
  static_use_hash: true
redis:
  url: redis://redis:6379/0
search:
  formats:
    - html
    - json
EOF

Edit docker-compose.yaml to change ports or reuse existing Nginx:

version: "3.7"

services:
# Remove below if reusing existing Nginx instead of Caddy
  caddy:
    container_name: caddy
    image: docker.io/library/caddy:2-alpine
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data:rw
      - caddy-config:/config:rw
    environment:
      - SEARXNG_HOSTNAME=${SEARXNG_HOSTNAME:-http://localhost}
      - SEARXNG_TLS=${LETSENCRYPT_EMAIL:-internal}
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"
# Remove above if reusing existing Nginx instead of Caddy
  redis:
    container_name: redis
    image: docker.io/valkey/valkey:8-alpine
    command: valkey-server --save 30 1 --loglevel warning
    restart: unless-stopped
    networks:
      - searxng
    volumes:
      - valkey-data2:/data
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
      - DAC_OVERRIDE
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - searxng
    # Default host port:8080, change to "127.0.0.1:8000:8080" for port 8000
    ports:
      - "127.0.0.1:8080:8080"
    volumes:
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
      - UWSGI_WORKERS=${SEARXNG_UWSGI_WORKERS:-4}
      - UWSGI_THREADS=${SEARXNG_UWSGI_THREADS:-4}
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"

networks:
  searxng:

volumes:
# Remove below if reusing existing Nginx
  caddy-data:
  caddy-config:
# Remove above if reusing existing Nginx
  valkey-data2:

Start with docker compose up -d. View logs using docker compose logs -f searxng.

Deploy Nginx Reverse Proxy and HTTP Basic Authentication

For server panels like Baota or 1Panel:

  1. Add site and configure Nginx reverse proxy per their docs

  2. Locate Nginx config, modify per below example:

server
{
    listen 443 ssl;

    # Your hostname
    server_name search.example.com;

    # index index.html;
    # root /data/www/default;

    # SSL config
    ssl_certificate    /path/to/your/cert/fullchain.pem;
    ssl_certificate_key    /path/to/your/cert/privkey.pem;

    # HSTS
    # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";

    # Reverse proxy config
    location / {
        # Add these two lines in location block
        auth_basic "Please enter your username and password";
        auth_basic_user_file /etc/nginx/conf.d/search.htpasswd;

        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_pass http://127.0.0.1:8000;
        client_max_body_size 0;
    }

    # access_log  ...;
    # error_log  ...;
}

Save password file in /etc/nginx/conf.d:

echo "example_name:$(openssl passwd -5 'example_password')" > /etc/nginx/conf.d/search.htpasswd

Restart or reload Nginx. Access should prompt for credentials:

Cherry Studio Configuration

After successful SearXNG deployment:

  1. Go to the web search settings and select Searxng.

  2. Initial verification may fail because the JSON output format is not enabled.

  3. In Docker, open the Files tab, locate the tagged folder, and open settings.yml.

  4. Edit the file and add "json" to the formats list (around line 78).

  5. Restart the container.

  6. Verification in Cherry Studio now succeeds.
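Step 4 amounts to a settings.yml change like the following (a sketch; the exact line number varies between SearXNG versions):

```yaml
# settings.yml — enable JSON output alongside HTML
search:
  formats:
    - html
    - json
```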

For the SearXNG address, use:

  • Local deployment: http://localhost:port

  • Docker deployment: http://host.docker.internal:port
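A minimal sketch of the kind of request a client makes against the SearXNG JSON API (the base URL and port are placeholders; substitute your own host):

```python
import urllib.parse

def searxng_search_url(base_url: str, query: str) -> str:
    """Build a SearXNG JSON search URL; requires "json" in the settings.yml formats list."""
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{base_url.rstrip('/')}/search?{params}"

print(searxng_search_url("http://localhost:8080", "cherry studio"))
```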

For server deployments with HTTP Basic Authentication:

  1. Initial verification returns 401 (Unauthorized).

  2. Configure the username and password in the client.
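Under the hood, HTTP Basic Authentication is just a base64-encoded username:password pair sent in the Authorization header. A sketch, using the placeholder credentials from the htpasswd example above:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # RFC 7617: Authorization: Basic base64(username ":" password)
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("example_name", "example_password"))
```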

Additional Configuration

To customize search engines:

  • Default preferences set in the SearXNG web interface do not affect which engines are used when models invoke the search

  • Configure the engines used for model invocation in settings.yml:

Refer to the SearXNG documentation for the configuration syntax. For lengthy edits, modify the file in a local IDE and paste it back.

Common Verification Failures

Missing JSON Format

Add "json" to the list of return formats in settings.yml.

Incorrect Search Engine Configuration

Cherry Studio defaults to engines whose categories include "web" and "general" (e.g., Google). Since such engines may be unreachable in mainland China, force the use of Baidu instead:

use_default_settings:
  engines:
    keep_only:
      - baidu
engines:
  - name: baidu
    engine: baidu 
    categories: 
      - web
      - general
    disabled: false

Excessive Access Rate

Disable the limiter in settings.yml:
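The relevant switch lives under the server section of settings.yml (a sketch; the key name matches current SearXNG releases):

```yaml
# settings.yml — disable the built-in rate limiter
server:
  limiter: false
```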

Embedding Model Reference Information

This document was translated from Chinese by AI and has not yet been reviewed.

Embedding Model Reference Information

To prevent errors, some of the max input values in this document are deliberately set below their theoretical limits. For example, when official documentation only states a maximum input of 8k without giving an exact value, the reference values here are approximations such as 8191 or 8000. (If this isn't clear, simply use the reference values provided in this document.)
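When a document exceeds a model's max input, it has to be split before embedding. A simplified, character-based sketch (real limits are measured in tokens, so treat max_len as an approximation of the reference values below):

```python
def chunk_text(text: str, max_len: int) -> list[str]:
    """Split text into consecutive chunks of at most max_len characters."""
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

# e.g. Doubao-embedding's reference limit of 4095
chunks = chunk_text("a" * 10000, 4095)
print(len(chunks))
```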

Volcano-Doubao

Official Model Information Reference

| Model | max input |
| --- | --- |
| Doubao-embedding | 4095 |
| Doubao-embedding-vision | 8191 |
| Doubao-embedding-large | 4095 |

Alibaba

Official Model Information Reference

| Model | max input |
| --- | --- |
| text-embedding-v3 | 8192 |
| text-embedding-v2 | 2048 |
| text-embedding-v1 | 2048 |
| text-embedding-async-v2 | 2048 |
| text-embedding-async-v1 | 2048 |

OpenAI

Official Model Information Reference

| Model | max input |
| --- | --- |
| text-embedding-3-small | 8191 |
| text-embedding-3-large | 8191 |
| text-embedding-ada-002 | 8191 |

Baidu

Official Model Information Reference

| Model | max input |
| --- | --- |
| Embedding-V1 | 384 |
| tao-8k | 8192 |

Zhipu AI

Official Model Information Reference

| Model | max input |
| --- | --- |
| embedding-2 | 1024 |
| embedding-3 | 2048 |

Hunyuan

Official Model Information Reference

| Model | max input |
| --- | --- |
| hunyuan-embedding | 1024 |

Baichuan

Official Model Information Reference

| Model | max input |
| --- | --- |
| Baichuan-Text-Embedding | 512 |

Together

Official Model Information Reference

| Model | max input |
| --- | --- |
| M2-BERT-80M-2K-Retrieval | 2048 |
| M2-BERT-80M-8K-Retrieval | 8192 |
| M2-BERT-80M-32K-Retrieval | 32768 |
| UAE-Large-v1 | 512 |
| BGE-Large-EN-v1.5 | 512 |
| BGE-Base-EN-v1.5 | 512 |

Jina

Official Model Information Reference

| Model | max input |
| --- | --- |
| jina-embedding-b-en-v1 | 512 |
| jina-embeddings-v2-base-en | 8191 |
| jina-embeddings-v2-base-zh | 8191 |
| jina-embeddings-v2-base-de | 8191 |
| jina-embeddings-v2-base-code | 8191 |
| jina-embeddings-v2-base-es | 8191 |
| jina-colbert-v1-en | 8191 |
| jina-reranker-v1-base-en | 8191 |
| jina-reranker-v1-turbo-en | 8191 |
| jina-reranker-v1-tiny-en | 8191 |
| jina-clip-v1 | 8191 |
| jina-reranker-v2-base-multilingual | 8191 |
| reader-lm-1.5b | 256000 |
| reader-lm-0.5b | 256000 |
| jina-colbert-v2 | 8191 |
| jina-embeddings-v3 | 8191 |

SiliconFlow

Official Model Information Reference

| Model | max input |
| --- | --- |
| BAAI/bge-m3 | 8191 |
| netease-youdao/bce-embedding-base_v1 | 512 |
| BAAI/bge-large-zh-v1.5 | 512 |
| BAAI/bge-large-en-v1.5 | 512 |
| Pro/BAAI/bge-m3 | 8191 |

Gemini

Official Model Information Reference

| Model | max input |
| --- | --- |
| text-embedding-004 | 2048 |

Nomic

Official Model Information Reference

| Model | max input |
| --- | --- |
| nomic-embed-text-v1 | 8192 |
| nomic-embed-text-v1.5 | 8192 |
| gte-multilingual-base | 8192 |

Console

Official Model Information Reference

| Model | max input |
| --- | --- |
| embedding-query | 4000 |
| embedding-passage | 4000 |

Cohere

Official Model Information Reference

| Model | max input |
| --- | --- |
| embed-english-v3.0 | 512 |
| embed-english-light-v3.0 | 512 |
| embed-multilingual-v3.0 | 512 |
| embed-multilingual-light-v3.0 | 512 |
| embed-english-v2.0 | 512 |
| embed-english-light-v2.0 | 512 |
| embed-multilingual-v2.0 | 256 |

Model Rankings

This document was translated from Chinese by AI and has not yet been reviewed.


This is a leaderboard based on Chatbot Arena (lmarena.ai) data, generated through an automated process.

Data Update Time: 2025-07-07 11:42:50 UTC / 2025-07-07 19:42:50 CST (Beijing Time)

Click on the Model Name in the leaderboard to jump to its detailed information or trial page.

Leaderboard

| Rank (UB) | Rank (StyleCtrl) | Model Name | Score | Confidence Interval | Votes | Provider | License Agreement | Knowledge Cutoff Date |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | Gemini-2.5-Pro | 1477 | +5/-5 | 15,769 | Google | Proprietary | N/A |
| 2 | 2 | Gemini-2.5-Pro-Preview-05-06 | 1446 | +4/-5 | 13,997 | Google | Proprietary | N/A |
| 3 | 3 | ChatGPT-4o-latest (2025-03-26) | 1429 | +4/-4 | 24,237 | OpenAI | Proprietary | N/A |
| 3 | 2 | o3-2025-04-16 | 1427 | +3/-4 | 21,965 | OpenAI | Proprietary | N/A |
| 3 | 6 | DeepSeek-R1-0528 | 1425 | +4/-5 | 12,847 | DeepSeek | MIT | N/A |
| 3 | 7 | Grok-3-Preview-02-24 | 1422 | +3/-4 | 25,763 | xAI | Proprietary | N/A |
| 5 | 6 | Gemini-2.5-Flash | 1418 | +4/-4 | 21,209 | Google | Proprietary | N/A |
| 6 | 4 | GPT-4.5-Preview | 1414 | +5/-4 | 15,271 | OpenAI | Proprietary | N/A |
| 9 | 7 | Gemini-2.5-Flash-Preview-04-17 | 1398 | +5/-5 | 17,002 | Google | Proprietary | N/A |
| 9 | 11 | Qwen3-235B-A22B-no-thinking | 1392 | +5/-4 | 15,758 | Alibaba | Apache 2.0 | N/A |
| 11 | 6 | GPT-4.1-2025-04-14 | 1384 | +3/-4 | 18,275 | OpenAI | Proprietary | N/A |
| 11 | 12 | DeepSeek-V3-0324 | 1382 | +3/-3 | 21,008 | DeepSeek | MIT | N/A |
| 11 | 17 | Hunyuan-Turbos-20250416 | 1380 | +6/-5 | 8,247 | Tencent | Proprietary | N/A |
| 11 | 11 | Minimax-M1 | 1376 | +6/-6 | 8,058 | MiniMax | Apache 2.0 | N/A |
| 13 | 12 | DeepSeek-R1 | 1374 | +3/-5 | 19,430 | DeepSeek | MIT | N/A |
| 14 | 19 | Mistral Medium 3 | 1370 | +4/-4 | 19,980 | Mistral | Proprietary | N/A |
| 14 | 6 | Claude Opus 4 (20250514) | 1370 | +4/-4 | 20,056 | Anthropic | Proprietary | N/A |
| 15 | 23 | Qwen3-235B-A22B | 1367 | +4/-4 | 14,597 | Alibaba | Apache 2.0 | N/A |
| 16 | 11 | o1-2024-12-17 | 1366 | +2/-3 | 29,038 | OpenAI | Proprietary | N/A |
| 16 | 11 | o4-mini-2025-04-16 | 1363 | +4/-4 | 17,974 | OpenAI | Proprietary | N/A |
| 17 | 23 | Qwen2.5-Max | 1363 | +3/-3 | 32,074 | Alibaba | Proprietary | N/A |
| 18 | 25 | Gemini-2.0-Flash-001 | 1363 | +3/-3 | 36,915 | Google | Proprietary | N/A |
| 18 | 31 | Grok-3-Mini-beta | 1359 | +6/-5 | 10,561 | xAI | Proprietary | N/A |
| 19 | 25 | Gemma-3-27B-it | 1360 | +3/-3 | 26,443 | Google | Gemma | N/A |
| 24 | 31 | Qwen3-32B | 1344 | +12/-7 | 4,074 | Alibaba | Apache 2.0 | N/A |
| 25 | 19 | o1-preview | 1351 | +3/-4 | 33,177 | OpenAI | Proprietary | 2023/10 |
| 25 | 12 | Claude Sonnet 4 (20250514) | 1343 | +4/-5 | 16,050 | Anthropic | Proprietary | N/A |
| 26 | 25 | o3-mini-high | 1340 | +4/-4 | 19,404 | OpenAI | Proprietary | N/A |
| 26 | 32 | Gemma-3-12B-it | 1337 | +7/-8 | 3,976 | Google | Gemma | N/A |
| 26 | 23 | GPT-4.1-mini-2025-04-14 | 1337 | +5/-4 | 17,292 | OpenAI | Proprietary | N/A |
| 26 | 31 | DeepSeek-V3 | 1334 | +4/-4 | 22,841 | DeepSeek | DeepSeek | N/A |
| 26 | 31 | Mistral-Small-2506 | 1332 | +13/-13 | 2,061 | Mistral | Apache 2.0 | N/A |
| 28 | 38 | QwQ-32B | 1333 | +4/-5 | 18,386 | Alibaba | Apache 2.0 | N/A |
| 29 | 36 | GLM-4-Plus-0111 | 1327 | +8/-6 | 6,028 | Zhipu | Proprietary | N/A |
| 30 | 31 | Gemini-2.0-Flash-Lite | 1329 | +4/-4 | 26,104 | Google | Proprietary | N/A |
| 30 | 56 | Amazon-Nova-Experimental-Chat-05-14 | 1327 | +5/-7 | 7,517 | Amazon | Proprietary | N/A |
| 30 | 32 | Qwen-Plus-0125 | 1326 | +7/-6 | 6,055 | Alibaba | Proprietary | N/A |
| 30 | 31 | Llama-3.1-Nemotron-Ultra-253B-v1 | 1321 | +10/-11 | 2,656 | Nvidia | Nvidia Open Model | N/A |
| 32 | 32 | Command A (03-2025) | 1326 | +3/-3 | 24,524 | Cohere | CC-BY-NC-4.0 | N/A |
| 33 | 42 | Qwen3-30B-A3B | 1323 | +4/-4 | 14,229 | Alibaba | Apache 2.0 | N/A |
| 33 | 38 | Step-2-16K-Exp | 1321 | +7/-8 | 5,126 | StepFun | Proprietary | N/A |
| 33 | 31 | Hunyuan-TurboS-20250226 | 1318 | +8/-10 | 2,452 | Tencent | Proprietary | N/A |
| 34 | 39 | Llama-3.3-Nemotron-Super-49B-v1 | 1312 | +11/-12 | 2,371 | Nvidia | Nvidia | N/A |
| 35 | 39 | o1-mini | 1320 | +2/-2 | 54,951 | OpenAI | Proprietary | 2023/10 |
| 35 | 32 | o3-mini | 1319 | +3/-3 | 36,971 | OpenAI | Proprietary | N/A |
| 38 | 32 | Gemini-1.5-Pro-002 | 1318 | +2/-2 | 58,645 | Google | Proprietary | N/A |
| 38 | 33 | Hunyuan-Turbo-0110 | 1312 | +8/-10 | 2,510 | Tencent | Proprietary | N/A |
| 41 | 18 | Claude 3.7 Sonnet (thinking-32k) | 1313 | +4/-4 | 25,955 | Anthropic | Proprietary | N/A |
| 43 | 58 | Gemma-3n-e4b-it | 1307 | +7/-8 | 7,379 | Google | Gemma | N/A |
| 44 | 21 | Claude 3.7 Sonnet | 1306 | +4/-3 | 30,677 | Anthropic | Proprietary | N/A |
| 47 | 48 | Grok-2-08-13 | 1304 | +2/-2 | 67,084 | xAI | Proprietary | 2024/3 |
| 47 | 50 | Yi-Lightning | 1303 | +4/-3 | 28,968 | 01 AI | Proprietary | N/A |
| 48 | 35 | GPT-4o-2024-05-13 | 1301 | +2/-2 | 117,747 | OpenAI | Proprietary | 2023/10 |
| 48 | 63 | Qwen2.5-plus-1127 | 1298 | +4/-6 | 10,715 | Alibaba | Proprietary | N/A |
| 50 | 25 | Claude 3.5 Sonnet (20241022) | 1299 | +2/-2 | 77,905 | Anthropic | Proprietary | 2024/4 |
| 50 | 55 | Deepseek-v2.5-1210 | 1295 | +6/-6 | 7,243 | DeepSeek | DeepSeek | N/A |
| 52 | 74 | Gemma-3-4B-it | 1292 | +8/-9 | 4,321 | Google | Gemma | N/A |
| 55 | 43 | Llama-4-Maverick-17B-128E-Instruct | 1292 | +4/-4 | 18,010 | Meta | Llama 4 | N/A |
| 55 | 64 | Athene-v2-Chat-72B | 1291 | +3/-3 | 26,074 | NexusFlow | NexusFlow | N/A |
| 55 | 60 | GLM-4-Plus | 1290 | +3/-3 | 27,788 | Zhipu AI | Proprietary | N/A |
| 55 | 49 | Hunyuan-Large-2025-02-10 | 1288 | +8/-7 | 3,856 | Tencent | Proprietary | N/A |
| 55 | 56 | GPT-4.1-nano-2025-04-14 | 1287 | +6/-8 | 6,302 | OpenAI | Proprietary | N/A |
| 56 | 70 | Gemini-1.5-Flash-002 | 1287 | +3/-3 | 37,021 | Google | Proprietary | N/A |
| 56 | 79 | Llama-3.1-Nemotron-70B-Instruct | 1284 | +5/-7 | 7,577 | Nvidia | Llama 3.1 | 2023/12 |
| 57 | 61 | GPT-4o-mini-2024-07-18 | 1288 | +2/-2 | 72,473 | OpenAI | Proprietary | 2023/10 |
| 59 | 41 | Meta-Llama-3.1-405B-Instruct-bf16 | 1285 | +2/-3 | 43,788 | Meta | Llama 3.1 Community | 2023/12 |
| 60 | 36 | Claude 3.5 Sonnet (20240620) | 1284 | +2/-2 | 86,159 | Anthropic | Proprietary | 2024/4 |
| 60 | 42 | Meta-Llama-3.1-405B-Instruct-fp8 | 1283 | +2/-2 | 63,038 | Meta | Llama 3.1 Community | 2023/12 |
| 61 | 41 | Gemini Advanced App (2024-05-14) | 1282 | +3/-2 | 52,144 | Google | Proprietary | Online |
| 61 | 60 | Hunyuan-Standard-2025-02-10 | 1277 | +8/-10 | 4,014 | Tencent | Proprietary | N/A |
| 62 | 79 | Grok-2-Mini-08-13 | 1282 | +2/-3 | 55,442 | xAI | Proprietary | 2024/3 |
| 62 | 43 | GPT-4o-2024-08-06 | 1281 | +2/-2 | 47,973 | OpenAI | Proprietary | 2023/10 |
| 63 | 63 | Qwen-Max-0919 | 1279 | +3/-4 | 17,432 | Alibaba | Qwen | N/A |
| 63 | 56 | Llama-4-Scout-17B-16E-Instruct | 1277 | +6/-6 | 7,451 | Meta | Llama | N/A |
| 71 | 79 | Mistral-Small-3.1-24B-Instruct-2503 | 1271 | +7/-6 | 7,367 | Mistral | Apache 2.0 | N/A |
| 72 | 57 | Gemini-1.5-Pro-001 | 1276 | +2/-2 | 82,435 | Google | Proprietary | 2023/11 |
| 72 | 74 | Deepseek-v2.5 | 1274 | +3/-3 | 26,344 | DeepSeek | DeepSeek | N/A |
| 72 | 61 | Llama-3.3-70B-Instruct | 1273 | +3/-3 | 47,631 | Meta | Llama-3.3 | N/A |
| 72 | 79 | Qwen2.5-72B-Instruct | 1273 | +3/-3 | 41,519 | Alibaba | Qwen | 2024/9 |
| 73 | 56 | GPT-4-Turbo-2024-04-09 | 1272 | +2/-2 | 102,133 | OpenAI | Proprietary | 2023/12 |
| 78 | 86 | Llama-3.1-Tulu-3-70B | 1260 | +10/-10 | 3,010 | Ai2 | Llama 3.1 | N/A |
| 79 | 64 | Mistral-Large-2407 | 1267 | +2/-2 | 48,217 | Mistral | Mistral Research | 2024/7 |
| 79 | 79 | Athene-70B | 1266 | +4/-3 | 20,580 | NexusFlow | CC-BY-NC-4.0 | 2024/7 |
| 79 | 61 | GPT-4-1106-preview | 1266 | +2/-2 | 103,748 | OpenAI | Proprietary | 2023/4 |
| 79 | 79 | Mistral-Large-2411 | 1265 | +3/-3 | 29,633 | Mistral | MRL | N/A |
| 79 | 62 | magistral-medium-2506 | 1258 | +9/-8 | 4,287 | Mistral | Proprietary | N/A |
| 80 | 86 | Meta-Llama-3.1-70B-Instruct | 1264 | +2/-2 | 58,637 | Meta | Llama 3.1 Community | 2023/12 |
| 81 | 58 | Claude 3 Opus | 1263 | +2/-1 | 202,641 | Anthropic | Proprietary | 2023/8 |
| 83 | 87 | Amazon Nova Pro 1.0 | 1261 | +3/-3 | 26,371 | Amazon | Proprietary | N/A |
| 83 | 65 | GPT-4-0125-preview | 1261 | +2/-2 | 97,079 | OpenAI | Proprietary | 2023/12 |
| 89 | 60 | Claude 3.5 Haiku (20241022) | 1254 | +2/-2 | 49,399 | Anthropic | Proprietary | N/A |
| 89 | 86 | Reka-Core-20240904 | 1251 | +6/-7 | 7,948 | Reka AI | Proprietary | N/A |
| 89 | 88 | Hunyuan-Large-Vision | 1246 | +7/-10 | 4,210 | Tencent | Proprietary | N/A |
| 92 | 90 | Gemini-1.5-Flash-001 | 1243 | +2/-2 | 65,661 | Google | Proprietary | 2023/11 |
| 93 | 88 | Jamba-1.5-Large | 1237 | +4/-6 | 9,125 | AI21 Labs | Jamba Open | 2024/3 |
| 93 | 96 | Qwen2.5-Coder-32B-Instruct | 1233 | +8/-6 | 5,730 | Alibaba | Apache 2.0 | N/A |
| 94 | 89 | Gemma-2-27B-it | 1236 | +2/-2 | 79,538 | Google | Gemma license | 2024/6 |
| 94 | 98 | Mistral-Small-24B-Instruct-2501 | 1233 | +4/-4 | 15,321 | Mistral | Apache 2.0 | N/A |
| 94 | 106 | Amazon Nova Lite 1.0 | 1233 | +3/-4 | 20,646 | Amazon | Proprietary | N/A |
| 94 | 90 | Gemma-2-9B-it-SimPO | 1232 | +5/-5 | 10,548 | Princeton | MIT | 2024/7 |
| 94 | 86 | Llama-3.1-Nemotron-51B-Instruct | 1228 | +9/-10 | 3,889 | Nvidia | Llama 3.1 | 2023/12 |
| 95 | 94 | Command R+ (08-2024) | 1231 | +4/-6 | 10,535 | Cohere | CC-BY-NC-4.0 | 2024/8 |
| 96 | 110 | Gemini-1.5-Flash-8B-001 | 1228 | +3/-3 | 37,697 | Google | Proprietary | N/A |
| 97 | 106 | OLMo-2-0325-32B-Instruct | 1222 | +9/-11 | 3,460 | Allen AI | Apache-2.0 | N/A |
| 99 | 105 | Aya-Expanse-32B | 1225 | +3/-3 | 28,768 | Cohere | CC-BY-NC-4.0 | N/A |
| 99 | 96 | Nemotron-4-340B-Instruct | 1225 | +3/-4 | 20,608 | Nvidia | NVIDIA Open Model | 2023/6 |
| 99 | 99 | GLM-4-0520 | 1222 | +5/-5 | 10,221 | Zhipu AI | Proprietary | N/A |
| 101 | 96 | Reka-Flash-20240904 | 1221 | +5/-5 | 8,132 | Reka AI | Proprietary | N/A |
| 102 | 110 | Phi-4 | 1221 | +4/-4 | 25,213 | Microsoft | MIT | N/A |
| 103 | 97 | Llama-3-70B-Instruct | 1222 | +2/-1 | 163,629 | Meta | Llama 3 Community | 2023/12 |
| 106 | 96 | Claude 3 Sonnet | 1217 | +2/-2 | 113,067 | Anthropic | Proprietary | 2023/8 |
| 109 | 119 | Amazon Nova Micro 1.0 | 1214 | +3/-3 | 20,654 | Amazon | Proprietary | N/A |
| 111 | 120 | Hunyuan-Standard-256K | 1205 | +10/-10 | 2,901 | Tencent | Proprietary | N/A |
| 112 | 121 | Llama-3.1-Tulu-3-8B | 1201 | +10/-9 | 3,074 | Ai2 | Llama 3.1 | N/A |
| 113 | 109 | Gemma-2-9B-it | 1208 | +2/-2 | 57,197 | Google | Gemma license | 2024/6 |
| 113 | 106 | Command R+ (04-2024) | 1206 | +2/-2 | 80,846 | Cohere | CC-BY-NC-4.0 | 2024/3 |
| 114 | 109 | Qwen2-72B-Instruct | 1203 | +3/-2 | 38,872 | Alibaba | Qianwen LICENSE | 2024/6 |
| 114 | 92 | GPT-4-0314 | 1202 | +2/-3 | 55,962 | OpenAI | Proprietary | 2021/9 |
| 114 | 119 | Ministral-8B-2410 | 1198 | +7/-6 | 5,111 | Mistral | MRL | N/A |
| 115 | 121 | Aya-Expanse-8B | 1196 | +7/-4 | 10,391 | Cohere | CC-BY-NC-4.0 | N/A |
| 116 | 110 | Command R (08-2024) | 1196 | +4/-5 | 10,851 | Cohere | CC-BY-NC-4.0 | 2024/8 |
| 117 | 111 | Claude 3 Haiku | 1195 | +2/-2 | 122,309 | Anthropic | Proprietary | 2023/8 |
| 117 | 105 | DeepSeek-Coder-V2-Instruct | 1194 | +4/-4 | 15,753 | DeepSeek AI | DeepSeek License | 2024/6 |
| 117 | 120 | Jamba-1.5-Mini | 1192 | +5/-5 | 9,274 | AI21 Labs | Jamba Open | 2024/3 |
| 118 | 136 | Meta-Llama-3.1-8B-Instruct | 1192 | +2/-2 | 52,578 | Meta | Llama 3.1 Community | 2023/12 |
| 126 | 105 | GPT-4-0613 | 1179 | +2/-2 | 91,614 | OpenAI | Proprietary | 2021/9 |
| 126 | 121 | Qwen1.5-110B-Chat | 1177 | +3/-3 | 27,430 | Alibaba | Qianwen LICENSE | 2024/4 |
| 126 | 153 | QwQ-32B-Preview | 1169 | +11/-10 | 3,410 | Alibaba | Apache 2.0 | N/A |
| 127 | 136 | Yi-1.5-34B-Chat | 1173 | +4/-3 | 25,135 | 01 AI | Apache-2.0 | 2024/5 |
| 127 | 120 | Mistral-Large-2402 | 1173 | +2/-2 | 64,926 | Mistral | Proprietary | N/A |
| 127 | 121 | Reka-Flash-21B-online | 1172 | +4/-5 | 16,027 | Reka AI | Proprietary | Online |
| 130 | 131 | Llama-3-8B-Instruct | 1168 | +2/-2 | 109,056 | Meta | Llama 3 Community | 2023/3 |
| 130 | 143 | InternLM2.5-20B-chat | 1165 | +4/-5 | 10,599 | InternLM | Other | 2024/8 |
| 131 | 125 | Command R (04-2024) | 1164 | +2/-3 | 56,398 | Cohere | CC-BY-NC-4.0 | 2024/3 |
| 131 | 131 | Mistral Medium | 1164 | +3/-3 | 35,556 | Mistral | Proprietary | N/A |
| 131 | 124 | Mixtral-8x22b-Instruct-v0.1 | 1163 | +3/-2 | 53,751 | Mistral | Apache 2.0 | 2024/4 |
| 131 | 127 | Reka-Flash-21B | 1163 | +4/-4 | 25,803 | Reka AI | Proprietary | 2023/11 |
| 131 | 125 | Qwen1.5-72B-Chat | 1163 | +3/-2 | 40,658 | Alibaba | Qianwen LICENSE | 2024/2 |
| 131 | 128 | Granite-3.1-8B-Instruct | 1159 | +8/-11 | 3,289 | IBM | Apache 2.0 | N/A |
| 133 | 143 | Gemma-2-2b-it | 1160 | +2/-3 | 48,892 | Google | Gemma license | 2024/7 |
| 140 | 124 | Gemini-1.0-Pro-001 | 1147 | +4/-4 | 18,800 | Google | Proprietary | 2023/4 |
| 140 | 134 | Zephyr-ORPO-141b-A35b-v0.1 | 1143 | +8/-7 | 4,854 | HuggingFace | Apache 2.0 | 2024/4 |
| 141 | 137 | Qwen1.5-32B-Chat | 1141 | +3/-4 | 22,765 | Alibaba | Qianwen LICENSE | 2024/2 |
| 141 | 145 | Granite-3.1-2B-Instruct | 1135 | +8/-10 | 3,380 | IBM | Apache 2.0 | N/A |
| 142 | 143 | Phi-3-Medium-4k-Instruct | 1139 | +4/-3 | 26,105 | Microsoft | MIT | 2023/10 |
| 142 | 154 | Starling-LM-7B-beta | 1135 | +3/-5 | 16,676 | Nexusflow | Apache-2.0 | 2024/3 |
| 145 | 143 | Mixtral-8x7B-Instruct-v0.1 | 1130 | +2/-2 | 76,126 | Mistral | Apache 2.0 | 2023/12 |
| 145 | 149 | Yi-34B-Chat | 1127 | +4/-6 | 15,917 | 01 AI | Yi License | 2023/6 |
| 145 | 133 | Gemini Pro | 1126 | +6/-7 | 6,557 | Google | Proprietary | 2023/4 |
| 146 | 147 | Qwen1.5-14B-Chat | 1125 | +4/-4 | 18,687 | Alibaba | Qianwen LICENSE | 2024/2 |
| 147 | 147 | WizardLM-70B-v1.0 | 1122 | +6/-7 | 8,383 | Microsoft | Llama 2 Community | 2023/8 |
| 148 | 133 | GPT-3.5-Turbo-0125 | 1122 | +2/-2 | 68,867 | OpenAI | Proprietary | 2021/9 |
| 148 | 143 | DBRX-Instruct-Preview | 1119 | +3/-3 | 33,743 | Databricks | DBRX LICENSE | 2023/12 |
| 148 | 151 | Meta-Llama-3.2-3B-Instruct | 1119 | +7/-6 | 8,390 | Meta | Llama 3.2 | 2023/12 |
| 148 | 151 | Phi-3-Small-8k-Instruct | 1118 | +4/-4 | 18,476 | Microsoft | MIT | 2023/10 |
| 149 | 152 | Tulu-2-DPO-70B | 1115 | +6/-6 | 6,658 | AllenAI/UW | AI2 ImpACT Low-risk | 2023/11 |
| 152 | 143 | Granite-3.0-8B-Instruct | 1109 | +8/-6 | 7,002 | IBM | Apache 2.0 | N/A |
| 156 | 162 | Llama-2-70B-chat | 1109 | +3/-3 | 39,595 | Meta | Llama 2 Community | 2023/7 |
| 156 | 149 | OpenChat-3.5-0106 | 1107 | +4/-5 | 12,990 | OpenChat | Apache-2.0 | 2024/1 |
| 156 | 156 | Vicuna-33B | 1107 | +4/-4 | 22,936 | LMSYS | Non-commercial | 2023/8 |
| 157 | 149 | Snowflake Arctic Instruct | 1106 | +3/-3 | 34,173 | Snowflake | Apache 2.0 | 2024/4 |
| 157 | 160 | Starling-LM-7B-alpha | 1104 | +4/-5 | 10,415 | UC Berkeley | CC-BY-NC-4.0 | 2023/11 |
| 157 | 166 | Nous-Hermes-2-Mixtral-8x7B-DPO | 1100 | +7/-9 | 3,836 | NousResearch | Apache-2.0 | 2024/1 |
| 158 | 151 | Gemma-1.1-7B-it | 1100 | +4/-4 | 25,070 | Google | Gemma license | 2024/2 |
| 158 | 165 | NV-Llama2-70B-SteerLM-Chat | 1097 | +9/-9 | 3,636 | Nvidia | Llama 2 Community | 2023/11 |
| 162 | 151 | DeepSeek-LLM-67B-Chat | 1093 | +9/-8 | 4,988 | DeepSeek AI | DeepSeek License | 2023/11 |
| 162 | 151 | OpenChat-3.5 | 1092 | +7/-6 | 8,106 | OpenChat | Apache-2.0 | 2023/11 |
| 163 | 153 | OpenHermes-2.5-Mistral-7B | 1090 | +7/-8 | 5,088 | NousResearch | Apache-2.0 | 2023/11 |
| 163 | 158 | Granite-3.0-2B-Instruct | 1090 | +8/-7 | 7,191 | IBM | Apache 2.0 | N/A |
| 164 | 169 | Mistral-7B-Instruct-v0.2 | 1088 | +4/-3 | 20,067 | Mistral | Apache-2.0 | 2023/12 |
| 164 | 168 | Phi-3-Mini-4K-Instruct-June-24 | 1087 | +4/-5 | 12,808 | Microsoft | MIT | 2023/10 |
| 164 | 169 | Qwen1.5-7B-Chat | 1086 | +9/-7 | 4,872 | Alibaba | Qianwen LICENSE | 2024/2 |
| 164 | 165 | Dolphin-2.2.1-Mistral-7B | 1078 | +14/-14 | 1,714 | Cognitive Computations | Apache-2.0 | 2023/10 |
| 165 | 145 | GPT-3.5-Turbo-1106 | 1083 | +4/-4 | 17,036 | OpenAI | Proprietary | 2021/9 |
| 166 | 168 | SOLAR-10.7B-Instruct-v1.0 | 1078 | +9/-9 | 4,286 | Upstage AI | CC-BY-NC-4.0 | 2023/11 |
| 167 | 173 | Phi-3-Mini-4k-Instruct | 1082 | +3/-4 | 21,097 | Microsoft | MIT | 2023/10 |
| 169 | 174 | Llama-2-13b-chat | 1079 | +4/-4 | 19,722 | Meta | Llama 2 Community | 2023/7 |
| 172 | 169 | WizardLM-13b-v1.2 | 1075 | +7/-7 | 7,176 | Microsoft | Llama 2 Community | 2023/7 |
| 175 | 179 | Meta-Llama-3.2-1B-Instruct | 1070 | +7/-6 | 8,523 | Meta | Llama 3.2 | 2023/12 |
| 176 | 178 | Zephyr-7B-beta | 1069 | +6/-4 | 11,321 | HuggingFace | MIT | 2023/10 |
| 176 | 172 | SmolLM2-1.7B-Instruct | 1062 | +11/-11 | 2,375 | HuggingFace | Apache 2.0 | N/A |
| 176 | 169 | MPT-30B-chat | 1061 | +11/-12 | 2,644 | MosaicML | CC-BY-NC-SA-4.0 | 2023/6 |
| 176 | 177 | CodeLlama-70B-instruct | 1057 | +15/-15 | 1,192 | Meta | Llama 2 Community | 2024/1 |
| 177 | 173 | Zephyr-7B-alpha | 1056 | +12/-13 | 1,811 | HuggingFace | MIT | 2023/10 |
| 180 | 178 | CodeLlama-34B-instruct | 1059 | +6/-7 | 7,509 | Meta | Llama 2 Community | 2023/7 |
| 180 | 168 | falcon-180b-chat | 1050 | +15/-15 | 1,327 | TII | Falcon-180B TII License | 2023/9 |
| 181 | 172 | Vicuna-13B | 1058 | +4/-4 | 19,775 | LMSYS | Llama 2 Community | 2023/7 |
| 181 | 178 | Gemma-7B-it | 1053 | +4/-6 | 9,176 | Google | Gemma license | 2024/2 |
| 181 | 178 | Phi-3-Mini-128k-Instruct | 1053 | +5/-5 | 21,622 | Microsoft | MIT | 2023/10 |
| 181 | 193 | Llama-2-7B-chat | 1053 | +5/-5 | 14,532 | Meta | Llama 2 Community | 2023/7 |
| 181 | 171 | Qwen-14B-Chat | 1051 | +8/-8 | 5,065 | Alibaba | Qianwen LICENSE | 2023/8 |
| 181 | 181 | Guanaco-33B | 1049 | +11/-11 | 2,996 | UW | Non-commercial | 2023/5 |
| 190 | 182 | Gemma-1.1-2b-it | 1037 | +6/-5 | 11,351 | Google | Gemma license | 2024/2 |
| 191 | 185 | StripedHyena-Nous-7B | 1033 | +7/-10 | 5,276 | Together AI | Apache 2.0 | 2023/12 |
| 191 | 199 | OLMo-7B-instruct | 1031 | +8/-7 | 6,503 | Allen AI | Apache-2.0 | 2024/2 |
| 194 | 192 | Mistral-7B-Instruct-v0.1 | 1023 | +5/-5 | 9,142 | Mistral | Apache 2.0 | 2023/9 |
| 194 | 193 | Vicuna-7B | 1021 | +6/-8 | 7,017 | LMSYS | Llama 2 Community | 2023/7 |
| 194 | 182 | PaLM-Chat-Bison-001 | 1019 | +7/-5 | 8,713 | Google | Proprietary | 2021/6 |
| 199 | 197 | Gemma-2B-it | 1005 | +8/-9 | 4,918 | Google | Gemma license | 2024/2 |
| 199 | 194 | Qwen1.5-4B-Chat | 1004 | +5/-7 | 7,816 | Alibaba | Qianwen LICENSE | 2024/2 |
| 201 | 200 | Koala-13B | 980 | +8/-6 | 7,020 | UC Berkeley | Non-commercial | 2023/4 |
| 201 | 201 | ChatGLM3-6B | 971 | +7/-9 | 4,763 | Tsinghua | Apache-2.0 | 2023/10 |
| 202 | 201 | GPT4All-13B-Snoozy | 948 | +16/-16 | 1,788 | Nomic AI | Non-commercial | 2023/3 |
| 203 | 201 | MPT-7B-Chat | 944 | +9/-9 | 3,997 | MosaicML | CC-BY-NC-SA-4.0 | 2023/5 |
| 203 | 206 | ChatGLM2-6B | 940 | +10/-10 | 2,713 | Tsinghua | Apache-2.0 | 2023/6 |
| 203 | 206 | RWKV-4-Raven-14B | 937 | +9/-8 | 4,920 | RWKV | Apache 2.0 | 2023/4 |
| 207 | 201 | Alpaca-13B | 917 | +9/-7 | 5,864 | Stanford | Non-commercial | 2023/3 |
| 207 | 207 | OpenAssistant-Pythia-12B | 909 | +9/-7 | 6,368 | OpenAssistant | Apache 2.0 | 2023/4 |
| 208 | 209 | ChatGLM-6B | 895 | +8/-10 | 4,983 | Tsinghua | Non-commercial | 2023/3 |
| 209 | 209 | FastChat-T5-3B | 884 | +9/-9 | 4,288 | LMSYS | Apache 2.0 | 2023/4 |
| 211 | 212 | StableLM-Tuned-Alpha-7B | 856 | +11/-11 | 3,336 | Stability AI | CC-BY-NC-SA-4.0 | 2023/4 |
| 211 | 209 | Dolly-V2-12B | 838 | +10/-10 | 3,480 | Databricks | MIT | 2023/4 |
| 212 | 210 | LLaMA-13B | 815 | +14/-9 | 2,446 | Meta | Non-commercial | 2023/2 |

Description

  • Rank (UB): Ranking calculated based on the Bradley-Terry model. This ranking reflects the model's overall performance in the arena and provides an upper bound estimate of its Elo score, helping to understand the model's potential competitiveness.

  • Rank (StyleCtrl): Ranking after controlling for conversational style. This ranking aims to reduce preference bias caused by the model's response style (e.g., verbosity, conciseness) and more purely assess the model's core capabilities.

  • Model Name: The name of the Large Language Model (LLM).

  • Score: The Elo rating obtained by the model through user votes in the arena. Elo rating is a relative ranking system, where a higher score indicates better model performance. This score is dynamic, reflecting the model's relative strength in the current competitive environment.

  • Confidence Interval: The 95% confidence interval for the model's Elo rating (e.g., +6/-6). A smaller interval indicates more stable and reliable model scores; conversely, a larger interval may indicate insufficient data or greater fluctuation in model performance. It provides a quantitative assessment of the score's accuracy.

  • Votes: The total number of votes received by the model in the arena. More votes generally mean higher statistical reliability of its score.

  • Provider: The organization or company providing the model.

  • License Agreement: The type of license agreement for the model, such as Proprietary, Apache 2.0, MIT, etc.

  • Knowledge Cutoff Date: The knowledge cutoff date for the model's training data. N/A indicates that the relevant information is not provided or unknown.

Data Source and Update Frequency

This leaderboard data is automatically generated and provided by the fboulnois/llm-leaderboard-csv project, which retrieves and processes data from lmarena.ai. This leaderboard is automatically updated daily by GitHub Actions.

Disclaimer

This report is for reference only. Leaderboard data is dynamic and based on user preference votes on Chatbot Arena during a specific period. The completeness and accuracy of the data depend on the upstream data sources and the updates and processing of the fboulnois/llm-leaderboard-csv project. Different models may use different license agreements; please refer to the official documentation of the model provider when using them.
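The Elo scores in the leaderboard above translate into pairwise win probabilities. A minimal sketch of the standard Elo expectation formula (illustrative only; the actual leaderboard fits a Bradley-Terry model rather than applying this formula directly):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model (400-point logistic scale)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 31-point gap (e.g. 1477 vs 1446) implies only a modest head-to-head edge.
print(round(elo_expected_score(1477, 1446), 3))
```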