Custom Provider

Cherry Studio not only integrates mainstream AI model services but also gives you powerful customization capabilities. Through the Custom AI Provider feature, you can easily connect to any AI model you need.

Why do you need custom AI providers?

  • Flexibility: No longer limited to a preset list of providers; freely choose the AI model that best fits your needs.

  • Diversity: Try AI models from various platforms and discover their unique strengths.

  • Control: Directly manage your API keys and access addresses to ensure security and privacy.

  • Customization: Connect models deployed privately to meet the needs of specific business scenarios.

How to add a custom AI provider?

It only takes a few simple steps to add a custom AI provider in Cherry Studio:

  1. Open Settings: In the left navigation bar of the Cherry Studio interface, click "Settings" (the gear icon).

  2. Go to Model Services: On the settings page, select the "Model Services" tab.

  3. Add provider: On the “Model Services” page, you will see the existing provider list. Click the “+ Add” button at the bottom of the list to open the “Add Provider” popup.

  4. Fill in the information: In the popup, you need to fill in the following information:

    • Provider Name: Give your custom provider an easy-to-recognize name (for example: MyCustomOpenAI).

    • Provider Type: Select your provider type from the dropdown list. Currently supported:

      • OpenAI

      • Gemini

      • Anthropic

      • Azure OpenAI

  5. Save configuration: After filling in, click the “Add” button to save your configuration.

Configure custom AI provider

After adding, find the provider you just added in the list and perform detailed configuration:

  1. Enable status: There is an enable switch at the far right of the custom provider entry; turning it on enables that custom service.

  2. API key:

    • Fill in the API key provided by your AI provider (API Key).

    • Click the “Check” button on the right to verify the key's validity.

  3. API address:

    • Fill in the API access address (Base URL) for the AI service.

    • Be sure to refer to the official documentation provided by your AI provider to get the correct API address.

  4. Model management:

    • Click the “+ Add” button to manually add the IDs of the models you want to use under this provider, for example gpt-3.5-turbo, gemini-pro, etc.

    • If you are unsure of the exact model name, please refer to the official documentation provided by your AI provider.

    • Click the "Manage" button to edit or delete models you have already added.
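To illustrate what this configuration amounts to, the hypothetical Python sketch below does roughly what the “Check” button does for an OpenAI-compatible provider: it calls the model-list endpoint using your API address and API key. The endpoint path and Authorization header follow the OpenAI convention only; Gemini, Anthropic, and Azure OpenAI use different routes and headers, and the base URL shown is a placeholder.

```python
# Sketch of an OpenAI-style key check, assuming the /v1/models route
# and a Bearer-token Authorization header (OpenAI convention only).
import json
import urllib.error
import urllib.request


def build_check_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the provider's model-list endpoint."""
    url = base_url.rstrip("/") + "/v1/models"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )


def check_key(base_url: str, api_key: str) -> bool:
    """Return True if the service accepts the key (HTTP 200 on /v1/models)."""
    try:
        req = build_check_request(base_url, api_key)
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200 and "data" in json.load(resp)
    except urllib.error.HTTPError:
        return False  # e.g. 401 Unauthorized for an invalid key


# Example (placeholder base URL; substitute your provider's real address):
# check_key("https://api.example.com", "sk-...")
```

A standalone check like this is also a quick way to tell whether a failure comes from the key or from a wrong API address, since a bad address raises a connection error rather than returning 401.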

Get started

After completing the above configuration, you can choose your custom AI provider and model in Cherry Studio's chat interface and start conversing with the AI!

Using vLLM as a custom AI provider

vLLM is a fast and easy-to-use LLM inference library similar to Ollama. The following are the steps to integrate vLLM into Cherry Studio:

  1. Install vLLM: Install vLLM according to the official vLLM documentation (https://docs.vllm.ai/en/latest/getting_started/quickstart.html).

  2. Start the vLLM service: Start the service using vLLM's OpenAI-compatible interface. There are two main ways:

    • Start via vllm.entrypoints.openai.api_server (for example, python -m vllm.entrypoints.openai.api_server --model gpt2)

    • Start via uvicorn

Make sure the service starts successfully and is listening on the default port 8000. You can also use the --port parameter to specify a different port for the vLLM service.

  3. Add vLLM provider in Cherry Studio:

    • Follow the steps described above to add a new custom AI provider in Cherry Studio.

    • Provider Name: vLLM

    • Provider Type: Select OpenAI.

  4. Configure vLLM provider:

    • API key: Because vLLM does not require an API key, you can leave this field empty or enter any value.

    • API address: Fill in the API address for the vLLM service. By default, the address is http://localhost:8000/ (if you used a different port, adjust accordingly).

    • Model management: Add the model names you loaded in vLLM. In the example run python -m vllm.entrypoints.openai.api_server --model gpt2 above, you should enter gpt2.

  5. Start chatting: Now you can select the vLLM provider and the gpt2 model in Cherry Studio and start conversing with the vLLM-powered LLM!
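Before (or instead of) testing through Cherry Studio, you can confirm the vLLM endpoint answers with a short Python sketch against its OpenAI-compatible completions route. The base URL, port, and gpt2 model name come from the example above; the /v1/completions route (rather than the chat route) is used here because a base model like gpt2 has no chat template.

```python
# Sketch: query a running vLLM server directly over its OpenAI-compatible
# completions API, assuming the default address http://localhost:8000.
import json
import urllib.request


def build_completion_payload(model: str, prompt: str) -> dict:
    """Request body for the OpenAI-compatible /v1/completions route."""
    return {"model": model, "prompt": prompt, "max_tokens": 64}


def complete(base_url: str, model: str, prompt: str) -> str:
    """POST a completion request to the server and return the generated text."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/v1/completions",
        data=json.dumps(build_completion_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["text"]


# Example (requires the vLLM server from the steps above to be running):
# complete("http://localhost:8000", "gpt2", "Hello, my name is")
```

If this call succeeds from the command line but Cherry Studio cannot reach the service, the problem is most likely the API address or port configured in step 4.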

Tips and tricks

  • Read the documentation carefully: Before adding a custom provider, be sure to carefully read the official documentation of the AI provider you are using to understand key information such as API keys, access addresses, and model names.

  • Check the API key: Use the “Check” button to quickly verify the validity of the API key to avoid being unable to use the service due to an incorrect key.

  • Pay attention to the API address: Different AI providers and models may have different API addresses; be sure to fill in the correct address.

  • Add models as needed: Please only add models you will actually use to avoid adding too many unnecessary models.
