Connecting DeepSeek R1 to n8n without using OpenRouter involves several steps. Here's a detailed guide on how to achieve this integration:
Step 1: Install n8n and the DeepSeek Node
1. Install n8n: First, ensure you have n8n installed. Follow the official n8n documentation for installation instructions.
2. Install the DeepSeek Node:
- Open n8n and navigate to Settings > Community Nodes.
- Click on Install a community node.
- In the npm Package Name field, enter `n8n-nodes-deepseek`.
- Check the box next to "I understand the risks of installing unverified code from a public source."
- Click the Install button to install the DeepSeek node[3].
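If you run a self-hosted n8n instance and prefer the command line, community nodes can also be installed manually with npm. Below is a minimal sketch, assuming the default `~/.n8n/nodes` directory (adjust the path if your instance uses a custom user folder):

```bash
# Install the DeepSeek community node into n8n's custom nodes directory.
# The ~/.n8n/nodes path is an assumption based on a default installation.
mkdir -p ~/.n8n/nodes
cd ~/.n8n/nodes
npm install n8n-nodes-deepseek

# Restart n8n afterwards so the new node is picked up
# (how you restart depends on your deployment: systemd, Docker, pm2, etc.).
```

Either way, the DeepSeek node should then appear when you search for nodes in the workflow editor.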
Step 2: Create Credentials for DeepSeek
1. Generate API Key: Obtain an API key from DeepSeek. This key is necessary for authenticating your requests.
2. Create Credentials in n8n:
- In n8n, go to the Credentials section.
- Click on Create a new credential.
- Paste your DeepSeek API key into the appropriate field.
- Save the credential[3].
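Before wiring the key into a workflow, it can help to confirm it works outside n8n. Below is a minimal sketch using DeepSeek's own OpenAI-compatible API (endpoint and model name per DeepSeek's API documentation[8]); `$DEEPSEEK_API_KEY` is a placeholder for the key you generated:

```bash
# Quick sanity check: send a one-message chat completion to DeepSeek's API.
# "deepseek-reasoner" is DeepSeek's model ID for DeepSeek R1 on its own platform.
curl "https://api.deepseek.com/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
    "model": "deepseek-reasoner",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```

A JSON response containing a `choices` array confirms the key is valid; an HTTP 401 means it was rejected.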
Step 3: Configure the DeepSeek Node
1. Add the DeepSeek Node to Your Workflow:
- Create a new workflow or open an existing one.
- Click the + button to add a new node.
- Search for "DeepSeek" and select the DeepSeek node you installed.
2. Configure the Node:
- In the node configuration, select the credential you created.
- Choose the DeepSeek model you want to use, in this case, DeepSeek R1.
- Set up any additional parameters such as the prompt or system message as needed[3].
Step 4: Use HTTP Request for Advanced Integrations
For more advanced integrations, or if you'd rather skip the community node, you can use n8n's built-in HTTP Request node to POST directly to an OpenAI-compatible endpoint that serves DeepSeek R1. Here's how:
1. API Endpoint: Use an OpenAI-compatible endpoint that serves DeepSeek R1. For example, the third-party host DeepInfra accepts POST requests at `https://api.deepinfra.com/v1/openai/chat/completions`[5]; DeepSeek's own API uses the same format at `https://api.deepseek.com/chat/completions`[8]. Send the appropriate headers and a JSON body containing your messages and model selection.
2. Authorization: Include your API key in the `Authorization` header using a Bearer token.
3. JSON Body: Structure your JSON body with the model name (`"deepseek-ai/DeepSeek-R1"` on DeepInfra[5], `"deepseek-reasoner"` on DeepSeek's own API[8]), the messages array, and any other required parameters.
Example of an HTTP Request
Here's an example of how you might structure your HTTP request using `curl`:
```bash
curl "https://api.deepinfra.com/v1/openai/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPINFRA_TOKEN" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
To use this inside n8n, reproduce the same URL, headers, and JSON body in an HTTP Request node. This setup allows you to leverage DeepSeek R1's capabilities directly within n8n without relying on OpenRouter, enabling cost-effective and powerful AI-driven workflows.
Citations:
[1] https://www.youtube.com/watch?v=tjaD65OCoE8
[2] https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-deepseek.html
[3] https://dev.to/dwtoledo/integrating-deepseek-into-n8n-low-cost-ai-automations-11ge
[4] https://blog.elest.io/how-to-use-deepseek-r1-in-n8n/
[5] https://deepinfra.com/deepseek-ai/DeepSeek-R1/api
[6] https://www.skool.com/ai-automation-society/new-video-how-to-actually-build-agents-with-deepseek-r1-in-n8n-without-openrouter
[7] https://www.skool.com/ai-automation-society/new-video-two-ways-to-save-96-of-your-money-using-deepseek-r1-in-n8n
[8] https://api-docs.deepseek.com