How do I create a prompt for DeepSeek R1 in n8n


To create a prompt for DeepSeek R1 in n8n, follow these steps:

Step 1: Install the DeepSeek Node in n8n

First, ensure you have n8n installed. Then, go to Settings > Community Nodes > Install a community node. In the npm Package Name field, type "n8n-nodes-deepseek", check the box acknowledging the risks, and click Install[1].

Step 2: Add the DeepSeek Node to a Workflow

After installing the node, add it to a workflow. Create a new workflow and add a Trigger node to start the flow. Then add a service by clicking the "+" button and searching for "DeepSeek." Select the DeepSeek node you installed[1].

Step 3: Configure the DeepSeek Node

In the DeepSeek node, you will need to configure the credentials:
- Click on + Create a new credential.
- Paste your DeepSeek API key into the API Key field.
- Save the credential[1].
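
If you want to confirm the key works before wiring it into n8n, a quick request against DeepSeek's OpenAI-compatible API is enough. This is a minimal sketch, assuming a DEEPSEEK_API_KEY environment variable and the standard model-listing endpoint; the n8n node does not require this step:

```python
import os
import requests

# Hypothetical environment variable holding the same key you pasted into n8n.
API_KEY = os.environ["DEEPSEEK_API_KEY"]

# DeepSeek exposes an OpenAI-compatible API; listing models is assumed here
# as a lightweight way to check that the key is accepted.
resp = requests.get(
    "https://api.deepseek.com/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])  # e.g. ["deepseek-chat", "deepseek-reasoner"]
```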

Step 4: Set Up the Prompt

Now, configure the prompt for DeepSeek R1:
- In the Model field, select DeepSeek R1 (exposed through the API as deepseek-reasoner, the variant tuned for logical reasoning tasks)[5].
- In the Prompt section, add messages to guide the AI's response (a sketch of the equivalent raw API request follows this list). For example:
  - System Message: Describe the AI's role, such as "You are a helpful assistant."
  - User Message: Enter the question or prompt you want the AI to respond to, such as "Who are you?"[1].
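
For reference, the fields above map directly onto a standard chat-completion request. The following is a minimal Python sketch of the equivalent direct API call, assuming the api.deepseek.com endpoint and a DEEPSEEK_API_KEY environment variable; the DeepSeek node assembles and sends this for you:

```python
import os
import requests

API_KEY = os.environ["DEEPSEEK_API_KEY"]  # hypothetical env var for your key

# Roughly what the node builds from the Model, System Message, and User Message fields.
payload = {
    "model": "deepseek-reasoner",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
    ],
}

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```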

Step 5: Test the Prompt

Finally, test your setup by clicking the Test step button. This will send the prompt to DeepSeek R1 and display the response in n8n[1].

Additional Considerations for DeepSeek R1

DeepSeek R1 is particularly useful as a planning agent, where it can generate step-by-step plans based on user input. The model is cost-effective and powerful for chat-based applications and automation tasks[4][5]. When using DeepSeek R1, keep its limitations in mind: as a reasoning model it takes noticeably longer to respond, which may make it unsuitable for real-time applications[1][4].
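
To illustrate the planning-agent pattern, here is a minimal Python sketch; the system prompt, the example goal, and the DEEPSEEK_API_KEY variable are illustrative assumptions, not part of the n8n setup. It asks deepseek-reasoner for a numbered plan and also reads the separate reasoning_content field the reasoner model typically returns:

```python
import os
import requests

API_KEY = os.environ["DEEPSEEK_API_KEY"]  # hypothetical env var for your key

# Example planning-agent prompt: ask R1 for a numbered, step-by-step plan
# that a downstream n8n workflow could iterate over.
payload = {
    "model": "deepseek-reasoner",
    "messages": [
        {
            "role": "system",
            "content": "You are a planning agent. Break the user's goal into a short, numbered list of concrete steps.",
        },
        {"role": "user", "content": "Publish a weekly newsletter from my RSS feeds."},
    ],
}

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=300,  # reasoning models can take noticeably longer to respond
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# deepseek-reasoner usually returns its chain of thought in a separate
# reasoning_content field; .get() keeps this safe if the field is absent.
print(message.get("reasoning_content", ""))
print(message["content"])  # the numbered plan itself
```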

For more complex setups, such as integrating with Ollama or using Docker for self-hosting, refer to additional resources that provide detailed guides on these configurations[2][7].

Citations:
[1] https://dev.to/dwtoledo/integrating-deepseek-into-n8n-low-cost-ai-automations-11ge
[2] https://dev.to/docker/self-hosting-ai-workflows-on-mac-deepseek-r1-ollama-n8n-powered-by-docker-4a9k
[3] https://www.youtube.com/watch?v=JYU_0nNNmEg
[4] https://www.youtube.com/watch?v=tjaD65OCoE8
[5] https://blog.elest.io/how-to-use-deepseek-r1-in-n8n/
[6] https://www.linkedin.com/pulse/how-automate-anything-deepseek-r1-complete-guide-julian-goldie-irnse
[7] https://www.youtube.com/watch?v=g_LLiqUptsE
[8] https://www.youtube.com/watch?v=4Zk-bp9atPQ
[9] https://www.skool.com/ai-automation-society/new-video-how-to-actually-build-agents-with-deepseek-r1-in-n8n-without-openrouter