Using DeepSeek with LangChain means integrating DeepSeek's models into LangChain's framework to take advantage of their capabilities, such as advanced reasoning and chat. Here is a step-by-step guide to setting up and using DeepSeek with LangChain:
## Setting Up DeepSeek with LangChain
**1. Create a DeepSeek Account and Get an API Key**
- Visit DeepSeek's website and create an account.
- Generate an API key, which is required to access DeepSeek models.
**2. Install the LangChain DeepSeek Integration Package**

Install the `langchain-deepseek` package, which integrates DeepSeek models with LangChain:

```bash
pip install -qU langchain-deepseek
```
**3. Set Environment Variables**

Set the `DEEPSEEK_API_KEY` environment variable to your API key:

```python
import os

if not os.getenv("DEEPSEEK_API_KEY"):
    os.environ["DEEPSEEK_API_KEY"] = "your-api-key"
```
**4. Instantiate a DeepSeek Model in LangChain**

Instantiate a DeepSeek model with the `ChatDeepSeek` class. For example, to use the `deepseek-chat` model:

```python
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek-chat",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)
```
**5. Invoke the Model**

To generate a chat completion, invoke the model with a list of messages:

```python
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. "
        "Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)
```
## Features and Limitations

- **Tool calling:** Supported by DeepSeek-V3 (`deepseek-chat`), but not by DeepSeek-R1 (`deepseek-reasoner`)[2][4].
- **Structured output:** Likewise supported by DeepSeek-V3, not by DeepSeek-R1[2][4].
- **Local deployment:** DeepSeek models can also be run locally through providers such as Ollama[2][4].
## Building Applications with DeepSeek and LangChain
You can build advanced applications such as Retrieval-Augmented Generation (RAG) systems using DeepSeek, LangChain, and tools like Streamlit for interactive interfaces[3][7]. Such systems can answer complex questions over your local documents.
Citations:
[1] https://www.reddit.com/r/LocalLLaMA/comments/1iqtyw6/langchain_and_langgraph_tool_calling_support_for/
[2] https://js.langchain.com/docs/integrations/chat/deepseek
[3] https://www.youtube.com/watch?v=M6vZ6b75p9k
[4] https://python.langchain.com/docs/integrations/chat/deepseek/
[5] https://www.youtube.com/watch?v=zC6B490i-IY
[6] https://python.langchain.com/docs/integrations/providers/deepseek/
[7] https://www.reddit.com/r/selfhosted/comments/1icu6jp/build_a_local_rag_using_deepseekr1_langchain_and/
[8] https://api-docs.deepseek.com/faq