To invoke the DeepSeek model using an IAM role, you need to configure the role with permissions specific to where the model is deployed: Amazon Bedrock or Amazon SageMaker. Here are the detailed steps for each environment:
For Amazon Bedrock
1. Create an IAM Role: Go to the AWS IAM console and create a new IAM role. Name it something like `my_invoke_bedrock_deepseek_model_role`.
2. Custom Trust Policy: Add a custom trust policy to allow the OpenSearch service to assume this role. The policy should look like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "es.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
3. Permission Policy: Attach a permission policy that allows the role to invoke the DeepSeek model on Bedrock. The policy should include the `bedrock:InvokeModel` action and specify the ARN of your DeepSeek model:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "bedrock:InvokeModel"
      ],
      "Effect": "Allow",
      "Resource": "your_DeepSeek_R1_model_ARN"
    }
  ]
}
```
4. Note the Role ARN: After creating the role, note its ARN, which will be used in subsequent steps to configure the connector.
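If you prefer to script these steps, here is a minimal boto3 sketch that creates the role with the trust and permission policies shown above and prints its ARN. The role name, inline policy name, and model ARN are placeholders; substitute your own values.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: lets the OpenSearch service assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "es.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permission policy: allows invoking the DeepSeek model on Bedrock
permission_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["bedrock:InvokeModel"],
            "Effect": "Allow",
            # Placeholder: replace with the ARN of your DeepSeek R1 model
            "Resource": "your_DeepSeek_R1_model_ARN",
        }
    ],
}

# Create the role, then attach the inline permission policy
role = iam.create_role(
    RoleName="my_invoke_bedrock_deepseek_model_role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="my_invoke_bedrock_deepseek_model_role",
    PolicyName="invoke_bedrock_deepseek_model",
    PolicyDocument=json.dumps(permission_policy),
)

# Note this ARN for the connector configuration
print(role["Role"]["Arn"])
```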
For Amazon SageMaker
1. Create an IAM Role: In the AWS IAM console, create another IAM role named something like `my_invoke_sagemaker_deepseek_model_role`.
2. Custom Trust Policy: Similar to the Bedrock setup, add a trust policy to allow OpenSearch to assume this role:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "es.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
3. Permission Policy: Attach a permission policy that allows the role to invoke the DeepSeek model on SageMaker. This policy should include the `sagemaker:InvokeEndpoint` action and specify the ARN of your SageMaker inference endpoint:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sagemaker:InvokeEndpoint"
      ],
      "Effect": "Allow",
      "Resource": [
        "your_sagemaker_model_inference_endpoint_arn"
      ]
    }
  ]
}
```
4. Note the Role ARN: Record the ARN of this role for later use in configuring the connector.
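The SageMaker role can be scripted the same way; only the permission policy differs from the Bedrock sketch above. Again, the role name, inline policy name, and endpoint ARN are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy is identical to the Bedrock case (OpenSearch assumes the role)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "es.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permission policy: allows invoking the SageMaker inference endpoint
permission_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["sagemaker:InvokeEndpoint"],
            "Effect": "Allow",
            # Placeholder: replace with your SageMaker inference endpoint ARN
            "Resource": ["your_sagemaker_model_inference_endpoint_arn"],
        }
    ],
}

role = iam.create_role(
    RoleName="my_invoke_sagemaker_deepseek_model_role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="my_invoke_sagemaker_deepseek_model_role",
    PolicyName="invoke_sagemaker_deepseek_model",
    PolicyDocument=json.dumps(permission_policy),
)

print(role["Role"]["Arn"])  # use this ARN when configuring the connector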
In both cases, ensure that the IAM role is properly configured in your OpenSearch cluster to enable model invocation. In OpenSearch Dashboards, map the IAM role (by its ARN) as a backend role to an OpenSearch security role, typically `ml_full_access`[1][3].
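The mapping is usually done in the Dashboards Security UI, but if your domain uses fine-grained access control with a master user, the Security plugin's role-mapping REST API can do the same thing. A minimal sketch using `requests`; the domain endpoint, credentials, and role ARN are placeholders.

```python
import requests

# Placeholders: your OpenSearch domain endpoint, master-user credentials,
# and the ARN of the IAM role created above
DOMAIN = "https://your-domain-endpoint.us-east-1.es.amazonaws.com"
AUTH = ("master_user", "master_password")
ROLE_ARN = "arn:aws:iam::123456789012:role/my_invoke_bedrock_deepseek_model_role"

# Map the IAM role ARN as a backend role of the ml_full_access security role
resp = requests.put(
    f"{DOMAIN}/_plugins/_security/api/rolesmapping/ml_full_access",
    auth=AUTH,
    json={"backend_roles": [ROLE_ARN]},
)
resp.raise_for_status()
print(resp.json())
```

Note that `PUT` replaces the existing mapping for the role, so include any backend roles already mapped to `ml_full_access` in the list.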
Citations:
[1] https://github.com/opensearch-project/ml-commons/blob/main/docs/tutorials/aws/RAG_with_DeepSeek_R1_model_on_Bedrock.md
[2] https://www.listendata.com/2025/01/how-to-use-deepseek-in-r.html
[3] https://github.com/opensearch-project/ml-commons/blob/main/docs/tutorials/aws/RAG_with_DeepSeek_R1_model_on_Sagemaker.md
[4] https://dev.to/fidelisesq/deepseek-r1-deployment-on-aws-via-terraform-github-actions-32jp
[5] https://crossasyst.com/blog/deepseek-r1-on-aws-bedrock/
[6] https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-permissions.html
[7] https://docs.aws.amazon.com/bedrock/latest/userguide/model-evaluation-security-service-roles.html
[8] https://aws.amazon.com/blogs/machine-learning/deploy-deepseek-r1-distilled-models-on-amazon-sagemaker-using-a-large-model-inference-container/