DeepSeek in Amazon Bedrock: Models, Pricing, APIs, and Setup Guide

Last updated: May 5, 2026

DeepSeek in Amazon Bedrock refers to using DeepSeek foundation models through AWS’s managed Amazon Bedrock service instead of deploying and operating the models yourself. As of this update, AWS documentation lists DeepSeek V3.2, DeepSeek-V3.1, and DeepSeek-R1 as available DeepSeek models in Amazon Bedrock. DeepSeek V3.2 and DeepSeek-V3.1 support Invoke, Converse, and Chat Completions, while DeepSeek-R1 supports Invoke and Converse but not Chat Completions. Always verify the exact model ID, region, endpoint, and price in your Amazon Bedrock console before production deployment.

What Is DeepSeek in Amazon Bedrock?

Amazon Bedrock is a fully managed AWS service that provides secure, enterprise-grade access to foundation models so teams can build and scale generative AI applications without managing model infrastructure directly.

DeepSeek in Amazon Bedrock gives AWS teams access to DeepSeek models through Bedrock’s APIs, governance controls, security features, service tiers, and integration points such as IAM, CloudTrail, Guardrails, Agents, Flows, and model evaluation where supported. The main reason DeepSeek matters in Bedrock is its usefulness for workloads that require reasoning, code generation, math, technical analysis, and multi-step problem solving.

This is not only about calling a model. For production teams, the value is being able to run DeepSeek inside an AWS operating model: access control, regional deployment decisions, cost monitoring, security review, and responsible AI controls.

Which DeepSeek Models Are Available in Amazon Bedrock?

AWS currently documents three DeepSeek models in Amazon Bedrock: DeepSeek V3.2, DeepSeek-V3.1, and DeepSeek-R1. AWS describes V3.2 as a mixture-of-experts model with improved reasoning, coding, and instruction following; V3.1 as a 685B parameter mixture-of-experts model for coding, math, and general reasoning; and R1 as a reasoning model that uses chain-of-thought for complex math, coding, and logic problems.

| Model | Best for | Context window | Max output tokens | Reasoning support | API support | Model ID / endpoint notes | Recommended use case |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DeepSeek V3.2 | Reasoning, coding, instruction following, general enterprise assistants | 164K tokens | 8K | Strong reasoning behavior; AWS model card describes improved reasoning | Invoke, Converse, Chat Completions | deepseek.v3.2 for both bedrock-runtime and bedrock-mantle | New applications that need long context and OpenAI-compatible Chat Completions |
| DeepSeek-V3.1 | Coding, math, general reasoning, hybrid thinking/non-thinking usage | 128K tokens | 8K | Hybrid thinking and direct-answer style; 685B parameter MoE | Invoke, Converse, Chat Completions | deepseek.v3-v1:0 on bedrock-runtime; deepseek.v3.1 on bedrock-mantle | Engineering assistants, architecture reasoning, data analysis, agentic workflows |
| DeepSeek-R1 | Math, code, logic, chain-of-thought-style reasoning workloads | 128K tokens | 8K | Reasoning supported | Invoke, Converse; no Chat Completions | deepseek.r1-v1:0; geo inference profile ID may be us.deepseek.r1-v1:0 | Deep reasoning tasks where Chat Completions compatibility is not required |

DeepSeek V3.2 has a 164K token context window, 8K max output, text input/output, support for Invoke, Converse, and Chat Completions, and the model ID deepseek.v3.2 for both bedrock-runtime and bedrock-mantle.

DeepSeek-V3.1 has a 128K token context window, 8K max output, support for Invoke, Converse, and Chat Completions, and different model IDs depending on endpoint: deepseek.v3-v1:0 for bedrock-runtime and deepseek.v3.1 for bedrock-mantle.

DeepSeek-R1 has a 128K token context window, 8K max output, reasoning support, and supports Invoke and Converse; AWS lists deepseek.r1-v1:0 for bedrock-runtime and us.deepseek.r1-v1:0 as the US geo inference ID.

Important: Model availability varies by AWS Region. The R1 model card shows geo cross-Region inference across US Regions, while V3.2 and V3.1 show broader in-Region availability. Always confirm the model in your Bedrock console and region before coding against a model ID.
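Because availability varies by region, it helps to check programmatically before hard-coding a model ID. The `list_foundation_models` call on the `bedrock` (control-plane) client is a documented API; the small filter helper below is illustrative, and the `providerName` matching is an assumption you should confirm against your actual response.

```python
def deepseek_model_ids(response: dict) -> list[str]:
    """Pick DeepSeek model IDs out of a ListFoundationModels response dict.

    With live credentials you would obtain `response` via:
        boto3.client("bedrock", region_name=...).list_foundation_models()
    Matching on providerName is an assumption; inspect your real response.
    """
    return [
        summary["modelId"]
        for summary in response.get("modelSummaries", [])
        if summary.get("providerName", "").lower().startswith("deepseek")
    ]
```

Run this once per target region during setup so your deployment scripts fail fast when a model is not offered where you expect it.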

Why Use DeepSeek Through Amazon Bedrock Instead of Calling DeepSeek Directly?

Using DeepSeek through Amazon Bedrock is attractive when your organization already runs workloads on AWS and needs centralized governance, security, observability, and billing. Bedrock also gives teams a unified way to experiment with multiple foundation models and choose the best fit for performance, cost, and deployment requirements.

Key advantages include:

| Benefit | Why it matters |
| --- | --- |
| Unified AWS API | Developers can integrate DeepSeek through familiar Bedrock Runtime APIs. |
| Managed or serverless access | Fully managed Bedrock models reduce infrastructure work compared with self-hosting. |
| IAM integration | Access can be controlled using AWS identity and permission boundaries. |
| Guardrails | Teams can apply configurable safety and privacy controls to prompts and responses. |
| Model evaluation | AWS provides model and RAG evaluation workflows to compare performance and quality. |
| Private networking | AWS PrivateLink can keep traffic private between Amazon VPC and Amazon Bedrock. |
| Provider isolation | AWS states that inputs and outputs are not shared with model providers. |
| Centralized governance | CloudTrail, CloudWatch, cost controls, and AWS account policies fit enterprise workflows. |

AWS states that Amazon Bedrock content is not used to improve base models, is not shared with model providers, and is encrypted in transit and at rest; AWS also notes that PrivateLink can be used to establish private connectivity from VPC to Bedrock.

Balanced note: Amazon Bedrock improves enterprise controls, but it does not remove the need for security review, data classification, least-privilege IAM, prompt testing, model evaluation, human review for high-risk use cases, monitoring, and cost governance.

DeepSeek in Amazon Bedrock: API Options Explained

AWS documents the Amazon Bedrock API compatibility family as including Invoke, Converse, ConverseStream, and OpenAI-compatible APIs such as Chat Completions where supported.

| API option | Best for | DeepSeek support | Notes |
| --- | --- | --- | --- |
| InvokeModel / Invoke API | Low-level model invocation, direct request body control | V3.2, V3.1, R1 | Useful when you want direct model-specific payload control. |
| Converse API | Multi-turn chat applications | V3.2, V3.1, R1 | Recommended for model-agnostic chat apps. AWS describes Converse as a unified interface for synchronous multi-turn conversations. |
| ConverseStream | Streaming responses | Supported where model supports response streaming | Use when users need incremental output. |
| Chat Completions API via bedrock-mantle | OpenAI-compatible integrations | V3.2 and V3.1; not R1 | Best when migrating existing OpenAI SDK-based applications. |
| Bedrock console playground | Testing prompts before code | Supported for available models | Good for quick validation and stakeholder demos. |

DeepSeek V3.2 and V3.1 support Chat Completions through bedrock-mantle, but DeepSeek-R1 does not. AWS’s API compatibility table shows V3.2 and V3.1 as supporting Invoke, Converse, and Chat Completions, while R1 supports Invoke and Converse only.
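For streaming, `converse_stream` returns an event stream under `response["stream"]` whose `contentBlockDelta` events carry incremental text. The helper below shows one way to assemble those deltas; the event shape matches the documented Converse stream format, but verify it against the responses you actually receive.

```python
def collect_stream_text(stream) -> str:
    """Concatenate text deltas from a Bedrock ConverseStream event stream.

    `stream` is the iterable at response["stream"] after calling
    bedrock_runtime.converse_stream(modelId=..., messages=...).
    Non-text events (messageStart, messageStop, metadata) are skipped.
    """
    chunks = []
    for event in stream:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            chunks.append(delta["text"])
    return "".join(chunks)
```

In a real UI you would print or flush each delta as it arrives rather than joining at the end.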

How to Get Started with DeepSeek in Amazon Bedrock

Prerequisites

You need:

  • An AWS account.
  • Amazon Bedrock access in a region where your selected DeepSeek model is available.
  • IAM permissions for Bedrock and Bedrock Runtime.
  • Python and boto3.
  • AWS CLI, optional but useful.
  • Standard AWS credentials for bedrock-runtime, or a Bedrock API key for supported API-key workflows.
  • Amazon Bedrock Guardrails, optional but recommended.

AWS documents Bedrock API keys, including short-term and long-term keys, but recommends restricting API keys to exploration and switching to short-term credentials for applications with stronger security requirements.

Step-by-step setup

  1. Open the Amazon Bedrock console.
  2. Check the model catalog or model cards for DeepSeek.
  3. Choose the model and AWS Region.
  4. Test a prompt in the Bedrock Playground.
  5. Invoke the model with Python using boto3 or OpenAI-compatible Chat Completions where supported.
  6. Add Guardrails for harmful content, denied topics, sensitive information, and grounding checks where relevant.
  7. Monitor input tokens, output tokens, latency, errors, and cost.

AWS’s DeepSeek-R1 launch post shows a console workflow that includes selecting DeepSeek-R1 in the Playground and using “View API request” to access AWS CLI and SDK examples.
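Step 7 above is easier if you pull token counts from each response as you go. Converse responses include a `usage` block with `inputTokens`, `outputTokens`, and `totalTokens`; the helper below normalizes it for logging, and is a sketch rather than an official utility.

```python
def converse_usage(response: dict) -> dict:
    """Pull token counts out of a Converse API response for monitoring.

    The Converse response carries a `usage` block; log these values per
    request to track cost and spot unexpectedly long prompts or outputs.
    """
    usage = response.get("usage", {})
    return {
        "input_tokens": usage.get("inputTokens", 0),
        "output_tokens": usage.get("outputTokens", 0),
        "total_tokens": usage.get("totalTokens", 0),
    }
```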

Python Example: Invoke DeepSeek-R1 with the Converse API

Use this example for DeepSeek-R1 in Amazon Bedrock through the bedrock-runtime endpoint.

```python
import boto3
from botocore.exceptions import ClientError, BotoCoreError

REGION = "us-west-2"

# For DeepSeek-R1, check your Bedrock console.
# Some regions/use cases require the cross-Region inference profile ID:
# "us.deepseek.r1-v1:0"
MODEL_ID = "us.deepseek.r1-v1:0"

client = boto3.client("bedrock-runtime", region_name=REGION)

messages = [
    {
        "role": "user",
        "content": [
            {
                "text": (
                    "Explain the trade-offs between using Amazon Bedrock "
                    "managed models and self-hosting an open-weight model."
                )
            }
        ],
    }
]

try:
    response = client.converse(
        modelId=MODEL_ID,
        messages=messages,
        inferenceConfig={
            "maxTokens": 700,
            "temperature": 0.3,
            "topP": 0.9,
        },
    )

    output_text = response["output"]["message"]["content"][0]["text"]
    print(output_text)

except (ClientError, BotoCoreError, KeyError) as error:
    print(f"Failed to invoke model {MODEL_ID}: {error}")
    raise
```

Important: For DeepSeek-R1, verify whether your region requires deepseek.r1-v1:0 or a cross-Region inference profile such as us.deepseek.r1-v1:0. AWS’s R1 model card lists both the in-region model ID and the US geo inference ID.
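Reasoning models may return chain-of-thought content alongside the final answer. The sketch below assumes reasoning arrives as `reasoningContent` blocks with a `reasoningText` payload next to plain `text` blocks, which matches how AWS documents reasoning output in Converse responses; treat the exact field names as an assumption and check them against your actual DeepSeek-R1 responses.

```python
def split_r1_content(message: dict) -> tuple[str, str]:
    """Separate reasoning text from answer text in a Converse message dict.

    `message` is response["output"]["message"]. Field names
    (reasoningContent, reasoningText) are assumptions; verify them
    against a real response before relying on this in production.
    """
    reasoning, answer = [], []
    for block in message.get("content", []):
        if "reasoningContent" in block:
            reasoning.append(
                block["reasoningContent"].get("reasoningText", {}).get("text", "")
            )
        elif "text" in block:
            answer.append(block["text"])
    return "".join(reasoning), "".join(answer)
```

Separating the two lets you log or hide the reasoning trace while showing users only the final answer.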

Python Example: Invoke DeepSeek V3.2 with Amazon Bedrock

DeepSeek V3.2 supports both bedrock-runtime and bedrock-mantle. AWS lists deepseek.v3.2 as the model ID for both endpoints.

Option A: Boto3 Converse API

```python
import boto3
from botocore.exceptions import ClientError, BotoCoreError

REGION = "us-east-1"
MODEL_ID = "deepseek.v3.2"

client = boto3.client("bedrock-runtime", region_name=REGION)

messages = [
    {
        "role": "user",
        "content": [
            {
                "text": (
                    "Create a production readiness checklist for deploying "
                    "a customer support chatbot on Amazon Bedrock."
                )
            }
        ],
    }
]

try:
    response = client.converse(
        modelId=MODEL_ID,
        messages=messages,
        inferenceConfig={
            "maxTokens": 900,
            "temperature": 0.2,
            "topP": 0.95,
        },
    )

    print(response["output"]["message"]["content"][0]["text"])

except (ClientError, BotoCoreError, KeyError) as error:
    print(f"Failed to invoke model {MODEL_ID}: {error}")
    raise
```

Option B: OpenAI-compatible Chat Completions via bedrock-mantle

Use this when you want to adapt an existing OpenAI SDK integration.

First set the credentials and endpoint in your shell:

```shell
export OPENAI_API_KEY="<your-bedrock-api-key>"
export OPENAI_BASE_URL="https://bedrock-mantle.us-east-1.api.aws/v1"
```

Then call the model with the OpenAI SDK:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment

response = client.chat.completions.create(
    model="deepseek.v3.2",
    messages=[
        {
            "role": "user",
            "content": "Explain how Amazon Bedrock Guardrails can reduce AI application risk.",
        }
    ],
)

print(response.choices[0].message.content)
```

AWS’s V3.2 model card includes Chat Completions sample setup with OPENAI_API_KEY, OPENAI_BASE_URL, the OpenAI SDK, and model="deepseek.v3.2".

AWS CLI Example

Replace the region, model ID, prompt, and output filename.

```shell
aws bedrock-runtime invoke-model \
  --region us-east-1 \
  --model-id deepseek.v3.2 \
  --cli-binary-format raw-in-base64-out \
  --body '{
    "messages": [
      {
        "role": "user",
        "content": "Summarize the security benefits of using DeepSeek through Amazon Bedrock."
      }
    ],
    "max_tokens": 700
  }' \
  deepseek-output.json
```

For DeepSeek-R1, AWS’s launch post shows an AWS CLI pattern using us.deepseek.r1-v1:0 as the model ID for cross-Region inference.

DeepSeek in Amazon Bedrock Pricing

DeepSeek Bedrock pricing depends on model, region, and service tier. For DeepSeek V3.2, AWS’s pricing page currently lists $0.62 per 1M input tokens and $1.85 per 1M output tokens in US East and US West regions under Standard on-demand pricing; other regions have different prices.

| Pricing factor | What to check | Why it matters |
| --- | --- | --- |
| Model | DeepSeek V3.2, V3.1, or R1 | Pricing and API support can differ by model. |
| Region | Example: US East, US West, Tokyo, London, Sydney | AWS pricing varies by region. |
| Service tier | Standard, Priority, Flex, Reserved where supported | V3.2 supports Standard, Priority, and Flex; V3.1 and R1 have more limited tier support in current AWS model cards. |
| Token mix | Input tokens vs output tokens | Output tokens often cost more. |
| Deployment path | Fully managed Bedrock vs Marketplace vs SageMaker vs EC2 | Marketplace, SageMaker, and EC2 paths may include instance or infrastructure charges. |

Practical cost example

Assume DeepSeek V3.2 in a US Region at the current Standard on-demand price:

  • Input tokens: 100,000
  • Output tokens: 25,000
  • Input cost: 100,000 / 1,000,000 × $0.62 = $0.062
  • Output cost: 25,000 / 1,000,000 × $1.85 = $0.04625
  • Estimated total: $0.10825

Cost note: This is only a model-token estimate. It excludes other AWS service charges, logs, storage, networking, Guardrails, Knowledge Bases, Marketplace endpoint costs, SageMaker endpoints, or EC2 infrastructure. Check the official Amazon Bedrock pricing page before production use.
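The arithmetic above can be wrapped in a small helper for budget estimates. The per-token prices are the US-region Standard on-demand figures cited above; re-check the pricing page before relying on them.

```python
# Standard on-demand US-region prices cited above; re-check the pricing page.
PRICE_PER_M_INPUT = 0.62
PRICE_PER_M_OUTPUT = 1.85


def estimate_token_cost(input_tokens: int, output_tokens: int) -> float:
    """Model-token cost only; excludes Guardrails, storage, networking, etc."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_M_INPUT
        + output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT
    )
```

For the example workload, `estimate_token_cost(100_000, 25_000)` reproduces the $0.10825 figure above.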

Security, Privacy, and Compliance Considerations

AWS states that Amazon Bedrock does not store or log prompts and completions, does not use prompts and completions to train AWS models, and does not distribute them to third parties. AWS also states that model providers do not have access to Bedrock model deployment accounts.

For production use, apply these controls:

  • Use IAM least privilege.
  • Use short-term credentials where practical.
  • Enable CloudTrail for auditability.
  • Use AWS PrivateLink for private connectivity where required.
  • Encrypt data in transit and at rest.
  • Avoid secrets in prompts, tags, names, metadata, and logs.
  • Classify data before sending it to any model.
  • Use Guardrails for sensitive information and harmful content.
  • Run model evaluation before production release.
  • Monitor latency, token usage, refusal behavior, hallucinations, and cost.

AWS recommends protecting credentials, using IAM or IAM Identity Center, using SSL/TLS, setting up CloudTrail logging, and avoiding confidential information in tags and free-form name fields.
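A least-privilege policy for DeepSeek can scope invocation to a single model ARN. The sketch below uses the documented foundation-model ARN format (note the empty account field) and the `bedrock:InvokeModel` / `bedrock:InvokeModelWithResponseStream` actions; confirm both against current IAM documentation for your setup.

```python
import json


def invoke_only_policy(region: str, model_id: str) -> str:
    """Build a least-privilege IAM policy allowing invocation of one model.

    Foundation-model ARNs have an empty account field; confirm the ARN
    format and action names against current Bedrock IAM documentation.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "bedrock:InvokeModel",
                    "bedrock:InvokeModelWithResponseStream",
                ],
                "Resource": f"arn:aws:bedrock:{region}::foundation-model/{model_id}",
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

Attach the generated policy to the role your application assumes, not to individual users.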

Using Amazon Bedrock Guardrails with DeepSeek

Amazon Bedrock Guardrails provides configurable safeguards to help detect and filter undesirable content and protect sensitive information in model inputs and responses. AWS documentation lists major Guardrails components including content filters, denied topics, word filters, sensitive information filters, contextual grounding checks, and Automated Reasoning checks.

Use Guardrails with DeepSeek when you need:

  • Content filtering for harmful categories.
  • Denied topics for business-specific exclusions.
  • Sensitive information filters for PII masking or blocking.
  • Word filters for prohibited phrases.
  • Contextual grounding checks for RAG-style responses.
  • Automated Reasoning checks for policy-based validation.

AWS’s DeepSeek Guardrails blog recommends security controls for DeepSeek-R1 deployments and describes using Guardrails with DeepSeek models across Bedrock Marketplace, SageMaker JumpStart, and Custom Model Import patterns.

Best practice: Test Guardrails before production with real prompts, adversarial prompts, multilingual prompts, edge cases, and expected false positives. A guardrail configuration that is too strict can block useful responses; a configuration that is too loose can miss policy violations.
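Attaching a guardrail to a Converse call is a matter of passing a `guardrailConfig` block. The builder below shows the documented shape of that parameter; the guardrail ID `gr-abc123` in the usage example is hypothetical, and you would substitute the identifier and version from your own Guardrails configuration.

```python
def converse_with_guardrail_kwargs(
    model_id: str, messages: list, guardrail_id: str, guardrail_version: str
) -> dict:
    """Assemble keyword arguments for bedrock-runtime converse() with a guardrail.

    Pass the result as client.converse(**kwargs). guardrail_id and
    guardrail_version come from your own Guardrails configuration;
    trace="enabled" returns details on what the guardrail intervened on.
    """
    return {
        "modelId": model_id,
        "messages": messages,
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
            "trace": "enabled",
        },
    }
```

Keeping the guardrail parameters in one builder makes it easy to enforce that every production call path includes them.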

Deployment Options: Bedrock Managed Models vs Marketplace vs Custom Model Import vs SageMaker

| Option | Best for | Infrastructure management | Pricing model | Pros | Cons | Recommended audience |
| --- | --- | --- | --- | --- | --- | --- |
| Fully managed DeepSeek model in Amazon Bedrock | Most production apps that want DeepSeek through AWS APIs | Minimal | Token-based / Bedrock tier pricing | Fastest path, unified APIs, IAM, Guardrails support where available | Model and region availability constraints | App developers, cloud architects, platform teams |
| Amazon Bedrock Marketplace | Specialized models and endpoint-level deployment control | Managed endpoints, but more configuration | Marketplace and endpoint charges may apply | Larger model catalog, deploy selected models on managed endpoints | More operational and cost planning than serverless models | ML platform teams |
| Amazon Bedrock Custom Model Import | Distilled or customized compatible models | Serverless imported model experience | Custom Model Unit / import-related pricing | Use external customizations with Bedrock tools | Architecture and model support limitations | Teams with custom or distilled models |
| SageMaker JumpStart | ML teams needing deeper deployment control | More control and more responsibility | Endpoint infrastructure pricing | Strong ML workflow integration | Requires endpoint, quota, and scaling management | ML engineers and research teams |
| EC2 with Trainium/Inferentia for distilled models | Maximum control and optimization | High | Instance, storage, networking | Full infrastructure control | Highest operational burden | Advanced infra/ML platform teams |

Amazon Bedrock Marketplace lets developers discover, subscribe to, and deploy over 100 models on managed endpoints, while still accessing compatible models through Bedrock unified APIs and tools.

Amazon Bedrock Custom Model Import supports distilled Llama versions of DeepSeek-R1, including DeepSeek-R1-Distill-Llama-8B and DeepSeek-R1-Distill-Llama-70B, imported from Amazon S3 or an Amazon SageMaker AI model repository into a managed serverless environment.

AWS also announced DeepSeek-R1 and distilled models through Bedrock Marketplace and SageMaker JumpStart, with distilled versions ranging from 1.5B to 70B parameters.

Best Use Cases for DeepSeek in Amazon Bedrock

| Use case | Why DeepSeek fits | Likely model | Watch out for |
| --- | --- | --- | --- |
| Code generation and debugging | DeepSeek models are positioned strongly for coding and reasoning | V3.2 or V3.1 | Test for insecure code, dependency hallucinations, and licensing concerns. |
| Technical architecture reasoning | Long-form trade-off analysis benefits from reasoning models | V3.2, V3.1, or R1 | Validate recommendations against your architecture standards. |
| Math and quantitative analysis | R1 is designed for complex math and logic tasks | R1 or V3.2 | Do not use without verification for financial, legal, or safety-critical decisions. |
| Data analysis | Good for explaining patterns and generating analytical steps | V3.2 or V3.1 | Keep raw sensitive data out unless approved by policy. |
| Business decision support | Useful for scenario analysis and structured comparisons | V3.2 | Require human approval for material business decisions. |
| Agentic workflows | V3.1 and V3.2 support client-side tool calling; server-side tool calling is not supported in the cited model cards | V3.1 or V3.2 | Test tool boundaries, permissions, and prompt injection resistance. |
| Multilingual enterprise apps | AWS’s V3.1 launch post highlights broad multilingual capability | V3.1 | Evaluate quality in your actual target languages. |
| RAG applications | DeepSeek can synthesize retrieved material when paired with retrieval | V3.2 or V3.1 | Use grounding checks and citations to reduce hallucinations. |
| Internal developer assistants | Good fit for code review, explanation, and troubleshooting | V3.2 or V3.1 | Restrict repository, secret, and credential exposure. |

Common Limitations and Troubleshooting

| Issue | Likely cause | Fix |
| --- | --- | --- |
| Model not visible | Region, access, or account permission issue | Check the Bedrock console, model card, and IAM permissions. |
| AccessDeniedException | IAM policy missing Bedrock Runtime permission | Add least-privilege actions for the specific model and region. |
| Wrong model ID | Endpoint-specific IDs differ | Use deepseek.v3.2, deepseek.v3-v1:0, deepseek.v3.1, or deepseek.r1-v1:0 only after checking the console. |
| R1 invocation fails | Cross-Region inference profile may be needed | Try the documented geo inference ID where applicable, such as us.deepseek.r1-v1:0. |
| Chat Completions fails | Model does not support Chat Completions | Use V3.2 or V3.1, not R1, for Chat Completions. |
| High cost | Long prompts or verbose outputs | Reduce context, cap maxTokens, cache common responses, and monitor usage. |
| High latency | Large context or complex reasoning | Use smaller prompts, streaming, or a faster model for low-latency tasks. |
| Guardrails block expected content | Configuration too strict | Review blocked categories, denied topics, and thresholds. |
| Tool calling mismatch | Server-side tool calling not supported | Use client-side orchestration where supported and validate outputs. |
| Marketplace cost surprise | Endpoint or infrastructure billing | Review Marketplace subscription and endpoint configuration. |
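Several of these issues (throttling under load, transient service errors) are best handled with retries. The sketch below implements exponential backoff with jitter around any zero-argument Bedrock call; the retryable error codes listed are common botocore codes, but tune the set to what you actually observe.

```python
import random
import time

# Common retryable botocore error codes; extend to match observed failures.
RETRYABLE_CODES = {"ThrottlingException", "ServiceUnavailableException", "ModelNotReadyException"}


def call_with_backoff(fn, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry a zero-argument Bedrock call on throttling-style errors.

    Works with botocore ClientError, whose code lives at
    error.response["Error"]["Code"]; non-retryable errors re-raise at once.
    Usage: call_with_backoff(lambda: client.converse(modelId=..., messages=...))
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as error:  # in real code, catch botocore ClientError
            code = getattr(error, "response", {}).get("Error", {}).get("Code")
            if code not in RETRYABLE_CODES or attempt == max_attempts:
                raise
            # Exponential backoff with jitter before the next attempt.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay))
```

boto3 also has built-in retry configuration via `botocore.config.Config(retries=...)`, which may be preferable when you do not need custom behavior.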

DeepSeek in Amazon Bedrock vs Other AWS AI Model Options

DeepSeek is one model family among many available through Amazon Bedrock. AWS’s model catalog includes models from multiple providers, including Amazon, Anthropic, Meta, Qwen, and others.

| Option | When it may fit better than DeepSeek |
| --- | --- |
| Amazon Nova | When you want Amazon-native models and tight AWS ecosystem alignment. |
| Anthropic Claude | When your application prioritizes long-form writing, instruction following, or existing Claude-specific workflows. |
| Meta Llama | When you want open-weight model options and broad ecosystem support. |
| Qwen or other open-weight models | When coding, multilingual, or open model deployment trade-offs fit your use case. |
| DeepSeek | When reasoning, coding, math, and technical analysis are central and the available APIs, regions, and pricing fit your requirements. |

Do not assume DeepSeek is always better. Model choice depends on reasoning quality, latency, cost, context window, tool support, language support, security requirements, and deployment path.

Best Practices Before Production Deployment

| Checklist item | Status |
| --- | --- |
| Verify model name, model ID, endpoint, and AWS Region. | |
| Confirm API support: Invoke, Converse, streaming, or Chat Completions. | |
| Use least-privilege IAM policies. | |
| Avoid root credentials and long-term static credentials. | |
| Add Guardrails for harmful content, sensitive data, and denied topics. | |
| Run model evaluation against your real prompts and datasets. | |
| Red-team prompt injection and sensitive workflows. | |
| Log usage, latency, errors, and token counts. | |
| Set budget alerts and cost dashboards. | |
| Validate outputs for hallucination, bias, and unsafe recommendations. | |
| Document fallback models and failure behavior. | |
| Re-check pricing, model IDs, and region availability before launch. | |

Amazon Bedrock evaluations can assess models, knowledge bases, and RAG sources, including metrics such as robustness and correctness for retrieval and generation workflows.


FAQ About DeepSeek in Amazon Bedrock

Is DeepSeek available in Amazon Bedrock?

Yes. AWS documentation currently lists DeepSeek V3.2, DeepSeek-V3.1, and DeepSeek-R1 as available DeepSeek models in Amazon Bedrock.

Which DeepSeek models are available in Amazon Bedrock?

The documented models are DeepSeek V3.2, DeepSeek-V3.1, and DeepSeek-R1. Availability can vary by region, so verify in the Bedrock console before deployment.

What is the model ID for DeepSeek-R1 in Bedrock?

AWS lists deepseek.r1-v1:0 as the Bedrock Runtime model ID and us.deepseek.r1-v1:0 as the US geo inference ID.

What is the model ID for DeepSeek V3.2 in Bedrock?

AWS lists deepseek.v3.2 for both bedrock-runtime and bedrock-mantle.

Does DeepSeek in Bedrock support the Converse API?

Yes. AWS’s API compatibility table shows that DeepSeek V3.2, DeepSeek-V3.1, and DeepSeek-R1 support the Converse API.

Does DeepSeek in Bedrock support Chat Completions?

DeepSeek V3.2 and DeepSeek-V3.1 support Chat Completions. DeepSeek-R1 does not support Chat Completions in the current AWS API compatibility table.

How much does DeepSeek in Amazon Bedrock cost?

Pricing varies by model, region, and service tier. AWS currently lists DeepSeek V3.2 in US East and US West regions at $0.62 per 1M input tokens and $1.85 per 1M output tokens under Standard on-demand pricing.

Is DeepSeek in Amazon Bedrock serverless?

Fully managed DeepSeek models in Amazon Bedrock are designed to reduce infrastructure management. AWS announced DeepSeek-R1 as a fully managed serverless model in Amazon Bedrock, and DeepSeek-V3.1 as a fully managed foundation model.

Is my data shared with DeepSeek when I use Amazon Bedrock?

AWS states that Amazon Bedrock does not share user inputs and model outputs with model providers and does not use them to train AWS or third-party models.

Can I use Amazon Bedrock Guardrails with DeepSeek?

Yes, where supported by the endpoint and deployment path. AWS’s model cards list Guardrails support for DeepSeek models through bedrock-runtime, and AWS has published guidance on protecting DeepSeek deployments with Bedrock Guardrails.

Should I use Bedrock, SageMaker JumpStart, or Custom Model Import?

Use fully managed Bedrock models for the simplest application integration, SageMaker JumpStart when your ML team wants endpoint-level control, and Custom Model Import when you have compatible distilled or customized models to bring into Bedrock.

What are the main limitations of DeepSeek in Bedrock?

The main limitations are regional availability, endpoint-specific model IDs, different API support per model, possible cross-Region inference requirements for R1, pricing variation, and the need to test safety, latency, cost, and output quality before production.

Conclusion

DeepSeek in Amazon Bedrock is a strong option for teams that want DeepSeek’s reasoning and coding capabilities inside AWS’s enterprise governance model. The best choice depends on the model version, endpoint, API, region, price, context window, Guardrails requirements, and production-readiness standards.

For most developers, start with DeepSeek V3.2 if you need the newest documented DeepSeek model, long context, Converse, and Chat Completions compatibility. Use DeepSeek-V3.1 when its hybrid thinking/direct-answer behavior fits your workload. Use DeepSeek-R1 when your priority is reasoning and you do not need Chat Completions.

Before deploying, verify model IDs, pricing, region availability, and API support in the Amazon Bedrock console and official AWS documentation.