Use the OpenAI SDK with DeepSeek: Python, Node.js, Streaming, JSON Output & Tool Calls

The DeepSeek API uses an API format compatible with OpenAI, so many existing apps that already use the OpenAI SDK can call DeepSeek by changing only the API key, the base URL, and the model name. According to the official DeepSeek quick start, the standard DeepSeek API base URL is https://api.deepseek.com, and https://api.deepseek.com/v1 may also be used for OpenAI compatibility; the v1 path does not indicate a model version. Official DeepSeek API quick start

Independent guide: Chat-Deep.ai is an independent guide and is not the official DeepSeek website. For API keys, billing, official limits, account management, and production-critical API details, use DeepSeek’s official platform and documentation.

Last verified: April 20, 2026. Model aliases, pricing, output limits, feature support, and endpoint behavior can change. Always check the official DeepSeek pages before deploying production code: API docs, Models & Pricing, and Change Log.

Quick answer: what do you change?

For a basic OpenAI SDK to DeepSeek migration, keep the OpenAI SDK package, but initialize the client with a DeepSeek API key and DeepSeek base URL, then use a DeepSeek model alias such as deepseek-chat or deepseek-reasoner.
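
As a minimal sketch, the whole change for a basic Python app can be this small (the environment variable name is your choice; deepseek-chat is one of the current aliases):

import os
from openai import OpenAI

# Before: client = OpenAI()  # reads OPENAI_API_KEY and uses the OpenAI endpoint
# After: point the same SDK at DeepSeek.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # a DeepSeek key, not an OpenAI key
    base_url="https://api.deepseek.com",     # DeepSeek endpoint
)
# Then use a DeepSeek model alias, for example model="deepseek-chat".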

Item | OpenAI SDK app | DeepSeek API version | Official source
API key | Usually OPENAI_API_KEY | Use a DeepSeek API key from the official DeepSeek platform. In examples below, it is stored as DEEPSEEK_API_KEY. | DeepSeek authentication
Base URL | OpenAI default endpoint | https://api.deepseek.com. DeepSeek also documents https://api.deepseek.com/v1 for OpenAI compatibility. | DeepSeek quick start
Model name | OpenAI model name | Use a current DeepSeek model alias, commonly deepseek-chat or deepseek-reasoner. | Models & Pricing
SDK package | openai | Keep the official OpenAI SDK package if you are using the OpenAI-compatible DeepSeek API format. | openai-python / openai-node
Request format | chat.completions.create for Chat Completions apps | DeepSeek’s official examples use client.chat.completions.create with the OpenAI SDK. | DeepSeek quick start
Streaming | stream: true | DeepSeek documents streaming by setting stream to true; streaming sends SSE chunks and terminates with data: [DONE]. | Create Chat Completion
Tools / function calling | tools, tool_choice | DeepSeek supports tool calls. Current official docs describe function tools for tool calling. | Tool Calls
JSON output | response_format | Use response_format: {"type": "json_object"} and explicitly ask for JSON in the prompt. | JSON Output

Before you migrate: OpenAI SDK vs DeepSeek API

The main reason this migration is practical is that DeepSeek officially documents an OpenAI-compatible API format and provides Python and Node.js examples using the OpenAI SDK. This does not mean every OpenAI feature, model parameter, endpoint, or product surface is supported by DeepSeek. Treat the OpenAI SDK as the client library and DeepSeek as the API provider you are targeting. Official DeepSeek quick start

Area | What stays similar | What you must verify for DeepSeek
OpenAI SDK package | You can keep the official OpenAI Python or JavaScript/TypeScript SDK package. | Use the DeepSeek base URL and DeepSeek API key. DeepSeek’s official examples show this pattern.
DeepSeek-compatible endpoint | Chat Completions request shape is similar to OpenAI-compatible clients. | Use https://api.deepseek.com or the compatibility path documented by DeepSeek.
Model aliases | You still pass a model string. | Use DeepSeek aliases such as deepseek-chat and deepseek-reasoner; confirm current aliases on the official Models & Pricing page.
Chat completions | Use client.chat.completions.create in Python or Node.js. | Request and response details should be checked against DeepSeek’s Chat Completion reference.
Streaming | Use stream: true. | DeepSeek documents streaming support through the Chat Completion API. If parsing raw HTTP responses, handle SSE chunks safely.
Tool calls | Use a tools array and read tool_calls from the assistant message. | DeepSeek documents tool calls. Thinking-mode tool calls require checking the current official Thinking Mode guide.
JSON output | Use response_format. | DeepSeek requires {"type": "json_object"}, the word “json” in the prompt, an example JSON format, and a reasonable max_tokens.
Reasoning model considerations | You still call Chat Completions. | deepseek-reasoner can return reasoning_content as well as final content. Review DeepSeek’s official reasoning docs before displaying or reusing reasoning fields.
Pricing and token accounting | Costs are based on tokens. | DeepSeek’s pricing page says prices are listed per 1M tokens and may vary. Check the official page before estimating production costs.

Compatibility warning: Do not call DeepSeek a complete drop-in replacement for OpenAI. The official DeepSeek docs confirm OpenAI-compatible API format, but model behavior, feature support, parameters, rate behavior, billing, and response fields can differ.

DeepSeek base_url explained

The base_url is the root API endpoint that the SDK sends requests to. In the official DeepSeek Python example, the OpenAI client is initialized with base_url="https://api.deepseek.com". In the official DeepSeek Node.js example, the client uses baseURL: "https://api.deepseek.com". Official DeepSeek quick start

Environment | SDK option | Value | Example
Python | base_url | https://api.deepseek.com | OpenAI(api_key=..., base_url="https://api.deepseek.com")
Node.js / TypeScript | baseURL | https://api.deepseek.com | new OpenAI({ apiKey: ..., baseURL: "https://api.deepseek.com" })
OpenAI compatibility path | base_url or baseURL | https://api.deepseek.com/v1 | DeepSeek documents this as an OpenAI compatibility option. The v1 path is not a model version.

Common base URL mistakes

  • Using a DeepSeek API key with the default OpenAI endpoint.
  • Using an OpenAI API key with the DeepSeek base URL.
  • Using base_url in Node.js instead of baseURL.
  • Using baseURL in Python instead of base_url.
  • Assuming /v1 means the DeepSeek model version. DeepSeek’s official quick start says it does not.
  • Copying a temporary model endpoint from a release note without checking whether it has expired.

Practical recommendation: Use https://api.deepseek.com unless a specific official DeepSeek page instructs you to use another official endpoint for a specific feature or temporary model.

Python example: OpenAI SDK with DeepSeek

This example follows the official DeepSeek pattern: install the OpenAI SDK, initialize OpenAI with a DeepSeek API key and DeepSeek base URL, then call chat.completions.create with deepseek-chat. The official OpenAI Python library is OpenAI’s REST API client for Python applications, and DeepSeek’s quick start shows it being used with DeepSeek. Official openai-python / DeepSeek quick start

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful developer assistant."},
        {"role": "user", "content": "Explain how to call DeepSeek with the OpenAI SDK in one paragraph."},
    ],
    stream=False,
)

print(response.choices[0].message.content)

Line-by-line notes

  • from openai import OpenAI uses the official OpenAI Python SDK client.
  • DEEPSEEK_API_KEY is a local environment variable name used to avoid confusing your DeepSeek key with an OpenAI key.
  • base_url="https://api.deepseek.com" sends requests to DeepSeek instead of the default OpenAI endpoint.
  • model="deepseek-chat" uses DeepSeek’s chat model alias. Verify current model aliases on the official Models & Pricing page before production use.
  • stream=False returns the full response after generation. Use stream=True for incremental chunks.

Node.js / TypeScript example

The official DeepSeek quick start also provides a Node.js example using the OpenAI JavaScript/TypeScript SDK. In Node.js, the option is baseURL, with URL capitalized, not base_url. DeepSeek quick start / Official openai-node

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com",
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "deepseek-chat",
    messages: [
      { role: "system", content: "You are a helpful developer assistant." },
      { role: "user", content: "Show the minimal changes needed to use DeepSeek with the OpenAI SDK." },
    ],
    stream: false,
  });

  console.log(completion.choices[0].message.content);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});

Node.js notes

  • Install the official SDK with npm install openai; the OpenAI JavaScript/TypeScript SDK is documented by OpenAI. Official openai-node
  • Use apiKey, not api_key, in Node.js.
  • Use baseURL, not base_url, in Node.js.
  • Keep the call server-side. Do not expose your DeepSeek API key in browser JavaScript.

Migrating existing OpenAI code to DeepSeek

Use this table as a practical migration checklist. It is intentionally conservative: keep the SDK, change only provider-specific configuration first, then test features such as streaming, JSON output, and tool calls one by one.

Migration item | Before: OpenAI-only app | After: DeepSeek with OpenAI SDK | What to test
Client initialization | OpenAI() with default endpoint | OpenAI(..., base_url="https://api.deepseek.com") in Python or new OpenAI({ baseURL: "https://api.deepseek.com" }) in Node.js | Confirm requests hit DeepSeek, not OpenAI (see the smoke test after this table).
API key env var | OPENAI_API_KEY | DEEPSEEK_API_KEY in your application configuration | Check for 401 errors caused by the wrong key.
Base URL | Default OpenAI endpoint | https://api.deepseek.com | Verify the official DeepSeek endpoint before deployment.
Model name | OpenAI model ID | deepseek-chat or deepseek-reasoner, depending on the task | Check current official model aliases.
Streaming | stream: true | stream: true with DeepSeek Chat Completions | Confirm your parser handles SSE chunks and partial deltas if parsing raw HTTP.
Errors | OpenAI status errors | DeepSeek HTTP status codes, surfaced through the OpenAI SDK error classes or raw HTTP client | Handle 400, 401, 402, 422, 429, 500, and 503.
Pricing assumptions | OpenAI pricing | DeepSeek token-based pricing from the official Models & Pricing page | Do not reuse OpenAI cost assumptions.
Unsupported or different params | Parameters accepted by a specific OpenAI model | Parameters documented by DeepSeek for the selected model and mode | For deepseek-reasoner, verify current parameter support in the official reasoning or thinking docs.
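
To confirm requests actually reach DeepSeek before testing features one by one, a quick smoke test is to list the available models; DeepSeek documents a List Models endpoint that works with the OpenAI SDK. A minimal sketch:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

# With the correct base URL and key, this prints DeepSeek model aliases
# such as deepseek-chat; a misconfigured key or endpoint fails here
# instead of deep inside your application.
for model in client.models.list():
    print(model.id)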

Streaming responses

DeepSeek’s official quick start notes that the non-stream example becomes a streaming request by setting the stream parameter to true. The Chat Completion reference describes streaming as returning partial message deltas through server-sent events and ending with data: [DONE]. DeepSeek quick start / Create Chat Completion

Python streaming example

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "Write a short checklist for migrating OpenAI SDK code to DeepSeek."}
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)

Node.js streaming example

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com",
});

const stream = await client.chat.completions.create({
  model: "deepseek-chat",
  messages: [
    { role: "user", content: "Write a short checklist for migrating OpenAI SDK code to DeepSeek." },
  ],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}

When to use streaming

  • Use streaming for chat UIs where users should see text as it is generated.
  • Use streaming when non-streaming responses feel slow for long answers.
  • Use non-streaming when your backend needs one complete response object before continuing.
  • For raw HTTP streaming parsers, handle server-sent events and partial chunks safely; see the sketch after this list.
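
If you parse the HTTP stream yourself instead of using the SDK, each event arrives as a data: line and the stream ends with data: [DONE], per the Chat Completion reference. A minimal sketch using the requests library; production code needs more error handling:

import json
import os
import requests

response = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Say hello."}],
        "stream": True,
    },
    stream=True,
)

for line in response.iter_lines():
    if not line:
        continue  # skip blank SSE separator lines
    text = line.decode("utf-8")
    if not text.startswith("data: "):
        continue
    payload = text[len("data: "):]
    if payload == "[DONE]":
        break  # end-of-stream marker documented by DeepSeek
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content") or "", end="", flush=True)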

Frontend mistake to avoid: Do not assume every streaming chunk contains visible text. Some chunks may contain metadata, empty deltas, tool calls, or reasoning fields depending on the request and parsing layer.
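
With the SDK abstraction, a defensive loop that tolerates such chunks might look like this sketch, reusing the stream object from the Python streaming example above (reasoning_content appears only when streaming from deepseek-reasoner):

for chunk in stream:
    if not chunk.choices:
        continue  # some chunks carry only metadata such as usage
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        ...  # accumulate tool-call argument fragments here
    reasoning = getattr(delta, "reasoning_content", None)
    if reasoning:
        ...  # deepseek-reasoner streams reasoning separately from content
    if delta.content:
        print(delta.content, end="", flush=True)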

JSON output with DeepSeek

DeepSeek officially documents JSON Output for strict JSON-format responses. To enable it, set response_format to {"type": "json_object"}, include the word “json” in the system or user prompt, provide an example of the desired JSON format, and set max_tokens reasonably to reduce the risk of truncated JSON. DeepSeek JSON Output guide

import json
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

system_prompt = """
You extract product data and return only valid json.

Example JSON output:
{
  "product_name": "Example Keyboard",
  "category": "keyboard",
  "price": 49.99,
  "currency": "USD"
}
"""

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Extract json from this text: The Acme K2 keyboard costs 79.99 USD."},
    ],
    response_format={"type": "json_object"},
    max_tokens=512,
)

data = json.loads(response.choices[0].message.content)
print(data)

JSON Output checklist

  • Set response_format={"type": "json_object"}.
  • Include the word “json” in your prompt.
  • Show the model an example of the target JSON structure.
  • Set a reasonable max_tokens value so the JSON is not cut off.
  • Parse and validate the returned JSON in your application before using it (see the sketch after this list).
  • Handle occasional empty content, which DeepSeek’s JSON Output guide notes can occur.
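
Parsing and validating before use, plus handling occasional empty content, can be sketched like this; the field names follow the example schema above and are illustrative:

raw = response.choices[0].message.content

if not raw:
    raise ValueError("Empty content: retry or adjust the prompt per the JSON Output guide.")

try:
    data = json.loads(raw)
except json.JSONDecodeError as exc:
    raise ValueError(f"Model returned invalid JSON: {exc}") from exc

# Validate the fields your application actually depends on.
required = {"product_name", "category", "price", "currency"}
missing = required - data.keys()
if missing:
    raise ValueError(f"JSON missing expected keys: {missing}")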

JSON Output vs tool calls

Use JSON Output when… | Use Tool Calls when…
You need one structured response, such as classification, extraction, or a formatted summary. | The model must decide whether to call an external function, API, database, or business workflow.
Your application can continue after parsing a JSON object. | Your application needs a multi-step loop: model chooses tool, app runs tool, model sees result, model answers.
You want a simple schema-like output. | You need the model to produce arguments for a specific function.

Important: DeepSeek’s JSON Output guide says the prompt should explicitly include the word “json” and provide an example JSON format. Your app should still parse and validate the output before trusting it.

Tool calls / function calling

Tool calls let the model request calls to external tools that extend its capabilities. DeepSeek’s Tool Calls guide shows a weather-style function example using the OpenAI SDK. Your application, not the model, executes the function and sends the result back to the model. DeepSeek Tool Calls guide / Create Chat Completion

import json
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and region, for example Cairo, Egypt"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

messages = [
    {"role": "user", "content": "What is the weather in Cairo?"}
]

first_response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

assistant_message = first_response.choices[0].message
# Append the assistant message carrying the tool calls before any tool results.
messages.append(assistant_message)

if assistant_message.tool_calls:
    for tool_call in assistant_message.tool_calls:
        if tool_call.function.name == "get_weather":
            args = json.loads(tool_call.function.arguments)
            location = args["location"]

            # Your application must run the real tool.
            # This is a mock result for demonstration.
            weather_result = f"The current weather in {location} is 24°C and clear."

            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": weather_result,
            })

    final_response = client.chat.completions.create(
        model="deepseek-chat",
        messages=messages,
        tools=tools,
    )

    print(final_response.choices[0].message.content)
else:
    print(assistant_message.content)

Tool call flow

  1. Send the user message and a tools array to DeepSeek.
  2. Read message.tool_calls from the assistant response.
  3. Run the matching function in your own application.
  4. Append a tool role message with the result and the correct tool_call_id.
  5. Send the updated message list back to the model for the final answer.

Common tool call mistakes

  • Expecting the model to execute the function. It only requests the function call; your code executes it.
  • Forgetting to append the assistant tool-call message before appending the tool result.
  • Sending invalid JSON Schema in the tool parameters.
  • Ignoring tool_call_id when sending the tool result back.
  • Using undocumented tool patterns without checking the current DeepSeek Chat Completion reference.

Thinking mode note: DeepSeek’s Models & Pricing page and Thinking Mode guide document tool support in current V3.2 thinking mode, but thinking-mode tool calls require special handling. Follow the current official Thinking Mode guide before implementing that flow. DeepSeek Thinking Mode guide

Using deepseek-chat vs deepseek-reasoner

As of April 20, 2026, the official DeepSeek quick start and Models & Pricing page state that deepseek-chat and deepseek-reasoner correspond to DeepSeek-V3.2, with deepseek-chat representing non-thinking mode and deepseek-reasoner representing thinking mode. The same official page lists a 128K context length for both aliases. Always verify this on the official Models & Pricing page because aliases and model versions can change. DeepSeek Models & Pricing

Model alias | Official mode description | Good fit | Important notes
deepseek-chat | DeepSeek-V3.2 non-thinking mode | General chat, coding help, extraction, JSON output, standard tool calls, production chat apps | Usually the first model to test when migrating an OpenAI SDK chat app.
deepseek-reasoner | DeepSeek-V3.2 thinking mode | Complex reasoning tasks, multi-step analysis, harder planning problems | Can return reasoning_content as well as final content. Review current official reasoning and thinking-mode docs before displaying or reusing reasoning fields.

Reasoning content handling

DeepSeek’s reasoning model guide states that deepseek-reasoner can output reasoning_content and final content. It also says that in normal multi-round conversations, the previous round’s chain-of-thought (CoT) should not be concatenated into the next round’s context. The Thinking Mode guide has a separate tool-call flow for thinking-mode tool calls. Follow the official guide for the exact flow you are implementing. Reasoning Model guide / Thinking Mode guide
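
Following the pattern in DeepSeek’s reasoning model guide, a sketch that reads both fields and keeps only the final content in the next round’s context; it assumes the client from earlier examples and an existing messages list, and the field access mirrors the guide’s own examples:

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
)

message = response.choices[0].message
reasoning = message.reasoning_content  # chain-of-thought; per the guide, do not feed it back
answer = message.content               # the final answer shown to the user

# Next round: append only the final content, never reasoning_content.
messages.append({"role": "assistant", "content": answer})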

Do not blindly display reasoning content: Decide intentionally whether your product should show raw reasoning text. Many apps only display the final content to users and keep reasoning fields for internal debugging or advanced workflows.

Current official model details

Detail | deepseek-chat | deepseek-reasoner | Last verified
Model version listed by DeepSeek | DeepSeek-V3.2 | DeepSeek-V3.2 | April 20, 2026
Mode | Non-thinking mode | Thinking mode | April 20, 2026
Context length listed by DeepSeek | 128K | 128K | April 20, 2026
Max output listed by DeepSeek | Default 4K, maximum 8K | Default 32K, maximum 64K | April 20, 2026

Source for the table above: official DeepSeek Models & Pricing. Check that page before production use because DeepSeek states that product prices may vary and recommends checking the pricing page for the most recent information.

Environment variables and security

Store API keys in server-side environment variables, not in client-side code. Secret API credentials should not be exposed in browser JavaScript, mobile app bundles, or public repositories. Use a backend route, serverless function, or server-side proxy for requests that require your DeepSeek API key.

Recommended environment variables

# macOS / Linux example
export DEEPSEEK_API_KEY="your_deepseek_api_key_here"

// Node.js example
const apiKey = process.env.DEEPSEEK_API_KEY;

Security checklist

  • Do not place a DeepSeek API key in browser JavaScript, mobile app bundles, or public repositories.
  • Use a backend route or server-side proxy for calls that require your secret API key.
  • Use environment variables or your deployment platform’s secret manager.
  • Rotate keys if they are exposed, copied to logs, or shared accidentally.
  • Keep debug logging off in production if request or response bodies may contain sensitive user data.
  • Limit what you store from prompts, uploaded documents, model outputs, tool arguments, and tool results.
  • For sensitive use cases, review DeepSeek’s official terms, privacy, and platform documentation before deployment.

Never ship this: const client = new OpenAI({ apiKey: "real_key_here", baseURL: "https://api.deepseek.com" }); in browser-facing code. Anyone can inspect the bundle and misuse the key.
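
A server-side route keeps the key out of the browser entirely. Below is a minimal sketch assuming FastAPI purely for illustration; any backend framework supports the same pattern:

import os
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # stays on the server
    base_url="https://api.deepseek.com",
)

class ChatRequest(BaseModel):
    message: str

@app.post("/api/chat")
def chat(req: ChatRequest):
    # The browser calls this route; only the server talks to DeepSeek.
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": req.message}],
    )
    return {"reply": response.choices[0].message.content}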

Error handling and retries

DeepSeek’s official Error Codes page lists common API errors and their causes. The OpenAI Python SDK also documents SDK-level error handling, retries, and timeouts. When you use the OpenAI SDK with DeepSeek, treat HTTP status codes as provider responses from DeepSeek and SDK exceptions as the client library’s way of surfacing them. DeepSeek Error Codes / Official openai-python

Status | Official DeepSeek description | Likely cause during migration | Fix
400 | Invalid Format | Invalid request body format, malformed messages, wrong tool message order, or reasoning fields sent incorrectly. | Read the error message and compare your request body with the official Chat Completion docs.
401 | Authentication Fails | Wrong API key, missing API key, using an OpenAI key against DeepSeek, or using a DeepSeek key against OpenAI. | Check DEEPSEEK_API_KEY and confirm the key was created on the official DeepSeek platform.
402 | Insufficient Balance | Your DeepSeek account balance is depleted. | Check your balance and top up through the official DeepSeek platform.
422 | Invalid Parameters | A request parameter is unsupported, invalid, or incompatible with the selected model or mode. | Modify parameters according to the error message and the official API reference.
429 | Rate Limit Reached | Requests are being sent too quickly. | Pace requests, queue work, and use retries with backoff.
500 | Server Error | DeepSeek server encountered an issue. | Retry after a brief wait and contact DeepSeek if the issue persists.
503 | Server Overloaded | High traffic or an overloaded server. | Retry after a brief wait and consider graceful degradation in your app.

Python retry and timeout example

The official OpenAI Python SDK documents SDK-level errors, retries, and timeouts. The example below makes retry and timeout settings explicit for a DeepSeek client.

import os
import openai
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
    max_retries=2,
    timeout=60.0,
)

try:
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": "Return a short migration checklist."}],
    )
    print(response.choices[0].message.content)

except openai.APIStatusError as exc:
    print(f"DeepSeek API returned HTTP {exc.status_code}")
    print(exc.response)
    raise

except openai.APITimeoutError:
    # APITimeoutError subclasses APIConnectionError, so catch it first.
    print("The request timed out.")
    raise

except openai.APIConnectionError:
    print("The SDK could not connect to the API.")
    raise

Operational retry guidance

  • Retry 500 and 503 after a short wait.
  • For 429, reduce request concurrency and apply backoff instead of retrying immediately (see the sketch after this list).
  • Do not retry 401 until the API key configuration is fixed.
  • Do not retry 402 until the account balance issue is resolved.
  • For 400 and 422, inspect the request body and official API reference before retrying.
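
For 429 handling beyond the SDK’s built-in retries, a simple exponential backoff loop is a common pattern. A minimal sketch:

import time

import openai

def call_with_backoff(client, max_attempts=5, **kwargs):
    """Retry chat.completions.create on 429 with exponential backoff."""
    delay = 1.0
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(**kwargs)
        except openai.RateLimitError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
            delay *= 2  # double the wait each time

Call it as call_with_backoff(client, model="deepseek-chat", messages=[...]) with the client from the earlier examples.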

Common migration mistakes

  • Using an OpenAI API key with the DeepSeek base URL: Create and use a DeepSeek API key from the official DeepSeek platform.
  • Using a DeepSeek API key with the OpenAI endpoint: Set the DeepSeek base_url or baseURL.
  • Using the wrong SDK option name: Python uses base_url; Node.js uses baseURL.
  • Using an old or unofficial model name: Check official Models & Pricing and List Models.
  • Assuming all OpenAI-only parameters work: Check the official DeepSeek Chat Completion reference and the model-specific guides.
  • Ignoring reasoning-specific behavior: deepseek-reasoner can return reasoning_content. Follow official reasoning and thinking-mode docs carefully.
  • Exposing the API key in the browser: Keep calls server-side.
  • Breaking the streaming parser: Handle partial deltas and SSE chunks if you are not using the SDK abstraction.
  • Not validating JSON: JSON Output improves structure, but your application should still parse and validate the response before using it.
  • Forgetting official docs can change: Re-check pricing, models, context length, output limits, and feature support before publishing or deploying.

When should you use OpenAI SDK with DeepSeek?

Using the OpenAI SDK with DeepSeek is most useful when your application already uses OpenAI-compatible Chat Completions and you want to evaluate or add DeepSeek without replacing your entire client layer.

Good fit

  • You already have a Python, Node.js, or TypeScript app using the OpenAI SDK.
  • You want a quick provider migration test with minimal code changes.
  • You are building a multi-provider architecture and want one common client pattern for compatible chat APIs.
  • You need to compare cost, latency, and quality using your own prompts.
  • You are prototyping developer tools, chat apps, extraction workflows, or internal AI features.
  • You need streaming, JSON output, or tool calls documented by DeepSeek’s official API docs.

Not a good fit

  • You need a specific OpenAI-only endpoint or feature that DeepSeek does not officially support.
  • You require strict enterprise, legal, or compliance approval and have not reviewed DeepSeek’s official platform terms and documentation.
  • Your app relies on undocumented behavior from an OpenAI model or SDK helper.
  • You need official provider support beyond what is documented in DeepSeek’s public API docs.
  • You cannot safely keep API keys server-side.

Official sources used

This article uses official DeepSeek and OpenAI sources only for technical claims. No competitor pages, blogs, third-party tutorials, Reddit threads, or unofficial GitHub repositories were used as factual sources.

FAQ

Can I use the OpenAI SDK with DeepSeek?

Yes. DeepSeek’s official quick start says the DeepSeek API uses an API format compatible with OpenAI and that you can use the OpenAI SDK by modifying the configuration. Use a DeepSeek API key, the DeepSeek base URL, and a DeepSeek model alias.

What base_url should I use for DeepSeek?

The official DeepSeek quick start lists https://api.deepseek.com as the base URL. It also says https://api.deepseek.com/v1 can be used for OpenAI compatibility, but the v1 path is not related to the model version.

Do I need to change my OpenAI SDK package?

Usually no for basic Chat Completions migration. DeepSeek’s official examples use the OpenAI SDK in both Python and Node.js. You still need to change the API key, base URL, and model name.

What model name should I use?

For many chat use cases, start with deepseek-chat. For reasoning-heavy tasks, consider deepseek-reasoner. As of April 20, 2026, DeepSeek’s official Models & Pricing page says both correspond to DeepSeek-V3.2, with deepseek-chat as non-thinking mode and deepseek-reasoner as thinking mode. Verify the official page before production use.

Does DeepSeek support streaming with the OpenAI SDK?

Yes. DeepSeek’s quick start says the non-stream example can be changed by setting the stream parameter to true. The Chat Completion reference describes streaming through server-sent events.

Does DeepSeek support JSON output?

Yes. DeepSeek’s JSON Output guide says to set response_format to {"type": "json_object"}, include the word “json” in the system or user prompt, provide an example JSON format, and set max_tokens reasonably.

Does DeepSeek support tool calls?

Yes. DeepSeek’s Tool Calls guide says tool calls allow the model to call external tools, and the Chat Completion reference documents tool-related parameters. Your application executes the tool and sends the result back to the model.

Is deepseek-chat the same as DeepSeek R1?

Not according to the current official Models & Pricing page verified on April 20, 2026. It lists deepseek-chat as DeepSeek-V3.2 non-thinking mode and deepseek-reasoner as DeepSeek-V3.2 thinking mode. Check the official page because aliases can change.

Can I use this directly in the browser?

No. Do not put your DeepSeek API key in browser code. Use a server-side route or backend proxy for any request that requires a secret API key.

Why am I getting 401 errors?

DeepSeek’s official Error Codes page says 401 means authentication fails due to the wrong API key. Check that you are using a DeepSeek API key with the DeepSeek base URL, and not an OpenAI key or missing environment variable.

Why am I getting 429 errors?

DeepSeek’s Error Codes page says 429 means you are sending requests too quickly. Pace your requests, reduce concurrency, and use backoff. If you need production reliability, design your app to handle temporary provider pressure gracefully.

Is Chat-Deep.ai the official DeepSeek website?

No. Chat-Deep.ai is an independent guide and is not the official DeepSeek website. Use the official DeepSeek platform and documentation for API keys, billing, account management, official limits, and production-critical API information.