DeepSeek V4 on AI Gateway
DeepSeek V4 is now available on Vercel AI Gateway.
There are two model variants: DeepSeek V4 Pro and DeepSeek V4 Flash. Both default to a 1M-token context window.
DeepSeek V4 Pro focuses on agentic coding, formal mathematical reasoning, and long-horizon workflows. It handles feature development, bug fixing, and refactoring across stacks, with tool use that works across harnesses like MCP workflows and agent frameworks. It also writes clear, well-structured long-form documents.
DeepSeek V4 Flash performs close to V4 Pro on reasoning and holds up on simpler agent tasks, with a smaller parameter count for faster responses and lower API cost. It's a good fit for high-volume workloads and latency-sensitive use cases.
To use DeepSeek V4, set `model` to `deepseek/deepseek-v4-pro` or `deepseek/deepseek-v4-flash` in the AI SDK.
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: 'deepseek/deepseek-v4-pro', // or 'deepseek/deepseek-v4-flash'
  prompt: `Audit this repository for unsafe concurrent access patterns, propose a refactor that introduces proper synchronization, and open the changes as a PR with a migration plan.`,
});
```

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
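To make the retry-and-failover idea concrete, here is a minimal standalone sketch of the pattern: try a preferred model a few times, then fall back to the next one. This is an illustration of the concept only, not the Gateway API; `withFailover` and its `Generate` callback are hypothetical names, and AI Gateway performs this routing for you server-side.

```typescript
// Hypothetical sketch of retry-with-failover across an ordered model list.
// AI Gateway handles this behavior itself; this only illustrates the idea.
type Generate = (model: string) => Promise<string>;

async function withFailover(
  generate: Generate,
  models: string[], // ordered by preference, e.g. Pro first, then Flash
  retriesPerModel = 2, // extra attempts per model before moving on
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    for (let attempt = 0; attempt <= retriesPerModel; attempt++) {
      try {
        return await generate(model);
      } catch (err) {
        lastError = err; // transient failure: retry, then try the next model
      }
    }
  }
  throw lastError; // every model and attempt failed
}
```

A caller could pass `['deepseek/deepseek-v4-pro', 'deepseek/deepseek-v4-flash']` so latency-sensitive traffic still gets an answer from Flash if Pro is temporarily unavailable.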
Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.