Vercel users can now view requests that trigger rewrites or redirects directly in the runtime logs in the Vercel dashboard.
By default, these requests are filtered out on the Runtime Logs page. To view these requests on the Logs page, you can filter for Rewrites or Redirects in the Resource dropdown.
- Rewrites: shows the destination of the rewrite
- Redirects: shows the redirect status code and location
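For context, these log entries correspond to rewrites and redirects configured in your framework or in vercel.json. Below is a minimal, hedged sketch of a Next.js next.config.js that would produce both kinds of entries; the routes and domains are hypothetical examples, not values from this changelog:

```javascript
// next.config.js — illustrative only; paths and domains are made up
module.exports = {
  async rewrites() {
    return [
      // Appears in Runtime Logs as a Rewrite, showing the destination URL
      { source: '/docs/:path*', destination: 'https://docs.example.com/:path*' },
    ];
  },
  async redirects() {
    return [
      // Appears in Runtime Logs as a Redirect, showing status code 308 and the location
      { source: '/old-blog/:slug', destination: '/blog/:slug', permanent: true },
    ];
  },
};
```

With this config deployed, requests to /docs/* and /old-blog/* would show up under the Rewrites and Redirects resource filters respectively.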
Any new deployment containing a version of Next.js that is vulnerable to CVE-2025-66478 will now automatically fail to deploy on Vercel.
We strongly recommend upgrading to a patched version regardless of your hosting provider. Learn more
This automatic protection can be disabled by setting the DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2025_66478=1 environment variable on your Vercel project. Learn more
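If you must temporarily deploy an unpatched version (strongly discouraged), the override is a regular project environment variable. One hedged sketch using the Vercel CLI; you can equally set it in the project's dashboard settings:

```shell
# Opt this project out of the CVE block (not recommended — upgrade Next.js instead).
# The CLI prompts for the value: enter 1.
vercel env add DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2025_66478 production

# Redeploy to production with the override in place
vercel deploy --prod
```

Remove the variable and redeploy as soon as you have upgraded to a patched Next.js version.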
As part of the new Vercel for Platforms product, you can now use a set of prebuilt UI blocks and actions to add functionality directly to your application.
An all-new library of production-ready shadcn/ui components and actions helps you launch (and upgrade) quickly.
Blocks:
- custom-domain: Domain configuration with DNS validation and real-time verification
- deploy-popover: Deployment interface with status and history
You can now build platforms with the new Vercel for Platforms product announced today, making it easy to create and run customer projects on behalf of your users.
Two platform modes are available: Multi-Tenant and Multi-Project, allowing you to deploy with a single codebase or many, across any number of domains.
You can now access OpenAI's latest Codex model, GPT-5.1 Codex Max, with Vercel's AI Gateway and no other provider accounts required.
Using a process called compaction, GPT-5.1 Codex Max has been trained to operate across multiple context windows and on real-world software engineering tasks. GPT-5.1 Codex Max is faster and more token efficient compared to previous Codex models, optimized for long-running coding tasks, and can maintain context and reasoning over long periods without needing to start new sessions.
To use GPT-5.1 Codex Max with the AI SDK, set the model to openai/gpt-5.1-codex-max.
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-5.1-codex-max',
  prompt: `Write a CLI tool that scans web server logs, counts 5xx errors per endpoint,
and prints the ten endpoints with the most errors using only standard libraries.`,
  providerOptions: {
    openai: {
      reasoningSummary: 'auto',
      reasoningEffort: 'low',
    },
  },
});
```
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Managing domains at the account level is no longer supported. Domains must now be managed at the team level, which simplifies access control, collaboration, and unified billing.
Domains that are currently linked to accounts will continue to resolve, serve traffic, and renew as usual, but any changes will require moving the domain to a team.
When viewing an account-level domain, you'll now be prompted to select a destination team for the transfer. Your domain, along with all project domains, DNS records, and aliases, will move to that team and continue to work after the move.
You can now access Amazon's latest model, Nova 2 Lite, via Vercel's AI Gateway, with no other provider accounts required. Nova 2 Lite is a reasoning model for everyday workloads that can process text, images, and videos to generate text.
To use Nova 2 Lite, set model to amazon/nova-2-lite in the AI SDK. Extended thinking is disabled by default. To enable reasoning for this model, set maxReasoningEffort in the providerOptions. The reasoning content is redacted and displays as such, but users are still charged for these tokens.
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'amazon/nova-2-lite',
  prompt: `Derive a correct, optimal algorithm to detect all unique cycles in
a directed graph, explaining each logical step and validating edge cases.`,
  providerOptions: {
    bedrock: {
      reasoningConfig: {
        maxReasoningEffort: 'medium', // 'low', 'medium', or 'high' accepted
      },
    },
  },
});
```
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Remend powers the markdown rendering in Streamdown and has been battle-tested in production AI applications. It includes intelligent rules to avoid false positives and handles complex edge cases like:
- Mathematical expressions with underscores in LaTeX blocks
- Product codes and variable names with asterisks/underscores
- List items with formatting markers
- Nested brackets in links
To get started, either use it through Streamdown or install it standalone with: