• Rewrites and redirects now available in runtime logs

    Vercel users can now view requests that trigger rewrites or redirects directly in the runtime logs in the Vercel dashboard.

    By default, these requests are filtered out on the Runtime Logs page. To view them, filter for Rewrites or Redirects in the Resource dropdown.

    • Rewrites: shows the destination of the rewrite

    • Redirects: shows the redirect status code and location

    This feature is available to all users. Try it out or learn more about runtime logs.

  • New deployments of vulnerable Next.js applications are now blocked by default

    Any new deployment containing a version of Next.js that is vulnerable to CVE-2025-66478 will now automatically fail to deploy on Vercel.

    We strongly recommend upgrading to a patched version regardless of your hosting provider. Learn more

    This automatic protection can be disabled by setting the DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2025_66478=1 environment variable on your Vercel project. Learn more
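    One way to set this variable (sketched here with the Vercel CLI; you can also use the dashboard) is:

    ```shell
    # Opt out of the CVE-2025-66478 deployment block (strongly discouraged).
    # `vercel env add` prompts for the value interactively; enter 1.
    vercel env add DANGEROUSLY_DEPLOY_VULNERABLE_CVE_2025_66478 production
    ```
    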

  • Introducing Platform Elements

    As part of the new Vercel for Platforms product, you can now use a set of prebuilt UI blocks and actions to add functionality directly to your application.

    An all-new library of production-ready shadcn/ui components and actions helps you launch (and upgrade) quickly.

    The library is organized into two categories: Blocks and Actions.

    You can install Platform Elements components with the Vercel Platforms CLI. For example:

    npx @vercel/platforms add claim-deployment

    Start building with Platform Elements using our Quickstart for Multi-Tenant or Multi-Project platforms.

  • Introducing Vercel for Platforms

    You can now build platforms with the new Vercel for Platforms product announced today, making it easy to create and run customer projects on behalf of your users.

    Two platform modes are available: Multi-Tenant and Multi-Project, allowing you to deploy with a single codebase or many, across any number of domains.

    Multi-Tenant Platforms

    Run a single codebase that serves many customers with:

    • Wildcard domains (*.yourapp.com) with automatic routing and SSL.

    • Custom domain support via SDK, including DNS verification and certificate management.

    • Routing Middleware for hostname parsing and customer resolution at the edge.

    • Single deployment model: deploy once, changes apply to all tenants.
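    The hostname-parsing step above can be sketched as a small resolver (a minimal illustration; the function name and root domain are hypothetical, not part of the Vercel SDK):

    ```typescript
    // Minimal sketch of hostname-based tenant resolution, as a routing
    // middleware might perform it at the edge.
    export function resolveTenant(
      hostname: string,
      rootDomain = "yourapp.com",
    ): string | null {
      // Strip an optional port, e.g. "acme.yourapp.com:3000"
      const host = hostname.split(":")[0].toLowerCase();

      // Wildcard subdomain: acme.yourapp.com -> tenant "acme"
      if (host.endsWith("." + rootDomain)) {
        const sub = host.slice(0, -(rootDomain.length + 1));
        // Ignore nested subdomains and the bare "www" prefix
        if (sub && sub !== "www" && !sub.includes(".")) return sub;
        return null;
      }

      // Anything else is treated as a customer's custom domain, to be
      // looked up in your own domain -> tenant mapping.
      return host === rootDomain ? null : host;
    }
    ```

    In a real middleware you would rewrite the request to a tenant-specific path or look the custom domain up in a database; this sketch only shows the parsing step.
    
    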

    Add custom domains to your app in seconds:

    import {
      addDomain,
      getDomainStatus,
    } from "@/components/vercel-platform/src/actions/add-custom-domain";

    const added = await addDomain("test.com");
    if (added.status === "Valid Configuration") {
      // do something
    }

    const config = await getDomainStatus("test.com");
    config.dnsRecordsToSet; // show this in a table

    Multi-Project Platforms

    Create a separate Vercel project per customer with:

    • Programmatic project creation with the Vercel SDK.

    • Isolation of builds, functions, environment variables, and settings per customer.

    • Support for different frameworks per project.

    Deploy your customer's code into isolated projects in seconds:

    import { deployFiles } from "@/components/vercel-platform/actions/deploy-files";

    // automatically detects the framework & build commands
    await deployFiles([], {
      // optionally assign a custom domain
      domain: "site.myapp.com",
    });

    Today we are also introducing Platform Elements, a new library to make building on platforms easier.

    Start building with our Quickstart for Multi-Tenant or Multi-Project platforms.

  • GPT-5.1 Codex Max now available on Vercel AI Gateway

    You can now access OpenAI's latest Codex model, GPT-5.1 Codex Max, via Vercel's AI Gateway with no other provider accounts required.

    Using a process called compaction, GPT-5.1 Codex Max has been trained to operate across multiple context windows on real-world software engineering tasks. It is faster and more token-efficient than previous Codex models, is optimized for long-running coding tasks, and can maintain context and reasoning over long periods without needing to start new sessions.

    To use GPT-5.1 Codex Max with the AI SDK, set the model to openai/gpt-5.1-codex-max.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.1-codex-max',
      prompt:
        `Write a CLI tool that scans web server logs, counts 5xx errors per endpoint,
        and prints the ten endpoints with the most errors using only standard libraries.`,
      providerOptions: {
        openai: {
          reasoningSummary: "auto",
          reasoningEffort: "low",
        },
      },
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

    AI Gateway: Track top AI models by usage

    The AI Gateway model leaderboard ranks the most used models over time by total token volume across all traffic through the Gateway. It is updated regularly.

    View the leaderboard

  • Domains must now be managed at the team level

    Managing domains at the account level is no longer supported. Domains must now be managed at the team level, which simplifies access control, collaboration, and unified billing.

    Domains that are currently linked to accounts will continue to resolve, serve traffic, and renew as usual, but any changes will require moving the domain to a team.

    When viewing an account-level domain, you'll now be prompted to select a destination team. Your domain, along with all project domains, DNS records, and aliases, will move to that team and continue to work after the transfer.

  • Nova 2 Lite now available on Vercel AI Gateway

    You can now access Amazon's latest model, Nova 2 Lite, via Vercel's AI Gateway with no other provider accounts required. Nova 2 Lite is a reasoning model for everyday workloads that can process text, images, and videos to generate text.

    To use Nova 2 Lite, set model to amazon/nova-2-lite in the AI SDK. Extended thinking is disabled by default. To enable reasoning for this model, set maxReasoningEffort in the providerOptions. The reasoning content is redacted and displayed as such, but you are still charged for these tokens.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'amazon/nova-2-lite',
      prompt:
        `Derive a correct, optimal algorithm to detect all unique cycles in
        a directed graph, explaining each logical step and validating edge cases.`,
      providerOptions: {
        bedrock: {
          reasoningConfig: {
            maxReasoningEffort: 'medium', // low, medium, or high accepted
          },
        },
      },
    });

    Read the docs, view the AI Gateway model leaderboard, or use the model directly in our model playground.

  • New npm package for automatic recovery of broken streaming markdown

    Remend is a new standalone package that brings intelligent incomplete Markdown handling to any application.

    Previously part of Streamdown's Markdown termination logic, it is now available as its own library (npm i remend) that you can use anywhere.

    Why it matters

    AI models stream Markdown token-by-token, which often produces incomplete syntax that breaks rendering. For example:

    • Unclosed code fences

    • Half-finished bold/italic markers

    • Unterminated links or lists

    Without correction, these patterns fail to render, leak raw Markdown, or disrupt layout:

    **This is bold text
    [Click here](https://exampl
    `const foo = "bar

    Remend automatically detects and completes unterminated Markdown blocks, ensuring clean, stable output during streaming.

    import remend from "remend";

    const partialMarkdown = "This is **bold text";
    const completed = remend(partialMarkdown);
    // Result: "This is **bold text**"

    As the stream continues and the actual closing markers arrive, the content seamlessly updates, giving users a polished experience even mid-stream.

    It works with any Markdown renderer as a pre-processor. For example:

    import remend from "remend";
    import { unified } from "unified";
    import remarkParse from "remark-parse";
    import remarkRehype from "remark-rehype";
    import rehypeStringify from "rehype-stringify";

    const streamedMarkdown = "This is **incomplete bold";

    // Run Remend first to complete incomplete syntax
    const completedMarkdown = remend(streamedMarkdown);

    // Then process with unified
    const file = await unified()
      .use(remarkParse)
      .use(remarkRehype)
      .use(rehypeStringify)
      .process(completedMarkdown);

    console.log(String(file));

    Remend powers the markdown rendering in Streamdown and has been battle-tested in production AI applications. It includes intelligent rules to avoid false positives and handles complex edge cases like:

    • Mathematical expressions with underscores in LaTeX blocks

    • Product codes and variable names with asterisks/underscores

    • List items with formatting markers

    • Nested brackets in links

    To get started, either use it through Streamdown or install it standalone with:

    npm i remend