Building on what we've learned from Streamdown, we massively improved the code block component with support for a header, icon, filename, and multiple languages, plus a more performant renderer.
The Sandbox component provides a structured way to display AI-generated code alongside its execution output in chat conversations. It features a collapsible container with status indicators and tabbed navigation between code and output views.
The Snippet component provides a lightweight way to display terminal commands and short code snippets with copy functionality. Built on top of shadcn/ui InputGroup, it's designed for brief code references in text.
Not code related, but since attachments were being used in Message, PromptInput, and more, we broke them out into a dedicated component: a flexible, composable Attachment component for displaying files, images, videos, audio, and source documents.
Skills support is now available in bash-tool, so your AI SDK agents can use the skills pattern with filesystem context, Bash execution, and sandboxed runtime access.
This gives your agent a consistent way to pull in the right context for a task, using the same isolated execution model that powers filesystem-based context retrieval.
This lets you give your agent access to the wide variety of publicly available skills, or write your own proprietary skills and use them privately in your agent.
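For context, a skill is conventionally packaged as a directory containing a SKILL.md file whose frontmatter tells the agent when to load it. The sketch below is a hypothetical example of that shape, not a skill from this release:

```markdown
---
name: changelog-writer
description: Use when drafting release notes or changelog entries for this repo.
---

# Changelog writer

1. Read recent commits with `git log --oneline -20`.
2. Group changes by area: features, fixes, docs.
3. Write one present-tense sentence per entry.
```

The agent reads the frontmatter to decide when the skill applies, then pulls the full instructions into context only when needed, which is what the filesystem-based context retrieval above enables.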
You can now apply suggested code fixes from the Vercel Agent directly in the Vercel Dashboard.
When the Vercel Agent reviews your pull request, suggestions include a View suggestion button that lets you commit the fix to your PR branch, including changes that touch multiple files.
Vercel Agent - Review suggestions on dashboard
Suggestions open in the dashboard, where you can accept them in bulk or apply them one by one.
Vercel Agent - Reviewing and applying suggestions in bulk
After you apply a suggestion, the review thread is automatically resolved. You can also track multiple concurrent Vercel Agent jobs from the Tasks page.
Montréal, Canada (yul1) is now part of Vercel’s global delivery network, expanding our footprint to deliver lower latency and improved performance for users in Central Canada.
The new Montréal region extends our globally distributed CDN’s caching and compute closer to end users, reducing latency without any changes required from developers. Montréal is generally available and handling production traffic.
Teams can configure Montréal as an execution region for Vercel Functions, powered by Fluid compute to enhance resource efficiency, minimize cold starts, and scale automatically with demand.
Teams with Canadian data residency requirements can also use Montréal to keep execution in Canada.
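As a minimal sketch, pinning function execution to the new region uses the standard regions setting in vercel.json (the yul1 code comes from this announcement; the file is otherwise illustrative):

```json
{
  "regions": ["yul1"]
}
```

With this in place, Vercel Functions for the project execute in Montréal, which is also how teams can keep execution in Canada for data residency.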
We released skills, a CLI for installing and managing skill packages for agents.
Install a skill package with npx skills add <package>.
So far, skills has been used to install skills on: amp, antigravity, claude-code, clawdbot, codex, cursor, droid, gemini, gemini-cli, github-copilot, goose, kilo, kiro-cli, opencode, roo, trae, and windsurf.
Today we’re also introducing skills.sh, a directory and leaderboard for skill packages.
Use it to:
- discover new skills to enhance your agents
- browse skills by category and popularity
- track usage stats and installs across the ecosystem
Recraft models are now available via Vercel's AI Gateway, with no additional provider account required. You can access Recraft's image models, V3 and V2.
These image models excel at photorealism, accurate text rendering, and complex prompt following. V3 supports long multi-word text generation with precise positioning, anatomical correctness, and native vector output. It includes 20+ specialized styles from realistic portraits to pixel art.
To use this model, set model to recraft/recraft-v3 in the AI SDK. This model supports generateImage.
import { generateImage } from 'ai';

const result = await generateImage({
  model: 'recraft/recraft-v3',
  prompt: `A misty Japanese forest with ancient cedar trees, painted in the style of
    traditional ukiyo-e woodblock prints with soft indigo and moss green tones.`,
});
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.