Web Search
AI Gateway provides built-in web search capabilities that allow AI models to access current information from the web. This is useful when you need up-to-date information that may not be in the model's training data.
AI Gateway supports two types of web search:
- Search for all providers: Use Perplexity Search with any model regardless of provider. This gives you consistent web search behavior across different models.
- Provider-specific search: Use native web search tools from Anthropic, OpenAI, or Google. These tools are optimized for their respective providers and may offer additional features.
The perplexitySearch tool can be used with any model regardless of the model provider or creator. This makes it a flexible option when you want consistent web search behavior across different models, or when you want to use web search with a model whose provider doesn't offer native web search capabilities.
To use Perplexity Search, import gateway from @ai-sdk/gateway and pass gateway.tools.perplexitySearch() to the tools parameter. When the model needs current information, it calls the tool and AI Gateway routes the request to Perplexity's search API.
Perplexity web search requests are charged at $5 per 1,000 requests. See Perplexity's pricing for more details.
import { streamText } from 'ai';
import { gateway } from '@ai-sdk/gateway';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'openai/gpt-5.2', // Works with any model, not just Perplexity
prompt,
tools: {
perplexity_search: gateway.tools.perplexitySearch(),
},
});
for await (const part of result.fullStream) {
if (part.type === 'text-delta') {
process.stdout.write(part.text);
} else if (part.type === 'tool-call') {
console.log('Tool call:', part.toolName);
} else if (part.type === 'tool-result') {
console.log('Search results received');
}
}
return result.toDataStreamResponse();
}

The same request with generateText returns the complete text once generation finishes:

import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'openai/gpt-5.2', // Works with any model, not just Perplexity
prompt,
tools: {
perplexity_search: gateway.tools.perplexitySearch(),
},
});
return Response.json({ text });
}

You can configure the perplexitySearch tool with these parameters:
- maxResults: Number of results to return (1-20). Defaults to 10.
- maxTokens: Total token budget across all results. Defaults to 25,000, max 1,000,000.
- maxTokensPerPage: Tokens extracted per webpage. Defaults to 2,048.
- country: ISO 3166-1 alpha-2 country code (e.g., 'US', 'GB') for regional results.
- searchLanguageFilter: ISO 639-1 language codes (e.g., ['en', 'fr']). Max 10 codes.
- searchDomainFilter: Domains to include (e.g., ['reuters.com']) or exclude with a - prefix (e.g., ['-reddit.com']). Max 20 domains. Cannot mix allowlist and denylist; see the sketch after this list.
- searchRecencyFilter: Filter by content recency. Values: 'day', 'week', 'month', or 'year'.
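The denylist form of searchDomainFilter is not shown in the examples below, so here is a minimal sketch, assuming the same gateway setup as the earlier examples; the excluded domains and the prompt are illustrative only.

import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';

// Sketch: restrict results with a denylist instead of an allowlist.
// The '-' prefix excludes a domain; excluded and included domains
// cannot be combined in the same filter.
const { text } = await generateText({
  model: 'openai/gpt-5.2',
  prompt: 'Summarize notable AI news from the past week.', // placeholder prompt
  tools: {
    perplexity_search: gateway.tools.perplexitySearch({
      searchDomainFilter: ['-reddit.com', '-pinterest.com'],
      searchRecencyFilter: 'week',
    }),
  },
});

console.log(text);

A route handler that sets the full range of options looks like this: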
import { streamText } from 'ai';
import { gateway } from '@ai-sdk/gateway';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'openai/gpt-5.2',
prompt,
tools: {
perplexity_search: gateway.tools.perplexitySearch({
maxResults: 5,
maxTokens: 50000,
maxTokensPerPage: 2048,
country: 'US',
searchLanguageFilter: ['en'],
searchDomainFilter: ['reuters.com', 'bbc.com', 'nytimes.com'],
searchRecencyFilter: 'week',
}),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'openai/gpt-5.2',
prompt,
tools: {
perplexity_search: gateway.tools.perplexitySearch({
maxResults: 5,
maxTokens: 50000,
maxTokensPerPage: 2048,
country: 'US',
searchLanguageFilter: ['en'],
searchDomainFilter: ['reuters.com', 'bbc.com', 'nytimes.com'],
searchRecencyFilter: 'week',
}),
},
});
return Response.json({ text });
}

For provider-specific search, use the native web search tools from Anthropic, OpenAI, or Google. These tools are optimized for their respective providers and may offer additional features.
Pricing for provider-specific web search tools depends on the model you use. See the Web Search price column on the model detail pages for exact pricing.
For Anthropic models, you can use the native web search tool provided by the @ai-sdk/anthropic package. Import anthropic from @ai-sdk/anthropic and pass anthropic.tools.webSearch_20250305() to the tools parameter. The tool returns source information including titles and URLs, which you can access through the source event type in the stream.
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'anthropic/claude-opus-4.5',
prompt,
tools: {
web_search: anthropic.tools.webSearch_20250305(),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'anthropic/claude-opus-4.5',
prompt,
tools: {
web_search: anthropic.tools.webSearch_20250305(),
},
});
return Response.json({ text });
}

The following parameters are supported:
- maxUses: Maximum number of web searches Claude can perform during the conversation.
- allowedDomains: Optional list of domains Claude is allowed to search. If provided, searches will be restricted to these domains.
- blockedDomains: Optional list of domains Claude should avoid when searching.
- userLocation: Optional user location information to provide geographically relevant search results.
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'anthropic/claude-opus-4.5',
prompt,
tools: {
web_search: anthropic.tools.webSearch_20250305({
maxUses: 3,
allowedDomains: ['techcrunch.com', 'wired.com'],
blockedDomains: ['example-spam-site.com'],
userLocation: {
type: 'approximate',
country: 'US',
region: 'California',
city: 'San Francisco',
timezone: 'America/Los_Angeles',
},
}),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'anthropic/claude-opus-4.5',
prompt,
tools: {
web_search: anthropic.tools.webSearch_20250305({
maxUses: 3,
allowedDomains: ['techcrunch.com', 'wired.com'],
blockedDomains: ['example-spam-site.com'],
userLocation: {
type: 'approximate',
country: 'US',
region: 'California',
city: 'San Francisco',
timezone: 'America/Los_Angeles',
},
}),
},
});
return Response.json({ text });
}

For more details on using the Anthropic-compatible API directly, see the Anthropic advanced features documentation.
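As noted above, the Anthropic web search tool surfaces the pages it consulted as source parts in the stream. Here is a minimal sketch of collecting them; the prompt and maxUses value are illustrative, and the url and title fields on source parts are assumptions based on the description above.

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// Sketch: log the sources the web search tool consulted.
const result = streamText({
  model: 'anthropic/claude-opus-4.5',
  prompt: 'What happened in tech news today?', // placeholder prompt
  tools: {
    web_search: anthropic.tools.webSearch_20250305({ maxUses: 3 }),
  },
});

for await (const part of result.fullStream) {
  if (part.type === 'text-delta') {
    process.stdout.write(part.text);
  } else if (part.type === 'source') {
    // url and title are assumed field names for URL sources.
    console.log('Source:', part.title, part.url);
  }
}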
For OpenAI models, you can use the native web search tool provided by the @ai-sdk/openai package. Import openai from @ai-sdk/openai and pass openai.tools.webSearch({}) to the tools parameter.
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'openai/gpt-5.2',
prompt,
tools: {
web_search: openai.tools.webSearch({}),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'openai/gpt-5.2',
prompt,
tools: {
web_search: openai.tools.webSearch({}),
},
});
return Response.json({ text });
}

For Google Gemini models, you can use Grounding with Google Search. Google offers two providers: Google Vertex and Google AI Studio. Choose the one that matches your setup. The Google Search tool returns source information including titles and URLs, which you can access through the source event type in the stream.
For Google Vertex, import vertex from @ai-sdk/google-vertex and pass vertex.tools.googleSearch({}) to the tools parameter. For users who need zero data retention, see Enterprise web search below.
import { streamText } from 'ai';
import { vertex } from '@ai-sdk/google-vertex';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'google/gemini-3-flash',
prompt,
tools: {
google_search: vertex.tools.googleSearch({}),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { vertex } from '@ai-sdk/google-vertex';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'google/gemini-3-flash',
prompt,
tools: {
google_search: vertex.tools.googleSearch({}),
},
});
return Response.json({ text });
}

For users who need zero data retention, you can use Enterprise Web Grounding instead. Pass vertex.tools.enterpriseWebSearch({}) to the tools parameter.
Enterprise web search uses indexed content that is a subset of the full web. Use Google search for more up-to-date and comprehensive results.
import { streamText } from 'ai';
import { vertex } from '@ai-sdk/google-vertex';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'google/gemini-3-flash',
prompt,
tools: {
enterprise_web_search: vertex.tools.enterpriseWebSearch({}),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { vertex } from '@ai-sdk/google-vertex';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'google/gemini-3-flash',
prompt,
tools: {
enterprise_web_search: vertex.tools.enterpriseWebSearch({}),
},
});
return Response.json({ text });
}

For Google AI Studio, import google from @ai-sdk/google and pass google.tools.googleSearch({}) to the tools parameter.
import { streamText } from 'ai';
import { google } from '@ai-sdk/google';
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = streamText({
model: 'google/gemini-3-flash',
prompt,
tools: {
google_search: google.tools.googleSearch({}),
},
});
return result.toDataStreamResponse();
}

With generateText:

import { generateText } from 'ai';
import { google } from '@ai-sdk/google';
export async function POST(request: Request) {
const { prompt } = await request.json();
const { text } = await generateText({
model: 'google/gemini-3-flash',
prompt,
tools: {
google_search: google.tools.googleSearch({}),
},
});
return Response.json({ text });
}
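The source parts described at the start of this Google section can be read from the stream with either Google provider. Here is a minimal sketch using the Google AI Studio provider; the prompt is illustrative, and the url and title fields on source parts are assumptions based on the description above.

import { streamText } from 'ai';
import { google } from '@ai-sdk/google';

// Sketch: read the titles and URLs that Google Search grounding
// attaches as source parts in the stream.
const result = streamText({
  model: 'google/gemini-3-flash',
  prompt: 'What are the latest developments in renewable energy?', // placeholder prompt
  tools: {
    google_search: google.tools.googleSearch({}),
  },
});

for await (const part of result.fullStream) {
  if (part.type === 'source') {
    // url and title are assumed field names for URL sources.
    console.log('Source:', part.title, part.url);
  } else if (part.type === 'text-delta') {
    process.stdout.write(part.text);
  }
}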