Image Input
The OpenResponses API supports sending images alongside text for vision-capable models to analyze. Include an image_url object in your message content array with either a public URL or a base64-encoded data URI. The detail parameter controls the resolution used for analysis.
```ts
const apiKey = process.env.AI_GATEWAY_API_KEY;

const response = await fetch('https://ai-gateway.vercel.sh/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: 'zai/glm-4.7',
    input: [
      {
        type: 'message',
        role: 'user',
        content: [
          { type: 'text', text: 'Describe this image in detail.' },
          {
            type: 'image_url',
            image_url: { url: 'https://example.com/image.jpg', detail: 'auto' },
          },
        ],
      },
    ],
  }),
});
```
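The example above sends the request but never reads the result. A minimal sketch of inspecting it follows; the exact response shape isn't shown in this section, so the output field access below is an assumption based on the OpenResponses format and may need adjusting:

```ts
// Assumption: the body is JSON in the OpenResponses format,
// with the generated content under an `output` field.
if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}

const data = await response.json();
console.log(JSON.stringify(data.output, null, 2));
```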
You can also use base64-encoded images:

```ts
{
  type: 'image_url',
  image_url: {
    url: `data:image/png;base64,${imageBase64}`,
    detail: 'high',
  },
}
```
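The snippet above assumes an imageBase64 string already exists. One way to produce it in Node.js, sketched under the assumption that the image lives on local disk (./photo.png is a placeholder path):

```ts
import { readFile } from 'node:fs/promises';

// Read a local image and base64-encode it for the data URI.
const imageBase64 = (await readFile('./photo.png')).toString('base64');

const imagePart = {
  type: 'image_url',
  image_url: {
    url: `data:image/png;base64,${imageBase64}`,
    detail: 'high',
  },
};
```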
The detail parameter controls image resolution:

- `auto` - Let the model decide the appropriate resolution
- `low` - Use lower resolution for faster processing
- `high` - Use higher resolution for more detailed analysis
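As a rough guide for choosing between these values, a small hypothetical helper (the task categories here are illustrative, not part of the API) could map use cases to detail levels:

```ts
type Detail = 'auto' | 'low' | 'high';

// Hypothetical mapping from task type to detail level.
function detailFor(task: 'caption' | 'ocr' | 'general'): Detail {
  if (task === 'caption') return 'low'; // coarse description: speed over fidelity
  if (task === 'ocr') return 'high'; // small text benefits from extra resolution
  return 'auto'; // otherwise let the model decide
}
```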