LlamaIndex
LlamaIndex makes it simple to build knowledge assistants using LLMs connected to your enterprise data. This guide demonstrates how to integrate Vercel AI Gateway with LlamaIndex to access various AI models and providers.
First, create a new directory for your project and change into it:

```bash
mkdir llamaindex-ai-gateway
cd llamaindex-ai-gateway
```
Install the required LlamaIndex packages along with the `python-dotenv` package:

```bash
pip install llama-index-llms-vercel-ai-gateway llama-index python-dotenv
```
Create a `.env` file with your Vercel AI Gateway API key:

```
AI_GATEWAY_API_KEY=your-api-key-here
```
If you're using the AI Gateway from within a Vercel deployment, you can also use the `VERCEL_OIDC_TOKEN` environment variable, which is provided automatically.

Create a new file called `main.py` with the following code:

```python
import os

from dotenv import load_dotenv
from llama_index.core.llms import ChatMessage
from llama_index.llms.vercel_ai_gateway import VercelAIGateway

load_dotenv()

llm = VercelAIGateway(
    api_key=os.getenv("AI_GATEWAY_API_KEY"),
    max_tokens=200000,
    context_window=64000,
    model="anthropic/claude-4-sonnet",
)

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```
The code above:

- Initializes a `VercelAIGateway` LLM instance with your API key
- Configures the model to use Anthropic's Claude 4 Sonnet via the AI Gateway
- Creates a chat message and streams the response
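If you need the full response text rather than printing deltas as they arrive, you can accumulate the stream with a small helper. A minimal sketch; `collect_stream` is a hypothetical name, not a LlamaIndex function:

```python
def collect_stream(deltas) -> str:
    """Join streamed text deltas (such as the r.delta values yielded by
    llm.stream_chat) into the complete response string, skipping
    empty or None chunks."""
    return "".join(d for d in deltas if d)
```

Usage: `text = collect_stream(r.delta for r in llm.stream_chat([message]))`.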
Run your application using Python:
```bash
python main.py
```
You should see a streaming response from the AI model.