Vercel Blog

Building efficient MCP servers
MCP is becoming the standard for building AI model integrations. See how you can use Vercel's open-source MCP adapter to quickly build your own MCP server, like the teams at Zapier, Composio, and Solana.
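
Here's a minimal sketch of what a server built with the adapter can look like, based on the createMcpHandler API from the open-source @vercel/mcp-adapter package; the roll_dice tool and route path are illustrative, not part of the adapter itself:

```ts
// app/api/[transport]/route.ts
import { createMcpHandler } from "@vercel/mcp-adapter";
import { z } from "zod";

const handler = createMcpHandler((server) => {
  // Register a tool the connected model can call; zod validates its input.
  server.tool(
    "roll_dice",
    "Rolls an N-sided die",
    { sides: z.number().int().min(2) },
    async ({ sides }) => {
      const value = 1 + Math.floor(Math.random() * sides);
      return { content: [{ type: "text", text: `You rolled a ${value}` }] };
    },
  );
});

// The same handler serves the MCP transport endpoints.
export { handler as GET, handler as POST, handler as DELETE };
```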

How we’re adapting SEO for LLMs and AI search
AI is changing how content gets discovered. Now, SEO ranking ≠ LLM visibility. No one has all the answers, but here's how we're adapting our approach to SEO for LLMs and AI search.

v0-1.5-md & v0-1.5-lg now in beta on the Models API
Try v0-1.5-md and v0-1.5-lg, two new model sizes now in beta on the v0 Models API that let you trade speed against depth: ideal for everything from quick responses to deep analysis.
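
Assuming the Models API's OpenAI-compatible endpoint (https://api.v0.dev/v1 here) and a V0_API_KEY environment variable, switching to the new sizes is a one-line model change; a sketch:

```ts
import OpenAI from "openai";

// Point an OpenAI-compatible client at the v0 Models API.
const client = new OpenAI({
  apiKey: process.env.V0_API_KEY,
  baseURL: "https://api.v0.dev/v1",
});

const completion = await client.chat.completions.create({
  model: "v0-1.5-md", // or "v0-1.5-lg" for deeper analysis
  messages: [
    { role: "user", content: "Create a responsive pricing card in React" },
  ],
});

console.log(completion.choices[0].message.content);
```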

Observability added to AI Gateway alpha
Vercel Observability now includes a dedicated AI section to surface metrics related to the AI Gateway.

Building secure AI agents
Learn how to design secure AI agents that resist prompt injection attacks. Understand tool scoping, input validation, and output sanitization strategies to protect LLM-powered systems.
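
As a concrete illustration of tool scoping plus input validation, here is a hedged sketch (the registry and tool names are hypothetical): model-supplied arguments are validated with zod before execution, and the agent only sees an explicit allowlist of tools.

```ts
import { z } from "zod";

// Hypothetical registry: every tool declares a strict input schema.
const tools = {
  search_docs: {
    schema: z.object({ query: z.string().max(200) }),
    run: async ({ query }: { query: string }) => `results for ${query}`,
  },
} as const;

type ToolName = keyof typeof tools;

// Scope the agent to an allowlist instead of exposing every tool.
const allowed = new Set<ToolName>(["search_docs"]);

async function callTool(name: string, rawArgs: unknown) {
  if (!allowed.has(name as ToolName)) {
    throw new Error(`Tool not permitted: ${name}`);
  }
  const tool = tools[name as ToolName];
  // Reject malformed or oversized input before it reaches the tool.
  const args = tool.schema.parse(rawArgs);
  return tool.run(args);
}
```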

The no-nonsense approach to AI agent development
Learn how to build reliable, domain-specific AI agents by simulating tasks manually, structuring logic with code, and optimizing with real-world feedback. A clear, hands-on approach to practical automation.
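
One way to picture "structuring logic with code": keep routing and branching in ordinary, testable code and reserve the model for the single step that needs language understanding. A sketch using the AI SDK's generateText (the model choice and queue names are illustrative):

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// The model handles only the fuzzy step: classifying free-form text.
async function classifyTicket(
  ticket: string,
): Promise<"bug" | "billing" | "other"> {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: `Classify this support ticket as exactly one of: bug, billing, other.\n\n${ticket}`,
  });
  const label = text.trim().toLowerCase();
  return label === "bug" || label === "billing" ? label : "other";
}

// Routing stays in plain code, where it is deterministic and testable.
async function routeTicket(ticket: string): Promise<string> {
  switch (await classifyTicket(ticket)) {
    case "bug":
      return "engineering-queue";
    case "billing":
      return "billing-queue";
    default:
      return "triage-queue";
  }
}
```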

Introducing the v0 composite model family
Learn how v0's composite AI models combine RAG, frontier LLMs, and AutoFix to build accurate, up-to-date web app code with fewer errors and faster output.
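
Purely as an illustration of the composite pattern (not v0's actual implementation; every function here is hypothetical), the retrieve, generate, and auto-repair stages might compose like this:

```ts
type Doc = { source: string; text: string };

async function compositeGenerate(
  prompt: string,
  retrieve: (q: string) => Promise<Doc[]>,   // RAG stage
  generate: (p: string) => Promise<string>,  // frontier LLM stage
  lint: (code: string) => Promise<string[]>, // returns error messages
): Promise<string> {
  // 1. Ground the request in retrieved, up-to-date context.
  const docs = await retrieve(prompt);
  const context = docs.map((d) => `// ${d.source}\n${d.text}`).join("\n\n");

  // 2. Draft code against that context.
  let code = await generate(`${context}\n\n${prompt}`);

  // 3. AutoFix-style loop: feed errors back until clean, with a bound.
  for (let attempt = 0; attempt < 3; attempt++) {
    const errors = await lint(code);
    if (errors.length === 0) break;
    code = await generate(`Fix these errors:\n${errors.join("\n")}\n\n${code}`);
  }
  return code;
}
```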

Fluid compute: Evolving serverless for AI workloads
Fluid, our newly announced compute model, eliminates wasted compute: instead of launching a new function for every request, it intelligently reuses available capacity so that compute never sits idle.
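
To see why reuse matters, consider a function whose expensive setup lives at module scope: on a warm, reused instance it runs once and is shared across invocations, and time spent waiting on upstream calls becomes capacity for other requests. A sketch (the OpenAI client stands in for any costly resource, such as a database pool):

```ts
import OpenAI from "openai";

// Initialized once per instance; reused invocations share it.
const client = new OpenAI();

export async function POST(request: Request) {
  const { prompt } = await request.json();
  // While this await idles on the upstream model, a reused instance
  // can serve other requests instead of sitting on dedicated compute.
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  return Response.json({ text: completion.choices[0].message.content });
}
```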