Vercel Blog
AI Chat now available on Vercel docs
AI Chat is now live on Vercel docs. Ask questions, load docs as context for page-aware answers, and copy chats as Markdown for easy sharing.
Manage Next.js Server Actions in the Vercel Firewall
Vercel Firewall and Observability Plus now support custom rules that target specific Next.js Server Actions.
Open source Workflow Development Kit is now in public beta
Turn TypeScript functions into durable, reliable workflows with automatic retries, persistence, and observability built in.
Vercel Python SDK is now available in beta
Vercel now has a native Python library hosted on PyPI to simplify tasks such as executing code on Vercel Sandboxes, transferring objects to Vercel Blob, working with the Runtime Cache, and retrieving project tokens.
Introducing AI agents & services on the Vercel Marketplace
Discover AI Agents & Services on the Vercel Marketplace. Integrate agentic tools, automate workflows, and build with unified billing and observability.
Vercel Agent Investigations now in Public Beta
Vercel Agent can now automatically run AI investigations on anomalous events for faster incident response.
Zero-config backends on Vercel AI Cloud
Build, scale, and orchestrate AI backends on Vercel. Deploy Python or Node frameworks with zero config and optimized compute for agents and workflows.
Introducing Vercel Agent: Your new Vercel teammate
Vercel Agent provides AI-powered code reviews and production investigations, delivering accurate, context-aware insights to help you ship reliable software.
Built-in durability: Introducing Workflow Development Kit
The Workflow Development Kit (WDK) makes async workflows in TypeScript reliable, durable, fault-tolerant, and portable across any cloud.
AI agents and services on the Vercel Marketplace
Agents and Tools are available in the Vercel Marketplace, enabling AI-powered workflows in your projects with native integrations, unified billing, and built-in observability.
You can just ship agents
Vercel AI Cloud combines unified model routing and failover, elastic cost-efficient compute that only bills for active CPU time, isolated execution for untrusted code, and workflow durability that survives restarts, deploys, and long pauses.
Faster builds with Turbo build machines
Turbo build machines are now available for all paid plans. With 30 CPUs and 60 GB memory they are our fastest yet, ideal for Turbopack builds and large monorepos.
Update regarding Vercel service disruption on October 20, 2025
Read the summary of impact, timeline, root cause, and steps we’re taking to improve reliability.
Preview links between microfrontends projects now serve all paths
Teams using microfrontends can now visit all routes from any domain in the microfrontends group, enabling teams to test their full site experience without broken links or missing pages.
Zero-configuration support for NestJS
Vercel now detects and deploys NestJS, a framework for building efficient, scalable Node.js server-side applications, with zero configuration.
Braintrust joins the Vercel Marketplace
Braintrust joins the Vercel Marketplace with native support for the Vercel AI SDK and AI Gateway, enabling developers to monitor, evaluate, and improve AI application performance in real time.
Agents at work, a partnership with Salesforce and Slack
Vercel and Salesforce are partnering to help teams build, ship, and scale AI agents across the Salesforce ecosystem, starting with Slack.
Running Next.js inside ChatGPT: A deep dive into native app integration
Next.js now runs natively in ChatGPT with working navigation, React Server Components, and full features. Learn how we made this possible behind ChatGPT's triple-iframe architecture, and deploy our starter template to get started.
Talha Tariq joins Vercel as CTO (Security)
Talha Tariq joins Vercel as CTO of Security, bringing expertise from HashiCorp and IBM to lead security innovation in the AI era.
Just another (Black) Friday
Vercel customers can treat Black Friday like just another day, ready to scale to billions of requests.
Anomaly alerts now in public beta
You can now subscribe to receive alerts when anomalous spikes in usage or 5XX errors are detected in your Vercel application.
Zero-configuration Flask backends
Flask, one of the most popular Python web application frameworks, can now be deployed instantly at Vercel with no configuration changes needed.
Expanded Role-Based Access Control (RBAC) for Enterprise teams
Vercel’s Role-Based Access Control (RBAC) system now supports multiple roles per user and introduces extended permissions for finer-grained access control across Enterprise teams.
ChatGPT apps support on Vercel
Use the Apps SDK, Next.js, and mcp-handler to build and deploy ChatGPT apps on Vercel, complete with custom UI and app-specific functionality.
Server rendering benchmarks: Fluid Compute and Cloudflare Workers
Fluid Compute outperforms Cloudflare Workers by 1.2x–5x in server-side rendering benchmarks, offering faster, more consistent response times through an optimized in-region architecture.
Block Vercel deployment promotions with GitHub Actions
You can now automatically block a deployment from releasing to production until selected GitHub Actions complete successfully.
New Domains Registrar API for domain search, pricing, purchase, and management
Vercel Domains Registrar APIs let you search TLDs, fetch pricing, check availability, purchase or renew domains, manage nameservers, and handle transfers.
Anomaly alerts now available via email
Subscribe to receive alerts by email when anomalous spikes in usage or HTTP 500 errors occur on your Vercel projects.
Python package manager uv is now available for builds with zero configuration
Vercel now uses uv, a fast Python package manager written in Rust, as the default package manager during the installation step for all Python builds.
Invalidate the CDN cache by tag
You can now invalidate CDN cache contents by tag, providing a way to revalidate content without increasing latency for your users.
Static IPs are now available for more secure connectivity
Teams on Pro and Enterprise can now access Static IPs to connect to services that require IP allowlisting. Static IPs give your projects consistent outbound IPs without needing the full networking stack of Secure Compute.
Deployment-level configuration for Fluid compute
You can now enable Fluid compute on a per-deployment basis by setting "fluid": true in your vercel.json.
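Per the note above, the setting is a single key in vercel.json (shown here as a minimal sketch):

```json
{
  "fluid": true
}
```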
Faster time-to-start for v0 builds
Publishing v0 apps on Vercel is now 1.1 seconds faster on average, thanks to optimizations in how source files are sent during deployment creation.
Towards the AI Cloud: Our Series F
Today, Vercel announced an important milestone: a Series F funding round valuing our company at $9.3 billion.
Stripe is now available in beta on the Vercel Marketplace
Stripe claimable sandboxes are now available on the Vercel Marketplace and in v0. Test your flow fully in test mode, then claim the sandbox when you're ready to go live.
View & query bot verification data in Vercel Observability
Analyze traffic to your Vercel projects by bot name, bot category, and bot verification status in Vercel Observability.
Collaborating with Anthropic on Claude Sonnet 4.5 to power intelligent coding agents
Claude Sonnet 4.5 is now available on Vercel AI Gateway and across the Vercel AI Cloud. Also introducing a new coding agent platform template.
Node.js Vercel Functions now support per-path request cancellation
Vercel Functions using Node.js can now detect when a request is cancelled and stop execution before completion. This includes actions like navigating away, closing a tab, or hitting stop on an AI chat to terminate compute processing early.
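The pattern can be sketched with the standard `AbortSignal` exposed on the incoming request; the route shape and the unit of work below are illustrative, not Vercel's exact API surface:

```typescript
// Illustrative handler: checks for client cancellation between units of work.
export async function GET(request: Request): Promise<Response> {
  const chunks: string[] = [];
  for (let i = 0; i < 10; i++) {
    // If the client navigated away or hit stop, end early instead of
    // continuing to burn compute.
    if (request.signal.aborted) {
      return new Response("cancelled", { status: 499 });
    }
    chunks.push(`chunk-${i}`);
  }
  return new Response(chunks.join("\n"), { status: 200 });
}
```

Checking the signal between expensive steps (or forwarding it to downstream `fetch` calls) lets the function stop as soon as the client disconnects.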
Request collapsing for ISR cache misses
Vercel now does regional request collapsing on cache miss for Incremental Static Regeneration (ISR).
Preventing the stampede: Request collapsing in the Vercel CDN
The Vercel CDN now supports request collapsing for ISR routes. For a given path, only one function invocation per region runs at once, no matter how many concurrent requests arrive.
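Conceptually, request collapsing keeps one in-flight render per key and lets every concurrent caller await the same promise. A simplified sketch (names are illustrative, not Vercel's implementation):

```typescript
// One in-flight render per path; concurrent callers share the same promise.
const inFlight = new Map<string, Promise<string>>();

export async function collapsed(
  path: string,
  render: () => Promise<string>,
): Promise<string> {
  const existing = inFlight.get(path);
  if (existing) return existing; // join the render already in progress
  const pending = render().finally(() => inFlight.delete(path));
  inFlight.set(path, pending);
  return pending;
}
```

However many requests arrive for the same path while a render is pending, `render` runs once and all callers receive its result.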
Query data on external API requests in Vercel Observability
It's now possible to run custom queries against all external API requests that were made from Vercel Functions.
Claimed deployments now include third-party resources
Vercel now supports transferring resources like databases between teams as part of the Claim Deployments flow. Developers building AI agents, no-code tools, and workflow apps can instantly deploy projects and resources.
Anomaly alerts now include error spikes
Enterprise customers with Observability Plus can now receive anomaly alerts for errors through the limited beta.
BotID uncovers hidden SEO poisoning
A financial institution's suspicious bot traffic turned out to be Google bots crawling SEO-poisoned URLs from years ago. Here's how BotID revealed the real problem.
Filter deployments by author
Easily filter deployments on the Vercel dashboard by Vercel username or email, or by Git username (if applicable).
How we made global routing faster with Bloom filters
We replaced slow JSON path lookups with Bloom filters in our global routing service, cutting memory usage by 15% and reducing 99th percentile lookup times from hundreds of milliseconds to under 1 ms. Here’s how we did it.
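For intuition, a Bloom filter packs set membership into a bit array: lookups answer "definitely absent" or "possibly present" at a small, fixed memory cost. A minimal sketch follows; the hash function and sizing are illustrative, not the production design:

```typescript
// Minimal Bloom filter sketch. Real deployments tune size/hash count to the
// expected item count and acceptable false-positive rate.
export class BloomFilter {
  private bits: Uint8Array;
  constructor(private size = 1024, private hashes = 3) {
    this.bits = new Uint8Array(size);
  }
  // Simple seeded FNV-1a style hash; production systems use stronger hashes.
  private hash(value: string, seed: number): number {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return Math.abs(h) % this.size;
  }
  add(value: string): void {
    for (let s = 0; s < this.hashes; s++) this.bits[this.hash(value, s)] = 1;
  }
  mightContain(value: string): boolean {
    for (let s = 0; s < this.hashes; s++) {
      if (this.bits[this.hash(value, s)] === 0) return false; // definitely absent
    }
    return true; // possibly present (false positives possible, never false negatives)
  }
}
```

The "no false negatives" guarantee is what makes the structure safe for routing: a miss can skip the expensive lookup entirely, while a hit falls through to the authoritative check.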
Observability Plus replacing legacy Monitoring
Observability Plus will replace the legacy Monitoring subscription. Pro customers using Monitoring should migrate to Observability Plus to author custom queries on their Vercel data.
What you need to know about vibe coding
Vibe coding is revolutionizing how we work. English is now the fastest growing programming language. Our state of vibe coding report outlines what you need to know.
Scale to one: How Fluid solves cold starts
Learn how Vercel solves serverless cold starts with scale to one, Fluid compute, predictive scaling, and caching to keep functions warm and fast.
Generate static AI SDK tools from MCP servers with mcp-to-ai-sdk
Use mcp-to-ai-sdk to generate MCP tools directly into your project. Gain security, reliability, and prompt-tuned control while avoiding dynamic MCP risks.
AI agents at scale: Rox’s Vercel-powered revenue operating system
Learn more about how Rox runs global, AI-driven sales ops on fast, reliable infrastructure thanks to Vercel.
Builds now start up to 30% faster
Vercel builds now initialize faster due to optimized cache downloads. We use a worker pool to download the build cache, reducing initialization times by 30%.
Updated defaults for deployment retention
Vercel is updating the default retention policy for deployments. Unlimited retention is no longer available.
Introducing Vercel Drains: Complete observability data, anywhere
Vercel Drains give you a single way to stream observability data out of Vercel and into the systems your team already relies on.
x402-mcp enables x402 payments in MCP
Introducing x402-mcp, a library that integrates with the AI SDK to bring x402 paywalls to Model Context Protocol (MCP) servers, letting agents discover and pay for MCP tools easily and securely.
Introducing x402-mcp: Open protocol payments for MCP tools
We built x402-mcp to integrate x402 payments with Model Context Protocol (MCP) servers and the Vercel AI SDK.
New Vercel CLI login flow
Simplified Vercel CLI login with the OAuth 2.0 Device Flow. Sign in securely from any browser. Email and provider-based logins are deprecated as of February 1; upgrade now.
LongCat-Flash Chat model is now supported in Vercel AI Gateway
You can now access LongCat-Flash Chat from Meituan using Vercel AI Gateway, with no Meituan account required.
Vercel Sandbox maximum duration extended to 5 hours
Pro and Enterprise teams can now run Vercel Sandboxes for up to 5 hours (up from 45 minutes). This extension unlocks new possibilities for workloads that require longer runtimes.
MongoDB Atlas joins the Vercel Marketplace
MongoDB Atlas is now available on the Vercel Marketplace, making it simple to provision and scale databases directly from your Vercel dashboard.
HIPAA BAAs are now available to Pro teams
Pro teams can now access and sign a Business Associate Agreement (BAA) to enable HIPAA-compliant workloads on Vercel. The BAA is self-serve and available from the team billing dashboard.
Free Viewer seats now available on Pro
Pro teams can now add unlimited free Viewer seats so team members can more flexibly, and cost-efficiently collaborate.
Spend Management now enabled by default on Pro
Spend Management is now enabled by default on Pro plans, helping teams avoid unexpected usage charges.
Included Pro usage is now credit-based
The Pro plan now includes $20 in monthly usage credit instead of fixed allocations across metrics like data transfer, compute, caching, and more.
No build queues: On-demand concurrent builds now on by default
Teams on the new Pro pricing model will now have on-demand concurrent builds enabled by default. This ensures builds across projects start immediately without waiting in a queue, except when multiple builds target the same Git branch.
A more flexible Pro plan for modern teams
We’re updating Vercel’s Pro plan to better align with how modern teams collaborate, how applications consume infrastructure, and how workloads are changing shape with AI.
Critical npm supply chain attack response - September 8, 2025
How Vercel responded to the September 2025 npm supply chain attack on chalk, debug and 16 other packages. Incident timeline, impact analysis, and customer remediation.
Vercel Functions now support graceful shutdown
Vercel Functions running on the Node.js or Python runtimes now support graceful shutdown, allowing cleanup tasks to run for up to 500 milliseconds just before shutting down.
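Assuming the runtime signals shutdown with SIGTERM (the usual convention for Node.js runtimes), a cleanup hook might look like this sketch; `flushLogs` and the buffer are illustrative:

```typescript
// Buffer of work to flush before the function instance shuts down.
let pendingLogs: string[] = ["example log line"];

async function flushLogs(): Promise<void> {
  // In a real function this might POST the buffer to a logging endpoint.
  pendingLogs = [];
}

process.on("SIGTERM", () => {
  // Cleanup must finish well within the ~500 ms budget described above.
  void flushLogs();
});
```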
Export traces, web analytics events, and speed insights datapoints to any destination
You can now export logs, traces, web analytics events, and speed insights datapoints with Vercel Drains.
Zero-configuration Express backends
Vercel now detects and deploys Express, a fast, unopinionated, minimalist web framework for Node.js, with zero configuration.
Stress testing Biome's noFloatingPromises lint rule
We partnered with Biome to push their noFloatingPromises lint rule to the limit, uncovering edge cases and showing how we solve hard problems together.
Open SDK strategy
Vercel’s Open SDK strategy commits to building frameworks, SDKs, and tools in the open, under permissive licenses. Learn how we’re avoiding lock-in, ensuring portability, and investing in open source to build a better web for everyone.
CVE-2025-55173
A vulnerability affecting Next.js Image Optimization has been addressed. It impacted versions prior to v15.4.5 and v14.2.31.
CVE-2025-57822
A vulnerability affecting Next.js Middleware has been addressed. It impacted versions prior to v14.2.32 and v15.4.7.
CVE-2025-57752
A vulnerability affecting Next.js Image Optimization has been addressed. It impacted versions prior to v15.4.5 and v14.2.31.
Preparing for the worst: Our core database failover test
On July 24, 2025, we successfully performed a full production failover of our core control-plane database from Azure West US to East US 2 with zero customer impact.
s1ngularity: supply chain attack in Nx packages
A critical vulnerability was published in Nx and some of its supporting libraries. Vercel builds are safe from this vulnerability by default.
Anomaly alerts now in limited beta for Enterprise customers
Enterprise customers with Observability Plus can now receive anomaly alerts through the limited beta.
Build Slack agents with @vercel/slack-bolt
Deploy your Slack agent to Vercel's AI Cloud using @vercel/slack-bolt to take advantage of AI Gateway, Fluid compute, and more.
Deploy xmcp servers with zero-configuration
xmcp, a framework for building and shipping MCP applications with TypeScript, can now be deployed to Vercel with zero-configuration.
AI-powered prototyping with design systems
Why AI-native design systems unlock true brand-ready, production-aligned prototyping for teams using v0
Introducing Streamdown: Open source Markdown for AI streaming
Streamdown is a new open source, drop-in Markdown renderer built for AI streaming. It powers the AI Elements Response component, but can also be used standalone.
AI Gateway is now generally available
AI Gateway is now generally available, providing a single interface to access hundreds of AI models with transparent pricing and built-in observability.
AI Gateway: Production-ready reliability for your AI apps
AI Gateway, now generally available, ensures availability when a provider fails, avoiding low rate limits and providing consistent reliability for AI workloads.
<script type="text/llms.txt">
llms.txt is an emerging standard for making content such as docs available for direct consumption by AIs. We’re proposing a convention to include such content directly in HTML responses.
Agents can now access protected deployments via Vercel’s MCP server
Vercel's MCP server now lets agents access deployments behind authentication, enabling them to act on your behalf.
Node.js Vercel Functions now support fetch web handlers
Vercel Functions running on the Node.js runtime now support fetch web handlers to enhance interoperability across runtimes and frameworks.
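A fetch web handler is just an exported function over the web-standard Request and Response types. A minimal sketch (the export convention shown here is an assumption):

```typescript
// Web-standard handler: takes a Request, returns a Response.
export function GET(request: Request): Response {
  const url = new URL(request.url);
  return Response.json({ path: url.pathname });
}
```

Because the signature uses only web standards, the same handler shape runs under any runtime or framework that speaks fetch.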
If agents are building your app, who gets the W-2?
If agents can design, build, test, and deploy features, their work should be treated like a developer's under GAAP. With modern AI logging, you can tie usage directly to capitalizable development activity.
Vercel Sandbox increases concurrency and port limits
Run up to 2,000 sandboxes at the same time. Sandbox now also supports listening on up to 4 different ports.
Improved fake hardware detection with Vercel BotID
Vercel BotID Deep Analysis now uses an updated detection model that expands fingerprinting coverage for bespoke headless browsers and simulated device hardware.
How Coxwave delivers GenAI value faster with Vercel
Coxwave's journey to cutting deployment times by 85% and building AI-native products faster with Vercel
Cutting delivery times in half with v0
Learn how Ready.net uses v0 to reduce ambiguity and accelerate feedback loops with limited resources
v0.dev -> v0.app
v0.dev is now v0.app, the AI builder for everyone: founders, designers, developers, marketers, sales, finance, and more
Cursor now supported on Vercel MCP
Connect Cursor to Vercel MCP to manage projects and deployments, analyze logs, search docs, and more.
How Zapier scales product partnerships with v0
The team behind Zapier’s embedded platform uses v0 to turn partner conversations into scalable integrations
Improved metrics search in Observability Plus
We’ve improved the metrics search and navigation experience in Vercel Observability, making it faster and easier to build custom queries.
Vercel collaborates with OpenAI for GPT-5 launch
The GPT-5 family of models, released today, is now available through AI Gateway and in production on our own v0.dev applications. Thanks to OpenAI, Vercel has been testing these models for a few weeks in v0, Next.js, the AI SDK, and Vercel Sandbox.
Vercel is the only vendor to be recognized as a Visionary in the 2025 Gartner® Magic Quadrant™ for Cloud-Native Application Platforms
We’re honored to be the only vendor recognized as a Visionary in the 2025 Gartner® Magic Quadrant™ for Cloud Native Application Platforms.
Introducing AI Elements: build AI interfaces faster
Focus on your AI’s intelligence, not the UI scaffolding. AI Elements is now available as a new Vercel product to help frontend engineers build AI-driven interfaces in a fraction of the time.
Introducing Vercel MCP: Connect Vercel to your AI tools
Vercel now has an official hosted MCP server (aka Vercel MCP), which you can use to connect your favorite AI tools, such as Claude or VS Code, directly to Vercel.
Claude 4.1 Opus is now supported in Vercel AI Gateway
You can now access Claude Opus 4.1, a new model released by Anthropic today, using Vercel's AI Gateway with no other provider accounts required.
gpt-oss-20b and gpt-oss-120b are now supported in Vercel AI Gateway
You can now access gpt-oss by OpenAI, an open-weight reasoning model designed to push the open model frontier, using Vercel's AI Gateway with no other provider accounts required.
v0: vibe coding, securely
Vibe coding makes it possible for anyone to ship a viral app. But every line of AI-generated code is a potential vulnerability. Security cannot be an afterthought, it must be the foundation. Turn ideas into secure apps with v0.
New custom visualization in Vercel Observability
You can now customize how you visualize Observability data with line charts, volume charts, table views, or a big number.
A new wave of software, shipped on Vercel
We're launching a new way to showcase standout products built and shipped on Vercel. Submit your project.
Deploy Hono backends with zero configuration
Vercel now detects and deploys Hono, a fast, lightweight web application framework built on web standards, with zero configuration.
AI SDK 5
Introducing type-safe chat, agentic loop control, new specification, tool enhancements, speech generation, and more.
Join the v0 Ambassador Program
Apply today to join the v0 Ambassador Program and help others discover the magic of what's possible with v0.
Z.ai's GLM-4.5 and GLM-4.5 Air are now supported in Vercel AI Gateway
You can now access GLM-4.5 and GLM-4.5 Air, new flagship models from Z.ai designed to unify frontier reasoning, coding, and agentic capabilities, using Vercel's AI Gateway with no other provider accounts required.
Fluid: How we built serverless servers
Fluid Compute cuts cold starts and compute costs by up to 95%, scaling I/O-bound and AI workloads efficiently across 45B+ weekly requests.
Model Context Protocol (MCP) explained: An FAQ
Model Context Protocol (MCP) is a new spec that helps standardize the way large language models (LLMs) access data and systems, extending what they can do beyond their training data.
Vercel and Solara6 partner to build better ecommerce experiences
Solara6 is partnering with us to help ecommerce brands ship faster and deploy with confidence. Through this partnership, ecommerce teams working with Solara6 can expect improved SEO, site speed, and reliability during peak traffic moments.
Qwen3-Coder is now supported in Vercel AI Gateway
You can now access Qwen3-Coder using Vercel's AI Gateway, with no other provider accounts required.
Growthbook joins the Vercel Marketplace
Add feature flags and A/B testing to your Vercel projects with GrowthBook, now available on the Vercel Marketplace.
Build your own AI app builder with the v0 Platform API
Learn how to build, extend, and automate AI-generated apps like BI tools and website builders with v0 Platform API
Transform rules are now available in vercel.json
Transform rules allow you to modify request headers, response headers, and request query parameters through your vercel.json configuration.
OpenAI-compatible API endpoints now supported in AI Gateway
OpenAI-compatible API endpoints are now supported in AI Gateway, giving you access to hundreds of models with no code rewrites required.
Open Vercel documentation pages in AI providers
Copy Vercel documentation pages as Markdown, or open them in AI providers such as v0, Claude, or ChatGPT.
Grep a million GitHub repositories via MCP
Search 1M+ GitHub repositories from your AI agent using Grep's MCP server. Your agent can now reference coding patterns and solutions used in open source projects to solve problems.
Moonshot AI's Kimi K2 model is now supported in Vercel AI Gateway
You can now access Kimi K2 from Moonshot AI using Vercel's AI Gateway, with no Moonshot AI account required.
OAuth support added to MCP Adapter
Vercel's open-source MCP adapter now supports the latest MCP Authorization spec, letting you securely ship OAuth-enabled MCP servers.
Search any public GitHub repo with Grep
You can now use Grep to search any public repository on GitHub, no longer limited to the 1M+ pre-indexed repos. Get quick, full-text search across the repo without any setup.
Clerk joins the Vercel Marketplace
Clerk is now available on the Vercel Marketplace. Add secure auth, user management, and SSO to your app with a native integration and fully integrated billing.
More Secure Deployment Protection
Standard Deployment Protection now protects all domains except production custom domains, enhancing protection for the automatic aliases of production deployments.
The AI Cloud: A unified platform for AI workloads
We made it simple to build, preview, and ship any frontend, from marketing pages to dynamic apps, without managing infrastructure. Now we’re introducing the next layer: the Vercel AI Cloud.
Vercel Blob now available in all Vercel Regions
You can now select the storage region for your Vercel Blob store when creating it. This allows you to store your files in the region closest to your users for reduced latency.
v0 Platform API now in beta
The v0 Platform API enables developers to programmatically generate, retrieve, and manage full stack web apps using RESTful endpoints and TypeScript SDK. Integrate v0 into your workflows, tools, or automation pipelines.
Web Application Firewall control now available with vercel.json
You can now configure firewall mitigation rules through your vercel.json project configuration file. This is in addition to the existing dashboard and API support.
Inngest joins the Vercel Marketplace
Build background jobs and AI workflows with Inngest, now on the Vercel Marketplace. Native support for Next.js, preview environments, and branching.
NuxtLabs joins Vercel
NuxtLabs, creators of Nuxt and Nitro, are joining Vercel. Same license, roadmap, and open governance, but now in a joint mission to build the best web.
Sandbox now supports sudo and installing RPM packages
You can now run commands with sudo inside Vercel Sandbox, giving you full control to install packages at runtime, just like on a traditional Linux system.
Correlate logs and traces with OpenTelemetry in Vercel Log Drains
Correlate Vercel logs and traces with OpenTelemetry (OTel) in Vercel Log Drains sent to Datadog and Dash0.
CVE-2025-49005
A cache poisoning vulnerability affecting Next.js App Router >=15.3.0 < 15.3.3 / Vercel CLI 41.4.1–42.2.0 has been resolved. The issue allowed page requests for HTML content to return a React Server Component (RSC) payload instead.
CVE-2025-49826
A vulnerability affecting Next.js has been addressed. It impacted versions >=15.1.0 <15.1.8 and involved a cache poisoning bug leading to a Denial of Service (DoS) condition.
Zero-configuration support for Nitro
Vercel now detects and deploys Nitro, a server toolkit for building web servers, with zero configuration.
New usage dashboard for Pro customers
We’ve launched a new usage dashboard for Pro teams to analyze Vercel usage and costs with detailed breakdowns and export options.
New webhook events for domain management
You can now subscribe to webhook events for deeper visibility into domain operations on Vercel. These events make it easier to automate domain workflows, especially in multi-tenant platforms or when managing a large number of domains.
Vercel Ship 2025 recap
Vercel Ship 2025 added new building blocks for an AI era: Fast, flexible, and secure by default. Lower costs with Fluid's Active CPU pricing, Rolling Releases for safer deployments, invisible CAPTCHA with BotID. See these and more in our recap.
Introducing BotID, invisible bot filtering for critical routes
BotID is a new invisible CAPTCHA layer of protection that stops sophisticated bots before they reach your backend. It's built to secure critical routes like checkouts, logins, and signups, or actions that trigger expensive calls like LLM-powered APIs.
Edge Middleware and Edge Functions are now powered by Vercel Functions
The Edge runtime now runs on Vercel Functions, unifying pricing across all compute, and is available both before and after the cache. Edge Middleware and Edge Functions are now deprecated.
Run untrusted code with Vercel Sandbox
Vercel Sandbox securely runs untrusted code in isolated cloud environments, like AI-generated code. Create ephemeral, isolated microVMs using the new Sandbox SDK, with execution times of up to 45 minutes. Now in Beta and available to customers on all plans.
Lower pricing with Active CPU pricing for Fluid compute
Pricing for Vercel Functions on Fluid compute has been reduced. All Fluid-based compute now uses an Active CPU pricing model, offering up to 90% savings in addition to the cost efficiency already delivered by Fluid's concurrency model.
Higher defaults and limits for Vercel Functions running Fluid compute
Vercel Functions using Fluid compute now have longer execution times, more memory, and more CPU. The default execution time, for all projects on all plans, is now 300 seconds.
Introducing Active CPU pricing for Fluid compute
Fluid compute now uses Active CPU pricing. Only pay CPU rates when your function is actively computing. Building on existing Fluid gains, this brings additional savings of up to 90% for workloads like LLM calls, AI agents, or tasks with idle time.
WPP and Vercel: Bringing AI to the creative process
Announcing an expansion of our partnership with WPP, a first-of-its-kind agency collaboration that now brings v0 and AI SDK directly to WPP's global network of creative teams and their clients.
Vercel Blob CLI is now available
The Vercel CLI now includes blob commands, allowing you to manage your Vercel Blob storage directly from the terminal.
Keith Messick joins Vercel as CMO
We’re welcoming Keith Messick as Chief Marketing Officer to support our growth, engage on more channels, and (as always) amplify the voice of the developer. Keith is a longtime enterprise CMO and comes to Vercel from database leader, Redis.
Turso Cloud joins the Vercel Marketplace
Turso now offers a native integration on the Vercel Marketplace—deploy fast, edge-optimized SQLite databases with one-click setup and unified billing.
Two-factor authentication (2FA) team enforcement
Team owners can now enforce two-factor authentication (2FA) for every member of their team via a toggle in Security & Privacy under team settings.
Create and share queries with notebooks in Vercel Observability
Observability Plus users can now create a collection of queries in notebooks to collaboratively explore their observability data.
Introducing the Dubai Vercel region (dxb1)
Dubai (dxb1) is now part of Vercel’s global edge network, improving latency for users in the Middle East, Africa, and Central Asia.
Tray.ai cut build times from a day to minutes with Vercel
Tray.ai cut build times from a full day to just two minutes after migrating to Vercel. By consolidating infrastructure and updating their tech stack, they now deliver over a million monthly page views with a faster, more resilient site.
Improved unhandled Node.js errors in Fluid compute
Fluid compute now gracefully handles Node.js uncaught exceptions and unhandled rejections to provide better isolation between requests.
Improved team overview page
The Vercel team overview now sorts by your activity, can be filtered by repository, and shows a window into your usage.
Building efficient MCP servers
MCP is becoming the standard for building AI model integrations. See how you can use Vercel's open-source MCP adapter to quickly build your own MCP server, like the teams at Zapier, Composio, and Solana.
Designing and building the Vercel Ship conference platform
Here's how we designed and built our Vercel Ship conference platform. We generated 15,000+ images and videos with tools like Flux, Veo 2, Runway, and Ideogram. Then, we moved to v0 for prototyping. See our iterations, examples, tech stack, and more.
Filter runtime logs for fatal function errors
You can now filter your runtime logs to identify fatal function errors, such as Node.js crashes, in the Vercel Logs UI.
How we’re adapting SEO for LLMs and AI search
AI is changing how content gets discovered. Now, SEO ranking ≠ LLM visibility. No one has all the answers, but here's how we're adapting our approach to SEO for LLMs and AI search.
Observability added to AI Gateway alpha
Vercel Observability now includes a dedicated AI section to surface metrics related to the AI Gateway.
Claude Code and Cursor Agent no longer require a team seat
Claude Code and Cursor Agent can now trigger builds on Vercel without a team seat, as a part of our bot detection policies.
Bot Protection is now generally available
Vercel's Bot Protection managed ruleset allows users to mitigate unwanted bot activity on their projects in a single click.
Pre-generate SSL certs, now in the Domains dashboard
The Domains Dashboard now enables zero-downtime migration by allowing SSL certificates to be pre-provisioned before migrating domains.
The no-nonsense approach to AI agent development
Learn how to build reliable, domain-specific AI agents by simulating tasks manually, structuring logic with code, and optimizing with real-world feedback. A clear, hands-on approach to practical automation.
New firewall challenge metrics now available
You can now monitor and query for challenge outcomes with two new metrics, available in the Firewall and Observability dashboards.
Introducing the v0 composite model family
Learn how v0's composite AI models combine RAG, frontier LLMs, and AutoFix to build accurate, up-to-date web app code with fewer errors and faster output.
Fluid compute now supports ISR background and on-demand revalidation
Fluid compute now supports both background and on-demand revalidations across all Vercel projects. This update brings the same performance and efficiency improvements to on-demand cache updates, with no configuration changes required.
Faster login flow and new Google Sign-in support
We’ve improved the login experience with a new design and support for Google sign-in, including Google One Tap. Signing in with Google is now a single-click experience.
Fluid compute: Evolving serverless for AI workloads
Fluid, our newly announced compute model, eliminates wasted compute by maximizing resource efficiency. Instead of launching a new function for every request, it intelligently reuses available capacity, ensuring that compute isn’t sitting idle.
CVE-2025-48068
A low-severity vulnerability in the Next.js dev server has been addressed. It affects versions 13.0.0 through 14.2.29 and 15.0.0 through 15.2.1 when using the App Router, and involves cross-site WebSocket hijacking (CSWSH).
AI query prompting now available in Observability Plus
AI query prompting is now available in Observability Plus, allowing users to write or edit log queries using natural language. Generate shareable, bookmarkable queries without writing syntax.
Faster CDN proxying to external origins
Vercel’s upgraded CDN connection pooling speeds up proxying to external backends by up to 60%, cutting latency for both low-traffic and high-traffic apps.
Middleware insights now available in Vercel Observability
The Vercel Observability dashboard now includes a dedicated view for middleware, showing invocation counts and performance metrics.
Rate limiting now available on Hobby, with higher included usage on Pro
The first 1,000,000 allowed rate limit requests per month are now included. Hobby teams also get 1 free rate limit rule per project, up to the same included allotment.
Vercel security roundup: improved bot defenses, DoS mitigations, and insights
Since February, Vercel blocked over 148 billion attacks from 108 million IPs. This roundup highlights improvements to bot protection, DoS mitigation, and firewall tooling to help teams build securely by default.
External API caching insights now in Observability
External API caching insights are now available in Observability for external API calls using Vercel Data Cache.
Vercel Blob is now generally available
Vercel Blob is now generally available, bringing high-performance storage integrated with the Vercel application delivery platform.
How Vapi built their MCP server on Vercel
Vapi used Vercel's MCP Adapter to deploy and host their MCP server on Vercel, leveraging the benefits of Fluid compute.
Vercel Blob is now generally available: Cost-efficient, durable storage
Vercel Blob is now generally available, providing durable object storage that's integrated with Vercel's application delivery network.
Introducing the AI Gateway
With the AI Gateway, build with any model instantly. No API keys, no configuration, no vendor lock-in.
Vercel Blob insights now available in Observability
The Observability dashboard now includes a dedicated tab for Vercel Blob, which provides visibility into how Blob stores are used across your applications.
Hypertune joins the Vercel Marketplace
Hypertune is now on the Vercel Marketplace—get native feature flags, A/B testing, and dynamic config with one-click setup and unified billing. Type-safe, edge-ready, and built for modern workflows to help you ship faster.
45% faster build initialization
Customers on all plans can now benefit from faster build cache restoration times. We've made architectural improvements to builds to help customers build faster.
How Fern delivers 6M+ monthly views and 80% faster docs with Vercel
Fern used Vercel and Next.js to achieve efficient multi-tenancy, faster development cycles, and 50-80% faster load times.
How Consensys rebuilt MetaMask.io with Vercel and Next.js
Learn how Consensys modernized MetaMask.io using Vercel and Next.js—cutting deployment times, improving collaboration across teams, and unlocking dynamic content with serverless architecture.
New one-click AI bot managed ruleset
Protect your content from unauthorized AI crawlers with Vercel's new AI bot managed ruleset, offering one-click protection against known AI bots while automatically updating to catch new crawlers without any maintenance.
Proxied responses now cacheable via CDN-Cache-Control headers
Vercel's CDN now supports CDN-Cache-Control headers for external backends, giving you simple, powerful caching control without any configuration changes.
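As a minimal sketch of the idea, a hypothetical route handler (not from the post) could set the standard CDN-Cache-Control header to cache a response at the CDN while keeping browser caching short; the header names are standard, but the values and route are illustrative:

```typescript
// Illustrative handler: cache at the CDN for 5 minutes, but have
// browsers revalidate on every request.
export function GET(): Response {
  return new Response(JSON.stringify({ ok: true }), {
    headers: {
      "Content-Type": "application/json",
      // CDN-facing directive: cache this response for 300 seconds.
      "CDN-Cache-Control": "max-age=300",
      // Browser-facing directive: always revalidate.
      "Cache-Control": "max-age=0, must-revalidate",
    },
  });
}
```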
Resources tab allows instant searching and filtering of functions, middleware, and static assets
The Resources tab is replacing the Functions tab for deployments in the Vercel Dashboard. It displays and allows you to search and filter middleware, static assets, and functions.
Updated v0 pricing
More flexible pricing for v0 that scales with your usage and lets you pay on-demand through credits.
The spring 2025 cohort of Vercel’s Open Source Program
Announcing the spring 2025 cohort of Vercel's Open Source Program: open source frameworks, libraries, and tools we rely on every day to build the web.
New quick actions in Observability
Copy, filter, or exclude any value in Observability queries with new one-click actions, making it faster to analyze incoming traffic.
New usage dashboard for Enterprise users
We’ve launched a new usage dashboard for Enterprise teams to analyze Vercel usage and costs with detailed breakdowns and export options.
Up to 80% pricing reduction for Web Analytics
Hobby and Pro teams on Vercel now have higher usage limits on Web Analytics, including reduced costs and smaller billable increments.
CDN origin timeout increased to two minutes
Vercel’s CDN proxy read timeout has been increased to 120 seconds across all plans, enabling long-running AI workloads and reducing 504 gateway timeout errors. Available immediately at no cost, including on Hobby (free) plans.
MCP server support on Vercel
Run MCP servers with Next.js or Node.js in your Vercel project, with first-class support for Anthropic's MCP SDK.
Bot activity and crawler insights now in Observability
Find out which AI crawlers and search engines are scraping your content, then act on that traffic with the Vercel Firewall if needed.
Flags Explorer is now generally available
View and override feature flags in your browser with Flags Explorer, now generally available for all customers.
Faster builds now available with compute upgrades on paid plans
Enhanced Builds can now be enabled on demand per project for Pro and Enterprise teams. These builds offer double the compute. Customers already using Enhanced Builds are seeing up to 25% reductions in build times, with no action required.
Introducing the Flags Explorer, first-party integrations, and updates to the Flags SDK
Introducing first-party integrations, the Flags Explorer, and improvements to the Flags SDK to improve feature flag workflow on Vercel.
Join the Vercel AI Accelerator
A six-week program to help you scale your AI company, offering over $4M in credits from Vercel, v0, AWS, and leading AI platforms.