
Laravel AI SDK Goes Stable on March 17: What Changed and What to Check Before You Ship

The Laravel AI SDK hits stable on March 17. Here's what changed during beta and what to check before shipping AI features to production.


The Laravel AI SDK goes stable on Tuesday, March 17, alongside Laravel 13. If you've been watching from the sidelines waiting for the official green light, this is it.

But here's the thing: "stable" isn't just a label. It changes how you depend on the package, what you can expect from version updates, and whether it's safe to build production features on top of it. And if you already followed my overview post or the Part 1 and Part 2 tutorials, you're probably wondering: does my code still work? What actually changed?

That's what this post covers. No feature recaps you've read five times already. Just what changed, what "stable" means in practice, and a checklist to run through before you ship.

What "Stable" Actually Means Here

When Laravel tags a package as stable, a few concrete things happen.

First, semantic versioning kicks in properly. During the beta period, the v0.x releases could introduce breaking changes between minor versions, and they did: the API shifted, namespace conventions got tidied up, method signatures changed. Once the package hits 1.0, breaking changes are reserved for major version bumps only. You'll get bug fixes and new features without your code suddenly breaking.

Second, the package becomes safe to lock in production composer.json files. During beta, the right call was "laravel/ai": "dev-main" or a loose ^0.x constraint. After March 17, you'll want ^1.0. That pins you to the stable release channel and means composer update won't pull in something that breaks your agents.

Third, first-party support is now official. Security vulnerabilities get patched. The team maintains backward compatibility. Other packages in the ecosystem (Filament, Prism, and others) can safely declare it as a dependency without worrying the ground will shift under them.

So it's not that the features change dramatically on March 17. It's that the reliability contract is now formal.

What Changed During the Beta Period

The SDK launched publicly in beta in early February 2026. The core idea was solid from day one: a unified, Laravel-native API for working with AI providers, built around the Agent class pattern.

But the beta period was genuinely used to polish things. Here's what shifted between the initial release and v0.3.0:

Provider tool configuration became cleaner. The WebSearch, WebFetch, and FileSearch provider tools are now properly namespaced under Laravel\Ai\Providers\Tools. Early beta code that referenced these directly needed updating.

Configurable timeouts for embedding requests landed in v0.3.0. If you're running large embedding jobs and hitting timeout issues, this is the fix. You can now set timeout values per provider in config/ai.php.

The RemembersConversations trait stabilised. Early implementations had some rough edges around the agent_conversations and agent_conversation_messages tables. The migration schema is now final.

Structured output via JsonSchema cleaned up its fluent API. Some method chains that worked in early beta were deprecated in favour of a more consistent interface. If you wrote structured output agents before February 2026, double-check your schema definitions.

Testing fakes became comprehensive. The ability to fake agents, images, audio, transcriptions, embeddings, reranking, and file stores is now a first-class feature. This wasn't complete in the initial launch.

The v0.3.0 release on March 12 (which Taylor tweeted about) also added a handful of fixes around tool request handling. Nothing that breaks existing code, but it's worth pulling the latest before the stable tag lands on Tuesday.

The Full Feature Set, Now Locked In

Since the stable release freezes the API, it's worth knowing exactly what you're working with. Here's everything that's now officially supported:

flowchart TB
    A["Your Laravel app<br/>agents · controllers · jobs"]
    A --> B["Laravel AI SDK unified interface"]
    B --> C[OpenAI]
    B --> D[Anthropic]
    B --> E[Gemini]
    B --> F["+ 8 more"]

Agents are the core primitive. You create them via php artisan make:agent and configure instructions, tools, memory, and output schema inside the class. Or you use the agent() helper for quick inline agents. Think of each agent as a focused specialist: it has one job, one set of instructions, and one output contract.
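For reference, here's a minimal sketch of the shape an agent class can take, based on the description above. The base class and method names are assumptions from the post, not verified API; compare against the stub that php artisan make:agent actually generates.

```php
<?php

namespace App\Agents;

// Assumed base class; verify against the generated stub.
use Laravel\Ai\Agent;

class SalesCoach extends Agent
{
    // The system instructions this agent always receives:
    // one job, one set of instructions, one output contract.
    public function instructions(): string
    {
        return 'You analyse sales call transcripts and return concise coaching feedback.';
    }
}
```

You'd then prompt it with (new SalesCoach)->prompt('...'), as in the failover example further down.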

Multi-provider support covers OpenAI, Anthropic, Gemini, ElevenLabs, Groq, Cohere, DeepSeek, xAI, and OpenRouter. You swap providers by changing a single string. No client library changes, no interface refactoring. This is the part that makes the SDK genuinely different from just wrapping the OpenAI PHP client.

Provider failover is built in. Pass an array of providers and the SDK automatically tries the next one if it hits a rate limit or outage:

$response = (new SalesCoach)->prompt(
    'Analyse this transcript...',
    provider: [Lab::OpenAI, Lab::Anthropic],
);

flowchart TD
    A[Your Laravel app] --> B[SDK tries OpenAI]
    B -- success --> D[Response returned]
    B -- rate limit / outage --> C[SDK retries Anthropic]
    C --> D

Structured output uses a JsonSchema builder that generates the schema definition automatically and casts the response to your specified types. No more manually parsing JSON strings from AI responses. The agent returns a typed array you can work with directly.
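As a sketch of how that might look on an agent, assuming a fluent builder in the style Laravel favours elsewhere (the method names here are illustrative, not confirmed API; check your definitions against the stable docs):

```php
// Hypothetical schema definition on an agent class.
// Method names are illustrative assumptions, not confirmed API.
use Laravel\Ai\JsonSchema;

public function schema(): array
{
    return [
        'sentiment' => JsonSchema::string()->enum(['positive', 'neutral', 'negative']),
        'summary'   => JsonSchema::string(),
        'score'     => JsonSchema::integer(),
    ];
}
```

The promise either way: the response comes back as a typed array matching this shape, not a raw JSON string you parse yourself.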

Image, audio, and embedding generation all use the same fluent interface pattern. Images can be queued and stored to any filesystem disk. Audio supports both synthesis and transcription. Embeddings work with any vector store you configure.

Vector stores for RAG use cases are fully supported, with SimilaritySearch available for semantic document retrieval. If you built the support bot from my Part 2 tutorial, this is the feature you used most.

Rate limiting per user is configurable in config/ai.php. Important for any app where end users trigger AI requests directly. Without this, a single user can burn through your API quota in minutes.
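The shape of that configuration might look something like this. The key names are assumptions, so diff against the published config file rather than copying this verbatim:

```php
// config/ai.php — hypothetical per-user rate limit entry.
// Key names are assumptions; check the published config for the real ones.
'rate_limit' => [
    'per_user' => 60,      // max requests per user...
    'decay_minutes' => 1,  // ...per one-minute window
],
```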

Middleware for agents lets you intercept and modify requests and responses. Useful for logging every prompt and response, filtering sensitive content before it reaches the model, or injecting user context automatically so you don't repeat it in every system prompt.
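An agent middleware will presumably follow the familiar Laravel HTTP middleware shape. This sketch assumes that convention; the post doesn't confirm the exact interface:

```php
// Hypothetical agent middleware in the standard Laravel middleware shape.
// Logs every prompt and response passing through an agent.
class LogAiTraffic
{
    public function handle($request, \Closure $next)
    {
        logger()->info('AI prompt sent', ['prompt' => (string) $request]);

        $response = $next($request);

        logger()->info('AI response received');

        return $response;
    }
}
```

Swap the logging for content filtering or context injection and the pattern stays the same: one hook around every agent call.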

Production Readiness Checklist

Before you ship anything using the stable SDK, run through this list. Some of these will seem obvious. But I've seen each one catch someone out on a real project.

1. Update your composer constraint

composer require laravel/ai:^1.0

Don't stay on dev-main or ^0.x in production. The stable tag is exactly when you tighten this.

2. Publish and review the config file

php artisan vendor:publish --provider="Laravel\Ai\AiServiceProvider"

If you published this during beta, diff the new config against yours. Timeout settings, rate limit configuration, and the provider list may have new options you're not using yet.

3. Run the migrations fresh

php artisan migrate

If you installed the SDK during beta, confirm your agent_conversations and agent_conversation_messages tables match the final migration schema. The conversation storage schema stabilised in v0.3.0 and it's worth verifying your existing tables haven't drifted.

4. Set up provider failover for anything user-facing

Single-provider setups are fine for internal tools. For anything a real user waits on, configure a fallback:

// config/ai.php
'default_providers' => [
    Lab::OpenAI,
    Lab::Anthropic,
],

Or set it per agent class using the provider argument. Rate limits and brief outages are normal. Your users shouldn't see them.

5. Move heavy AI operations to queued jobs

Image generation, embedding indexing, audio transcription: none of these should run synchronously in a request cycle. The SDK supports queuing natively:

Image::of('A product banner for...')
    ->generate()
    ->store('products/banner.webp');

Wrap that in a queued job and your HTTP response times stay fast regardless of what the AI provider is doing. I covered Laravel queue patterns in depth here if you want the full setup.
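Here's one way that wrapper could look. The Image call is the one from above; the job boilerplate is standard Laravel, so only the SDK call and its namespace are assumptions:

```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Laravel\Ai\Image; // assumed namespace for the Image class

class GenerateProductBanner implements ShouldQueue
{
    use Queueable;

    public function __construct(
        public string $prompt,
        public string $path,
    ) {}

    public function handle(): void
    {
        // Runs on a queue worker, so the slow AI call never blocks a request.
        Image::of($this->prompt)
            ->generate()
            ->store($this->path);
    }
}
```

Dispatch it from a controller with GenerateProductBanner::dispatch(...) and return a response immediately.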

6. Write tests using fakes, not real API calls

This is where the stable SDK really delivers. Every AI operation has a fake:

use Laravel\Ai\Testing\AgentFake;

AgentFake::fake([
    SupportAgent::class => 'Here is the answer to your question...',
]);

// Your test now runs without hitting OpenAI
$response = (new SupportAgent)->prompt('How do I reset my password?');

$this->assertStringContainsString('answer', $response->text);

Tests that hit real AI APIs are slow, expensive, and flaky. Use the fakes. They're comprehensive and they cover images, audio, embeddings, and file stores too.

7. Validate structured output schemas

If you're using JsonSchema for structured agent responses, run a quick sanity check against the final stable API. Some fluent methods changed between early beta and v0.3.0. This is especially important if you wrote agents before mid-February. When debugging structured outputs, a JSON formatter saves time when you're staring at raw API response bodies trying to figure out where the schema mismatch is.

Does Your Existing Tutorial Code Still Work?

If you built the document analyser from Part 1 or the RAG support bot from Part 2, the answer is mostly yes, with one thing to check.

The agent() helper function, agent class structure, Promptable trait, tool interface, and SimilaritySearch usage are all unchanged. Your business logic is fine.

The one thing to verify: if you referenced provider tools like WebSearch or WebFetch directly in your agents, confirm the namespace is Laravel\Ai\Providers\Tools\WebSearch. That moved during beta.

Also bump your composer constraint to ^1.0 now and run composer update to get any fixes from the v0.3.0 release before the stable tag lands.

When You Shouldn't Rush to Use It

The stable launch doesn't mean AI is the right call for every new feature. I want to be direct about this.

The SDK is great when you have a specific, bounded use case with clear inputs and outputs. Structured output agents for classification, document analysis, support routing, content generation with known schemas: these all work well and are genuinely production-ready.

It's harder when you need deterministic behaviour. AI responses aren't consistent the way database queries are. If your feature requires exact, reproducible outputs, you're still better off with conventional code.

And cost matters. Every AI API call has a price. For user-facing features that get hit frequently, run the numbers before shipping. The SDK's rate limiting helps control this, but it doesn't make the problem disappear.

The PHP Attributes post from March 4 is actually a good example of what goes stable alongside this SDK. Laravel 13 as a whole is taking a measured, practical approach to new capabilities. The AI SDK fits that same philosophy. It's ready for production use cases. It's not a reason to retrofit AI into things that don't need it.

Frequently Asked Questions

Does upgrading to the stable SDK require any database migration changes?

If you're already using the SDK in beta, run php artisan migrate and check that your agent_conversations table matches the final schema. For new installs, publish the migrations fresh after upgrading to ^1.0.

Can I use the stable SDK on Laravel 12 or do I need to upgrade to Laravel 13?

The SDK currently lives in the Laravel 12.x docs and has been available since February 2026 on Laravel 12. The March 17 stable tag coincides with Laravel 13's release, but you don't need to upgrade Laravel to use the stable SDK. It works on Laravel 12.

What's the difference between using the agent() helper and creating a dedicated Agent class?

The agent() helper is great for quick, inline agents: one-off operations where you don't need reusability. Dedicated Agent classes are for agents you'll prompt from multiple places in your app, or agents with complex tool configurations and middleware. Start with the helper, extract to a class when it grows.
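For example, a throwaway summariser is a natural fit for the helper. The named argument here is an illustrative assumption; check the helper's actual signature:

```php
// Quick inline agent via the helper; argument name is an assumption.
$summary = agent(instructions: 'Summarise this support ticket in one sentence.')
    ->prompt($ticket->body);
```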

How do I handle AI API rate limits in production without the failover feature?

If you're locked to a single provider, queue your AI operations and add retry logic via Laravel's built-in job retry mechanisms. Set public $tries = 3 and public $backoff = [30, 60, 120] on your job class. The failover feature is better, but queued retries work as a fallback.
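The job-level pieces here are standard Laravel queue API; only the agent call inside handle() is SDK-specific. A minimal sketch:

```php
<?php

namespace App\Jobs;

use App\Agents\SalesCoach;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

class AnalyseTranscript implements ShouldQueue
{
    use Queueable;

    public $tries = 3;                // give up after three attempts
    public $backoff = [30, 60, 120];  // wait 30s, 60s, then 120s between attempts

    public function __construct(public string $transcript) {}

    public function handle(): void
    {
        // A rate-limit exception thrown here fails the attempt,
        // and Laravel retries on the backoff schedule above.
        (new SalesCoach)->prompt($this->transcript);
    }
}
```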

Is the Laravel AI SDK a replacement for Prism or should they coexist?

The official SDK covers the most common use cases and is now the first-party choice for new Laravel projects. Prism has a larger provider surface and some features the official SDK doesn't have yet. For new projects, start with the official SDK. If you have existing Prism code that works, there's no urgent reason to migrate.

What to Do Right Now

Install the stable SDK on Tuesday, update your composer constraint to ^1.0, and run through the checklist above. If you've been holding off on building AI features because the SDK was in beta, the wait is over.

This is genuinely the cleanest path to adding AI to a Laravel app right now. Not because it's the only option, but because it follows every Laravel convention you already know: service providers, facades, artisan commands, queue jobs, and config files. There's no mental context switch. You're writing Laravel code that happens to call an AI provider.

The full Laravel AI SDK docs are already live and cover everything with code examples. They're the authoritative reference once the stable tag lands on Tuesday.

Building an AI-powered product and want to ship it fast? I help founders and teams go from idea to working Laravel MVP. Let's talk.

Hafiz Riaz

About Hafiz

Senior Full-Stack Developer with 9+ years building web apps and SaaS platforms. I specialize in Laravel and Vue.js, and I write about the real decisions behind shipping production software.

