
Laravel Search in 2026: Full-Text, Semantic, and Vector Search Explained

Laravel 12 now has built-in support for full-text, semantic, and vector search. Here's when to use each one.


Laravel just got a brand new documentation page dedicated entirely to search. Taylor Otwell shipped it today, and it quietly changes how we should think about search in Laravel applications.

Why does this matter? Because Laravel now treats full-text search, semantic search, vector search, and reranking as first-class features. Not as afterthoughts. Not as "install this third-party package and figure it out." They're all documented in one place, with clear guidance on when to use what.

I've been building search functionality in production Laravel apps for years. One of my SaaS projects handles thousands of search queries daily from users across multiple countries. And honestly? I wish this docs page existed two years ago. It would have saved me from overengineering my first search implementation.

Let me walk you through everything this new page covers, what it actually means for your projects, and most importantly, which approach you should pick.

The Search Ladder: Start Simple, Scale When You Need To

Here's something Taylor himself said in the announcement: "I think most applications can get pretty far with the built-in database stuff." He's right. And this is the most important takeaway from the entire docs page.

Think of Laravel's search options as a ladder. You start at the bottom and only climb when your current step isn't enough.

Step 1: WHERE LIKE queries (you're already here)

Step 2: Full-text search with whereFullText (built-in, no packages)

Step 3: Laravel Scout with database driver (still no external services)

Step 4: Scout with Meilisearch, Algolia, or Typesense (external service)

Step 5: Semantic/vector search with embeddings (AI-powered)

Most Laravel apps never need to go past Step 2 or 3. Seriously.

Full-Text Search: Your First Real Upgrade

If you're still doing WHERE title LIKE '%search term%', you're leaving performance on the table. Full-text search is built right into MariaDB, MySQL, and PostgreSQL, and Laravel makes it dead simple.

Adding Full-Text Indexes

First, add a full-text index to your migration:

Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('body');
    $table->timestamps();

    $table->fullText(['title', 'body']);
});

That's it. No packages. No external services. Just a migration.

Running Full-Text Queries

Now you can search using whereFullText:

$articles = Article::whereFullText(['title', 'body'], 'laravel search')
    ->get();

This is significantly faster than LIKE queries on large datasets, and the database engine handles word boundaries and relevance ranking for you. On PostgreSQL you also get stemming: a search for "running" will match records containing "run" too. MySQL and MariaDB don't apply stemming by default, so expect whole-word matching there. Either way, no external service is required.

One thing to note: on MariaDB and MySQL, results are automatically ordered by relevance score. On PostgreSQL, whereFullText filters matching records but doesn't order them by relevance. If you need automatic relevance ordering on PostgreSQL, consider using Scout's database engine, which handles this for you.

If you need more control over the search mode, you can pass options:

// Boolean mode (MySQL/MariaDB) for more precise matching
$articles = Article::whereFullText(
    ['title', 'body'],
    '+laravel -wordpress',
    ['mode' => 'boolean']
)->get();

I used this exact approach in a production app with around 50,000 records. It handled everything we threw at it. Search results came back in under 100ms. Not once did we need to bring in an external search engine for that project.

When Full-Text Search Falls Short

There are real limitations though. Full-text search matches keywords, not meaning. If your user searches for "how to fix my website" and your content talks about "debugging web applications," full-text search won't make that connection. It doesn't understand that "fix" and "debug" mean similar things in this context.

Also, LIKE queries with a leading wildcard (%term) can't use indexes at all. Full-text search solves that, but it still won't give you typo tolerance, faceted filtering, or synonym matching out of the box.

That's when you move up the ladder.

Laravel Scout: The Middle Ground

Scout sits between raw database queries and full AI-powered search. It's a package (not built into the framework core), but it's maintained by the Laravel team and deeply integrated.

composer require laravel/scout
php artisan vendor:publish --provider="Laravel\Scout\ScoutServiceProvider"

Add the Searchable trait to your model:

use Laravel\Scout\Searchable;

class Article extends Model
{
    use Searchable;
}

The Database Driver (No External Service)

Scout ships with a database driver that works with MySQL and PostgreSQL. No Algolia account needed. No Docker containers to manage.

SCOUT_DRIVER=database

$articles = Article::search('laravel search')->get();

This gives you a cleaner API than raw whereFullText calls, automatic index syncing via model observers, and pagination support. For many applications, this is the sweet spot.

External Search Engines

When the database driver isn't cutting it, Scout supports three external engines out of the box: Algolia (hosted, paid), Meilisearch (open source, self-hostable), and Typesense (open source, self-hostable).

I'll be honest, if you need an external engine, I'd go with Meilisearch for most Laravel projects. It ships with Laravel Sail, it's open source, and the developer experience is excellent. Typesense is a strong alternative if you need built-in vector search capabilities. Algolia is solid but gets expensive fast once you scale past the free tier.

Switching between engines is trivial since Scout abstracts the driver:

# Just change this line
SCOUT_DRIVER=meilisearch

Your application code stays exactly the same. That's the beauty of Scout's driver-based architecture.

Semantic and Vector Search: When Keywords Aren't Enough

This is where it gets interesting. And this is the part of the new docs page that surprised me most.

Laravel 12 now documents how to implement semantic search using vector embeddings. Not as a "maybe someday" feature, but as a real, documented approach with code examples.

What's Actually Happening Here

Traditional search matches words. Semantic search matches meaning. When a user searches for "affordable accommodation near the beach," semantic search understands that "budget hotel oceanfront" is a relevant result even though none of the original keywords match.

This works through embeddings. You convert your text into numerical vectors (arrays of floating-point numbers) using an AI model. Similar concepts end up as similar vectors. Then you search by calculating the distance between the query vector and your stored vectors.
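To make "distance between vectors" concrete, here's a language-agnostic sketch in Python. The three-dimensional vectors and their values are made up purely for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: close to 1.0 = same direction (similar meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real ones have 1536+ dimensions)
budget_hotel = [0.9, 0.1, 0.2]
cheap_stay   = [0.8, 0.2, 0.3]   # similar concept -> nearby vector
tax_law      = [0.1, 0.9, 0.1]   # unrelated concept -> distant vector

print(cosine_similarity(budget_hotel, cheap_stay))  # high, ~0.98
print(cosine_similarity(budget_hotel, tax_law))     # low, ~0.24
```

Vector databases like pgvector run essentially this comparison (heavily optimized and index-accelerated) between your query vector and every stored vector.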

Generating Embeddings

This is where the Laravel AI SDK comes in. If you haven't set it up yet, my step-by-step tutorial gets you running in 30 minutes.

The simplest way to generate an embedding is using Laravel's Stringable class:

use Illuminate\Support\Str;

$embedding = Str::of('affordable accommodation near the beach')
    ->toEmbeddings();

If you need to embed multiple inputs at once (which is more efficient since it makes a single API call), use the Embeddings class:

use Laravel\Ai\Embeddings;

$response = Embeddings::for([
    'affordable accommodation near the beach',
    'luxury resort with ocean view',
])->generate();

$response->embeddings;
// [[0.123, 0.456, ...], [0.789, 0.012, ...]]

Each piece of text becomes a point in high-dimensional space. Similar text ends up nearby. That's the core idea.

Storing and Indexing Vectors

PostgreSQL with the pgvector extension is the most practical option for Laravel developers. Laravel 12 has built-in support for vector columns and indexes right in the migration builder. No extra PHP packages needed.

Schema::ensureVectorExtensionExists();

Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('body');
    $table->vector('embedding', dimensions: 1536)->index();
    $table->timestamps();
});

The ensureVectorExtensionExists() call enables pgvector on your database. The ->index() chain creates an HNSW (Hierarchical Navigable Small World) index automatically, which dramatically speeds up similarity searches on large datasets. That's two lines doing what used to take a separate package and raw SQL statements.
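For reference, the migration above corresponds to roughly this raw pgvector DDL. This is a sketch: the exact statements and index options Laravel generates may differ, and vector_cosine_ops is just one of the distance operator classes pgvector offers:

```sql
-- Enable the extension (what ensureVectorExtensionExists() does)
CREATE EXTENSION IF NOT EXISTS vector;

-- A 1536-dimension vector column, as in the migration
ALTER TABLE articles ADD COLUMN embedding vector(1536);

-- HNSW index for fast approximate nearest-neighbor search
CREATE INDEX ON articles USING hnsw (embedding vector_cosine_ops);
```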

On your Eloquent model, cast the vector column to an array:

protected function casts(): array
{
    return [
        'embedding' => 'array',
    ];
}

Querying by Similarity

Once your embeddings are stored, you can find semantically similar content using Laravel's built-in whereVectorSimilarTo query builder method:

use Illuminate\Support\Str;

// First, embed the search query
$queryEmbedding = Str::of($searchQuery)->toEmbeddings();

// Then find nearest neighbors by cosine similarity
$results = Article::whereVectorSimilarTo('embedding', $queryEmbedding)
    ->take(10)
    ->get();

This is powerful stuff. But it comes with real costs. Every piece of content needs an embedding generated (API calls cost money). Every search query also needs an embedding (more API calls, added latency). And you need PostgreSQL with pgvector, which not every hosting provider supports yet. All Laravel Cloud Serverless Postgres databases already include pgvector though, so if you're on Cloud you're good to go.

Reranking: The Best of Both Worlds

Here's something most Laravel developers haven't heard about yet. Reranking.

The idea is simple: use a fast, cheap search method first (full-text or keyword search) to get a broad set of candidates. Then use a more expensive AI model to re-order those results by semantic relevance.

// Step 1: Get candidates with fast full-text search
$candidates = Article::whereFullText(['title', 'body'], $query)
    ->take(50)
    ->get();

// Step 2: Rerank with AI
$reranked = $candidates->rerank('body', $query);

That second line uses Scout's collection macro to rerank all 50 candidates by how semantically relevant their body text is to the query. You can also rerank by multiple fields or use a closure to build a custom document string:

// Rerank by multiple fields
$reranked = $candidates->rerank(['title', 'body'], $query);

// Rerank with a custom document builder
$reranked = $candidates->rerank(
    fn ($article) => $article->title . ': ' . $article->body,
    $query
);

This gives you semantic understanding without embedding every single record in your database. You only run the expensive AI operation on 20-50 candidates, not your entire dataset. Smart tradeoff.

I haven't used reranking in production yet, but I'm excited about it. It solves the biggest problem with full vector search, which is the cost and complexity of maintaining embeddings for every record. With reranking, you get 80% of the benefit at maybe 20% of the cost.

Combining Techniques: The Real-World Approach

The new docs page also covers combining these approaches, and this is where the practical value really shines.

In a production SaaS app I worked on, search evolved over time. We started with basic WHERE LIKE queries. Within a month, we migrated to whereFullText because the dataset grew to 30,000+ records and the LIKE queries were getting slow. Six months in, we added Scout with the database driver for a cleaner API and automatic index syncing.

We never needed vector search for that project. The content was structured (titles, categories, tags) and users searched with specific terms. Full-text search handled it perfectly.

But in another project with user-generated content in multiple languages, I could see vector search being the right call from day one. When your users describe the same thing in completely different ways, keyword matching falls apart fast.

The point is: don't pick your search strategy based on what's technically impressive. Pick it based on what your data and users actually need.

The Decision Framework

Here's my quick guide for choosing:

Use whereFullText when your data is structured, searches use specific keywords, and you have under 100K records. This covers most admin panels, internal tools, and early-stage SaaS apps. Zero additional cost.

Use Scout with database driver when you want a cleaner search API, automatic index syncing, and pagination. Still no external services. Good for any app that's outgrowing raw queries.

Use Scout with Meilisearch/Typesense when you need typo tolerance, faceted search, filtering, or instant search-as-you-type. You'll need to run an additional service, but Sail makes this painless.

Use vector search when your users search by concept rather than keywords, you have multilingual content, or you're building recommendation systems. This requires PostgreSQL + pgvector and an AI embedding provider. Real cost implications here.

Use reranking when you want semantic relevance without the overhead of embedding your entire dataset. Great middle ground between full-text and full vector search.

Common Mistakes to Avoid

Starting with vector search. I've seen developers reach for pgvector and embeddings for a blog with 200 posts. whereFullText would handle that in a few milliseconds with zero ongoing costs. Don't overengineer.

Ignoring the database driver. Scout's database driver is surprisingly capable. Before spinning up Meilisearch, try the database driver first. You might not need anything else.

Not indexing properly. On MySQL and MariaDB, whereFullText simply fails without a full-text index, since MATCH ... AGAINST requires one. On PostgreSQL the query runs, but it computes tsvectors on the fly, which can be slower than a plain LIKE query. Always add the index in your migration. I covered database indexing strategies in detail if you want to go deeper.

Embedding everything upfront. If you're adding vector search, embed on write (when content is created/updated), not in bulk. Use queue jobs to handle embedding generation asynchronously.

Forgetting about cost. OpenAI's embedding API is cheap per request, but it adds up: embedding 100,000 records once, plus a fresh embedding for every single search query, is real money on your monthly bill.

FAQ

Do I need Laravel Scout for full-text search?

No. whereFullText is built into Laravel's query builder and works without Scout. Scout adds a nicer API, automatic syncing, and support for external engines, but it's not required for basic full-text search.

Can I use vector search with MySQL?

Not natively. MySQL doesn't have a vector extension like PostgreSQL's pgvector. If you need vector search, you'll need PostgreSQL, or you can use an external service like Typesense or Meilisearch (which now supports vector search too).

How much does vector search cost in production?

It depends on your embedding provider and dataset size. With OpenAI's text-embedding-3-small model, embedding 100,000 articles costs roughly $2-3. But every search query also needs an embedding, so factor in ongoing API costs. Reranking can reduce this significantly.
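As a back-of-the-envelope check, here's that estimate worked out. The pricing ($0.02 per million tokens) and the token counts per article and per query are assumptions; verify them against your provider's current pricing before budgeting:

```python
# Back-of-the-envelope embedding cost estimate.
# All figures are assumptions -- check current provider pricing.
price_per_million_tokens = 0.02   # USD, e.g. a small embedding model
tokens_per_article = 1_000        # rough average article length
articles = 100_000

# One-time cost to embed the whole corpus
one_time_cost = articles * tokens_per_article / 1_000_000 * price_per_million_tokens
print(f"${one_time_cost:.2f}")    # about $2 for 100k articles

# Ongoing cost: every search query is embedded too
tokens_per_query = 20             # short search queries
queries_per_month = 300_000
monthly_cost = queries_per_month * tokens_per_query / 1_000_000 * price_per_million_tokens
print(f"${monthly_cost:.2f}/month")
```

Under these assumptions the query-side cost stays small; the bigger ongoing costs tend to be re-embedding content on every update and the latency of the extra API call per search.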

Should I switch from Algolia to Meilisearch?

If cost is a concern, yes. Meilisearch is free to self-host and gives you similar features. The migration is simple since Scout abstracts the driver: change your .env, re-import your indexes with scout:import, and you're mostly done.

What about Elasticsearch?

Laravel doesn't include an Elasticsearch driver for Scout out of the box. There are community packages, but if you're starting fresh, Meilisearch or Typesense are easier to integrate and maintain. Elasticsearch makes more sense for very large-scale applications with complex search requirements.

Wrapping Up

The new /docs/search page is a small addition to Laravel's documentation, but it signals something bigger. Search in Laravel has evolved from "use Scout" to a complete toolkit covering everything from simple database queries to AI-powered semantic understanding.

My recommendation? Start with whereFullText. It's free, it's fast, and it's built in. Move to Scout when you need a better API. Add an external engine when you need typo tolerance or faceted search. And only reach for vector search when keywords really can't solve your problem.

Need help implementing search in your Laravel app? Let's talk.


Hafiz Riaz
