
Database Indexing in Laravel: Boost MySQL Performance with Smart Indexes

Master database indexing in Laravel to optimize MySQL query performance. Learn when and how to create indexes using migrations with real-world benchmarks.


I learned about database indexes the hard way, after a client's dashboard took 8 seconds to load their user list. Eight. Whole. Seconds.

The table had 500,000 users, and I was running queries with WHERE clauses on unindexed columns. The moment I added proper indexes, response time dropped to 180ms. That's a 97% improvement from one migration file.

Here's the thing about database indexing in Laravel: most developers either ignore it completely or add indexes everywhere "just in case." Both approaches will hurt you. The first leads to slow queries as your data grows. The second creates unnecessary overhead on writes and wastes disk space.

In this guide, I'll show you exactly when to add indexes, how to create them in Laravel migrations, and the performance impact you can expect. Plus, I'll share the mistakes that cost me hours of debugging so you don't repeat them.

What Are Database Indexes and Why Should You Care?

Think of a database index like the index at the back of a textbook. Without it, you'd flip through every single page to find a topic. With an index, you jump straight to the relevant pages.

Database indexes work the same way. When you run a query like WHERE email = 'hafiz@example.com', MySQL without an index performs a full table scan, checking every single row until it finds a match. On a table with 10,000 rows, that's fine. On a table with 1 million rows? Your users are waiting.

An index creates a separate data structure (usually a B-tree) that MySQL uses to locate rows instantly. Instead of scanning 1 million rows, it might check only 3-4 index nodes to find your data.
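You can see this difference directly in MySQL. A minimal sketch (table and index names are illustrative):

```sql
-- Without an index, EXPLAIN shows type: ALL (a full table scan)
EXPLAIN SELECT * FROM users WHERE email = 'hafiz@example.com';

-- Add a B-tree index on the column
CREATE INDEX idx_users_email ON users (email);

-- Now EXPLAIN shows type: ref and key: idx_users_email,
-- meaning MySQL walks the index instead of checking every row
EXPLAIN SELECT * FROM users WHERE email = 'hafiz@example.com';
```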

But indexes aren't free. Every time you INSERT, UPDATE, or DELETE a row, MySQL must also update all related indexes. That's why blindly adding indexes everywhere can actually slow down your application, especially on write-heavy tables with queue workers doing bulk inserts.

When to Add Database Indexes in Laravel

After working on several Laravel apps with tables ranging from 100K to several million rows, I've developed a simple rule: index columns you filter, sort, or join on frequently.

Here are specific scenarios where indexes make a massive difference:

Foreign Keys. If you're running queries like WHERE user_id = 123, you need an index on user_id. Laravel's foreignId() method automatically creates an index, but if you're defining foreign keys manually with unsignedBigInteger(), you'll need to add the index yourself.

Email and Username Columns. Any column used for authentication or lookups needs an index. I once worked on a project where login queries took 2+ seconds because the email column wasn't indexed on a 500K user table.

Status and Type Fields. Columns like status, is_active, or type that you filter frequently should be indexed. However, if the column has very low cardinality (only 2-3 distinct values), a standalone index becomes less effective. These work better as part of composite indexes.

Created_at and Updated_at. If you're building dashboards or reports that sort by date or filter date ranges, index these columns. They're used constantly but rarely indexed.

Search Fields. Columns you search with LIKE queries benefit from indexes, but only for prefix searches (LIKE 'term%'). Wildcard searches (LIKE '%term%') can't use regular indexes and need full-text indexing instead.
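To make that LIKE distinction concrete, here's a sketch (the Product model and column are illustrative):

```php
// Prefix search: the fixed prefix lets MySQL use a B-tree index on name
$products = Product::where('name', 'like', 'lap%')->get();

// Leading wildcard: no usable prefix, so a regular index is skipped
// and MySQL falls back to a full table scan
$products = Product::where('name', 'like', '%laptop%')->get();
```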

Skip indexes on:

  • Columns you never filter or sort by
  • Boolean columns with a 50/50 distribution (minimal benefit)
  • Small tables under 1,000 rows, where full scans are already fast
  • Columns that change frequently but are rarely queried

How to Create Database Indexes in Laravel Migrations

Laravel makes index creation straightforward through migration methods. Let me show you the most common patterns I use in production.

Single Column Indexes

The simplest case, indexing one column:

Schema::create('users', function (Blueprint $table) {
    $table->id();
    $table->string('email')->unique(); // Creates unique index automatically
    $table->string('username')->index(); // Regular index
    $table->enum('status', ['active', 'inactive', 'suspended'])->index();
    $table->timestamps();
});

When you call unique(), Laravel creates a unique index, which also speeds up queries. The index() method creates a regular non-unique index.

Composite Indexes (Multiple Columns)

Sometimes you need to index multiple columns together. This is crucial for queries that filter on multiple conditions:

Schema::table('orders', function (Blueprint $table) {
    // Query: WHERE user_id = 123 AND status = 'pending'
    $table->index(['user_id', 'status'], 'idx_user_status');
});

Column order matters here. A lot. MySQL reads composite indexes left-to-right, so this index helps queries filtering on user_id alone, and queries filtering on both user_id AND status. But it won't help queries that only filter on status. For that, you'd need a separate index on status.
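As a sketch, assuming the orders table with the idx_user_status index from above:

```php
// Index: ['user_id', 'status'] — MySQL matches leftmost columns first

// Uses the index (leftmost column alone)
Order::where('user_id', 123)->get();

// Uses the index (both columns, in index order)
Order::where('user_id', 123)->where('status', 'pending')->get();

// Cannot use this index: the leftmost column is skipped
Order::where('status', 'pending')->get();
```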

I learned this the hard way on a client project. I created an index on ['status', 'user_id'], but the main query filtered by user_id first. The index was useless until I reversed the column order. Put the most selective column first (usually foreign keys), then less selective ones.

Foreign Key Indexes

Laravel's foreignId() method automatically creates an index:

Schema::create('posts', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained()->onDelete('cascade');
    // Automatically creates index on user_id
    $table->timestamps();
});

If you're defining foreign keys with unsignedBigInteger() instead, don't forget to add the index manually:

Schema::table('posts', function (Blueprint $table) {
    $table->unsignedBigInteger('user_id');
    $table->foreign('user_id')->references('id')->on('users');
    $table->index('user_id'); // Don't forget this!
});

This is one of the most common performance mistakes I see in Laravel codebases. The foreign key constraint exists (so data integrity is fine), but without the index, joins and lookups on that column still do full table scans.

Adding Indexes to Existing Tables

You'll often realize you need an index after your table is already in production with millions of rows. Create a new migration:

public function up(): void
{
    Schema::table('users', function (Blueprint $table) {
        $table->index('email', 'idx_users_email');
    });
}

public function down(): void
{
    Schema::table('users', function (Blueprint $table) {
        $table->dropIndex('idx_users_email');
    });
}

Always name your indexes explicitly using the second parameter. Laravel auto-generates names, but they're often too long and get truncated at MySQL's 64-character limit. I use a convention: idx_tablename_columns.

Important for large tables: By default, MySQL locks the table during index creation. On a table with millions of rows, this can take minutes and block all reads and writes. If you're on Laravel 12+, you can use the lock modifier to create indexes without blocking:

$table->index('email', 'idx_users_email')->lock('none');

This uses MySQL's online DDL to allow concurrent reads and writes while the index builds. For older Laravel versions, run index migrations during low-traffic periods.
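On older Laravel versions you can get the same effect with raw DDL, since online DDL is a MySQL server feature rather than a framework one. A sketch (table and index names are illustrative):

```php
use Illuminate\Support\Facades\DB;

// Build the index in place without blocking reads or writes.
// If the operation can't honor LOCK=NONE, MySQL raises an error
// instead of silently taking a table lock.
DB::statement(
    'ALTER TABLE users ADD INDEX idx_users_email (email), ALGORITHM=INPLACE, LOCK=NONE'
);
```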

Full-Text Indexes for Search

For search functionality, regular indexes don't cut it. Laravel has had native full-text index support since version 9, so you don't need raw SQL:

Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');
    $table->timestamps();
    
    // Native full-text index - no DB::statement needed!
    $table->fullText(['title', 'content'], 'idx_articles_search');
});

Then query with Laravel's whereFullText() method:

$articles = Article::whereFullText(
    ['title', 'content'],
    $searchTerm
)->get();

This generates the proper MATCH...AGAINST SQL behind the scenes. It's way faster than LIKE '%term%' queries, especially on large text fields. Full-text search is supported on MySQL/MariaDB and PostgreSQL.
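whereFullText also accepts an options array if you need MySQL's boolean mode operators — a sketch, assuming the articles table above:

```php
// Boolean mode: require "laravel", exclude "wordpress"
$articles = Article::whereFullText(
    ['title', 'content'],
    '+laravel -wordpress',
    ['mode' => 'boolean']
)->get();
```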

If you need even more advanced search (fuzzy matching, facets, typo tolerance), look into Laravel Scout with a Meilisearch or Typesense driver. But for most apps, native full-text indexes get the job done.

Real-World Performance Benchmarks

Let me show you actual numbers from production projects. These aren't synthetic benchmarks, they're from real tables under real load.

User Lookup Query (500,000 rows):

// Before index on email column
User::where('email', 'test@example.com')->first();
// Query time: 3,200ms (full table scan)

// After adding index
Schema::table('users', function (Blueprint $table) {
    $table->index('email');
});
// Query time: 12ms (99.6% improvement)

Dashboard Analytics Query (1.2 million order rows):

// Before composite index
Order::where('user_id', 123)
     ->where('status', 'completed')
     ->whereBetween('created_at', [$start, $end])
     ->count();
// Query time: 5,800ms

// After adding composite index
Schema::table('orders', function (Blueprint $table) {
    $table->index(['user_id', 'status', 'created_at']);
});
// Query time: 85ms (98% improvement)

Sorting Without Index (800,000 product rows):

// Before index on created_at
Product::orderBy('created_at', 'desc')->paginate(20);
// Query time: 2,100ms

// After index
Schema::table('products', function (Blueprint $table) {
    $table->index('created_at');
});
// Query time: 45ms (98% improvement)

The pattern is clear: indexes can reduce query time by 95-99% on large tables. But remember, these are read queries. Let's talk about the write performance cost.

Insert Performance Impact:

On the orders table with 5 indexes, inserting 10,000 rows:

  • Without indexes: 8.2 seconds
  • With 5 indexes: 14.7 seconds

That's a 79% slowdown on bulk inserts. This is why I'm selective about indexing. Every index adds overhead to writes. If you're doing heavy batch processing through queue jobs, consider disabling indexes during bulk inserts and rebuilding them afterward.
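One hedged approach for InnoDB (where MyISAM's DISABLE KEYS trick has no effect) is to drop secondary indexes, bulk insert, then recreate them. Table and index names here are illustrative:

```php
// Drop the secondary index before the bulk load
Schema::table('orders', function (Blueprint $table) {
    $table->dropIndex('idx_user_status');
});

// Insert in chunks to keep memory usage flat
foreach (array_chunk($rows, 1000) as $chunk) {
    DB::table('orders')->insert($chunk);
}

// Rebuild the index once, after all rows are in
Schema::table('orders', function (Blueprint $table) {
    $table->index(['user_id', 'status'], 'idx_user_status');
});
```

Rebuilding one index over the finished table is far cheaper than updating it incrementally on every insert.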

Step-by-Step: Optimizing a Slow Query with Indexes

Let's walk through a real scenario. You've got a slow dashboard that shows recent activity for logged-in users.

Step 1: Identify the Slow Query

Install Laravel Debugbar or use Telescope:

// In your controller
$activities = Activity::where('user_id', auth()->id())
    ->where('type', 'purchase')
    ->orderBy('created_at', 'desc')
    ->limit(50)
    ->get();

Debugbar shows this query takes 4.2 seconds on 600,000 rows.

Step 2: Check Current Indexes

Run this SQL to see existing indexes:

SHOW INDEXES FROM activities;

If you only see the primary key (id), you've got a problem. In Laravel 11+, you can also check programmatically:

Schema::hasIndex('activities', 'idx_user_activity'); // returns bool

Step 3: Analyze the Query

This query filters on user_id and type, then sorts by created_at. You need a composite index covering all three columns in the right order.

Step 4: Create the Migration

php artisan make:migration add_indexes_to_activities_table

Then in the generated migration:

public function up(): void
{
    Schema::table('activities', function (Blueprint $table) {
        // Composite index for user activity queries
        $table->index(
            ['user_id', 'type', 'created_at'],
            'idx_user_activity'
        );
    });
}

public function down(): void
{
    Schema::table('activities', function (Blueprint $table) {
        $table->dropIndex('idx_user_activity');
    });
}

Step 5: Run the Migration

php artisan migrate

On large tables, this might take a while. If you're on Laravel 12 with MySQL 8.0+, chain ->lock('none') to avoid blocking production traffic.

Step 6: Verify Performance

Check Debugbar again. Query time should drop to under 100ms. You can also verify MySQL is actually using your index with EXPLAIN:

DB::select("EXPLAIN SELECT * FROM activities WHERE user_id = 1 AND type = 'purchase' ORDER BY created_at DESC LIMIT 50");

Look at the key column. If it shows idx_user_activity, you're good.

Step 7: Monitor Over Time

Use Telescope or a tool like New Relic to monitor query performance as your table grows. As data volume increases, you might need to consider optimizing your queries further beyond just indexing.

This exact process helped me fix a client's analytics dashboard that was timing out. After adding three strategic indexes, every query completed in under 200ms.

Common Indexing Mistakes (and How to Fix Them)

I've made every indexing mistake in the book. Here are the ones that cost me the most time.

Mistake 1: Wrong Column Order in Composite Indexes

I created index(['status', 'user_id']) for a query that filtered by user_id first. MySQL couldn't use the index efficiently because it reads left-to-right.

Fix: Put the most selective column first (usually foreign keys), then less selective ones. The order should match your most common query patterns.

Mistake 2: Over-Indexing Small Tables

I added indexes to a settings table with 15 rows because it felt "professional." Total waste. Full table scans on 15 rows are microseconds.

Fix: Only index tables with 1,000+ rows or tables you know will grow significantly.

Mistake 3: Indexing Low-Cardinality Columns Alone

I indexed a gender column with only two values. The index was larger than the actual data and provided minimal benefit.

Fix: Low-cardinality columns work better as part of composite indexes, not standalone.

Mistake 4: Not Naming Indexes Explicitly

Laravel auto-generated index names that were 64+ characters and got truncated by MySQL. Debugging foreign key errors was a nightmare.

Fix: Always provide explicit names: $table->index('column', 'idx_short_name');

Mistake 5: Forgetting to Index Foreign Keys

I used unsignedBigInteger() for foreign keys without adding indexes. Queries with joins were painfully slow even though the foreign constraint existed.

Fix: Always index foreign key columns, either with foreignId() (which auto-indexes) or manually with index().

Mistake 6: Using Functions on Indexed Columns

I wrote WHERE LOWER(email) = 'test@example.com' and wondered why MySQL ignored my index on email. Functions on indexed columns prevent index usage entirely.

Fix: Store data in the format you'll query it. If you need case-insensitive lookups, use a case-insensitive collation (like utf8mb4_unicode_ci) instead of wrapping columns in functions.
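If you're on MySQL 8.0.13+ and genuinely need to query a transformed value, a functional index is another option. A sketch via raw SQL, since Laravel's schema builder has no dedicated method for this (the index name is illustrative; note the double parentheses around the expression):

```php
use Illuminate\Support\Facades\DB;

// Index the expression itself so WHERE LOWER(email) = ? can use it
DB::statement(
    'ALTER TABLE users ADD INDEX idx_users_email_lower ((LOWER(email)))'
);
```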

Indexing Trade-offs: When to Skip Indexes

Not every column needs an index. Here's when I deliberately avoid them:

Write-Heavy Tables. If you're logging thousands of events per second, indexes slow down inserts significantly. I had a logs table where removing two indexes improved insert performance by 40%.

Temporary or Queue Tables. Tables that hold data briefly (like Laravel's jobs table) don't benefit much from indexes beyond the primary key.

Frequently Updated Columns. A column that changes on every update (like last_modified_by) requires constant index updates. Only index it if you absolutely need to query it.

Boolean Flags with Even Distribution. A column like is_verified where 50% are true and 50% are false won't speed up queries much. MySQL often decides a full table scan is faster than using the index.

I use this mental model: Is this column used in WHERE, ORDER BY, or JOIN clauses in slow queries? If yes, index it. If no, skip it.

Advanced: Checking If MySQL Uses Your Index

You've added indexes, but is MySQL actually using them? Use the EXPLAIN statement:

DB::select("EXPLAIN SELECT * FROM users WHERE email = 'test@example.com'");

Look at the key column in the results. If it shows your index name (like idx_users_email), you're good. If it says NULL, MySQL is ignoring your index.

Common reasons MySQL ignores indexes:

  • Functions on indexed columns (WHERE LOWER(email) = ... breaks index usage)
  • Mismatched data types (comparing a string column to an integer causes implicit casting)
  • The query optimizer deciding a full scan is faster (common on small tables, or when the index would return too many rows)
  • LIKE '%term%' searches, which can't use a B-tree index
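The data-type mismatch is the sneakiest of these, because the query still returns correct results, just slowly. A sketch, assuming a hypothetical indexed phone column stored as VARCHAR:

```php
// phone is VARCHAR — passing an integer forces MySQL to cast
// every row's phone to a number, so the index is skipped
User::where('phone', 3001234567)->first();

// Pass a string and the index on phone is usable again
User::where('phone', '3001234567')->first();
```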

When building complex WHERE clauses, I sketch out the query patterns first, then verify the actual execution plan with EXPLAIN rather than assuming the index is being used.

Alternative Approaches to Consider

Indexes aren't the only solution for slow queries. Here are alternatives I've used:

Caching. For queries that don't need real-time data, cache results. Leaderboard data cached for 5 minutes eliminates the need to index every column used in the ranking algorithm. Laravel's cache system makes this trivial.
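A minimal sketch of that caching pattern, assuming a hypothetical leaderboard query:

```php
use Illuminate\Support\Facades\Cache;

// Cache for 5 minutes: the expensive query runs at most once
// per interval instead of on every request
$leaders = Cache::remember('leaderboard', 300, function () {
    return User::orderByDesc('points')->limit(10)->get();
});
```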

Pagination. Loading 10,000 rows at once is slow no matter how many indexes you have. Use Laravel's pagination or cursor-based pagination for large datasets. Cursor pagination is especially efficient because it uses indexed columns for navigation instead of OFFSET.
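Cursor pagination in Laravel looks like this — a sketch assuming an indexed created_at column (id is added as a tiebreaker, since cursor pagination needs a deterministic ordering):

```php
// Keyset pagination: generates WHERE (created_at, id) < (?, ?) LIMIT 20,
// which seeks through the index instead of counting past OFFSET rows
$products = Product::orderByDesc('created_at')
    ->orderByDesc('id')
    ->cursorPaginate(20);
```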

Denormalization. Sometimes it's faster to store pre-calculated values. A users.total_orders column updated via database triggers or Eloquent observers removes the need to run COUNT() queries every time.
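A sketch of the observer approach, assuming a hypothetical total_orders column on users:

```php
// app/Observers/OrderObserver.php
class OrderObserver
{
    // Keep the denormalized counter in sync when orders change,
    // so dashboards never need a COUNT() over the orders table
    public function created(Order $order): void
    {
        $order->user->increment('total_orders');
    }

    public function deleted(Order $order): void
    {
        $order->user->decrement('total_orders');
    }
}
```

Register it with Order::observe(OrderObserver::class) in a service provider. Note this only covers Eloquent events; bulk query-builder writes bypass observers.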

Database Partitioning. For tables with 10+ million rows, partition by date or user ID. This gives MySQL smaller chunks to scan even without indexes on the partition key.

Read Replicas. Separate read queries from writes by using a read replica. Laravel makes this easy with its database read/write configuration. This doesn't replace indexes but reduces load on your primary database.

Indexes should be your first optimization. These alternatives come into play when indexes alone aren't enough.

Frequently Asked Questions

How many indexes is too many on a single table?

There's no hard limit, but I rarely go above 5-6 indexes per table. Each index slows down writes and uses disk space. If you find yourself adding more than 6, step back and look at whether some can be consolidated into composite indexes, or whether your query patterns need refactoring.

Should I index timestamps like created_at on every table?

No. Only index created_at if you actually query or sort by it on that specific table. A settings table or a table you always access by primary key doesn't need a timestamp index. Index it on tables where you run date range queries, dashboards, or orderBy('created_at') on large datasets.

Does adding an index on a production table cause downtime?

On MySQL 8.0+ with InnoDB, most index operations use online DDL, which means reads and writes continue during index creation. But on very large tables (millions of rows), index creation can still take minutes and use significant I/O. If you're on Laravel 12, use ->lock('none') on your index definition. Otherwise, run the migration during low-traffic periods.

What's the difference between unique() and index() in Laravel?

unique() creates a unique index that prevents duplicate values and speeds up queries. index() creates a regular index that only speeds up queries without enforcing uniqueness. Use unique() for columns like email or username where duplicates shouldn't exist. Use index() for everything else like foreign keys, status columns, or timestamps.

How do I know which columns to index on a legacy project?

Start with Telescope or Debugbar to identify slow queries. Then look at the WHERE, ORDER BY, and JOIN clauses in those queries. Those are your indexing targets. You can also run EXPLAIN on the slow queries to confirm MySQL is doing full table scans. Focus on the queries that run most frequently and take the longest, not every query.

Conclusion: Index Strategically, Not Randomly

Database indexing in Laravel isn't complicated, but it requires thought. Add indexes to columns you filter, sort, or join on frequently. Skip indexes on write-heavy tables, low-cardinality columns, or tiny tables where full scans are already fast.

The three indexes I add to almost every project: foreign key columns (user_id, post_id, etc.), email and username columns for authentication, and composite indexes on [user_id, status] for multi-tenant filtering.

Start there, then profile your slow queries with Telescope or Debugbar. Let real performance data guide your indexing decisions, don't guess. And if your queries are still slow after indexing, check out my guide on query optimization techniques that go beyond just adding indexes.

Next time you're tempted to add indexes everywhere "just in case," remember: every index costs write performance and disk space. Be strategic. Your database will thank you.

Need help optimizing your Laravel application's database performance? I've scaled apps from 1,000 to 1,000,000+ rows and can help you identify exactly which indexes will make the biggest impact. Let's talk.

Hafiz Riaz

About Hafiz

Senior Full-Stack Developer with 9+ years building web apps and SaaS platforms. I specialize in Laravel and Vue.js, and I write about the real decisions behind shipping production software.
