Laravel Queue Jobs: Processing 10,000 Tasks Without Breaking
Learn how to efficiently process 10,000+ tasks using Laravel queues, Redis, and Horizon for scalable background job processing.
Your application just hit production. Users are uploading files, sending emails, generating reports, and suddenly everything grinds to a halt. The culprit? You're processing everything synchronously, and your server can't keep up.
I learned this the hard way last year working on a client's document processing platform. They needed to handle bulk PDF uploads, extract text, generate thumbnails, and run OCR, sometimes 500 files at once. The first version locked up their entire application for minutes while users stared at loading spinners. After implementing Laravel queues with proper Redis configuration and monitoring, we processed 10,000+ documents daily without a single timeout.
Here's what you'll learn: setting up Redis for production queue workloads, implementing job batching for complex workflows, handling failures gracefully, and monitoring everything with Horizon. By the end, you'll have a bulletproof queue system that scales.
Why Laravel Queues Matter for Production Apps
Laravel queues move time-consuming tasks to background workers so your application stays responsive. Instead of making users wait while you send emails or process images, you dispatch jobs to a queue and return a response immediately.
The real power shows up when you need to process hundreds or thousands of tasks. I've used queues for everything from bulk email campaigns to video transcoding to generating thousands of PDF reports. Without queues, these operations would tie up web workers and time out long before finishing.
Here's the thing, though: queues add complexity. You need a reliable queue driver (Redis is my go-to), worker processes that don't crash, proper failure handling, and monitoring to catch issues before users notice. Let's build this right from the start.
Setting Up Redis as Your Queue Driver
Laravel supports multiple queue drivers, but Redis is the sweet spot for most applications. It's fast, reliable, and handles job priorities better than database queues.
First, install Redis and the PHP extension:
# Install Redis server
sudo apt-get install redis-server
# Install PHP Redis extension
sudo pecl install redis
sudo echo "extension=redis.so" > /etc/php/8.2/mods-available/redis.ini
sudo phpenmod redis
Configure Redis in your .env:
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
REDIS_DB=0
For production, I always set up a dedicated Redis database for queues. This separates queue data from cache data and makes monitoring cleaner. Update config/database.php:
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),
    
    'default' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_DB', '0'),
    ],
    
    'cache' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_CACHE_DB', '1'),
    ],
    
    // Dedicated database for queues
    'queues' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_QUEUE_DB', '2'),
    ],
],
Then update your queue configuration in config/queue.php:
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'queues', // Use dedicated Redis database
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 150,
        'block_for' => null,
    ],
],
Why set retry_after to 150 seconds? If a job takes longer than this, Laravel assumes the worker died and makes the job available again. I set this based on my longest-running job (the 120-second document processor below) plus a safety margin.
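The general rule: retry_after must exceed the largest $timeout of any job on the connection, otherwise Laravel releases a still-running job back onto the queue and a second worker picks it up. A minimal sketch of how the two values pair up:
// config/queue.php - must exceed any job's $timeout on this connection
'retry_after' => 150,

// In the job class - must finish (or be killed) before retry_after elapses
public $timeout = 120;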
Creating Your First Production-Ready Job
Let's build a real job that processes uploaded documents. This example extracts text and generates thumbnails, something I've built variations of at least five times now.
Generate a job class:
php artisan make:job ProcessDocument
Here's a production-ready implementation:
<?php
namespace App\Jobs;
use App\Models\Document;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Storage;
use Smalot\PdfParser\Parser;
use Intervention\Image\Facades\Image;
class ProcessDocument implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    // Maximum attempts (the first run plus retries)
    public $tries = 3;
    
    // Job timeout in seconds
    public $timeout = 120;
    
    // Quietly drop the job if the document has since been deleted
    public $deleteWhenMissingModels = true;
    public function __construct(
        public Document $document
    ) {}
    public function handle(): void
    {
        try {
            // Extract text from PDF
            $path = Storage::path($this->document->file_path);
            $parser = new Parser();
            $pdf = $parser->parseFile($path);
            $text = $pdf->getText();
            
            // Update document with extracted text
            $this->document->update([
                'content' => $text,
                'word_count' => str_word_count($text),
            ]);
            
            // Generate thumbnail (rasterizing a PDF requires Intervention's
            // Imagick driver; the default GD driver can't read PDFs)
            $thumbnailPath = 'thumbnails/' . $this->document->id . '.jpg';
            Image::make($path)
                ->resize(300, null, function ($constraint) {
                    $constraint->aspectRatio();
                })
                ->save(Storage::path($thumbnailPath));
            
            $this->document->update(['thumbnail_path' => $thumbnailPath]);
            
            Log::info("Document processed successfully", [
                'document_id' => $this->document->id,
                'word_count' => $this->document->word_count,
            ]);
            
        } catch (\Exception $e) {
            Log::error("Document processing failed", [
                'document_id' => $this->document->id,
                'error' => $e->getMessage(),
            ]);
            
            // Mark document as failed after max retries
            if ($this->attempts() >= $this->tries) {
                $this->document->update(['status' => 'failed']);
            }
            
            throw $e; // Re-throw to trigger retry
        }
    }
    
    public function failed(\Throwable $exception): void
    {
        // This runs after all retries are exhausted
        $this->document->update([
            'status' => 'failed',
            'error_message' => $exception->getMessage(),
        ]);
        
        // Notify admin
        // Notification::route('mail', 'admin@example.com')
        //     ->notify(new DocumentProcessingFailed($this->document));
    }
}
Notice the key production features here: explicit retry attempts, a timeout, and proper error handling. The failed() method is crucial; it runs after all retries are exhausted, giving you a place to clean up or notify someone.
Dispatch the job:
ProcessDocument::dispatch($document);
// Or dispatch to a specific queue
ProcessDocument::dispatch($document)->onQueue('documents');
// Or delay execution
ProcessDocument::dispatch($document)->delay(now()->addMinutes(5));
Batch Processing: Handling Thousands of Jobs
Here's where queues get really powerful. You can dispatch thousands of jobs, track their progress, and run code when they all complete. I use this constantly for bulk operations.
Let's say you need to process 1,000 documents at once:
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
$documents = Document::where('status', 'pending')->get();
$batch = Bus::batch(
    $documents->map(fn($doc) => new ProcessDocument($doc))
)->then(function (Batch $batch) {
    // All jobs completed successfully
    Log::info("Batch completed", ['batch_id' => $batch->id]);
    
    // Maybe send a notification
    Notification::route('mail', 'admin@example.com')
        ->notify(new BatchCompleted($batch));
        
})->catch(function (Batch $batch, \Throwable $e) {
    // First failure in the batch
    Log::error("Batch failed", [
        'batch_id' => $batch->id,
        'error' => $e->getMessage(),
    ]);
    
})->finally(function (Batch $batch) {
    // Batch finished (success or failure)
    Log::info("Batch finished", [
        'batch_id' => $batch->id,
        'total_jobs' => $batch->totalJobs,
        'processed_jobs' => $batch->processedJobs(),
        'failed_jobs' => $batch->failedJobs,
    ]);
})->name('Process Documents Batch')->dispatch();
You can track batch progress:
$batch = Bus::findBatch($batchId);
return [
    'total' => $batch->totalJobs,
    'pending' => $batch->pendingJobs,
    'processed' => $batch->processedJobs(),
    'failed' => $batch->failedJobs,
    'progress' => $batch->progress(),
    'finished' => $batch->finished(),
];
I built a Vue component that polls this endpoint every 2 seconds to show real-time progress bars. Users love seeing the progress instead of wondering if something's happening.
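If you want the same thing, here's a minimal sketch of the endpoint that component polls; the route path and 404 handling are my own choices, not from the original project:
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Route;

// routes/web.php - the frontend polls this every couple of seconds
Route::get('/batches/{batchId}/progress', function (string $batchId) {
    $batch = Bus::findBatch($batchId);

    abort_if($batch === null, 404);

    return response()->json([
        'total' => $batch->totalJobs,
        'processed' => $batch->processedJobs(),
        'failed' => $batch->failedJobs,
        'progress' => $batch->progress(),   // integer 0-100
        'finished' => $batch->finished(),
    ]);
});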
One gotcha I discovered: batches require database storage. Run this migration:
php artisan queue:batches-table
php artisan migrate
Queue Workers: Keeping Your Jobs Running
Queue workers are processes that pull jobs from Redis and execute them. You need at least one worker running for jobs to process.
Start a worker manually:
php artisan queue:work
But this stops when your SSH session ends. For production, you need process management.
Setting Up Supervisor for Production Workers
Supervisor keeps your workers running 24/7, automatically restarting them if they crash. Install it:
sudo apt-get install supervisor
Create a configuration file at /etc/supervisor/conf.d/laravel-worker.conf:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
stopwaitsecs=3600
Let me break down these settings because they matter:
- numprocs=8: Runs 8 parallel workers. I scale this based on server resources and workload. Start with 4-8 workers for most apps.
- --max-time=3600: Restarts workers every hour. This prevents memory leaks from accumulating.
- --sleep=3: Workers sleep 3 seconds when the queue is empty. Reduces CPU usage.
- stopwaitsecs=3600: Gives workers 1 hour to finish current job before force-killing. Match this to your longest job.
Start the workers:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
Check worker status:
sudo supervisorctl status laravel-worker:*
After deploying code changes, restart workers:
php artisan queue:restart
This gracefully stops workers after they finish current jobs, then Supervisor automatically restarts them with new code.
Priority Queues: Critical Jobs First
Not all jobs are equal. Sending password reset emails should happen immediately, while bulk report generation can wait.
Create multiple queues with priorities:
// High priority - process immediately
ResetPasswordEmail::dispatch($user)->onQueue('high');
// Default priority
ProcessDocument::dispatch($document)->onQueue('default');
// Low priority - background tasks
GenerateMonthlyReport::dispatch()->onQueue('low');
Configure workers to process high-priority queues first:
php artisan queue:work redis --queue=high,default,low
Workers check high first, then default, then low. Critical jobs never wait behind bulk operations.
I typically use three queues in production apps: high for user-facing operations (emails, notifications), default for standard processing, and low for batch operations that can wait.
Monitoring with Laravel Horizon
Horizon is Laravel's gorgeous dashboard for monitoring queues. It shows real-time metrics, failed jobs, and lets you retry or delete jobs with one click.
Install Horizon:
composer require laravel/horizon
php artisan horizon:install
Horizon stores its data in Redis, so there's nothing to migrate, and horizon:install publishes the config for you. Re-publish it later with:
php artisan vendor:publish --tag=horizon-config
Configure config/horizon.php for your environment:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
            'tries' => 3,
            'timeout' => 300,
        ],
        
        'supervisor-2' => [
            'connection' => 'redis',
            'queue' => ['low'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 3,
            'tries' => 1,
            'timeout' => 600,
        ],
    ],
],
I run two supervisors: one for high-priority queues with more processes, and one for low-priority work with fewer resources.
Start Horizon (replaces manual queue:work commands):
php artisan horizon
For production, run Horizon under Supervisor. Create /etc/supervisor/conf.d/horizon.conf:
[program:horizon]
process_name=%(program_name)s
command=php /var/www/html/artisan horizon
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/horizon.log
stopwaitsecs=3600
Access the dashboard at your-app.com/horizon. You'll see:
- Real-time metrics: Jobs per minute, failed jobs, execution times
- Recent jobs: What's processing right now
- Failed jobs: Retry or delete with one click
- Monitoring: Set up alerts when wait times exceed thresholds
Truth is, Horizon changed how I manage queues. Before, I had no visibility into what was happening. Now I catch issues immediately.
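Those wait-time alerts are driven by the waits option in config/horizon.php: when a queue's wait exceeds the threshold (in seconds), Horizon fires a LongWaitDetected event and can notify you. A sketch with thresholds I'd start from (tune them to your own tolerances); the services.slack.webhook config key is my own convention:
// config/horizon.php - wait thresholds in seconds, keyed by connection:queue
'waits' => [
    'redis:high' => 30,
    'redis:default' => 60,
    'redis:low' => 300,
],

// app/Providers/HorizonServiceProvider.php (boot) - where alerts are delivered
Horizon::routeSlackNotificationsTo(config('services.slack.webhook'), '#queues');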
Handling Failed Jobs
Jobs fail. Servers crash. APIs go down. Your queue system needs to handle failures gracefully.
Laravel automatically retries failed jobs based on your $tries property. But after all retries are exhausted, jobs land in the failed_jobs table.
Create the failed jobs table:
php artisan queue:failed-table
php artisan migrate
View failed jobs:
php artisan queue:failed
Retry a specific failed job:
php artisan queue:retry 5  # Retry the job with this ID (recent Laravel versions use UUIDs here)
Retry all failed jobs:
php artisan queue:retry all
Delete a failed job:
php artisan queue:forget 5
Delete all failed jobs:
php artisan queue:flush
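On Laravel 8+ I also schedule automatic pruning so the failed_jobs table doesn't grow forever. A minimal sketch for app/Console/Kernel.php; the one-week retention is my own preference:
use Illuminate\Console\Scheduling\Schedule;

protected function schedule(Schedule $schedule): void
{
    // Keep a week of failed jobs for debugging, prune the rest nightly
    $schedule->command('queue:prune-failed --hours=168')->daily();
}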
Custom Failure Handling
I always implement custom failure handling for critical operations. Here's a pattern I use:
class ProcessPayment implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    public $tries = 5;
    public $backoff = [10, 30, 60, 120, 300]; // Exponential backoff
    public function __construct(
        public Order $order,
        public string $paymentMethod
    ) {}
    public function handle(): void
    {
        // Process payment logic
        $result = PaymentGateway::charge(
            $this->order->total,
            $this->paymentMethod
        );
        
        $this->order->update([
            'payment_status' => 'completed',
            'transaction_id' => $result->id,
        ]);
    }
    
    public function failed(\Throwable $exception): void
    {
        // Mark order as failed
        $this->order->update([
            'payment_status' => 'failed',
            'payment_error' => $exception->getMessage(),
        ]);
        
        // Notify customer
        $this->order->user->notify(
            new PaymentFailed($this->order, $exception->getMessage())
        );
        
        // Alert admin for manual review
        // Alert admin for manual review (Slack here stands in for whatever
        // alerting channel you use; it's not a built-in Laravel facade)
        Slack::send(
            "#payments",
            "Payment failed for order {$this->order->id}: {$exception->getMessage()}"
        );
        
        // Log for debugging
        Log::critical("Payment processing failed", [
            'order_id' => $this->order->id,
            'amount' => $this->order->total,
            'error' => $exception->getMessage(),
            'trace' => $exception->getTraceAsString(),
        ]);
    }
}
The backoff property staggers the retries: they happen at 10s, 30s, 60s, 120s, and 300s intervals, a roughly exponential schedule. This gives temporary issues (like API rate limits) time to resolve.
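If you'd rather compute the schedule than hard-code it, Laravel also lets you define a backoff() method on the job; returning a single integer applies the same delay to every retry:
// Equivalent to the $backoff property, computed at runtime
public function backoff(): array
{
    // Doubles on each retry: 10s, 20s, 40s, 80s
    return [10, 20, 40, 80];
}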
Performance Optimization for High-Volume Processing
When you're processing thousands of jobs, small optimizations compound. Here's what I've learned:
1. Chunk Large Batches
Instead of dispatching 10,000 jobs at once:
// Bad - loads every document into memory at once
$documents = Document::all();
foreach ($documents as $doc) {
    ProcessDocument::dispatch($doc);
}
Chunk the work:
// Good - processes in manageable chunks
Document::chunk(200, function ($documents) {
    foreach ($documents as $doc) {
        ProcessDocument::dispatch($doc);
    }
});
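One caveat with chunk(): if the chunked rows are updated while you iterate (say, a status column your jobs flip from pending to processed), offset-based chunking can skip records. chunkById() paginates on the primary key and avoids that:
// Safer when the chunked rows are modified during iteration
Document::where('status', 'pending')->chunkById(200, function ($documents) {
    foreach ($documents as $doc) {
        ProcessDocument::dispatch($doc);
    }
});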
2. Use Queue Priorities Strategically
I spent 3 hours debugging slow email delivery before realizing bulk report jobs were clogging the queue. Split by priority:
// User-facing: high priority
NotifyUser::dispatch($user)->onQueue('high');
// Background processing: low priority  
GenerateBulkReports::dispatch()->onQueue('low');
3. Optimize Job Payload
Jobs are serialized to Redis. Large payloads slow everything down. Instead of passing full models:
// Bad - serializes entire user object with relations
class SendEmail implements ShouldQueue
{
    public function __construct(public User $user) {}
}
Pass only IDs:
// Good - only stores user ID
class SendEmail implements ShouldQueue
{
    public function __construct(public int $userId) {}
    
    public function handle(): void
    {
        $user = User::find($this->userId);
        // Process...
    }
}
Laravel's SerializesModels trait handles this automatically for Eloquent models, storing only the model's class and key and re-fetching a fresh instance when the job runs. But it's worth understanding: any non-model properties (arrays, DTOs, large strings) are serialized in full.
4. Set Appropriate Timeouts
Jobs that might run long need higher timeouts:
class ProcessLargeVideo implements ShouldQueue
{
    public $timeout = 600; // 10 minutes
    
    // Or set a retry deadline: give up entirely after 30 minutes,
    // regardless of remaining attempts (this is not a per-attempt timeout)
    public function retryUntil(): \DateTime
    {
        return now()->addMinutes(30);
    }
}
5. Monitor Memory Usage
Long-running workers accumulate memory. Restart them periodically:
php artisan queue:work --max-time=3600 --memory=512
This stops workers after 1 hour or 512MB memory usage. Supervisor automatically restarts them.
Common Mistakes to Avoid
I've made these mistakes so you don't have to:
1. Not setting up Supervisor in production
Your workers will stop when you log out or deploy. Always use Supervisor or a process manager.
2. Forgetting to restart workers after deployment
Workers keep running old code until restarted. Add to your deployment script:
php artisan queue:restart
3. Using database queues at scale
Database queues work for low-volume apps but slow down significantly above ~100 jobs/minute. Use Redis.
4. No monitoring or alerting
Install Horizon or set up custom alerts. You need visibility into queue health.
5. Ignoring failed jobs
Check failed jobs regularly. Set up alerts when failure rates spike. I built a simple Slack notification that fires when failed jobs exceed 10 (sketched after this list).
6. Processing everything synchronously
If an operation takes more than 1-2 seconds, move it to a queue. Users shouldn't wait for PDFs to generate or emails to send.
7. Not testing job failures
Write tests that verify your failed() methods work correctly. These are critical code paths.
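For reference, here's a rough version of the failed-job alert mentioned in point 5. It's a sketch, not the exact code I shipped: the hourly window is an assumption, it presumes the webhook-based Slack notification channel, and FailedJobsSpiking is a hypothetical notification class you'd write yourself.
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Notification;

// app/Console/Kernel.php
$schedule->call(function () {
    // Count recent failures (the one-hour window is my assumption)
    $failed = DB::table('failed_jobs')
        ->where('failed_at', '>=', now()->subHour())
        ->count();

    if ($failed > 10) {
        // FailedJobsSpiking is a hypothetical notification you'd create
        Notification::route('slack', config('services.slack.webhook'))
            ->notify(new \App\Notifications\FailedJobsSpiking($failed));
    }
})->hourly();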
Real-World Case Study: Document Processing Platform
Let me show you how this all comes together. Last year I built a platform that processes legal documents, extracting text, generating summaries, and creating searchable indexes.
Initial requirements: Handle 500-1000 document uploads daily, each 10-100 pages. Users needed results within 30 minutes.
Architecture:
- High-priority queue for documents uploaded through the UI (processed immediately)
- Low-priority queue for bulk API uploads (processed overnight)
- Separate queue for long-running OCR jobs
Configuration:
- 10 workers on high/default queues
- 3 workers on low-priority queue
- 2 dedicated workers for OCR (timeout: 20 minutes)
Results:
- Processing time: 2-5 minutes per document
- Daily volume: 1,200-1,500 documents
- Failure rate: <0.5% (mostly corrupt PDFs)
- Zero user complaints about performance
The secret was proper queue prioritization and monitoring. Horizon showed exactly where bottlenecks occurred, letting me scale specific workers. We started with 6 workers total and scaled to 15 based on actual metrics.
Advanced Patterns: Job Chaining
Sometimes you need jobs to execute in sequence. Job chaining ensures one job completes before the next starts:
ProcessDocument::withChain([
    new ExtractText($document),
    new GenerateThumbnail($document),
    new UpdateSearchIndex($document),
    new NotifyUser($document->user),
])->dispatch($document);
If any job fails, the chain stops. This is perfect for workflows where later steps depend on earlier ones.
I use this for onboarding flows: create account → send welcome email → create sample data → notify admin. Each step must complete successfully.
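On newer Laravel versions I reach for Bus::chain(), which does the same thing and adds a catch callback that fires when any link fails. A sketch of that onboarding flow; the job classes and $user variable are assumed, not from this article's codebase:
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Log;

Bus::chain([
    new CreateAccount($user),
    new SendWelcomeEmail($user),
    new CreateSampleData($user),
    new NotifyAdmin($user),
])->catch(function (\Throwable $e) {
    // Runs when the first job in the chain fails; later jobs never execute
    Log::error('Onboarding chain failed', ['error' => $e->getMessage()]);
})->dispatch();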
Testing Queue Jobs
Don't forget to test your queue logic. Laravel makes this straightforward:
use Illuminate\Support\Facades\Queue;
public function test_document_processing_dispatches_job()
{
    Queue::fake();
    
    $document = Document::factory()->create();
    
    ProcessDocument::dispatch($document);
    
    Queue::assertPushed(ProcessDocument::class, function ($job) use ($document) {
        return $job->document->id === $document->id;
    });
}
public function test_job_processes_document_correctly()
{
    $document = Document::factory()->create([
        'file_path' => 'test-document.pdf', // a real fixture PDF must exist at this path
        'content' => null,
    ]);
    
    $job = new ProcessDocument($document);
    $job->handle();
    
    $this->assertNotNull($document->fresh()->content);
    $this->assertNotNull($document->fresh()->thumbnail_path);
}
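Batches are testable too. Bus::fake() records pending batches so you can assert on them without touching Redis; a sketch against the batch we built earlier (the count and the commented trigger call are illustrative):
use Illuminate\Support\Facades\Bus;

public function test_pending_documents_are_dispatched_as_a_batch()
{
    Bus::fake();

    Document::factory()->count(3)->create(['status' => 'pending']);

    // ...invoke whatever code dispatches the batch...

    Bus::assertBatched(function ($batch) {
        return $batch->name === 'Process Documents Batch'
            && $batch->jobs->count() === 3;
    });
}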
When NOT to Use Queues
Queues aren't always the answer. Skip them for:
- Operations under 200ms (database queries, API calls)
- Critical user flows where immediate feedback matters
- Simple CRUD operations
- Data that must be immediately consistent
Queues add complexity. Use them when the benefits (responsiveness, scalability) outweigh the costs.
Production Checklist
Before going live with queues, verify:
✓ Redis is running and configured correctly
✓ Supervisor is managing queue workers
✓ Horizon is installed and accessible
✓ Failed jobs table exists
✓ Workers restart after deployment
✓ Monitoring/alerting is set up
✓ Job timeouts match longest operations
✓ Priority queues are configured
✓ Backup workers are ready for high-load periods
✓ You've tested failure scenarios
Wrapping Up
Laravel queues transform how you build applications. What used to require complex infrastructure (background processing, job scheduling, failure recovery) is now a few commands and configuration files.
The key points:
- Use Redis for reliable, fast queue storage
- Set up Supervisor to keep workers running 24/7
- Implement proper failure handling with the failed() method
- Monitor everything with Horizon
- Use job batching for bulk operations
- Configure priority queues for different workload types
Start simple, get basic queue processing working, then add complexity as needed. I've built apps processing millions of jobs monthly with exactly the patterns shown here.
Your queue system should be invisible to users but bulletproof behind the scenes. When it's working right, nobody notices. When it's failing, everyone knows.
Need help implementing Laravel queues in your application? I've built queue systems for document processing, email campaigns, video transcoding, and more. Let's work together to build something reliable.