
Handling Large File Uploads in Laravel Without Crashing Your Server

Learn how to handle large file uploads in Laravel with chunked uploads, queue processing, and proper server configuration for production apps.


Large file uploads can wreck your Laravel app if you're not careful. I learned this the hard way when StudyLab users started uploading 500MB video lectures and my server just gave up.

The default PHP settings allow maybe 2-8MB uploads. That's fine for profile pictures, but when you're building something like a learning platform or document management system, you need a better approach. Plus, you can't just bump up those limits to 1GB and call it a day. That's asking for memory issues.

In this guide, I'll show you exactly how I handle large file uploads in production. We'll cover chunked uploads (the smart way), queue processing, S3 integration, and all the server configs you actually need. This is the setup I use in ReplyGenius for document uploads and it handles files up to 2GB without breaking a sweat.

Why Default File Uploads Fail at Scale

Here's the thing. PHP has hard limits that make large uploads practically impossible out of the box.

The typical PHP installation limits you to:

  • upload_max_filesize: 2MB
  • post_max_size: 8MB
  • max_execution_time: 30 seconds
  • memory_limit: 128MB

So even if you bump upload_max_filesize to 500MB, you're still capped by post_max_size. And if you fix both of those, the script will timeout after 30 seconds anyway. It's like playing whack-a-mole with configurations.
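If you're not sure what your environment actually enforces (hosting panels and php-fpm pools often override php.ini), a quick check with ini_get() shows the effective values at runtime. A minimal sketch you could drop into tinker or a throwaway route:

// Print the limits your runtime actually enforces
foreach (['upload_max_filesize', 'post_max_size', 'max_execution_time', 'memory_limit'] as $key) {
    echo $key . ' = ' . ini_get($key) . PHP_EOL;
}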

But here's what really kills performance. When someone uploads a 200MB file, that entire file sits in your server's memory during processing. Your PHP process is stuck handling that one request. If ten users upload simultaneously, you just consumed 2GB of RAM for uploads alone.

I ran into this exact scenario with StudyLab. Teachers were uploading lecture videos during the same time window (early morning before classes), and the server would crawl to a halt. Response times for everyone else went from 200ms to 5+ seconds. Not acceptable.

The solution isn't bigger servers. It's smarter handling.

The Right Way: Chunked Uploads with JavaScript

Chunked uploads split a large file into small pieces (usually 5-10MB each) and send them one at a time. The server receives manageable chunks, writes them to temporary storage, and assembles them once all pieces arrive.

This approach solves three critical problems:

  • No memory bloat (you're processing 5MB at a time, not 500MB)
  • No timeouts (each chunk completes in under a second)
  • Resume capability (if chunk 47 fails, you retry just that chunk)

Let me show you the implementation.

Backend Laravel Route Setup

First, register two routes: one to receive individual chunks and one to trigger assembly once every chunk has arrived:

// routes/web.php
Route::post('/upload-chunk', [FileUploadController::class, 'uploadChunk'])
    ->middleware('auth');
Route::post('/upload-complete', [FileUploadController::class, 'completeUpload'])
    ->middleware('auth');

The controller handles two operations: receiving chunks and, once they've all arrived, dispatching the job that assembles the final file.

// app/Http/Controllers/FileUploadController.php
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use App\Jobs\ProcessUploadedFile;

class FileUploadController extends Controller
{
    public function uploadChunk(Request $request)
    {
        $request->validate([
            'file' => 'required|file',
            'chunkIndex' => 'required|integer|min:0',
            'totalChunks' => 'required|integer|min:1',
            // alpha_dash blocks path traversal, since fileId becomes part of a storage path
            'fileId' => 'required|string|alpha_dash',
        ]);

        $fileId = $request->input('fileId');
        $chunkIndex = $request->input('chunkIndex');
        
        // Store the chunk in a temporary location; storeAs streams the uploaded
        // file instead of reading it into memory first
        $request->file('file')->storeAs(
            "chunks/{$fileId}",
            "chunk_{$chunkIndex}",
            'local'
        );

        return response()->json([
            'success' => true,
            'chunkIndex' => $chunkIndex
        ]);
    }

    public function completeUpload(Request $request)
    {
        $request->validate([
            'fileId' => 'required|string|alpha_dash',
            'fileName' => 'required|string',
            'totalChunks' => 'required|integer|min:1',
        ]);

        $fileId = $request->input('fileId');
        $fileName = $request->input('fileName');
        $totalChunks = $request->input('totalChunks');

        // Verify all chunks exist
        for ($i = 0; $i < $totalChunks; $i++) {
            $chunkPath = "chunks/{$fileId}/chunk_{$i}";
            if (!Storage::disk('local')->exists($chunkPath)) {
                return response()->json([
                    'success' => false,
                    'error' => "Missing chunk {$i}"
                ], 400);
            }
        }

        // Dispatch job to assemble and process file
        ProcessUploadedFile::dispatch($fileId, $fileName, $totalChunks, auth()->id());

        return response()->json([
            'success' => true,
            'message' => 'File upload complete, processing...'
        ]);
    }
}

Notice I'm not assembling the file synchronously. That happens in a queue job. This is crucial because assembling a 500MB file from chunks takes time, and you don't want the user's browser hanging on that request.

Frontend JavaScript Implementation

For the frontend, I use a simple vanilla JavaScript approach. You can adapt this for Vue.js or React easily:

class ChunkedUploader {
    constructor(file, chunkSize = 5 * 1024 * 1024) { // 5MB chunks
        this.file = file;
        this.chunkSize = chunkSize;
        this.totalChunks = Math.ceil(file.size / chunkSize);
        this.fileId = this.generateFileId();
        this.uploadedChunks = 0;
    }

    generateFileId() {
        return `${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
    }

    async upload(onProgress) {
        for (let i = 0; i < this.totalChunks; i++) {
            await this.uploadChunk(i);
            this.uploadedChunks++;
            
            if (onProgress) {
                onProgress((this.uploadedChunks / this.totalChunks) * 100);
            }
        }

        // Notify server that all chunks are uploaded
        await this.completeUpload();
    }

    async uploadChunk(chunkIndex) {
        const start = chunkIndex * this.chunkSize;
        const end = Math.min(start + this.chunkSize, this.file.size);
        const chunk = this.file.slice(start, end);

        const formData = new FormData();
        formData.append('file', chunk);
        formData.append('chunkIndex', chunkIndex);
        formData.append('totalChunks', this.totalChunks);
        formData.append('fileId', this.fileId);

        const response = await fetch('/upload-chunk', {
            method: 'POST',
            body: formData,
            headers: {
                'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
            }
        });

        if (!response.ok) {
            throw new Error(`Chunk ${chunkIndex} upload failed`);
        }

        return await response.json();
    }

    async completeUpload() {
        const response = await fetch('/upload-complete', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
            },
            body: JSON.stringify({
                fileId: this.fileId,
                fileName: this.file.name,
                totalChunks: this.totalChunks
            })
        });

        return await response.json();
    }
}

// Usage
const fileInput = document.getElementById('file-input');
fileInput.addEventListener('change', async (e) => {
    const file = e.target.files[0];
    const uploader = new ChunkedUploader(file);
    
    await uploader.upload((progress) => {
        console.log(`Upload progress: ${progress.toFixed(2)}%`);
        // Update your progress bar here
    });
    
    console.log('Upload complete!');
});

This gives you a real progress bar, resume capability (you can track which chunks succeeded), and it works with files of any size.

Processing Uploads with Queue Jobs

Here's where things get interesting. Once all chunks arrive, you need to assemble them and process the file. This happens in a queue job so it doesn't block the web request.

// app/Jobs/ProcessUploadedFile.php
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;
use App\Models\Upload;

class ProcessUploadedFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $timeout = 600; // 10 minutes for large files
    
    protected $fileId;
    protected $fileName;
    protected $totalChunks;
    protected $userId;

    public function __construct($fileId, $fileName, $totalChunks, $userId)
    {
        $this->fileId = $fileId;
        $this->fileName = $fileName;
        $this->totalChunks = $totalChunks;
        $this->userId = $userId;
    }

    public function handle()
    {
        // Create the temp directory (if needed) and open a handle to assemble chunks into
        $tempDir = storage_path('app/temp');
        if (!is_dir($tempDir)) {
            mkdir($tempDir, 0755, true);
        }

        $tempPath = "{$tempDir}/{$this->fileId}";
        $outputHandle = fopen($tempPath, 'wb');

        // Combine all chunks
        for ($i = 0; $i < $this->totalChunks; $i++) {
            $chunkPath = storage_path("app/chunks/{$this->fileId}/chunk_{$i}");
            
            $chunkHandle = fopen($chunkPath, 'rb');
            while (!feof($chunkHandle)) {
                fwrite($outputHandle, fread($chunkHandle, 8192));
            }
            fclose($chunkHandle);
            
            // Delete chunk after processing
            unlink($chunkPath);
        }
        
        fclose($outputHandle);

        // Upload to S3 (or wherever you store files) as a stream, so the assembled
        // file never gets loaded into memory in one piece
        $s3Path = "uploads/{$this->userId}/" . basename($this->fileName);
        $stream = fopen($tempPath, 'rb');
        Storage::disk('s3')->put($s3Path, $stream);

        if (is_resource($stream)) {
            fclose($stream);
        }

        // Save upload record to database
        Upload::create([
            'user_id' => $this->userId,
            'file_name' => $this->fileName,
            'file_path' => $s3Path,
            'file_size' => filesize($tempPath),
            'status' => 'completed'
        ]);

        // Clean up
        unlink($tempPath);
        rmdir(storage_path("app/chunks/{$this->fileId}"));
    }

    public function failed(\Throwable $exception)
    {
        // Handle failure (notify user, log error, etc.)
        \Log::error("File upload processing failed: {$exception->getMessage()}");
        
        // Clean up chunks
        Storage::disk('local')->deleteDirectory("chunks/{$this->fileId}");
    }
}

I use file handles instead of loading entire files into memory. This is critical for large files. The fread with 8192 bytes means you're only ever holding about 8KB in memory at once, even if you're assembling a 2GB file.

Also, notice the timeout is set to 600 seconds (10 minutes). The default queue timeout is 60 seconds, which isn't enough for large file operations.

Server Configuration You Actually Need

All the code in the world won't help if your server config is wrong. Here's what you need to change.

PHP Configuration (php.ini)

upload_max_filesize = 10M
post_max_size = 12M        ; a little larger than upload_max_filesize to leave room for the other form fields
max_execution_time = 120
memory_limit = 256M
max_input_time = 120

Wait, why only 10MB if we're handling 2GB files? Because we're using chunked uploads. Each request only sends 5-10MB, so we don't need massive limits. This actually improves security because you're not accepting giant POST requests.

Nginx Configuration

If you're using Nginx (you probably are), you need to set the client body size:

server {
    listen 80;
    server_name yourdomain.com;

    client_max_body_size 10M;
    client_body_timeout 120s;

    # Your other config...
}

Again, 10MB is enough because of chunking.

Laravel Queue Worker Configuration

Make sure your queue worker has enough timeout to process large files. In your supervisor config:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/artisan queue:work --sleep=3 --tries=3 --timeout=900
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/path/to/worker.log

The --timeout=900 gives 15 minutes per job. For really massive files, you might need more.
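One related gotcha: the queue connection's retry_after value (in config/queue.php) must be longer than the worker's --timeout, otherwise a job that's still assembling a big file can get picked up a second time while the first attempt is running. A minimal sketch of what I mean, assuming the stock Redis connection layout:

// config/queue.php (excerpt)
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        // Must be greater than the worker's --timeout (900s above), or long-running
        // upload jobs get released and retried mid-run
        'retry_after' => 960,
        'block_for' => null,
    ],
],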

Direct S3 Upload Strategy (Advanced)

Here's an even better approach for very large files. Skip your server entirely and upload directly to S3 using presigned URLs.

The flow works like this:

  1. User requests upload permission from your Laravel backend
  2. Laravel generates a presigned S3 URL (valid for 1 hour)
  3. JavaScript uploads directly to S3 using that URL
  4. Your JavaScript (or an S3 event notification) tells your Laravel app the upload is complete
  5. Laravel processes the file metadata

This completely removes your server from the upload path. No bandwidth costs, no processing overhead.

// app/Http/Controllers/S3UploadController.php
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Aws\S3\S3Client;

class S3UploadController extends Controller
{
    public function getPresignedUrl(Request $request)
    {
        $request->validate([
            'fileName' => 'required|string',
            'fileType' => 'required|string',
        ]);

        $s3 = new S3Client([
            'version' => 'latest',
            'region' => config('filesystems.disks.s3.region'),
            'credentials' => [
                'key' => config('filesystems.disks.s3.key'),
                'secret' => config('filesystems.disks.s3.secret'),
            ],
        ]);

        $bucket = config('filesystems.disks.s3.bucket');
        $key = 'uploads/' . auth()->id() . '/' . $request->fileName;

        $cmd = $s3->getCommand('PutObject', [
            'Bucket' => $bucket,
            'Key' => $key,
            'ContentType' => $request->fileType,
        ]);

        $presignedRequest = $s3->createPresignedRequest($cmd, '+60 minutes');
        $presignedUrl = (string) $presignedRequest->getUri();

        return response()->json([
            'uploadUrl' => $presignedUrl,
            'fileKey' => $key,
        ]);
    }
}

On the frontend, you upload directly to that presigned URL:

async function uploadToS3(file) {
    // Get presigned URL from Laravel
    const response = await fetch('/api/s3-presigned-url', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
        },
        body: JSON.stringify({
            fileName: file.name,
            fileType: file.type
        })
    });

    const { uploadUrl, fileKey } = await response.json();

    // Upload directly to S3
    const uploadResponse = await fetch(uploadUrl, {
        method: 'PUT',
        body: file,
        headers: {
            'Content-Type': file.type
        }
    });

    if (uploadResponse.ok) {
        // Notify your Laravel backend that upload completed
        await fetch('/api/s3-upload-complete', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
            },
            body: JSON.stringify({ fileKey })
        });
    }
}
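The JavaScript above posts to /api/s3-upload-complete, which I haven't shown. Here's a minimal sketch of what that endpoint can look like; the route name comes from the snippet above and the Upload model fields match the earlier examples, but treat the verification logic as an assumption about your setup rather than a drop-in implementation:

// app/Http/Controllers/S3UploadController.php (continued)
// assumes: use App\Models\Upload; at the top of the controller
public function uploadComplete(Request $request)
{
    $request->validate([
        'fileKey' => 'required|string',
    ]);

    // Only accept keys inside the current user's prefix
    if (!str_starts_with($request->fileKey, 'uploads/' . auth()->id() . '/')) {
        abort(403);
    }

    // Make sure the object actually landed on S3 before recording it
    if (!Storage::disk('s3')->exists($request->fileKey)) {
        return response()->json(['success' => false, 'error' => 'File not found on S3'], 422);
    }

    Upload::create([
        'user_id' => auth()->id(),
        'file_name' => basename($request->fileKey),
        'file_path' => $request->fileKey,
        'file_size' => Storage::disk('s3')->size($request->fileKey),
        'status' => 'completed',
    ]);

    return response()->json(['success' => true]);
}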

The beauty of this approach? Your server never touches the file. A user uploading a 5GB video doesn't affect your application performance at all.

I use this method in StudyLab for video lectures. Teachers upload 2-3GB files regularly, and it just works. The Laravel server only stores metadata (file name, size, S3 key) and serves signed URLs for playback later.
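Serving those signed playback URLs is a one-liner with Laravel's filesystem. A minimal sketch (the 30-minute expiry is just an example value):

// Generate a short-lived signed URL for a private S3 object
$playbackUrl = Storage::disk('s3')->temporaryUrl(
    $upload->file_path,
    now()->addMinutes(30)
);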

Handling Multipart Uploads for Extremely Large Files

For files over 5GB, even chunked uploads can get tricky. That's when you need S3's multipart upload feature.

S3 multipart uploads split files into parts (minimum 5MB each, except the last part), upload them in parallel, and assemble them server-side. You can upload parts in any order, retry failed parts, and even pause/resume days later.

Here's how I implement this in Laravel:

// app/Services/MultipartUploadService.php
namespace App\Services;

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

class MultipartUploadService
{
    protected $s3Client;
    protected $bucket;

    public function __construct()
    {
        $this->s3Client = new S3Client([
            'version' => 'latest',
            'region' => config('filesystems.disks.s3.region'),
            'credentials' => [
                'key' => config('filesystems.disks.s3.key'),
                'secret' => config('filesystems.disks.s3.secret'),
            ],
        ]);
        
        $this->bucket = config('filesystems.disks.s3.bucket');
    }

    public function upload($filePath, $s3Key)
    {
        $uploader = new MultipartUploader($this->s3Client, $filePath, [
            'bucket' => $this->bucket,
            'key' => $s3Key,
            'part_size' => 10 * 1024 * 1024, // 10MB parts
            'concurrency' => 5, // Upload 5 parts simultaneously
        ]);

        try {
            $result = $uploader->upload();
            return $result['ObjectURL'];
        } catch (MultipartUploadException $e) {
            // The exception carries the upload state; to resume later, persist it and pass
            // ['state' => $e->getState()] to a new MultipartUploader
            $uploadId = $e->getState()->getId();

            \Log::error('Multipart upload failed', [
                'upload_id' => $uploadId['UploadId'] ?? null,
                'error' => $e->getMessage()
            ]);

            throw $e;
        }
    }
}

The concurrency setting uploads five parts at once. For a 1GB file with 10MB parts, that's roughly 100 parts, and you're pushing five of them at any given moment. This dramatically speeds up large uploads.
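If you wire this service into the earlier ProcessUploadedFile job, only the S3 upload call changes; the chunk assembly stays exactly the same. A rough sketch of that swap:

// Inside ProcessUploadedFile::handle(), after the chunks are assembled into $tempPath
$s3Path = "uploads/{$this->userId}/" . basename($this->fileName);

// MultipartUploader streams the file from disk in parts, so memory stays flat
// even for multi-gigabyte files
$uploadService = app(\App\Services\MultipartUploadService::class);
$uploadService->upload($tempPath, $s3Path);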

Monitoring and Error Handling

Things will go wrong. Networks drop, users close browsers, disks fill up. You need proper error handling.

First, add upload tracking to your database:

// database/migrations/xxxx_create_upload_sessions_table.php
Schema::create('upload_sessions', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained();
    $table->string('file_id')->unique();
    $table->string('file_name');
    $table->integer('total_chunks');
    $table->integer('uploaded_chunks')->default(0);
    $table->string('status'); // 'in_progress', 'completed', 'failed'
    $table->text('error_message')->nullable();
    $table->timestamps();
});

Track upload progress in your chunk upload handler:

public function uploadChunk(Request $request)
{
    // ... validation and storage code ...

    // Update progress (firstOrCreate avoids duplicate sessions when chunks race each other)
    $session = UploadSession::firstOrCreate(
        ['file_id' => $fileId],
        [
            'user_id' => auth()->id(),
            'file_name' => $request->fileName,
            'total_chunks' => $request->totalChunks,
            'uploaded_chunks' => 0,
            'status' => 'in_progress'
        ]
    );

    $session->increment('uploaded_chunks');

    if ($session->uploaded_chunks >= $session->total_chunks) {
        $session->update(['status' => 'completed']);
    }

    return response()->json([
        'success' => true,
        'progress' => ($session->uploaded_chunks / $session->total_chunks) * 100
    ]);
}

This lets users resume uploads if they refresh the page:

async function resumeUpload(fileId) {
    const response = await fetch(`/api/upload-status/${fileId}`);
    const { uploaded_chunks } = await response.json();
    
    // Skip already uploaded chunks
    for (let i = uploaded_chunks; i < totalChunks; i++) {
        await uploadChunk(i);
    }
}
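The /api/upload-status/{fileId} endpoint this calls isn't shown above. A minimal sketch, assuming the upload_sessions table from the migration; I'm registering it in routes/web.php so it shares the session auth the other upload routes use:

// routes/web.php
Route::get('/api/upload-status/{fileId}', function ($fileId) {
    $session = \App\Models\UploadSession::where('file_id', $fileId)
        ->where('user_id', auth()->id())
        ->firstOrFail();

    return response()->json([
        'uploaded_chunks' => $session->uploaded_chunks,
        'total_chunks' => $session->total_chunks,
        'status' => $session->status,
    ]);
})->middleware('auth');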

Security Considerations You Can't Ignore

Large file uploads are a security risk if you're not careful. Here's what I've learned the hard way.

Validate file types server-side. Don't trust the browser's MIME type. Use PHP's finfo:

$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mimeType = finfo_file($finfo, $filePath);
finfo_close($finfo);

$allowedTypes = ['video/mp4', 'application/pdf', 'image/jpeg'];
if (!in_array($mimeType, $allowedTypes)) {
    throw new \Exception('Invalid file type');
}

Scan for malware. For production apps handling user uploads, integrate ClamAV or a similar scanner:

$output = shell_exec('clamscan ' . escapeshellarg($filePath)); // escapeshellarg prevents command injection
if (strpos($output, 'FOUND') !== false) {
    // Malware detected, delete file and alert admin
    unlink($filePath);
    throw new \Exception('Security threat detected');
}

Rate limit uploads. Don't let one user hammer your server:

// In your route middleware
Route::post('/upload-chunk', [FileUploadController::class, 'uploadChunk'])
    ->middleware(['auth', 'throttle:100,1']); // 100 requests per minute

Set disk quotas. Track total storage per user and enforce limits:

$userStorage = Upload::where('user_id', auth()->id())
    ->sum('file_size');

if ($userStorage + $newFileSize > 10 * 1024 * 1024 * 1024) { // 10GB limit
    throw new \Exception('Storage quota exceeded');
}

Common Mistakes That Will Bite You

After building file upload systems for three different products, I've seen (and made) every mistake. Here's what to avoid.

Mistake 1: Not setting proper timeouts. Your chunk uploads will randomly fail because Nginx or PHP killed the request. Set timeouts everywhere: PHP, Nginx, queue workers.

Mistake 2: Storing chunks permanently. Those temporary chunks add up fast. Delete them after assembly or you'll fill your disk. In one month of forgetting this in StudyLab, I accumulated 80GB of orphaned chunks.
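A scheduled cleanup keeps this from creeping up on you. Here's a sketch using Laravel's scheduler, assuming the chunks/ directory layout from earlier (the 24-hour cutoff is arbitrary):

// app/Console/Kernel.php, inside the schedule() method
$schedule->call(function () {
    foreach (Storage::disk('local')->directories('chunks') as $dir) {
        // Delete chunk directories that haven't been touched in 24 hours
        $lastModified = collect(Storage::disk('local')->files($dir))
            ->map(fn ($file) => Storage::disk('local')->lastModified($file))
            ->max();

        if ($lastModified && $lastModified < now()->subDay()->getTimestamp()) {
            Storage::disk('local')->deleteDirectory($dir);
        }
    }
})->daily();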

Mistake 3: Blocking the main thread. Never assemble files in the web request. Always use queues. A user uploading a 500MB file shouldn't lock up a PHP worker for 5 minutes.

Mistake 4: Not validating chunk order. If chunks arrive out of order and you blindly append them, your file is corrupted. Always verify chunk indices match expected sequence.

Mistake 5: Ignoring disk space. Check available disk space before starting uploads. Nothing worse than failing at 95% because you ran out of space:

$freeSpace = disk_free_space(storage_path('app'));
$requiredSpace = $fileSize * 1.5; // 50% buffer

if ($freeSpace < $requiredSpace) {
    throw new \Exception('Insufficient disk space');
}

Performance Optimization Tips

Want to squeeze more performance out of your upload system? Here's what actually works.

Use a CDN for uploads. CloudFlare or AWS CloudFront can handle uploads closer to users, reducing latency. Your Laravel app only gets notified when upload completes.

Compress before uploading. For documents and text files, compress client-side before chunking (the snippet below uses the pako library for gzip in the browser):

async function compressAndUpload(file) {
    const compressed = await new Promise((resolve) => {
        const reader = new FileReader();
        reader.onload = () => {
            const blob = new Blob([pako.gzip(reader.result)]);
            resolve(blob);
        };
        reader.readAsArrayBuffer(file);
    });
    
    await uploadFile(compressed);
}
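Keep in mind the server then has to decompress before processing. If you go this route, the queue job needs one extra step after assembly; a minimal sketch, assuming the client gzipped the whole file as above:

// Inflate the assembled gzip file in a streaming fashion, so large files
// never sit fully in memory
$gz = gzopen($tempPath, 'rb');
$out = fopen("{$tempPath}.decompressed", 'wb');

while (!gzeof($gz)) {
    fwrite($out, gzread($gz, 8192));
}

gzclose($gz);
fclose($out);

// Continue with the decompressed file as the real upload
rename("{$tempPath}.decompressed", $tempPath);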

Parallel chunk uploads. Instead of uploading chunks sequentially, upload 3-5 simultaneously:

async function uploadChunksInParallel(chunks, concurrency = 3) {
    const queue = [...chunks];
    const workers = Array(concurrency).fill(null).map(async () => {
        while (queue.length > 0) {
            const chunk = queue.shift();
            await uploadChunk(chunk);
        }
    });
    
    await Promise.all(workers);
}

This cut upload times by 60% in my tests on fast connections.

Use HTTP/2. Make sure your server supports HTTP/2. Multiplexing allows multiple chunks to upload over one connection, reducing overhead.

When NOT to Use Chunked Uploads

Chunked uploads aren't always the answer. For files under 10MB, the added complexity isn't worth it. Just use Laravel's standard file upload:

$request->validate([
    'file' => 'required|file|max:10240', // 10MB max
]);

$path = $request->file('file')->store('uploads', 's3');

Simple, clean, works perfectly.

Also, if you're building an internal tool with a small user base on a fast network, you might not need chunking. I didn't add it to Robobook until we had external users with slower connections.

Know your constraints. Don't over-engineer.

Testing Your Upload System

You need to test with real large files, not 100KB samples. Here's how I do it.

Generate a large test file:

dd if=/dev/urandom of=test_500mb.bin bs=1M count=500

Test upload under various conditions:

  • Fast connection (normal case)
  • Throttled connection (simulate 3G with browser dev tools)
  • Interrupted connection (kill request mid-upload, resume)
  • Concurrent uploads (multiple users at once)

Write automated tests for your chunk assembly logic:

// tests/Feature/ChunkedUploadTest.php
public function test_chunk_assembly_creates_valid_file()
{
    // The sync queue driver (Laravel's default in tests) runs ProcessUploadedFile inline,
    // so fake the S3 disk to keep the job from touching a real bucket
    Storage::fake('s3');

    $user = User::factory()->create();
    $this->actingAs($user);

    $originalFile = UploadedFile::fake()->create('video.mp4', 50000); // 50MB
    $chunks = $this->splitIntoChunks($originalFile, 5120); // 5MB chunks (helper method in the test class)

    foreach ($chunks as $index => $chunk) {
        $this->post('/upload-chunk', [
            'file' => $chunk,
            'chunkIndex' => $index,
            'totalChunks' => count($chunks),
            'fileId' => 'test-file-123'
        ])->assertOk();
    }

    $this->post('/upload-complete', [
        'fileId' => 'test-file-123',
        'fileName' => 'video.mp4',
        'totalChunks' => count($chunks)
    ])->assertOk();

    // By now the job has assembled the chunks and pushed the result to the faked S3 disk,
    // so verify the stored file matches the original byte for byte
    $s3Path = "uploads/{$user->id}/video.mp4";
    Storage::disk('s3')->assertExists($s3Path);
    $this->assertEquals(
        md5_file($originalFile->path()),
        md5(Storage::disk('s3')->get($s3Path))
    );
}

This saved me countless times. One update broke chunk ordering, and this test caught it before deployment.

Wrapping Up

Handling large file uploads in Laravel isn't rocket science, but it does require thinking beyond the basics. Chunked uploads give you reliability and resume capability. Queue processing keeps your app responsive. Direct S3 uploads eliminate bandwidth costs.

Start with chunked uploads for files over 50MB. Add queue processing for assembly. Consider direct S3 uploads if you're doing serious volume. Monitor everything, because uploads will fail and you need to know why.

Your users won't thank you for good upload handling (it's invisible when it works), but they'll definitely complain if it breaks. Build it right the first time.

Need help implementing large file uploads in your Laravel application? Contact me, I've built this system multiple times now and can save you weeks of trial and error.

