
Handling Large File Uploads in Laravel Without Crashing Your Server

Learn how to handle large file uploads in Laravel with chunked uploads, queue processing, and proper server configuration for production apps.

Large file uploads can wreck your Laravel app if you're not careful. I learned this the hard way when users started uploading 500MB video files and the server just gave up.

The default PHP settings allow maybe 2-8MB uploads. That's fine for profile pictures, but when you're building a learning platform, document management system, or any SaaS that accepts user files, you need a better approach. And you can't just bump up those limits to 1GB and call it a day. That's asking for memory issues.

In this guide, I'll show you exactly how I handle large file uploads in production Laravel apps. We'll cover chunked uploads (the smart way), queue processing, direct S3 uploads using Laravel's built-in temporaryUploadUrl(), and all the server configs you actually need.

Why Default File Uploads Fail at Scale

Here's the thing. PHP has hard limits that make large uploads practically impossible out of the box.

The typical PHP installation limits you to:

  • upload_max_filesize: 2MB
  • post_max_size: 8MB
  • max_execution_time: 30 seconds
  • memory_limit: 128MB

So even if you bump upload_max_filesize to 500MB, you're still capped by post_max_size. And if you fix both of those, the script will time out after 30 seconds anyway. It's like playing whack-a-mole with configurations.

But here's what really kills performance. When someone uploads a 200MB file, that entire file sits in your server's memory during processing. Your PHP process is stuck handling that one request. If ten users upload simultaneously, you just consumed 2GB of RAM for uploads alone.

I ran into this exact scenario on a client's edtech platform. Teachers were uploading lecture videos during the same time window (early morning before classes), and the server would grind to a halt. Response times for everyone else went from 200ms to 5+ seconds. Not acceptable.

The solution isn't bigger servers. It's smarter handling.

The Right Way: Chunked Uploads with JavaScript

Chunked uploads split a large file into small pieces (usually 5-10MB each) and send them one at a time. The server receives manageable chunks, writes them to temporary storage, and assembles them once all pieces arrive.

This approach solves three critical problems: no memory bloat (you're processing 5MB at a time, not 500MB), no timeouts (each chunk completes in under a second), and resume capability (if chunk 47 fails, you retry just that chunk, not the entire file).

Let me show you the implementation.

Backend Laravel Route Setup

First, set up the routes and a controller to handle the chunk endpoints:

// routes/web.php
Route::post('/upload-chunk', [FileUploadController::class, 'uploadChunk'])
    ->middleware('auth');
Route::post('/upload-complete', [FileUploadController::class, 'completeUpload'])
    ->middleware('auth');

The controller handles two operations: receiving chunks and assembling the final file.

// app/Http/Controllers/FileUploadController.php
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use App\Jobs\ProcessUploadedFile;

class FileUploadController extends Controller
{
    public function uploadChunk(Request $request)
    {
        $request->validate([
            'file' => 'required|file|max:10240', // each chunk capped at 10MB
            'chunkIndex' => 'required|integer|min:0',
            'totalChunks' => 'required|integer|min:1',
            // alpha_dash blocks path traversal via a crafted fileId
            'fileId' => 'required|string|alpha_dash',
        ]);

        $fileId = $request->input('fileId');
        $chunkIndex = $request->input('chunkIndex');
        
        // Store chunk in temporary storage (streamed to disk, not read into memory)
        $request->file('file')->storeAs(
            "chunks/{$fileId}",
            "chunk_{$chunkIndex}",
            'local'
        );

        return response()->json([
            'success' => true,
            'chunkIndex' => $chunkIndex
        ]);
    }

    public function completeUpload(Request $request)
    {
        $request->validate([
            'fileId' => 'required|string|alpha_dash',
            'fileName' => 'required|string',
            'totalChunks' => 'required|integer|min:1',
        ]);

        $fileId = $request->input('fileId');
        $fileName = $request->input('fileName');
        $totalChunks = $request->input('totalChunks');

        // Verify all chunks exist
        for ($i = 0; $i < $totalChunks; $i++) {
            $chunkPath = "chunks/{$fileId}/chunk_{$i}";
            if (!Storage::disk('local')->exists($chunkPath)) {
                return response()->json([
                    'success' => false,
                    'error' => "Missing chunk {$i}"
                ], 400);
            }
        }

        // Dispatch job to assemble and process file
        ProcessUploadedFile::dispatch($fileId, $fileName, $totalChunks, auth()->id());

        return response()->json([
            'success' => true,
            'message' => 'File upload complete, processing...'
        ]);
    }
}

Notice I'm not assembling the file synchronously. That happens in a queue job so it doesn't block the web request. This is crucial because assembling a 500MB file from chunks takes time, and you don't want the user's browser hanging on that request.

Frontend JavaScript Implementation

For the frontend, I use a simple vanilla JavaScript approach. You can adapt this for Vue.js or React easily:

class ChunkedUploader {
    constructor(file, chunkSize = 5 * 1024 * 1024) { // 5MB chunks
        this.file = file;
        this.chunkSize = chunkSize;
        this.totalChunks = Math.ceil(file.size / chunkSize);
        this.fileId = this.generateFileId();
        this.uploadedChunks = 0;
    }

    generateFileId() {
        return `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
    }

    async upload(onProgress) {
        for (let i = 0; i < this.totalChunks; i++) {
            await this.uploadChunk(i);
            this.uploadedChunks++;
            
            if (onProgress) {
                onProgress((this.uploadedChunks / this.totalChunks) * 100);
            }
        }

        // Notify server that all chunks are uploaded
        await this.completeUpload();
    }

    async uploadChunk(chunkIndex) {
        const start = chunkIndex * this.chunkSize;
        const end = Math.min(start + this.chunkSize, this.file.size);
        const chunk = this.file.slice(start, end);

        const formData = new FormData();
        formData.append('file', chunk);
        formData.append('chunkIndex', chunkIndex);
        formData.append('totalChunks', this.totalChunks);
        formData.append('fileId', this.fileId);

        const response = await fetch('/upload-chunk', {
            method: 'POST',
            body: formData,
            headers: {
                'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
            }
        });

        if (!response.ok) {
            throw new Error(`Chunk ${chunkIndex} upload failed`);
        }

        return await response.json();
    }

    async completeUpload() {
        const response = await fetch('/upload-complete', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
            },
            body: JSON.stringify({
                fileId: this.fileId,
                fileName: this.file.name,
                totalChunks: this.totalChunks
            })
        });

        return await response.json();
    }
}

// Usage
const fileInput = document.getElementById('file-input');
fileInput.addEventListener('change', async (e) => {
    const file = e.target.files[0];
    const uploader = new ChunkedUploader(file);
    
    await uploader.upload((progress) => {
        console.log(`Upload progress: ${progress.toFixed(2)}%`);
        // Update your progress bar here
    });
    
    console.log('Upload complete!');
});

This gives you a real progress bar, resume capability (you can track which chunks succeeded), and it works with files of any size.

Processing Uploads with Queue Jobs

Here's where things get interesting. Once all chunks arrive, you need to assemble them and process the file. This happens in a queue job so it doesn't block the web request.

// app/Jobs/ProcessUploadedFile.php
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;
use App\Models\Upload;

class ProcessUploadedFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $timeout = 600; // 10 minutes for large files
    
    public function __construct(
        protected string $fileId,
        protected string $fileName,
        protected int $totalChunks,
        protected int $userId,
    ) {}

    public function handle(): void
    {
        // Create a temporary file to assemble chunks (make sure the directory exists)
        $tempDir = storage_path('app/temp');
        if (!is_dir($tempDir)) {
            mkdir($tempDir, 0755, true);
        }

        $tempPath = "{$tempDir}/{$this->fileId}";
        $outputHandle = fopen($tempPath, 'wb');

        // Combine all chunks using streaming (never loads full file into memory)
        for ($i = 0; $i < $this->totalChunks; $i++) {
            $chunkPath = storage_path("app/chunks/{$this->fileId}/chunk_{$i}");
            
            $chunkHandle = fopen($chunkPath, 'rb');
            while (!feof($chunkHandle)) {
                fwrite($outputHandle, fread($chunkHandle, 8192));
            }
            fclose($chunkHandle);
            
            // Delete chunk after processing
            unlink($chunkPath);
        }
        
        fclose($outputHandle);

        // Upload to S3 using a stream (critical for large files!)
        $s3Path = "uploads/{$this->userId}/" . basename($this->fileName);
        Storage::disk('s3')->put(
            $s3Path,
            fopen($tempPath, 'rb')  // Stream, not file_get_contents!
        );

        // Save upload record to database
        Upload::create([
            'user_id' => $this->userId,
            'file_name' => $this->fileName,
            'file_path' => $s3Path,
            'file_size' => filesize($tempPath),
            'status' => 'completed'
        ]);

        // Clean up
        unlink($tempPath);
        rmdir(storage_path("app/chunks/{$this->fileId}"));
    }

    public function failed(\Throwable $exception): void
    {
        \Log::error("File upload processing failed: {$exception->getMessage()}");
        
        // Clean up chunks on failure
        Storage::disk('local')->deleteDirectory("chunks/{$this->fileId}");
    }
}

Two critical details here. First, I use file handles with fread($chunkHandle, 8192) instead of loading entire files into memory. This means you're only ever holding about 8KB in memory at once, even if you're assembling a 2GB file.

Second (and this is a bug I've seen in a lot of tutorials), the S3 upload uses fopen() to pass a stream resource, not file_get_contents(). If you use file_get_contents() on a 500MB assembled file, you've just loaded that entire thing into memory and killed the whole point of streaming your chunks. Laravel's Storage put() method accepts stream resources and will handle the upload efficiently via Flysystem.

The timeout is set to 600 seconds (10 minutes). The default queue timeout is 60 seconds, which isn't enough for large file operations.
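One related gotcha: the queue connection's retry_after value must be greater than the job timeout, or the worker will re-dispatch the job while the first attempt is still assembling the file. A sketch of the relevant fragment (connection name and values are illustrative):

```php
// config/queue.php (fragment)
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 660, // must exceed the job's $timeout of 600
        'block_for' => null,
    ],
],
```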

Server Configuration You Actually Need

All the code in the world won't help if your server config is wrong. Here's what you need to change.

PHP Configuration (php.ini)

upload_max_filesize = 10M
post_max_size = 10M
max_execution_time = 120
memory_limit = 256M
max_input_time = 120

Wait, why only 10MB if we're handling 2GB files? Because we're using chunked uploads. Each request only sends 5-10MB, so we don't need massive limits. This actually improves security because you're not accepting giant POST requests.

Nginx Configuration

If you're using Nginx (you probably are), you need to set the client body size:

server {
    listen 80;
    server_name yourdomain.com;

    client_max_body_size 10M;
    client_body_timeout 120s;

    # Your other config...
}

Again, 10MB is enough because of chunking.

Laravel Queue Worker Configuration

Make sure your queue worker has enough timeout to process large files. In your supervisor config:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/artisan queue:work --sleep=3 --tries=3 --timeout=900
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/path/to/worker.log

The --timeout=900 gives 15 minutes per job. For really massive files, you might need more. If you're running Horizon, configure the timeout in your config/horizon.php instead.
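For Horizon, the equivalent setting lives per supervisor. A minimal sketch (the supervisor and queue names here are illustrative; adapt to your own config/horizon.php layout):

```php
// config/horizon.php (fragment)
'environments' => [
    'production' => [
        'supervisor-uploads' => [
            'connection' => 'redis',
            'queue' => ['uploads'],
            'processes' => 4,
            'tries' => 3,
            'timeout' => 900, // must exceed the job's $timeout
        ],
    ],
],
```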

Direct S3 Upload with temporaryUploadUrl (The Modern Way)

Here's an even better approach for very large files. Skip your server entirely and upload directly to S3. Since Laravel 10, there's a built-in method for this that most tutorials still don't mention.

The flow works like this: the user requests upload permission from your Laravel backend, Laravel generates a presigned S3 URL, JavaScript uploads directly to S3 using that URL, and then your Laravel app gets notified when the upload completes. Your server never touches the file.

// app/Http/Controllers/S3UploadController.php
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

class S3UploadController extends Controller
{
    public function getUploadUrl(Request $request)
    {
        $request->validate([
            'fileName' => 'required|string',
            'fileType' => 'required|string',
        ]);

        $key = 'uploads/' . auth()->id() . '/' . Str::uuid() . '-' . $request->fileName;

        // Laravel's built-in method - no manual S3Client needed!
        ['url' => $url, 'headers' => $headers] = Storage::disk('s3')
            ->temporaryUploadUrl($key, now()->addMinutes(30));

        return response()->json([
            'uploadUrl' => $url,
            'headers' => $headers,
            'fileKey' => $key,
        ]);
    }

    public function confirmUpload(Request $request)
    {
        $request->validate([
            'fileKey' => 'required|string',
        ]);

        // Verify the file actually exists on S3
        if (!Storage::disk('s3')->exists($request->fileKey)) {
            return response()->json(['error' => 'File not found'], 404);
        }

        // Store metadata in your database
        $size = Storage::disk('s3')->size($request->fileKey);
        
        auth()->user()->uploads()->create([
            'file_path' => $request->fileKey,
            'file_size' => $size,
            'status' => 'completed',
        ]);

        return response()->json(['success' => true]);
    }
}

Look at that. Three lines to get a presigned upload URL. No manual S3Client instantiation, no credential juggling. The temporaryUploadUrl() method returns both the URL and the required headers that the client needs to include.

On the frontend, you upload directly to that presigned URL:

async function uploadToS3(file) {
    // Get presigned URL from Laravel
    const response = await fetch('/api/s3-upload-url', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
        },
        body: JSON.stringify({
            fileName: file.name,
            fileType: file.type
        })
    });

    const { uploadUrl, headers, fileKey } = await response.json();

    // Upload directly to S3 (your server never sees the file)
    const uploadResponse = await fetch(uploadUrl, {
        method: 'PUT',
        body: file,
        headers: {
            ...headers,
            'Content-Type': file.type
        }
    });

    if (uploadResponse.ok) {
        // Notify your Laravel backend that upload completed
        await fetch('/api/s3-upload-confirm', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
            },
            body: JSON.stringify({ fileKey })
        });
    }
}

The beauty of this approach? Your server never touches the file. A user uploading a 5GB video doesn't affect your application performance at all. The Laravel server only stores metadata (file name, size, S3 key) and serves signed URLs for playback later.

One thing to watch out for: you can't validate file contents server-side with presigned uploads since the file never hits your server. If you need virus scanning or content validation, set up an S3 event notification that triggers a Lambda function (or a Laravel queue job via SQS) when files land in your bucket.
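As a rough sketch, the bucket notification config for the SQS route looks something like this (the ARN, queue name, and prefix are placeholders, not values from this guide):

```json
{
    "QueueConfigurations": [
        {
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads-scan-queue",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [{ "Name": "prefix", "Value": "uploads/" }]
                }
            }
        }
    ]
}
```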

S3 Multipart Uploads for Extremely Large Files

For files over 5GB, even direct presigned uploads hit AWS limits (a single PUT caps at 5GB). That's when you need S3's multipart upload feature.

S3 multipart uploads split files into parts (minimum 5MB each, except the last part), upload them in parallel, and assemble them server-side. You can upload parts in any order, retry failed parts, and even pause and resume days later.

Here's how to implement this in Laravel:

// app/Services/MultipartUploadService.php
namespace App\Services;

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

class MultipartUploadService
{
    protected S3Client $s3Client;
    protected string $bucket;

    public function __construct()
    {
        $this->s3Client = new S3Client([
            'version' => 'latest',
            'region' => config('filesystems.disks.s3.region'),
            'credentials' => [
                'key' => config('filesystems.disks.s3.key'),
                'secret' => config('filesystems.disks.s3.secret'),
            ],
        ]);
        
        $this->bucket = config('filesystems.disks.s3.bucket');
    }

    public function upload(string $filePath, string $s3Key): string
    {
        $uploader = new MultipartUploader($this->s3Client, $filePath, [
            'bucket' => $this->bucket,
            'key' => $s3Key,
            'part_size' => 10 * 1024 * 1024, // 10MB parts
            'concurrency' => 5, // Upload 5 parts simultaneously
        ]);

        try {
            $result = $uploader->upload();
            return $result['ObjectURL'];
        } catch (MultipartUploadException $e) {
            $params = $e->getState()->getId();
            
            \Log::error('Multipart upload failed', [
                'upload_id' => $params['UploadId'],
                'error' => $e->getMessage()
            ]);
            
            throw $e;
        }
    }
}

The concurrency setting uploads five parts at once. For a 1GB file with 10MB parts, that's 100 parts total, but you're uploading 5 at a time. This dramatically speeds up large uploads.

Worth noting: if you're using Storage::put() with a stream resource (like we did in the queue job above), Flysystem actually handles multipart uploads automatically behind the scenes for files over a certain threshold. So for server-side uploads you often don't need to drop down to the AWS SDK directly. This service class is more useful when you need fine-grained control over concurrency or resume behavior.

Monitoring and Error Handling

Things will go wrong. Networks drop, users close browsers, disks fill up. You need proper error handling.

First, add upload tracking to your database:

// database/migrations/xxxx_create_upload_sessions_table.php
Schema::create('upload_sessions', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained();
    $table->string('file_id')->unique();
    $table->string('file_name');
    $table->integer('total_chunks');
    $table->integer('uploaded_chunks')->default(0);
    $table->string('status'); // 'in_progress', 'completed', 'failed'
    $table->text('error_message')->nullable();
    $table->timestamps();
});

Track upload progress in your chunk upload handler:

public function uploadChunk(Request $request)
{
    // ... validation and storage code ...

    // Update progress (send fileName with each chunk, or backfill it on complete)
    $session = UploadSession::firstOrCreate(
        ['file_id' => $fileId],
        [
            'user_id' => auth()->id(),
            'file_name' => $request->input('fileName', 'pending'),
            'total_chunks' => $request->input('totalChunks'),
            'uploaded_chunks' => 0,
            'status' => 'in_progress',
        ]
    );
    
    $session->increment('uploaded_chunks');
    
    if ($session->uploaded_chunks === $session->total_chunks) {
        $session->update(['status' => 'completed']);
    }

    return response()->json([
        'success' => true,
        'progress' => ($session->uploaded_chunks / $session->total_chunks) * 100
    ]);
}

This lets users resume uploads if they refresh the page:

async function resumeUpload(fileId) {
    const response = await fetch(`/api/upload-status/${fileId}`);
    const { uploaded_chunks } = await response.json();
    
    // Skip already uploaded chunks
    for (let i = uploaded_chunks; i < totalChunks; i++) {
        await uploadChunk(i);
    }
}
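The status endpoint that snippet calls isn't defined above. Here's a minimal sketch, assuming the UploadSession model from the migration (the route shape matches the fetch URL; everything else is illustrative):

```php
// routes/web.php (sketch)
use App\Models\UploadSession;

Route::get('/api/upload-status/{fileId}', function (string $fileId) {
    // Scope to the authenticated user so people can't probe other uploads
    $session = UploadSession::where('file_id', $fileId)
        ->where('user_id', auth()->id())
        ->firstOrFail();

    return response()->json([
        'uploaded_chunks' => $session->uploaded_chunks,
        'total_chunks' => $session->total_chunks,
        'status' => $session->status,
    ]);
})->middleware('auth');
```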

Don't forget to clean up stale upload sessions. I run a scheduled command daily that deletes orphaned chunks older than 24 hours. Without this cleanup, I've seen chunk directories eat 80GB+ of disk space within a month on a busy platform.

// routes/console.php (Laravel 11+) - schedules app/Console/Commands/CleanOrphanedChunks.php
Schedule::command('uploads:clean-orphaned')->daily();

Security Considerations You Can't Ignore

Large file uploads are a security risk if you're not careful. Here's what I've learned the hard way.

Validate file types server-side. Don't trust the browser's MIME type. Use PHP's finfo:

$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mimeType = finfo_file($finfo, $filePath);
finfo_close($finfo);

$allowedTypes = ['video/mp4', 'application/pdf', 'image/jpeg'];
if (!in_array($mimeType, $allowedTypes)) {
    throw new \Exception('Invalid file type');
}

Scan for malware. For production apps handling user uploads, integrate ClamAV or a similar scanner:

$output = shell_exec('clamscan ' . escapeshellarg($filePath));
if ($output !== null && str_contains($output, 'FOUND')) {
    unlink($filePath);
    throw new \Exception('Security threat detected');
}

Rate limit uploads. Don't let one user hammer your server:

Route::post('/upload-chunk', [FileUploadController::class, 'uploadChunk'])
    ->middleware(['auth', 'throttle:100,1']); // 100 requests per minute

Set disk quotas. Track total storage per user and enforce limits:

$userStorage = Upload::where('user_id', auth()->id())->sum('file_size');

if ($userStorage + $newFileSize > 10 * 1024 * 1024 * 1024) { // 10GB limit
    throw new \Exception('Storage quota exceeded');
}

You can verify file integrity after chunk assembly by comparing hash values between what the client computed and what landed on the server. SHA-256 is fast enough for this.

Common Mistakes That Will Bite You

After building file upload systems across multiple production apps, I've seen (and made) every mistake. Here's what to avoid.

Mistake 1: Not setting proper timeouts. Your chunk uploads will randomly fail because Nginx or PHP killed the request. Set timeouts everywhere: PHP, Nginx, and your queue workers. If you're running queues through Supervisor, make sure the --timeout flag matches what your jobs actually need.

Mistake 2: Storing chunks permanently. Those temporary chunks add up fast. Delete them after assembly or you'll fill your disk. Schedule a daily cleanup command. Trust me on this one.

Mistake 3: Blocking the main thread. Never assemble files in the web request. Always use queues. A user uploading a 500MB file shouldn't lock up a PHP worker for 5 minutes. This ties directly into proper queue job architecture.

Mistake 4: Not validating chunk order. If chunks arrive out of order and you blindly append them, your file is corrupted. Always verify chunk indices match the expected sequence before assembly.
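A small helper makes that check trivial on the client before calling /upload-complete (the names here are illustrative, not from the code above):

```javascript
// Sketch: find gaps in the set of chunk indices the server has confirmed.
function missingChunks(confirmedIndices, totalChunks) {
    const have = new Set(confirmedIndices);
    const missing = [];
    for (let i = 0; i < totalChunks; i++) {
        if (!have.has(i)) missing.push(i);
    }
    return missing;
}

// If missingChunks([0, 1, 3], 4) reports a gap, retry that chunk
// before asking the server to assemble anything.
```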

Mistake 5: Using file_get_contents for S3 uploads. This is the one I see constantly in tutorials. You carefully stream chunks in 8KB buffers to assemble the file, then call file_get_contents() to upload it to S3 and load the entire thing into memory anyway. Use fopen() to pass a stream resource to Storage::put() instead.

Mistake 6: Ignoring disk space. Check available disk space before starting uploads:

$freeSpace = disk_free_space(storage_path('app'));
$requiredSpace = $fileSize * 1.5; // 50% buffer for assembly

if ($freeSpace < $requiredSpace) {
    throw new \Exception('Insufficient disk space');
}

Performance Optimization Tips

Want to squeeze more performance out of your upload system? Here's what actually works.

Parallel chunk uploads. Instead of uploading chunks sequentially, upload 3-5 simultaneously:

async function uploadChunksInParallel(chunks, concurrency = 3) {
    const queue = [...chunks];
    const workers = Array(concurrency).fill(null).map(async () => {
        while (queue.length > 0) {
            const chunk = queue.shift();
            await uploadChunk(chunk);
        }
    });
    
    await Promise.all(workers);
}

This cut upload times by 60% in my tests on fast connections.

Compress before uploading. For documents and text files, compress client-side before chunking. Libraries like pako handle gzip compression in the browser and can dramatically reduce transfer size for text-heavy files.
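If you'd rather avoid a dependency, modern browsers also ship a native CompressionStream API; pako remains the fallback for older ones. A sketch:

```javascript
// Sketch: gzip a Blob in the browser before chunking and uploading it.
async function gzipBlob(blob) {
    const compressed = blob.stream().pipeThrough(new CompressionStream('gzip'));
    return new Response(compressed).blob();
}

// Usage: const smaller = await gzipBlob(file);
// Decompress server-side (e.g. gzdecode) or store with
// Content-Encoding: gzip so clients inflate it transparently.
```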

Use a CDN for uploads. CloudFlare or AWS CloudFront can handle uploads closer to your users, reducing latency. Your Laravel app only gets notified when the upload completes.

Use HTTP/2. Make sure your server supports HTTP/2. Multiplexing allows multiple chunks to upload over one connection, reducing overhead.

When NOT to Use Chunked Uploads

Chunked uploads aren't always the answer. For files under 10MB, the added complexity isn't worth it. Just use Laravel's standard file upload:

$request->validate([
    'file' => 'required|file|max:10240', // 10MB max
]);

$path = $request->file('file')->store('uploads', 's3');

Simple, clean, works perfectly. If you're building a full-stack upload component with Vue.js, check out my complete file upload system guide which covers the simpler approach end-to-end.

Also, if you're building an internal tool with a small user base on a fast network, you might not need chunking at all. I didn't add chunked uploads to one internal admin panel until we had external users with slower connections.

Know your constraints. Don't over-engineer.

Testing Your Upload System

You need to test with real large files, not 100KB samples. Here's how I do it.

Generate a large test file:

dd if=/dev/urandom of=test_500mb.bin bs=1M count=500

Test upload under various conditions: fast connection (normal case), throttled connection (simulate 3G with browser dev tools), interrupted connection (kill request mid-upload, resume), and concurrent uploads (multiple users at once).

Write automated tests for your chunk assembly logic:

// tests/Feature/ChunkedUploadTest.php
public function test_chunk_assembly_creates_valid_file()
{
    $originalFile = UploadedFile::fake()->create('video.mp4', 50000); // 50MB
    $chunks = $this->splitIntoChunks($originalFile, 5120); // 5MB chunks
    
    foreach ($chunks as $index => $chunk) {
        $this->post('/upload-chunk', [
            'file' => $chunk,
            'chunkIndex' => $index,
            'totalChunks' => count($chunks),
            'fileId' => 'test-file-123'
        ])->assertOk();
    }
    
    $this->post('/upload-complete', [
        'fileId' => 'test-file-123',
        'fileName' => 'video.mp4',
        'totalChunks' => count($chunks)
    ])->assertOk();
    
    // Verify assembled file matches original
    $assembledPath = storage_path('app/temp/test-file-123');
    $this->assertEquals(
        md5_file($originalFile->path()),
        md5_file($assembledPath)
    );
}

This test has saved me countless times. One update broke chunk ordering, and this test caught it before deployment.

Frequently Asked Questions

Should I use chunked uploads or direct S3 uploads?

It depends on your requirements. Direct S3 uploads via temporaryUploadUrl() are simpler to implement and completely offload bandwidth from your server. Use them when you don't need server-side processing before storage. Use chunked uploads when you need to validate file contents, scan for malware, or process the file (like generating thumbnails) before storing it.

How do I handle file uploads in a Filament admin panel?

Filament ships with built-in FileUpload and SpatieMediaLibraryFileUpload form components that handle uploads out of the box. For files under 10MB, these work great with zero configuration. For larger files, you'll want to pair Filament with the chunked upload approach from this guide, or use direct S3 uploads with a custom Livewire component. See my Filament admin dashboard guide for more on customizing Filament forms.

What's the maximum file size I can handle with this approach?

With chunked uploads, there's no practical limit since each chunk is only 5-10MB. I've tested with 5GB files. For direct S3 uploads, a single PUT caps at 5GB (AWS limit). For anything larger, use S3 multipart uploads which support objects up to 5TB.

How do I show upload progress to the user?

The chunked upload approach gives you free progress tracking since you know exactly how many chunks have been sent. For direct S3 uploads, use the XMLHttpRequest upload progress event; fetch() doesn't expose upload progress, and streaming request bodies (ReadableStream with duplex: 'half') are still limited to Chromium-based browsers. The frontend code examples above include progress callbacks.

Can I use this approach with DigitalOcean Spaces or MinIO instead of S3?

Yes. Laravel's Storage abstraction and temporaryUploadUrl() work with any S3-compatible service. Just update your config/filesystems.php with the correct endpoint URL. DigitalOcean Spaces, MinIO, Wasabi, and Backblaze B2 all support the same presigned URL mechanism.
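Concretely, the disk config only needs an endpoint added. A sketch for DigitalOcean Spaces (env variable names follow Laravel's defaults; the endpoint value is an example):

```php
// config/filesystems.php (fragment)
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'endpoint' => env('AWS_ENDPOINT'), // e.g. https://nyc3.digitaloceanspaces.com
    'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false), // true for MinIO
],
```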

Wrapping Up

Handling large file uploads in Laravel isn't rocket science, but it does require thinking beyond the basics. Chunked uploads give you reliability and resume capability. Queue processing keeps your app responsive. Direct S3 uploads with temporaryUploadUrl() eliminate bandwidth costs entirely.

Start with chunked uploads for files over 50MB. Add queue processing for assembly. Use direct S3 uploads if you're doing serious volume. And monitor everything, because uploads will fail and you need to know why.

Your users won't thank you for good upload handling (it's invisible when it works), but they'll definitely complain if it breaks. Build it right the first time.

Need help implementing large file uploads in your Laravel application? Get in touch and I'll help you get it right without the weeks of trial and error.


About Hafiz

Senior Full-Stack Developer with 9+ years building web apps and SaaS platforms. I specialize in Laravel and Vue.js, and I write about the real decisions behind shipping production software.
