That Time Our File Upload 'Worked' But Nobody Could Find Their Files
"The file upload is working perfectly," our frontend developer announced during standup. "I can drag and drop, see the progress bar, get the success message—everything's smooth."
"Great," I said. "Have you tested it with the client's content team?"
Twenty minutes later, my Slack lit up with messages from the client.
"I uploaded the PDF but can't find it anywhere."
"The files are uploading but not showing in the media library."
"Are the documents supposed to disappear after upload?"
I logged into their system and immediately saw the problem. The files were uploading successfully—they were just being stored in a completely different location than where the interface was looking for them. Our upload was working perfectly. Our file browsing was working perfectly. They just weren't talking to each other.
The Path Configuration Maze
Here's what we'd built: a beautiful, intuitive file upload interface that dropped files into /storage/uploads/files/. And an equally beautiful file browser that looked for files in /public/media/documents/. Both components worked flawlessly in isolation. Together, they were useless.
The problem wasn't obvious during development because we'd been testing with the same few files over and over. Upload test.pdf, see test.pdf in the browser, everything works. But we hadn't been testing the full workflow—we'd been testing individual components.
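The failure mode is easy to reproduce in miniature. This is a hedged sketch, not Twill's actual code: the directory names come from the article, the functions are hypothetical, and a temp directory stands in for the real filesystem root. Each half passes its own tests; the mismatch only shows up when you run the full workflow.

```python
import tempfile
from pathlib import Path

# Simulated project root; the article's real paths were absolute
# (/storage/uploads/files vs /public/media/documents).
ROOT = Path(tempfile.mkdtemp())
UPLOAD_DIR = ROOT / "storage" / "uploads" / "files"    # where the upload handler writes
BROWSE_DIR = ROOT / "public" / "media" / "documents"   # where the media browser reads

def save_upload(filename: str, data: bytes) -> Path:
    """'Works perfectly': writes the file and reports success."""
    UPLOAD_DIR.mkdir(parents=True, exist_ok=True)
    dest = UPLOAD_DIR / filename
    dest.write_bytes(data)
    return dest  # success -- from the server's perspective

def list_media() -> list[str]:
    """Also 'works perfectly': lists whatever is in the browse directory."""
    if not BROWSE_DIR.exists():
        return []
    return [p.name for p in BROWSE_DIR.iterdir()]

# Each component is correct in isolation; together they never agree.
save_upload("test.pdf", b"%PDF-1.4 ...")
print(list_media())  # [] -- the file has 'vanished'
```

Unit tests on `save_upload` and `list_media` individually would both pass, which is exactly why component-level testing missed the bug.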
When the International Energy Agency's content team started uploading their actual documents—dozens of research reports, policy briefs, and data sheets—every file vanished into a digital black hole. Successfully uploaded, completely unfindable.
The support ticket they filed was diplomatic but pointed: "File uploads appear to work, but we can't access any uploaded files through the admin interface. Are we missing a step?"
The Development vs Production Reality Gap
During local development, everything had worked fine because we'd been lazy about configuration. Local file paths, development storage settings, simplified folder structures—all fine when there's one developer testing with the same three files.
But production environments have different rules:
Security policies that restrict where files can be written
CDN configurations that affect how assets are served
Load balancers that might serve requests from different servers
Docker containers with ephemeral filesystems
Cloud storage that maps local paths to remote buckets
Each hosting setup had its own way of handling file storage, and we'd made assumptions about how paths would work that were true for our development environment but false everywhere else.
The worst part was that the error wasn't visible. Files uploaded successfully (from the server's perspective) and the database recorded their locations correctly (from the application's perspective). But the media library interface couldn't find them because it was looking in the wrong place (from the user's perspective).
The Mental Model Mismatch
The deeper issue wasn't technical—it was conceptual. We'd built the file system around how developers think about file storage (absolute paths, directory structures, filesystem hierarchies) instead of how content teams think about file organization (categories, projects, usage contexts).
Content teams wanted to upload a document to "the media library" and find it in "the documents section." They didn't care about storage/uploads vs public/media. They didn't want to understand directory structures. They just wanted to put a file somewhere and get it back later.
But our interface was exposing all the technical complexity of file paths and storage configurations. Upload a PDF, and you'd need to know which folder it ended up in, what the final URL structure was, and how the CDN was configured to serve it.
During a screen-sharing session with Nike's content team, I watched their marketing coordinator spend ten minutes looking for a file she'd just uploaded. She clicked through every folder in the media library, searched by filename, filtered by date—nothing. The file existed, but it was in a location that didn't correspond to any category or folder structure she could see.
"This is confusing," she said, with the kind of patience that comes from dealing with broken tools all day. "I just want to upload a PDF and be able to find it again."
The Configuration Explosion
Our first attempt to fix this was to make file paths configurable. Add environment variables for upload directories, storage locations, public URLs, CDN prefixes—give administrators complete control over where files go and how they're accessed.
This made the problem worse.
Now instead of files disappearing into one wrong location, they could disappear into dozens of wrong locations depending on how someone configured their environment variables. Our GitHub issues exploded with path-related problems:
"Files upload to local storage but URLs point to S3" (Issue #79)
"Media library shows files that don't exist" (Issue #383)
"Upload path works on dev but breaks on production" (Issue #456)
"Files upload successfully but serve 404 errors" (Issue #521)
Each issue required forensic debugging to figure out which combination of configuration settings had created which particular flavor of broken file handling.
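The underlying trap can be shown in a few lines. This sketch uses hypothetical variable names (nothing here is Twill's real configuration surface): each setting is individually valid, and nothing ever checks that they describe the same files.

```python
import os

# Hypothetical env vars -- each knob is independently configurable,
# so nothing guarantees they point at the same place.
upload_disk   = os.environ.get("MEDIA_UPLOAD_DISK", "local")         # where bytes land
public_prefix = os.environ.get("MEDIA_PUBLIC_URL",
                               "https://cdn.example.com/media")      # where URLs point
stored_path   = "uploads/files/report.pdf"                           # what the DB records

public_url = f"{public_prefix}/{stored_path}"
# If the disk is local but the public URL points at a CDN or S3 bucket
# that was never synced, this URL is syntactically fine and semantically a 404.
print(public_url)
```

Every combination of these settings produces a URL that looks plausible, which is why each broken deployment needed forensic debugging rather than throwing an obvious error.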
The Support Ticket That Changed Everything
The breaking point was a support ticket from a client that I'll never forget:
"We've been using Twill for three months. Our content team has uploaded over 200 files. We just realized that none of them are actually accessible on the live website. All the upload confirmations were lies. Do we need to re-upload everything?"
I stared at that ticket for a long time. We'd built a file upload system that could successfully lie to users for months. Files appeared to upload correctly, showed up in admin interfaces, but weren't actually available to website visitors because of a mismatch between internal storage paths and public serving URLs.
The content team had been building pages, adding documents, creating workflows—all based on the assumption that their files were working correctly. They weren't malicious or careless; they were trusting the interface to tell them the truth about whether their uploads had succeeded.
What We Actually Built
The fix wasn't more configuration options—it was fewer configuration options with better defaults.
Unified storage handling: Files go in one place, get served from one place, with automatic path resolution that works the same way across environments.
Upload validation: After a file uploads successfully, the system immediately tries to access it via the public URL. If that check fails, the upload is reported as failed, with a clear error message.
Visual confirmation: The media library shows actual file previews and download links, so users can immediately verify that uploads worked correctly.
Environment detection: The system automatically detects common hosting configurations and sets appropriate defaults instead of requiring manual path configuration.
Error surfacing: When file paths are misconfigured, the system shows clear error messages instead of pretending everything is working.
Path testing tools: Admin tools let developers verify file upload and serving configurations before content teams start using them.
The Lesson About User Mental Models
The file upload disaster taught me that users don't care about your technical architecture—they care about their mental model of how the system should work.
Content teams think in terms of "I uploaded a file, so now I should be able to use that file." They don't think about storage backends, CDN configurations, or directory structures. When the system confirms that an upload succeeded, they trust that confirmation.
If your technical implementation doesn't match that mental model, you need to either change the implementation or change the interface. You can't just document the complexity and hope users will understand it.
The most dangerous kind of broken feature is the one that appears to work. Failed uploads are annoying but obvious. Successful uploads that create unusable files are silently destructive—they let users build workflows on quicksand.
These days, when we design file handling features, we test the complete round trip: upload, storage, serving, access, download. We don't just test that uploads succeed—we test that uploaded files can actually be used for their intended purpose.
And we've learned to be suspicious of features that work perfectly in development. If path configuration works great on your laptop with simplified settings, it probably doesn't work great on a production server with security policies, CDN layers, and multiple environments.