WeTransfer was in hot water this week over an (apparently) ill-worded change to their terms and conditions. User outrage seems to have brought them back down to earth for the time being – but questions around digital permissions, data ownership and AI training continue to loom over the tech industry.
There are two types of people in the world: those who carry around memory sticks, and those who use WeTransfer. And with remote working, many don’t have much choice but to be the latter.
I remember the first time a client asked me to send a file via WeTransfer. I was deeply sceptical about this service that just existed on the internet. So you’re telling me that I can upload large files – for free – without even creating a profile, and just send them whizzing through the net, to be received in perfect order by my client? I was doubtful – and yet, it worked perfectly. Since then, I estimate that I’ve sent and received hundreds of WeTransfers, if not more.
I’m sure that my experience is not a unique one. Most of us have come to think of WeTransfer as the trusty digital courier that we actually want on our team. No clunky installs, no intrusive ads, and (best of all) no size limits. Just point your browser to WeTransfer.com, drag in that 5 GB design mock‑up or 2 GB video edit, enter an email or two, and hit send. Moments later, your recipient has a link to download everything. Simple. Elegant. It’s no surprise that WeTransfer soon became the tool of choice for creatives, particularly photographers, filmmakers, graphic designers, and marketing teams – basically anyone who works with files too big to fit into a standard email inbox.
But this week, that trusty magic hit a snag. A small tweak to WeTransfer’s Terms of Service, initially buried in legalese, sounded an alarm bell for thousands of users. The change appeared in Section 6.3, which granted the company “a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licenseable license” and allowed it to use uploaded content “for the purposes of operating, developing, commercializing, and improving the service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation policies,” as well as “the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform content.”
Put simply, under the new terms, that beautiful design you just transferred could be used to teach an algorithm how to spot “good design”, and then potentially generate something similar, all under WeTransfer’s roof. For free. For them, not for you. As the old saying goes: if it’s not obvious what the product is, you’re the product.
That tiny phrase, set to take effect August 8, suddenly felt like handing WeTransfer (and anyone they license or sell that right to) a perpetual, royalty‑free key to a creative vault. And creatives were not having it.
Why creatives saw red
Across the creative world, fear is spreading fast – and not just around WeTransfer. Adobe came under fire in June 2024 after quietly updating its Creative Cloud terms to let “automated and manual methods” access user content, sparking an immediate backlash from photographers, graphic designers, and document creators. Within days, Adobe clarified that it would not train AI on customer work or claim ownership over it, and rolled back the controversial language.
Meanwhile, Meta included clauses in its privacy policy allowing public posts and comments to be used for training its AI models, including Llama and its new Meta AI assistant. EU regulators forced a pause in June 2024, but Meta resumed using public content in the EU and UK after securing assurances, and continues training with US public data.
Even Zoom wasn’t immune. In mid‑2023, terms surfaced that implied meetings and chat transcripts could be used for AI training, prompting widespread concern. Zoom clarified that it would not use audio, video, or chat content for training without explicit consent.
Bottom line: This isn’t a WeTransfer-only problem; it’s the latest flare-up in a sweeping industry trend. Tech companies are increasingly treating user content as AI training fodder, often hiding the permissions to do so in legal fine print. And, time and again, creators are fighting back, pushing for clarity, consent, and real control.
At the core of the latest backlash was a deep sense of betrayal. WeTransfer had long been seen as a friend to creatives, a rare tech company that talked the talk when it came to respecting privacy and supporting artistry. Finding out that a new clause could quietly hand over the rights to use, remix, and monetise work felt less like a policy update and more like the rug being pulled out from under the people who made the platform what it is.
WeTransfer’s rapid backtrack
Within 48 hours of the backlash, WeTransfer came sprinting out with a digital fire extinguisher in hand – a press release. The company was quick to clarify that it has never used user files to train AI models, that it doesn’t currently share or sell content for AI development, and that the clause in question was just a bit of legal scaffolding for some hypothetical content moderation tools that might be built one day. There are no AI experiments running behind the curtain and no shadowy deals with data-hungry third parties – at least, not for the time being.
To help restore trust, the controversial machine learning language was scrubbed from the Terms of Service, swapped out for a simpler, cleaner version that sticks to basics: WeTransfer can use your files to run and improve the service, and that’s it. That means no AI, no derivative works, and no vague future-tech loopholes. They also rolled out a plain-English FAQ to break things down for non-lawyers, walking users through what the update meant and what it didn’t.
But by then, the damage had been done, and users weren’t exactly queuing up to forgive and forget. Many said they were reviewing their subscriptions, looking for alternatives, or at the very least, keeping one wary eye on the next T&C update. Because for all the soothing language and course correction, the incident shook something deeper: the sense that WeTransfer was a safe harbour for creatives. And when that trust wobbles (even briefly), it’s hard to pretend that nothing happened.
What comes next?
WeTransfer’s correction may soothe immediate fears, but it won’t erase the deeper unease about how user‑generated content fuels AI. As long as models require data to train, companies will continue eyeing every upload as a potential resource. To prevent the next backlash, platforms must embrace radical transparency: drafting terms that speak plainly about AI usage, offering opt‑in mechanisms, and even sharing revenue when creators’ work drives value.
Ronald Hans, the Dutch co-founder of WeTransfer, had some choice words about the situation as well. He has re-emerged and announced a new project aimed squarely at creators who feel burned by the recent drama. The project is Boomerang, a new file-sharing service that, in his words, “champions creativity instead of stealing it.” Subtle? Not exactly. In an interview with Dutch newspaper NRC, Hans described the controversial changes to WeTransfer’s terms as “a slap in the face,” cooked up for the benefit of “a handful of people in suits.” Tell us how you really feel, Ronald.
Since stepping away from WeTransfer back in 2018 and watching it get scooped up by Italian tech investor Bending Spoons in 2024, Hans has mostly stayed quiet. But now he says he’s officially “out of retirement” and back to building creator-friendly tools, including a newsletter platform called Rumicat, and yes, a WeTransfer alternative that he says he started working on because, frankly, he saw this whole mess coming.
For creatives, the takeaway is clear: scrutinise your Terms of Service, ask tough questions about AI, and be ready to switch if a service overreaches. And for tech firms, the lesson is equally stark: trust is fragile. Once you trade goodwill for ambiguity, rebuilding it takes far more than a revised clause. In the end, it’s the creators who hold the real power. If they stop uploading, the AI revolution stalls. And right now, they’re watching every line of fine print.
About the author: Dominique Olivier

Dominique Olivier is the founder of human.writer, where she uses her love of storytelling and ideation to help brands solve problems.
She is a weekly columnist in Ghost Mail and collaborates with The Finance Ghost on Ghost Mail Weekender, a Sunday publication designed to help you be more interesting. She now also writes a regular column for Daily Maverick.
Dominique can be reached on LinkedIn here.