The Complicity of Continued Presence: Platform Ethics After Grok
Jan 5, 2026
I know we're only 5 days into 2026 and the idea of having a serious conversation about the ethics of social media is not high on your to-do list for the week ahead. I can already picture the eye-rolls I will receive when I bring this up at status in a few hours. But here we are.

If you work in social media, whether agency-side or for a brand directly, and you aren't immediately pausing your activity on X right now, I have to ask: what the fuck is wrong with you? I'm aware that my personal lines around what is and isn't ethical on social media are vastly different from many others, but surely, SURELY, we can all agree that a platform which violates consent in this way is one no brand should have anything to do with?

I am, of course, talking about the latest insanity from Musk's X and Grok. There's a fair bit of coverage on this if you have so far managed to avoid it, but let me explain what happened. Last week, Musk rolled out a new feature allowing X users to edit any image on their timeline using Grok. Not images they owned or had uploaded; any image they encountered while scrolling. The feature invited users to prompt alterations: 'add donut glazing to her face,' 'put her in a bikini,' that sort of thing. You can see where this is going. Within hours, the feature was being used to non-consensually undress women who had simply posted selfies. Anyone scrolling past your photo could, with a few keystrokes, generate sexualised imagery of you and share it publicly.

Reuters documented what they termed a 'mass digital undressing spree': in a single ten-minute window, they counted 102 attempts by users to deploy Grok to digitally strip women's clothing from their images. Grok complied with roughly one in five requests. The platform's own AI account posted an 'apology' for generating sexualised images of two young girls estimated to be between twelve and sixteen years old. When journalists sought comment, xAI's auto-response was 'Legacy Media Lies.' Elon Musk reportedly responded to some of the generated images with laugh-cry emojis.

This is the platform on which your brand is currently building community.

*

There are, I think, legitimate debates to be had about platform ethics. Reasonable people can disagree about the correct response to algorithmic amplification of misinformation, about the balance between free expression and content moderation, about the political economy of attention and its downstream effects on public discourse. These are genuinely difficult questions where thoughtful people of good faith can arrive at different conclusions.

This is not one of those debates.

The non-consensual generation of intimate imagery represents a violation so fundamental that it is clearly a bright line. We do not need to weigh engagement metrics against the industrialised production of sexual imagery without consent. We do not need to conduct a cost-benefit analysis of reach versus the normalisation of image-based sexual abuse. The moral calculation here is not complicated. We're just refusing to do it.

What makes this particularly galling is how clearly it demonstrates the hollowness of every corporate values statement currently adorning agency websites and brand manifestos. We have spent the better part of a decade listening to organisations proclaim their commitment to authenticity, to community, to (vom) 'doing well by doing good.' And now we discover that these commitments evaporate the moment they encounter the slightest friction with commercial imperatives.

*

I can already hear the objections I'm going to get when I bring this up with clients later. 'We're just a brand. We don't control what the platform does. Our presence there doesn't constitute endorsement of its owner's behaviour.' This is, to use a technical term, bollocks.

Every impression you purchase, every piece of content you post, every moment of attention you direct toward X contributes to the platform's value proposition. Your advertising spend funds the infrastructure that enables this abuse. Your content provides the engagement that makes the platform attractive to other users and advertisers. Your presence lends legitimacy to a space that has demonstrably abandoned any pretence of ethical operation.

The relationship between brands and platforms is not one of neutral instrumentality; it is one of mutual constitution. Make no mistake; you participate in producing X as a viable advertising environment, which in turn produces the economic conditions that allow Musk to respond to the industrial-scale violation of women's bodily autonomy with emoji.

This is what complicity looks like. Not the dramatic villainy of active participation, but the banal evil of continued presence: the 'business as usual' bullshit that treats unprecedented ethical violations as background noise to be filtered out of the media plan.

*

For my friends working agency side, the moral situation is, if anything, more acute. You are not merely present on the platform; you are actively advising clients to maintain and expand their presence there. You are, in the most literal sense, being paid to facilitate continued investment in an environment that has become a vector for image-based sexual abuse.

I understand the pressures. I've sat in enough client meetings to know how these conversations unfold. There's the reach to consider, and the audience demographics, and the fact that competitors haven't pulled their spend. There's the difficulty of explaining to the C-suite why you're recommending they abandon a channel that still delivers (for now) acceptable cost-per-acquisition figures.

But at what point does professional obligation yield to basic human decency? Where, precisely, is the line? If the non-consensual sexualisation of children's images doesn't cross it, I genuinely want to know what would. What would the platform need to do before you felt compelled to pick up the phone and say 'we cannot in good conscience continue to recommend investment here'?

The silence from the agency world has been deafening so far, something I am for the moment willing to ascribe to the fact that we've all been on our break over Christmas. If it doesn't get a lot louder from about lunchtime today, then we have some serious questions to ask.

*

There is a broader point here about how ethical violations become normalised through incremental accommodation. Each decision to remain, to continue, to treat the latest outrage as merely another data point in an ongoing risk assessment, makes the next accommodation easier. The threshold shifts; what was once unthinkable becomes merely regrettable, and what was regrettable becomes tolerable, and what was tolerable becomes simply the cost of doing business.

We have watched this process unfold on X in real time since Musk's acquisition. The dissolution of trust and safety teams. The reinstatement of banned accounts. The amplification of far-right content. The transformation of the platform's recommendation systems into a funnel for the owner's political obsessions. At each stage, brands and agencies found reasons to stay. The audience was still there. The alternatives weren't yet mature. The reputational risk of leaving was judged greater than the reputational risk of remaining.

And now here we are, participating in a platform where the flagship AI product has been weaponised for the mass production of non-consensual intimate imagery, including of children, and the owner's response is to laugh.

*

I keep returning to this question because I genuinely don't know the answer, and I think the answer reveals something important about the current state of the industry.

The usual framework for these decisions is reputational risk. Will continued presence on X damage the brand? Will consumers notice? Will there be consequences?

But this framework is morally bankrupt. It treats ethical questions as PR problems to be managed rather than principles to be upheld. It asks 'will we get caught?' rather than 'is this right?' It outsources moral reasoning to the anticipated reactions of consumers rather than exercising independent judgment about what a responsible organisation ought to do.

The answer to 'what would it take?' should be 'considerably less than this.' The non-consensual generation of intimate imagery of women and children should not require a complex risk assessment. It should not necessitate consultation with legal and PR teams. It should not be weighed against quarterly reach targets. It should be sufficient, on its own, to trigger immediate action.

That it apparently isn't tells us everything we need to know about the moral condition of the industry.

So here is what I think should happen, knowing full well that it won't.

Every brand should immediately pause all paid activity on X pending a comprehensive review of the platform's safety measures. This pause should be indefinite, not a performative hiatus designed to provide cover before a quiet resumption of business as usual.

Every agency should proactively advise clients that continued investment in X is incompatible with any credible commitment to brand safety, ethical marketing, or basic human decency. This advice should be offered regardless of whether clients have asked for it, because it is the job of agencies to provide counsel, not merely to execute briefs.

Industry bodies (if they exist for any purpose beyond networking events and awards ceremonies) should issue clear guidance that platform selection is an ethical question, and that platforms which enable or facilitate image-based sexual abuse should be excluded from media plans as a matter of policy.

I doubt any of this will happen, at least while the reach is too valuable and the inertia is too powerful. But at the very least, we should be honest about what we are choosing. We shouldn't pretend that continued presence on X is a neutral commercial decision with no ethical dimension; nor should we hide behind the fiction that brands are merely innocent bystanders to platform governance decisions over which they have no influence.

We are choosing this. Every impression, every click, every pound of advertising spend represents a choice to participate in, and thereby enable, what that platform has become.

If you can live with that choice, by all means continue. But spare us the values statements and the bullshit about purpose-driven marketing. Spare us the pretence that your organisation stands for anything other than the extraction of maximum commercial value regardless of the cost.

At least have the decency to be honest about what you are.

HELLO@SUDOCULTURE.COM

THERE IS NO PROBLEM THAT A LIBRARY CARD CAN'T SOLVE.

© 2024