Notice how those unsavoury posts liked by some long-forgotten friend always seem to float to the top of your curated social media feeds?
Wonder how an incitement to violence can stay on your screen for days?
What about that infuriating conspiracy that keeps getting forced down your throat?
According to an Australian digital security researcher, it's no bug. It's a feature. It's a subliminal mechanism designed to extract maximum revenue from your feed.
Dr Zac Rogers, a researcher with JBC Digital Technology, Security and Governance, says the evidence is mounting that social media monopolies such as Facebook and Google actually need the powerful appeal of hate forums to energise their marketing algorithms.
"We are participating in a system whereby attention markets are being assembled by an algorithm, which fundamentally is optimising for more rage, more extremism, more violence, more hatred," he says.
Facebook VP of global affairs and communication Nick Clegg last week denied the accusation.
"I want to be unambiguous: Facebook does not profit from hate," writes Clegg.
"Billions of people use Facebook and Instagram because they have good experiences — they don't want to see hateful content, our advertisers don't want to see it, and we don't want to see it. There is no incentive for us to do anything but remove it."
But experts testifying before the US Congress have challenged this claim.
They say the self-teaching algorithms at the heart of social media business models have learned that anger keeps users on site longer. It drives more interaction. And that drives advertising.
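The feedback loop the experts describe can be sketched in a few lines of toy code. This is purely illustrative – the class, topics and dwell times below are invented, not any platform's real system – but it shows how a ranker optimising only for time-on-site will, without any malicious intent in the code, push the content that holds attention longest to the top.

```python
# Toy illustration (not any platform's actual code) of a "self-teaching"
# feed ranker: it learns per-topic attention from observed dwell time, so
# whatever keeps users on-site longest rises to the top of the feed.

from collections import defaultdict

class EngagementRanker:
    def __init__(self):
        # learned average seconds of attention each topic has earned
        self.dwell = defaultdict(lambda: 1.0)
        self.count = defaultdict(int)

    def observe(self, topic, seconds_on_site):
        # update a running average for this topic from user behaviour
        self.count[topic] += 1
        n = self.count[topic]
        self.dwell[topic] += (seconds_on_site - self.dwell[topic]) / n

    def rank(self, posts):
        # no ethics term in the objective -- only predicted attention
        return sorted(posts, key=lambda p: self.dwell[p["topic"]], reverse=True)

ranker = EngagementRanker()
for _ in range(100):
    ranker.observe("outrage", 120)   # rage content holds attention
    ranker.observe("good_news", 15)  # positive content is skimmed

feed = ranker.rank([{"topic": "good_news", "id": 1},
                    {"topic": "outrage", "id": 2}])
print([p["topic"] for p in feed])
```

Note that nothing in the code mentions hate or rage: the ranker simply converges on whichever content maximises its one metric.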
"An awareness is emerging that outrage, hatred, extremism, violence, all that stuff … that's a feature of how the internet works. It's not a bug," Dr Rogers says.
"The content you've been shown suggests these dynamics are here in Australia," Dr Rogers says of a series of recent screen grabs taken from a local Facebook group. It calls for killings. It calls for violence. It promotes extreme conspiracy theories.
While a few of the posts have been taken down after complaints, the group itself remains active more than a month later.
"Rage groups always exist, but are now being algorithmically exploited. Inflaming these rage groups propels advertising to market. These are the same internet dynamics that are at work in the United States. We don't have a different internet. We don't have different standards and regulations – yet. So we're not immune from the same community fallout," Dr Rogers says.
Consumers say they want happy news. They say they want fair, balanced views.
Their clicks reveal otherwise.
"Moderate content and positive emotions, unfortunately, do not propel information to the extent that rage does. It's a fact of human nature," Dr Rogers says. "Artificial intelligence algorithms have learnt this. And, in the absence of ethics, refined themselves to exploit this."
These algorithms quickly learned how to 'trigger' consumer attention. Which is why unsettling messages are 'piggybacking' on unwitting commercial advertising to reach our eyeballs.
"Mark Zuckerberg is hiding the fact that he knows that hate, lies, and divisiveness are good for business", digital forensics expert Professor Hany Farid told the US House Committee. "They didn't set out to fuel misinformation and hate and divisiveness, but that's what the algorithms learned."
"What we've been missing about the algorithms is that they are actually able to assemble an attention market – pull like-minds together first – and then propel messages via the traffic that group creates," Dr Rogers says.
The 2017 Cambridge Analytica scandal, which sought to deliver targeted political advertising with laser-like accuracy, was just the tip of the iceberg, he says. "It's all well and good to be able to do micro-targeting, but you need attention first. And the way attention is gathered and exploited is by rage clicking".
Social media services insist on offering algorithmically curated home page content instead of simple timeline-like inboxes. The algorithms are sold as tailoring your feed to suit your individual interests. But it's also being optimised for marketing.
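The difference between the two designs is simple to demonstrate. In this hypothetical example (the posts and engagement scores are invented), the same three posts come out in a different order depending on whether the feed is a plain timeline or an engagement-curated ranking:

```python
# Toy contrast (invented data, not any platform's real feed) between a
# plain chronological timeline and an engagement-curated feed.

posts = [
    {"id": 1, "time": 100, "engagement": 0.2, "text": "holiday photos"},
    {"id": 2, "time": 200, "engagement": 0.9, "text": "inflammatory conspiracy"},
    {"id": 3, "time": 300, "engagement": 0.4, "text": "local news"},
]

def timeline(posts):
    # simple inbox: newest first, no optimisation target at all
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def curated(posts):
    # "tailored" feed: highest predicted engagement first
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["id"] for p in timeline(posts)])
print([p["id"] for p in curated(posts)])
```

On a timeline the inflammatory post sits wherever chronology happens to put it; under engagement curation it is promoted to the top.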
"Facebook's algorithm, for example, is an attention assembly algorithm before it's a targeted algorithm. And the way you assemble attention markets is with rage clicking," Dr Rogers says. "That's why this stuff, this QAnon stuff, all this extremist nonsense is perpetuating more effectively in our society because it's how the algorithms work. Period."
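The "assemble attention first, target second" mechanism Dr Rogers describes can be sketched as a two-step process. The users, topics and grouping below are hypothetical, chosen only to illustrate the shape of the idea: cluster like-minds first, then propel messages through the densest cluster.

```python
# Toy two-step sketch of "assemble the attention market, then target"
# (illustrative only; the users and topics are invented).

from collections import defaultdict

reactions = {  # user -> topics they engage with
    "alice": {"conspiracy"},
    "bob": {"conspiracy"},
    "carol": {"conspiracy"},
    "dave": {"gardening"},
}

# Step 1: assemble the attention market -- pull like-minds together.
groups = defaultdict(set)
for user, topics in reactions.items():
    for topic in topics:
        groups[topic].add(user)

# Step 2: propel messages via the traffic the biggest group creates.
biggest = max(groups, key=lambda t: len(groups[t]))
print(biggest, len(groups[biggest]))
```

In this sketch the largest assembled audience, not the most socially desirable one, is where targeted messages would be pushed.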
Facebook's Nick Clegg argues his corporation has been active in tackling hate groups. "We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time," he wrote.
But the world's biggest attention market is being repeatedly accused of inaction, inconsistencies and delays in enforcing such standards. Its critics accuse its algorithms of having identified pools of rage as offering a channel through which advertising can be accelerated.
Professor Farid was blunt in his testimony to Congress: "The core poison here is the business model. The business model is that when you keep people on the platform, you profit more, and that is fundamentally at odds with our societal and democratic goals."
THE ANGER ECONOMY
Shock jocks. Screaming headlines. Angry voices on TV. The power of rage to pull attention is nothing new. It's been a part of traditional media for centuries.
Clegg argues "platforms like Facebook hold up a mirror to society".
But Dr Rogers says artificial intelligence has bent that mirror into a high-grade digital marketing lens – a lens that intensifies anxiety, fear and anger.
"Everybody in business needs to advertise on the internet, and that means they're participating in this system. Whether they know it or not, and regardless of whether they think it's ethical or not, they're subject to the way the internet works now."
While social media is often accused of not respecting the intellectual property (IP) of news organisations and user-generated content, its curation algorithms are among the most intensely defended IP in the world.
Nobody – not even governments – is allowed to analyse how they work, how they are optimised, or what human behaviours they exploit.
"If we actually saw them, we'd know for certain how hate and polarisation feature so prominently on our newsfeeds," Dr Rogers says.
THE CLICKS JUSTIFY THE MEANS
Why is hate allowed to persist on Australian and New Zealand social media?
Why are users' emotions being exploited and manipulated?
In the 1970s, the public responded with outrage at the suggestion that flickers on their TV screens could be carrying "subliminal" messages. The practice was outlawed.
Now, 'attitude engineering' is a concept widely sold to clients by social media marketers.
"Tech companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility," former Google ethicist Tristan Harris told Congress.
"(This) technology has directly led to the many failures and problems that we are all seeing: fake news, addiction, polarisation, social isolation, declining teen mental health, conspiracy thinking, erosion of trust, breakdown of truth."
Dr Rogers says any attempt to add ethics to the algorithms will hurt profits.
"So Facebook regulating these groups is actually hurting their own business model," he says. "When an executive stands up and says 'I'm sorry about extreme content, we didn't mean to leave that up', it does not square with the reality of the business model. They need these rage groups as they propel content to market much more effectively than anything else."
The need now, he says, is for the public and politicians to take urgent action to stop the damage.
"That's the challenge. Censorship and a crackdown on information seems to be a reflexive move. But it won't work. The underlying problem is how these companies built the internet to exploit and monetise our worst human failings."