Livestreamed atrocities, fake news and propaganda, racism, trolling, as well as privacy-busting marketing and manipulation of democracy: the dark side of social media is getting bigger and starting to smell bad.
Anecdotal evidence from several friends suggests that they have had enough of the nastiness and are signing off social media in droves.
Who can blame them? It used to take some effort to locate icky things. Not so on social media, where it pops up in your face whether you want it or not.
Last week I looked at Twitter in the evening and saw that a follower had re-tweeted a video posted by an anti-Daesh activist. It is a horrific clip that auto-plays and shows prisoners lying on the ground being shot by terrorists, a shockingly brutal propaganda piece that should not be on Twitter.
I'm using present tense because Twitter has not taken down the video depicting mass murder. Instead, Twitter blithely records that over a million people have viewed it.
Maybe some of the people who saw the clip reported it; who knows? Twitter's media people have ignored my questions about that and why such an awful video was being freely shared on the social network and not removed.
Contrast that inaction with how quick and ruthless Twitter is to remove tweets with only the slightest whiff of copyright infringement and you have to ask where their priorities lie.
Meanwhile, an acquaintance has tried to have a fake Facebook profile of the accused Christchurch gunman taken down, with no success so far. This despite whoever is behind the profile threatening violent attacks and posting racist hate speech.
Even worse, the profile has been active since March 15 this year. Whatever Facebook says it is doing to combat extremist and harmful content is obviously not working.
There's been lengthy debate about how to deal with horrible things like the above. The truth is that all-inclusive social networks where anyone can join freely are faulty by design. Instead of acknowledging this, social networks futilely try to clean up their content by asking users to report things to lowly-paid employees who read complaints and review the ghastly material to see if it should be taken down.
As you can imagine, having to see the worst of humanity daily for a living is likely to traumatise and damage people.
That, and legal uncertainty around staffers acting as censors in all but name, mean social networks and content providers are now desperately looking for technical solutions in which artificial intelligence tries to recognise harmful and objectionable content so that people don't have to view it.
The key feature of social networks is that they amplify good contacts and communications as well as the bad ones.
Lots of the people I interact and share things with on Twitter are very funny and witty, thoughtful, clever and knowledgeable.
Social media works at a professional and personal level: I get tips for stories, and am able to stay in contact with friends and acquaintances around the world. Most weeks something unusual happens as part of that interaction, like being able to help save a 600 gigabyte genome research database from deletion.
Which is great, but at the same time it is like an addictive drug that's really hard to quit.
Social networks know they can provide such a powerful and positive experience. The bizarre thing is that they think it's fine to blend that with negative and harmful forces, and when people complain, they shrug their shoulders and mumble something ineffectual about community standards and inauthentic behaviour.
That's not the experience the vast majority of people joined up for and they will leave. You wouldn't stick around in your favourite local watering hole if suddenly you had to share it with violent extremists. Especially if the management ignored your complaints.