We’ve spent months documenting the bot epidemic on Facebook — the impersonators, the scam accounts, the predatory links, and the eerie flood of AI-generated flattery polluting the platform. Now, our legal writers and former attorneys at Closer to the Edge are preparing something bigger: a potential class action lawsuit.
We believe Meta has failed in its responsibility to protect users from coordinated fraud, emotional manipulation, and algorithmic negligence. And we know we’re not alone.
Were you scammed by a bot pretending to be someone else?
Impersonated on Facebook?
Targeted by fake accounts with real consequences?
If you suffered emotional distress, reputational harm, or financial loss because of Meta’s failure to act — we want to hear your story.
We’re building a legal strategy that centers the real, human impact behind the spam. Your voice could help hold one of the most powerful companies in the world accountable.
You can reach us confidentially by sending us a direct message through Substack.
We’ll never share your story without consent.
This isn’t just about the bots.
It’s about what Meta knew — and what they chose not to do.
Stay tuned. This is just the beginning.
Closer and closer to ditching FB. I no longer comment on public pages because of the bots. My personal posts are friends only, and only friends of friends have the option to request friendship. Those things alone cut down on unwanted engagement. And "do not feed the trolls" is my motto.
But so many friends are not engaging at all. You can see they are checking in, or mindlessly scrolling through.
I spend way more time on Substack now, and repost many articles to my friends on FB. But the current political climate has worn most of them down. Only a few ever acknowledge reading the articles.
It's as if they are either afraid to leave a footprint of their agreement or disagreement, or they doubt all news sources now.
Two examples of bots and harmful content tolerated under the guise of free speech...
On a post about a death in the family, we received two comments linking to fake obituary notices. My mother-in-law alerted others commenting on the post not to click the links because the notices were phishing. I reported each to Meta, and within three days they sent me a message saying they did not remove the comments. I reported again, explaining that Meta had not correctly identified the posts as spam/phishing, and they replied within 24 hours with the same message: the posts did not go against community standards. These two commenters were most likely bots that Meta decided to ignore.
In another example, I reported a post spreading false information: an interview with Musk claiming that droves of undocumented migrants were voting illegally in US elections, which is untrue and unsubstantiated. Meta never responded to my complaint, and they did not remove the post.
One of the main reasons I’m still on Facebook is for established groups that either share helpful information or connect me with my community. It would be great if that wasn’t coupled with all the spammy nonsense.