Meta is up to its old tricks again.

In a recent release, the company responded to a groundswell of complaints from brands about the lack of guarantees over where their messaging appears, introducing a system that lets advertisers choose where their ads are shown.
This new system has a risk hierarchy of three categories to “better manage advertising adjacency”, designed to give brands control over where they are seen and heard.
Part of the solution is an integration with advertising measurement firm Zefr, which describes itself as a ‘brand suitability monitor’. The company claims to help power the age of responsible marketing by enabling transparent, content-level targeting and measurement across complex walled-garden platforms.
A shepherd to guide messages through the heinous black swamps of the internet.
Why that content still exists is a mystery. Meta proudly proclaimed back in December 2021 that it was rolling out a new AI system to help tackle harmful content. Yet, lo and behold, your Facebook Feed is still riddled with it.
That’s if your ads are even showing up at all.
Ad fraud is rife. A 2022 report estimated digital advertising fraud would cost brands $68 billion globally that year, up from $59 billion in 2021. There’s no word on how much of that happens on Facebook, but it is hardly reassuring that a walled garden which built an empire on discerning its users’ every possible motivation has been forced to pledge transparency as an almost annual tradition.
An issue the company has been “tackling” for years.
You say innovation, I say indifference
That’s the problem with Meta and its previous incarnation, Facebook. They have never been good at innovation. It all started when the original idea was… ‘borrowed’ from a Harvard classmate’s dorm room. They’ve been pilfering, copying and buying out ideas ever since.
The conglomerate behind Instagram, WhatsApp and Facebook has a long track record of ripping off features from competitors in a bid to remain relevant. Snapchat has long been the R&D department for Zuck, starting when Snap’s catchy, in-the-moment snapshots were copied and pasted to create Stories in 2016.
TikTok’s intuitive use of video flipped social media on its head when it tapped into users' declining attention spans. Its success inspired Facebook to slip on the Groucho mask and release Reels in 2020.
Stretching the limits of the social media age takes us back to 2015, when tech platform Timehop harnessed our love of nostalgia by resurfacing memories. Facebook didn’t even try to hide its blatant rip-off.
At the time, they claimed the feature was inspired because “we see behaviours from our community and we try to build on top of them.”
Meta’s most recent “innovation” is paid verified ticks – fresh off the back of Twitter owner Elon Musk introducing the same thing.
Of all the things you’re going to copy… At least copy the successes.
To be fair, Zuckerberg has admitted as much in the past. At the House Antitrust Subcommittee hearings in 2020, when asked directly, Mr Zuck begrudgingly conceded that Facebook has copied its competitors.
The fact they’ve brought in an outside resource in Zefr to tackle a systemic, company-wide issue fits with their style.
Zefr promises visibility. The problem is, in a campaign with millions of impressions, how will marketers pinpoint when and where those lines are crossed? Sure, brands can go through the impressions and check – but will they?
A fair element of trust is still required to ensure the content ads appear against aligns with expectations. Which brings the question full circle once again – can you trust Facebook?
The thing is, when you copy something, you don’t truly understand the drivers behind that innovation. Why people respond to it in the way they do. The intimate knowledge which allows you to iterate, expand, enhance - and stamp out the bugs.
Which is why we find ourselves in the constant merry-go-round of Facebook transgressions.
Misinformation wars escalate
The timing is right for these kinds of tools. With the prevalence of AI opening up the floodgates for a tsunami of good, bad and downright ugly content to saturate our every channel, telling the real from the fake is going to get much harder.
Even if we believe the moderation tools will reduce alignment with traditionally harmful content, and that they might help stop ads appearing against sensitive material like images of weapons, sexual innuendo and political discussions, does that go far enough?
Just this month, a report on the social media giant’s COVID-19 policies advised it to provide greater transparency on content removal and to assess the platform’s impact on public health.
This type of misleading content wouldn’t fall under the traditional definition of ‘harmful’ – but it sure is doing a lot of damage. During the height of the pandemic in 2021, Facebook was criticised by the US Government for allowing vaccine misinformation to spread, with the platform’s algorithm accused of boosting false information over accurate content.
Lots of people read it. Many followed it. Too many died from it.
So how do brands protect themselves from ads appearing alongside conspiracy theories, misinformation and downright lies?
Move fast, break things
Anyone who has ever had to report an issue to Meta or tried to access customer service knows what a Sisyphean task it can be. Or perpetually spinning in a circle, like Ixion. Either way, the process will feel like a Greek tragedy.
While Facebook’s old slogan might have been catchy for a tech bro generation, the approach is far more like Move Slow When Fixing Things.
This week Australia’s broadcast industry group condemned Facebook’s parent company Meta over its handling of scam ads featuring TV stars including David Koch and Karl Stefanovic, saying the company’s response time is inadequate and damages broadcaster reputations.
Not that it’s a new issue, either – the ACCC took action on this very topic over a year ago.
So these new innovations are being delivered by a company that has never had an original idea, that papers over the cracks, and that consistently drags its feet on providing any real solutions.
The Facebook saga goes on.