iTnews

Bad actors try bespoke lies to avoid misinformation detection

By Ry Crozier on Mar 7, 2019 12:40PM

NSA, Facebook and Twitter lament weaponisation of the internet.

The weaponisation of the internet is becoming more “handcrafted”, with bad actors intentionally tailoring online content to avoid being detected and shut down by automated systems, an expert panel has said.

At the annual RSA Conference, Facebook’s head of cyber security Nathaniel Gleicher said his company treated the problem of bad actors on social media “fundamentally as a security challenge”.

“Whenever you have a space for public debate where people are having meaningful discussion, you’re going to have bad actors who try to target that debate,” Gleicher said.

“That happens as soon as the debate occurs. Because it’s a security challenge, you know that they’re going to continue trying, they’re going to continue developing new techniques, and they’re going to continue evolving their techniques.

“The way you make progress in a security world is you identify ways to impose more friction on the bad actors and the behaviours that they’re using, without simultaneously imposing friction on the meaningful public discussion. That’s an incredibly hard balance, but it’s also the biggest focus we as a company have right now.”

Gleicher said Facebook used human investigators to identify techniques used by bad actors and then code ways to target those techniques in an automated fashion.

“In medicine, doctors will sometimes inject dye into the bloodstream to see where a wound is,” Gleicher said.

“The bad actors in this space can act a little bit like dye injected into the bloodstream of public debate because they will find and evolve new techniques first.

“When we see those new techniques, we identify the core behaviours they’re using, and then we work to build scaled solutions to make those behaviours more difficult at scale.

“You create this virtuous cycle. There’s always more to be done, but I think if you address this and approach this as a security problem, you can make progress.”

US National Security Agency senior advisor Robert Joyce said that social media platforms were “getting good at understanding … automated amplification techniques” used by bad actors.

However, Joyce warned that it was “handcrafted” rather than automated techniques that now posed some of the biggest problems.

Indeed, Gleicher noted that bad actors in the public discourse had become a lot better at finding ways not to get themselves or their accounts immediately banned.

“The challenge is that the majority of content we see in information operations doesn’t violate our policies, it’s not clearly hate speech and it’s intentionally framed to not fit into that bucket, and it’s not provably false,” he said.

“A lot of this is driven to fit into that grey space.”

Twitter’s vice president of trust and safety Del Harvey said that “content is actually one of the weaker signals that we would have in saying this person is definitely a bad actor.”

All panellists said they looked for other behavioural signals, around identity and the amplification of messages, to weed out bad actors, noting that it typically took time to correlate different pieces of evidence.

Harvey also raised the prospect that constant talk of the presence of bots on social networks had damaged discourse generally.

“Because there has been so much conversation that ‘it’s all the bots’, it is amazing the number of times you’ll see two people get into an argument and one of them decides to end it by saying ‘you’re just a bot’,” she said.

“It is demonstrably not a bot. But there’s this increasing almost exit path that people take from conflict, from disagreement, that ascribes anyone that isn’t aligned with them or anybody who has a differing opinion, they’re like ‘you’re just a bot. In fact you’re a Russian bot and you are here to try and sway my mind on the topic of local football teams’.

“I don’t know why but this is something we genuinely see a lot of. We have bot scope creep where everything is a bot.”

Peter Singer, author of the book ‘LikeWar’, said that one of the challenges for social media platforms today was that they had never been designed for uses that were now playing out.

However, since the power of platforms to disseminate misinformation had been proven, Singer worried about how future platforms could evolve.

“The creators of today’s companies didn’t set out to have this war/politics power,” he said.

“They’re the first generation. What happens in the second generation of this where people realise that they have this kind of power within these platforms?”

Copyright © iTnews.com.au . All rights reserved.
