Internet extremism - how to combat it


In the wake of Britain's third terrorist attack in three months, British Prime Minister Theresa May is calling on governments to join forces to stop extremism spreading online.

Here's a look at online extremism, what's being done to stop it and what could come next.

Q. What are technology companies doing to stop extremist videos and other terrorist content from spreading across the internet?

A. Internet companies use technology plus human reviewers to flag and remove posts from people who engage in extremist activity or express support for terrorism.

Google says it employs thousands of people to fight abuse on its platforms. Google's YouTube removes any video that has hateful content or incites violence, and its software prevents the video from being reposted. YouTube says it removed 92 million videos in 2015; one per cent were removed for terrorism or hate speech violations.

Facebook, Microsoft, Google and Twitter teamed up late last year to create a shared industry database of unique digital fingerprints for images and videos that are produced by or support extremist organisations. Those fingerprints help the companies identify and remove extremist content. After the attack on Westminster Bridge in London in March, tech companies also agreed to form a joint group to accelerate anti-terrorism efforts.
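The shared-database approach described above works by computing a "fingerprint" (a hash) of each removed image or video and checking new uploads against the pool. The sketch below is purely illustrative: real systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive re-encoding and cropping, whereas a cryptographic hash is used here only to show the lookup pattern; the database contents and function names are invented.

```python
import hashlib

# Fingerprints contributed by participating companies (hypothetical).
shared_database = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint for an uploaded file.
    (Illustrative only: real systems use perceptual hashing, not SHA-256.)"""
    return hashlib.sha256(content).hexdigest()

def flag_known_extremist_content(upload: bytes) -> bool:
    """Return True if the upload matches a fingerprint already in the database."""
    return fingerprint(upload) in shared_database

# A company that removes a video adds its fingerprint for the others to use.
shared_database.add(fingerprint(b"removed-propaganda-video"))
print(flag_known_extremist_content(b"removed-propaganda-video"))  # True
print(flag_known_extremist_content(b"unrelated-cat-video"))       # False
```

The key design point is that companies share only the fingerprints, not the underlying files, so a match can be flagged without redistributing the content itself.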

Twitter says in the last six months of 2016, it suspended a total of 376,890 accounts for violations related to the promotion of extremism. Three-quarters of those were found through Twitter's internal tools; just two per cent were taken down because of government requests, the company says.

Facebook says it alerts law enforcement if it sees a threat of an imminent attack or harm to someone. It also seeks out potential extremist accounts by tracing the "friends" of an account that has been removed for terrorism.
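Tracing the "friends" of a removed account amounts to a short walk outward through the social graph. The sketch below, with an invented friend graph and account names (Facebook's actual review pipeline is not public), shows one plausible shape: a breadth-first search that collects connected accounts within a couple of hops for human review.

```python
from collections import deque

# Hypothetical friend graph; keys map an account to its friends.
friends = {
    "removed_account": ["a", "b"],
    "a": ["removed_account", "c"],
    "b": ["removed_account"],
    "c": ["a"],
}

def accounts_to_review(removed: str, max_hops: int = 2) -> set:
    """Breadth-first walk outward from a removed account, collecting
    accounts within max_hops of it as candidates for human review."""
    seen, flagged = {removed}, set()
    queue = deque([(removed, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the hop limit
        for friend in friends.get(node, []):
            if friend not in seen:
                seen.add(friend)
                flagged.add(friend)
                queue.append((friend, hops + 1))
    return flagged

print(sorted(accounts_to_review("removed_account")))  # ['a', 'b', 'c']
```

Note the result is only a candidate list: in practice such signals would feed human reviewers rather than trigger automatic removal.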

Q. What are technology companies refusing to do when it comes to terrorist content?

A. After the 2015 mass shooting in San Bernardino, California, and again after the Westminster Bridge attack, the US and UK governments sought access to encrypted - or digitally scrambled - communications belonging to the terrorists who carried out the attacks. Apple and WhatsApp refused, although the governments eventually managed to work around the companies and get the information they wanted.

Tech companies say encryption is vital, and that weakening it would affect far more people than just extremists: encryption also protects bank accounts, credit card transactions and all kinds of other information that people want to keep private. But others - including former FBI Director James Comey and Democratic Senator Dianne Feinstein of California - have argued that the inability to access encrypted data is a threat to security. Feinstein has introduced a bill to give the government so-called "back door" access to encrypted data.

Q. Shouldn't tech companies be forced to share encrypted information if it could protect national security?

A. Weakening encryption won't make people safer, says Richard Forno, who directs the graduate cybersecurity program at the University of Maryland, Baltimore County. Terrorists will simply take their communications deeper underground by developing their own cyber channels or even reverting to paper notes sent by couriers, he said.

"It's playing whack-a-mole," he said. "The bad guys are not constrained by the law. That's why they're bad guys."

But Erik Gordon, a professor of law and business at the University of Michigan, says society has sometimes determined that the government can intrude in ways it might not normally, as in times of war. He says laws may eventually be passed requiring companies to share encrypted data if police obtain a warrant from a judge.

"If we get to the point where we say, 'Privacy is not as important as staying alive,' I think there will be some setup which will allow the government to breach privacy," he said.

Q. Is it really the tech companies' job to police the internet and remove content?

A. Tech companies have accepted that this is part of their mission. In a Facebook post earlier this year, CEO Mark Zuckerberg said the company was developing artificial intelligence so its computers can tell the difference between news stories about terrorism and terrorist propaganda. "This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide," Zuckerberg said.

But Gordon says internet companies may not go far enough, since they need users in order to sell ads.

"Think of the hateful stuff that is said. How do you draw the line? And where the line gets drawn determines how much money they make," he said.


Published 5 June 2017 9:26am
Source: AAP

