Digital Security Risks to Watch in 2019

From the Cambridge Analytica scandal to the deadly mobs mobilized by WhatsApp, 2018 has been another bleak year for digital privacy and security, full of troubling headlines about digital services being used against citizens. And though leaders are beginning to grasp the depth of the problems with the world’s most popular internet platforms, technology continues to evolve, and malicious actors keep finding new and more evasive ways to exploit these platforms while escaping enforcement. That cycle shows no signs of slowing in 2019.

As the scandals plaguing internet companies mount, policymakers seem more inclined to take stronger action. Notable regulations took effect in Europe in 2018, including Germany’s NetzDG anti-hate speech law and the European Union’s General Data Protection Regulation. In 2019, we can hope to have enough information about their effects to judge whether similar measures should be implemented more widely, or whether changes are needed to make those policies fairer and more responsive.

Until then, here are some of the digital security and disinformation risks to watch in the year ahead.

Disinformation Fuelled by Facebook Groups

By now, it shouldn’t be a surprise that foreign actors and domestic trolls alike use social media to get their messages out. Two reports prepared for the US Senate last week — “The Disinformation Report” and “The IRA and Political Polarization in the United States” — were the latest to detail how Russian operatives used fake personas and pages on Facebook, Twitter, Instagram, YouTube and other platforms to influence American voters on different points of the political spectrum. But, as social media companies adjust their practices to crack down on these activities, bad actors are using more evasive means to keep spreading their messages.

One method for doing this takes advantage of Facebook’s “groups” feature, which allows people to share content that can be seen only by other group members. Content posted to groups is not surfaced in Facebook searches the way posts to public pages are, but it can still have significant reach: some popular American hyper-partisan groups have memberships in the tens of thousands. Because they attract like-minded members, groups can serve as echo chambers, where false information that conforms to the group’s worldview is unlikely to be challenged. They can also let members develop disinformation memes and campaigns that are later deployed to the wider population.

For example, in the weeks leading up to the US midterm elections in November, rumours that philanthropist George Soros was funding the migrant caravan headed toward the US border were rampant on social media. But digital researcher Jonathan Albright found posts about the “Soros-funded” caravan in closed Facebook groups as far back as March. “So the sources of misinformation and origins of conspiracy seeding efforts on Facebook are becoming invisible to the public — meaning anyone working outside of Facebook,” he wrote. “Yet, the American public is still left with the consequences of the platform’s uses and its effects.” Groups have also been used to launch coordinated trolling or harassment campaigns against ideological rivals.

Other platforms, such as Reddit, 4chan and the private chat service Discord, have hosted this kind of activity for years, but Facebook brings it to a broader, less internet-savvy audience. And Facebook has pledged to make groups more prominent in its “news feed” feature as a way to improve user engagement, meaning there is a serious risk that group-based disinformation campaigns will only escalate.

Increasingly Convincing Deepfakes

The 2016 American presidential election highlighted the dangers of false news reports and disinformation. Now technologists are sounding the alarm about new technology that can be used to fake videos, by making someone appear to be saying or doing something they are not. In one well-known example from earlier this year, BuzzFeed put the words of comedian Jordan Peele into former President Barack Obama’s mouth.

The technology, known as a deepfake, uses artificial intelligence (AI) software to merge one moving, speaking face with another. Machine learning analyzes how the first face looks and moves and maps those movements onto a second face and audio feed, which may be saying something completely different. This kind of manipulation was previously available only to large production studios, but AI has made it broadly accessible to anyone with a sufficiently powerful laptop. And as the technique advances, learning from ever larger amounts of data, the resulting videos will only become more convincing.
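
To make the mechanism concrete, here is a minimal, hypothetical Python sketch (using PyTorch) of the shared-encoder, dual-decoder autoencoder design popularized by early open-source face-swapping tools; all names and layer sizes are invented for illustration, and real systems add face detection, alignment and far larger convolutional networks.

```python
# Toy illustration of the face-swap autoencoder idea: one encoder learns
# pose and expression features common to both faces, and each identity
# gets its own decoder that re-renders those features as that person.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # a flattened 64x64 RGB face crop (illustrative size)

class FaceSwapAE(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(        # shared between identities
            nn.Linear(IMG, 1024), nn.ReLU(),
            nn.Linear(1024, latent), nn.ReLU(),
        )
        self.decoder_a = nn.Sequential(      # renders person A's appearance
            nn.Linear(latent, 1024), nn.ReLU(),
            nn.Linear(1024, IMG), nn.Sigmoid(),
        )
        self.decoder_b = nn.Sequential(      # renders person B's appearance
            nn.Linear(latent, 1024), nn.ReLU(),
            nn.Linear(1024, IMG), nn.Sigmoid(),
        )

    def forward(self, x, identity="a"):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

# Training teaches each decoder to reconstruct its own person's face.
# At swap time, a frame of person A goes through person B's decoder,
# re-rendering A's pose and expression with B's appearance.
model = FaceSwapAE()
face_a = torch.rand(1, IMG)            # stand-in for a real face crop
swapped = model(face_a, identity="b")
print(swapped.shape)                   # torch.Size([1, 12288])
```

Run frame by frame and blended back into the source footage, this is the core of the forgery; the quality of the result scales with training data and compute, which is why the videos keep improving.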

If disinformation and hoaxes have already been able to take hold using words and a few misleading still images, it is easy to see how forged video evidence could cause havoc. A video of a political leader making offensive comments could influence an election, or in more volatile contexts, spark violence.

The existence of this technology alone could also affect political discourse, much as the term “fake news” is now used to discredit unfavourable news coverage. In an article for the MIT Technology Review, Will Knight suggested that deepfake technology could contribute to a spike in scepticism: “Just as we are now accustomed to questioning whether a photograph might have been Photoshopped, AI-generated fakes could make us more suspicious about events we see shared online. And this could contribute to the further erosion of rational political debate.”

Researchers are developing methods to automatically detect deepfakes, but the AI used to identify faked video will likely always be in a race with the AI used to create it. Another technical approach is to give original videos a digital “watermark” by embedding metadata that records when and where they were created. But this depends on the widespread adoption of consistent technical standards, which is unlikely.

As researchers Robert Chesney and Danielle K. Citron wrote in an article for the Council on Foreign Relations: “To have a broad effect, digital provenance solutions would need to be built into all the devices people use to create content, and traditional and social media would need to incorporate those solutions into their screening and filtering systems.”
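
The cryptographic core of such a digital provenance scheme is simple to sketch. In the hypothetical Python example below, an HMAC over the video’s hash and its capture metadata stands in for the device-bound public-key signatures a real standard would require; the key, field names and sample values are all invented.

```python
# Sketch of capture-time provenance: sign the video's hash together with
# when and where it was recorded, so any later edit breaks verification.
import hashlib
import hmac
import json

DEVICE_KEY = b"secret-provisioned-into-the-camera"  # hypothetical key

def sign_capture(video_bytes: bytes, when: str, where: str) -> dict:
    """Create a provenance record binding content to its capture context."""
    record = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "captured_at": when,
        "location": where,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(video_bytes: bytes, record: dict) -> bool:
    """Recompute the signature; edits to video or metadata invalidate it."""
    body = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(video_bytes).hexdigest() != body.get("sha256"):
        return False  # the footage no longer matches the signed hash
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

video = b"...raw video bytes..."  # stand-in for real footage
rec = sign_capture(video, "2018-12-20T10:00:00Z", "Ottawa, Canada")
print(verify_capture(video, rec))         # True: untouched
print(verify_capture(video + b"x", rec))  # False: content was altered
```

As the quote above suggests, the hard part is not this arithmetic but deployment: the record only helps if capture devices generate it and platforms verify it by default.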

In the meantime, the next best options are to raise public awareness of this technology as part of media literacy efforts and to train digital gatekeepers at social and traditional media organizations to identify telltale signs of manipulated video, such as inconsistent shadows or unnaturally blinking eyes. The Wall Street Journal’s initiative to train its journalists in deepfake detection is a model worth watching.
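
As a rough illustration of the kind of check such training could automate, the hypothetical Python sketch below uses OpenCV’s stock Haar cascades to estimate how often a face appears without detectable open eyes. Early deepfakes were reported to blink unnaturally rarely; this crude heuristic is far noisier than the learned detectors researchers actually use, and the input file name is invented.

```python
# Crude blink-rate check: count frames where a face is visible but the
# open-eye detector finds nothing, suggesting closed eyes (a blink).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_rate(video_path: str) -> float:
    """Fraction of face-bearing frames with no detectable open eyes."""
    cap = cv2.VideoCapture(video_path)
    face_frames = closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        for (x, y, w, h) in faces[:1]:         # first detected face only
            face_frames += 1
            roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) == 0:
                closed_frames += 1
    cap.release()
    return closed_frames / face_frames if face_frames else 0.0

# A rate near zero over a long clip can flag unnatural blinking, though
# glasses, lighting and head pose all confound this simple heuristic.
print(blink_rate("clip.mp4"))  # hypothetical input file
```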

The Vulnerability of Personal Email Accounts

Canada’s federal election will almost certainly face online security challenges in the lead-up to the October 2019 vote, from cyber attacks and intrusions to online influence operations. One aspect that hasn’t received much attention is the threat to the personal email accounts and devices of politicians, candidates and their staff members. Many politicians and staffers who have official government accounts continue to use their personal accounts, and for candidates and their campaign teams, especially those with fewer resources, a free email address from a provider such as Google or Yahoo may be the only option. These accounts can be more attractive targets for hackers because their security settings are configured by individual users and may be less robust than those of government email systems. In the United States, foreign state actors have recently attempted to break into the Google accounts of several senators and Senate aides.

Scott Jones, head of the newly launched Canadian Centre for Cyber Security (CCCS), told a Canadian Senate committee hearing last month that the CCCS is looking at ways to advise candidates on cybersecurity. But the quality of that advice matters: security trainer Maciej Ceglowski found that US midterm election campaigns received little to no practical support in securing their accounts, just vague and unhelpful tips. Canada has the opportunity to learn from these examples ahead of its election.

Cyberespionage from China

In 2018, the world confronted growing evidence that China is ramping up its cyberespionage efforts. According to an indictment filed by the US Justice Department in late December, hackers working with China’s main intelligence agency stole trade secrets and corporate information from companies and government agencies in 12 countries, including Canada. This followed recent reports blaming Chinese hackers for the breach of the Marriott hotel chain and the theft of European diplomatic cables. Security officials and experts say Chinese spying has been escalating for the past year and a half, aimed at building China’s economic and technological power by stealing other countries’ commercial secrets.

Three years ago, the US and China signed a groundbreaking agreement to stop corporate cyberespionage. The 2015 deal was hailed as a milestone in the creation of global cyber norms, formalizing the concept that countries could undertake cyberespionage for national security reasons but not for commercial gain. The cyber governance community was further encouraged when the number of Chinese attacks against US companies dropped sharply after the deal was signed. But the activity described in the new US indictment makes it clear that China has been flouting the agreement.

China’s increased spying has been a discouraging sign for the fragile landscape of cybersecurity norms. But several countries stood alongside the US this week in condemning China’s actions, which may suggest that the global community will continue to defend these norms. The year ahead will give us a better idea of what sanctions other countries are prepared to levy against China in response to its malicious activities, and what effect – if any – they will have in curbing those activities.

 
