Fake News and Misinformation in the Technology Era


In the online age, misinformation is a growing problem. Anyone can now publish articles and posts that reach vast audiences worldwide, giving readers the opportunity to consume information from a wide variety of sources, credible or not. “Fake news,” as it is commonly called, is information deliberately misrepresented to sway audiences for political, commercial, or personal gain. It is an ethical hazard that every company should fight proactively, including with AI.

Lack of Regulatory Rules

Nations worldwide are taking steps to combat the spread of misinformation, often in the form of good practices established by technology leaders. Some aim to curb the negative influence of hate speech and political slander during elections, but the issue is much larger than politics: any company can become the target of a coordinated fake-news campaign.

Only a few partial regulations currently exist to fight misinformation.

In October 2018, the world saw the first self-regulatory code, the EU Code of Practice on Disinformation. It requires leading online platforms, social media networks, advertisers, and the advertising industry to voluntarily self-regulate misinformation, commit to transparency in political advertising, proactively close fake accounts, and demonetize purveyors of disinformation. In the US, social media leaders (e.g., Twitter, Google, Facebook) are also proactively trying to regulate themselves to avoid reputational and financial fallout, but without nationwide guidelines.

Social Media Platforms Face International Pressure and Disorder

Globally, the EU has taken a leadership role in data regulation, leveraging GDPR for privacy, the GAIA-X project to create cloud security standards, and the Code of Practice on Disinformation mentioned above.

Since the EU has already begun adopting early standards, there is pressure on the US to follow suit. Most modern tech companies’ business plans scale to a global level, or already operate globally, so early adoption of ethics compliance provides a powerful edge in international development. Companies that choose not to comply with international regulations and guidelines can face severe consequences, from fines to market restrictions and outright bans. This is currently happening to the Chinese application TikTok, which has operated outside the purview of most regulators and is alleged to be in violation of privacy and international security standards.

In the US, as the technology industry continues to evolve and advance, lawmakers are rushing to create regulatory standards. Companies such as Twitter are testing multiple ways to proactively comply with regulation and combat misinformation on their platforms. In 2019, Twitter acquired a London-based AI startup to deploy machine learning algorithms that detect ‘potentially harmful news’ on the platform. In early 2020, Twitter announced that it would flag tweets from politicians and other major public figures with colored labels, with corrections from fact checkers displayed directly beneath, using the deep learning techniques from the acquired startup.

The EU and US are making significant progress, in parallel, in the fight against misinformation. Since fake news can originate in any country, global collaboration is essential, and it could lead to the emergence of an international agency or organization to drive the development of a common framework. Indeed, the biggest challenge will be establishing ethical standards that collectively define what counts as “free speech,” which captures the value of diversity, versus “hate speech,” which creates a level of negativity that benefits no one. AI algorithms are only as good as the data they are fed, and it will require significant collaboration between intelligence communities to effectively stop misinformation from being broadcast.

Any company operating online that is vulnerable to fake news should develop its own protection framework to avoid reputational and financial damage.

The Right Way to Fight Fake Content, for Every Company

Instead of following a wait-and-see strategy, companies should consider a range of approaches, from manual to automatic, to fight misinformation targeting their own firm. Here are a few possible options:

Human Intervention: Teams of fact checkers are gradually being built within companies. These groups scan and study articles or posts from suspected fake pages and flag them for review by identifying telltale language or phrasing. While this can be an effective approach, it requires manpower, often additional cost, and can be quite laborious.

Algorithmic Intervention: Algorithms can be an effective way to combat the spread of fake news, at little or no cost compared to a dedicated workforce. Several types of algorithms can be developed:

Algorithms based on content: 

  • Language analysis
  • Semantic matching (comparison with other known fake content)
  • Image features (scene awareness, facial recognition, image manipulation, computer vision)

Algorithms based on ecosystem: 

  • Sources analysis (linked to other threat actors, influence campaigns, manipulation bots)
  • Credibility signals (author reliability, funding sources, quality score)
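Semantic matching, for instance, can be understood as comparing a new post against a corpus of already-identified fake content. The sketch below illustrates the idea with bag-of-words cosine similarity; a production system would use learned embeddings, and all names and the threshold here are hypothetical:

```python
import math
from collections import Counter

def _vector(text: str) -> Counter:
    # Bag-of-words term counts; a real system would use semantic embeddings.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Cosine of the angle between the two term-count vectors (0.0 to 1.0).
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_known_fakes(post: str, known_fakes: list[str], threshold: float = 0.6) -> list[str]:
    """Return the known fake items that `post` closely resembles."""
    v = _vector(post)
    return [f for f in known_fakes if cosine_similarity(v, _vector(f)) >= threshold]
```

A near-verbatim repost of a known fake story scores close to 1.0 and is returned; unrelated text shares few terms and falls below the threshold.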

To fight misinformation, companies can use AI not to delete fake content, which has probably already circulated, but to detect it early, measure it, and mitigate the risk. Real-time alerts and reports should be employed to identify key threats, measure credibility, and assess potential reputational damage across a platform.
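As an illustration, the ecosystem signals listed above could feed a simple weighted risk score that drives real-time alerting. The signal names, weights, and alert threshold below are invented for the example; a real system would calibrate them against historical incidents:

```python
# Hypothetical ecosystem signals and weights (for illustration only).
SIGNAL_WEIGHTS = {
    "linked_to_threat_actor": 0.4,
    "bot_amplification": 0.3,
    "unreliable_author": 0.2,
    "opaque_funding": 0.1,
}

def credibility_risk(signals: dict[str, bool]) -> float:
    # Weighted sum of the signals that fired: 0.0 = clean, 1.0 = worst case.
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name, False))

def triage(signals: dict[str, bool], alert_threshold: float = 0.5) -> str:
    # Raise a real-time alert when the combined risk crosses the threshold.
    return "alert" if credibility_risk(signals) >= alert_threshold else "monitor"
```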

Hybrid models, in which AI and humans combat fake news together, are an even more powerful solution. Several companies already engage staff in content review while algorithms run in parallel in the background. This currently seems to be the best option for avoiding unpredictable reputational damage and reducing reliance on third parties and social media platforms.
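One common way such a hybrid pipeline is structured: the model's confidence score decides whether an item is flagged automatically, escalated to human fact checkers, or left alone. A minimal sketch with hypothetical thresholds:

```python
def route(model_score: float, auto_flag: float = 0.9, human_review: float = 0.5) -> str:
    """Route a content item based on the model's fake-content confidence score.

    High-confidence items are flagged automatically; uncertain ones are
    escalated to human fact checkers; the rest are published unchanged.
    (Thresholds are illustrative, not recommendations.)
    """
    if model_score >= auto_flag:
        return "flag"
    if model_score >= human_review:
        return "human_review"
    return "publish"
```

The design keeps humans in the loop exactly where the algorithm is least certain, which is where reputational risk from a wrong automated call is highest.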

How Sia Partners Can Help

Sia Partners has experience helping companies reinforce their ethical Policies, Procedures, and Standards, including fake-news mitigation and ESG tracking. Our Centers of Excellence have also developed technical AI solutions to support these initiatives and detect fake content.

Artificial Intelligence: Just as algorithms are part of the problem of fake-news spread, they can also be part of the solution. AI can be programmed to identify certain content and to assist in validating outside sources. Sia’s technology uses NLP (natural language processing) and deep learning to analyze large corpora of text, and to identify and flag potentially fake content for the company to review.
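Sia's actual models are proprietary; purely as an illustration of the language-analysis idea, a crude rule-based pass might count sensational cues before handing candidates to a heavier model. The cue list and threshold below are invented for the example; a deep learning classifier would learn such patterns from labeled data instead:

```python
import re

# Illustrative sensational-language cues (invented for this sketch).
SENSATIONAL_CUES = [
    r"\bshocking\b", r"\byou won'?t believe\b", r"\bsecret\b",
    r"\bexposed\b", r"\bbanned\b", r"\bmiracle\b",
]

def flag_for_review(text: str, min_hits: int = 2) -> bool:
    """Flag text containing several sensational cues for human review."""
    hits = sum(1 for cue in SENSATIONAL_CUES if re.search(cue, text.lower()))
    return hits >= min_hits
```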

Process and Implementation: Sia Partners’ team of consultants and data scientists offers several capabilities to mitigate fake-news risk, including conducting exposure assessments to understand key challenges, developing “ethical by design” processes, defining and implementing ethical standards and reporting, and leveraging online collaboration to effectively manage misinformation.

Change Management: Implementing ethical policies, standards, and technologies is a multi-step effort that requires strong change management capabilities. Our consultants are trained in the tools, methods, and approaches needed to successfully implement change in organizations.

Scalability and Success: Sia Partners has supported several clients in successfully building a culture of integrity, compliance, and ethics. We have helped our clients to build and sustain a global baseline of trust by applying our framework for authentic content.