Concluding what was an otherwise upbeat speech about the myriad societal benefits of technology at a policy dinner in Washington, D.C. in 2014, Google’s then executive chairman Eric Schmidt delivered a sobering warning. Schmidt cautioned that as we marvel at the blessings and boundless potential of technology, we must never lose sight of the fact that there will always be people committed to using those same life-improving advancements for deeply nefarious purposes. The technology ecosystem delivered the goods but exposed us to new dangers in the process.
The speech that night came about seven months after Edward Snowden had spilled the beans on the National Security Agency, but Schmidt’s warning was still prescient. It came before the wider world began to prick up its ears to the escalating risk of cyber-malfeasance. It came before the Russians engaged in a concerted effort to manipulate U.S. public opinion ahead of the 2016 election. It came before cable news networks abandoned all pretenses of objectivity, before the deep politicization of social media platforms, and before “Fake News,” content moderation, and Section 230 of the Communications Decency Act all became part of the vernacular.
Widespread access to technology and communications channels has provided ordinary people the tools to generate and propagate endless volumes of news, commentaries, photographs, videos, and audio recordings. Statistics regarding the amount of data we are generating are mind-boggling. Something like 90 percent of the world’s data (from the beginning of time!) was generated in the past two years alone.
Of course, those trends and capabilities have empowered bad actors to abuse technology for purposes of misinforming and manipulating public opinion, and to make accomplices of millions of well-intentioned internet users who unwittingly propagate fake news with the click of a mouse. The misuse and abuse of technology and deepening perceptions of media bias have conspired to sap Americans of trust in what they see, read, and hear—especially on the internet.
Citing the annual Edelman Trust Barometer for 2021, Axios reports: “For the first time ever, fewer than half of all Americans have trust in traditional media [and] trust in social media has hit an all-time low of 27%.” Fabricated stories, doctored photos, and fake videos intended to support false narratives have become rampant problems that breed distrust, foment hatred, subvert democracy, and render our communications infrastructure—a public good that has potential to dramatically improve efficiency, foster civility, and increase living standards—unreliable, if not perilous. Meanwhile, according to a 2020 Pew survey, about three-quarters of U.S. adults say technology companies have a responsibility to prevent the misuse of their platforms to influence the election, but only about a quarter have any confidence these firms will do so. Media and technology have earned the public’s scorn, as Gallup’s polling on confidence in institutions confirms.
Skepticism may be good to the extent it encourages content consumers to be more discerning, but skepticism alone is not enough. Something must be done to curtail the supply of fake news and inauthentic content. Protecting the public from misinformation and disinformation, adopting measures that reduce the incidence of both, and restoring trust in media and social media are responsibilities to be shared by individuals, governments, technology companies, and media outlets. Popular platforms such as Facebook and Twitter have established rules of the road when it comes to posting and spreading “manipulated media.” But more comprehensive solutions that rely less on subjective human determinations will have to play more significant roles, as the accumulation of data—and its potential to be abused—continues to increase with each passing minute.
In that vein, one approach that shows promise is the Content Authenticity Initiative (“CAI”), a partnership among technology company Adobe, the New York Times, Twitter, and other firms and individuals operating in these spaces for the purpose of helping “content consumers make more informed decisions about what to trust.”
Inauthentic content—both inadvertent and deliberately deceptive—is increasing. With the rapid proliferation of digital content and the tools for creating and editing that content, developing more reliable ways to ensure proper attribution and transparency is critical to restoring and maintaining trust.
The CAI seeks to help consumers make more informed decisions about the authenticity of content and about the lineage of that content—who produced it and how; and when, where, why, and by whom it may have been altered. Currently, there are ways for content creators to embed within their work metadata about authorship, but there are no standards for conveying attribution information in a secure, tamper-evident way across media platforms. And that undermines the capacity of publishers and consumers to determine the authenticity of media content.
The CAI aims to solve this problem by developing a system for digital provenance using cryptographic proof — verifiable metadata comprising information about asset creation, authorship, edit actions, capture device details, software used, and other characteristics. This will make it easier to identify manipulated or inauthentic content and will enable content creators and editors to disclose information about who created or changed an asset, what was changed and how it was changed. The ability to provide content attribution for creators, publishers and consumers is essential to fostering trust online.
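The idea of binding tamper-evident provenance metadata to an asset can be illustrated with a short sketch. This is a simplified illustration of the general technique, not the CAI’s actual format or API: the manifest fields, the `EditorApp` tool name, and the use of an HMAC with a shared key are all assumptions for demonstration (a real provenance system would use public-key digital signatures, so anyone could verify without the creator’s secret).

```python
import hashlib
import hmac
import json

# Hypothetical creator signing key. A production system would use
# asymmetric signatures (private key signs, public key verifies).
CREATOR_KEY = b"creator-private-key"

def make_manifest(asset_bytes, author, tool, edits):
    """Build a provenance manifest binding the edit history to the asset's hash."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "author": author,
        "tool": tool,
        "edits": edits,  # e.g. ["crop", "color-correct"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes, manifest):
    """Return True only if both the asset and its manifest are unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())

photo = b"...raw image bytes..."
m = make_manifest(photo, author="J. Photographer",
                  tool="EditorApp 1.0", edits=["crop"])
print(verify_manifest(photo, m))              # original asset verifies: True
print(verify_manifest(photo + b"tamper", m))  # altered asset fails: False
```

The point of the sketch is the property the CAI is after: because the signature covers both the content hash and the edit history, changing either the pixels or the claimed provenance invalidates the manifest, making tampering evident rather than merely detectable by subjective judgment.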
Whether it is the CAI or another open-source collaboration among content producers, publishers, and consumers that ultimately mitigates the problem of “fake news,” the technology that has been a blessing and a curse has a chance to redeem itself and play a central role in moving society over the trust gap.