About That Echo Chamber: Combatting Bots, Propaganda & Fake News

One of the thorniest problems bedeviling our unraveling democracy is the distortion of reality, intentional and unintentional, delivered via the Internet. That distortion is immensely aided by our tendency to live in echo chambers populated by friends who think like we do.

Most of us trust links from friends – a vulnerability exploited by phishing sites and other forms of online manipulation. An increasing number of us “unfriend” contacts who post uncongenial opinions or facts inconsistent with our political prejudices.

This is a real problem.

On the one hand, citizens who occupy different realities cannot have productive conversations or negotiate practical solutions to common problems; on the other hand, censorship of electronic media, in an effort to separate wheat from chaff, is neither wise nor possible.

Can technology save us?

Most of us, whatever our political orientation, recognize the problem. As an IU Professor of Computer Science and Informatics puts it,

If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.

In fact, my research team’s analysis of data from Columbia University’s Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.

As he notes, the Internet has spawned an entire industry of fake news and digital misinformation.

Clickbait sites manufacture hoaxes to make money from ads, while so-called hyperpartisan sites publish and spread rumors and conspiracy theories to influence public opinion….

This industry is bolstered by how easy it is to create social bots, fake accounts controlled by software that look like real people and therefore can have real influence. Research in my lab uncovered many examples of fake grassroots campaigns, also called political astroturfing.

In response, we developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements. Using BotOrNot, our colleagues found that a large portion of online chatter about the 2016 elections was generated by bots.
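To make the idea of bot detection concrete, here is a minimal sketch of the kind of behavioral signals such detectors weigh. This is not the actual BotOrNot method (now known as Botometer), which uses supervised machine learning over a great many features; every account attribute, threshold, and weight below is an illustrative assumption.

```python
# Illustrative bot-scoring heuristic. NOT the real BotOrNot/Botometer
# model; all thresholds and weights here are invented for demonstration.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # how long the account has existed
    posts_per_day: float   # average posting frequency
    followers: int
    following: int
    has_default_avatar: bool

def bot_score(a: Account) -> float:
    """Return a score from 0.0 to 1.0; higher means more bot-like."""
    score = 0.0
    if a.age_days < 30:
        score += 0.25      # very new accounts are more suspect
    if a.posts_per_day > 50:
        score += 0.30      # superhuman posting volume
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.25      # follows many accounts, followed by few
    if a.has_default_avatar:
        score += 0.20      # no profile customization
    return min(score, 1.0)

suspect = Account(age_days=10, posts_per_day=120, followers=15,
                  following=800, has_default_avatar=True)
print(bot_score(suspect))  # all four signals fire -> 1.0
```

Real detectors replace hand-set thresholds like these with classifiers trained on labeled bot and human accounts, which is why tools like BotOrNot can be "not perfect, but accurate enough" to surface coordinated campaigns.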

The real question, as the author readily concedes, is how to combat technology that spreads propaganda, or "fake news." As he says, the first step is to analyze how these sites are operating. Then we can hope that smart people adept in the use of these technologies can devise tools to combat the spread of false and misleading information.

Long-term, however, “fixing” the problem of fake news will require fixing the humans who have a need to believe whatever it is that such “news” is peddling. That fix will necessarily begin with better civic education and news literacy, but it can’t end there.

Ultimately, we have a problem of political psychology. It would seem that we humans have invented tools that have outstripped our ability to properly use them.


[Originally published at SheilaKennedy.net on June 16, 2017]

Sheila Kennedy is a former high school English teacher, former lawyer, former Republican, former Executive Director of Indiana's ACLU, former columnist for the Indianapolis Star, and former young person. She is currently an (increasingly cranky) old person, a Professor of Law and Public Policy at Indiana University-Purdue University Indianapolis, and Director of IUPUI's Center for Civic Literacy. She writes for the Indianapolis Business Journal, PA Times, and the Indiana Word, and blogs at www.sheilakennedy.net. For those who are interested in more detail, links to an abbreviated CV and academic publications can be found on her blog, along with links to her books.
