Crushing the Internet Liars

Frankly, this isn’t the post that I had originally intended to write. I had a nearly completed blog draft spinning away happily on a disk, a draft that presented a rather sedate, scholarly, and somewhat introspective discussion of how Internet-based communications evolved to reach the crisis point we now see regarding misinformation, filter bubbles, and so-called echo chambers in search and social media.

I just trashed that draft. Bye!

Numerous times over the years I’ve tried the scholarly approach in various postings regarding the double-edged sword of Internet personalization systems — capable of bringing significant benefits to users, but also carrying significant and growing risks.

Well, given where we stand today after the 2016 presidential election, it appears that I might just as well have been doing almost anything else rather than bothering to write that stuff. Toenail trimming would likely have been a more fruitful use of my time.

So now — today — we must deal with this situation while various parties are hell-bent on turning the Internet into a massive, lying propaganda machine to subvert not only the democratic process, but our very sense of reality itself.

Much of this can be blamed on the concept of “false equivalency” — which runs rampant on cable news, on mainstream Internet news sites, and throughout social media, from Facebook (which is taking the brunt of the criticism now — and rightly so) to other social media ecosystems.

Fundamentally, this idea holds that even if there is widespread agreement that a particular concept is fact, you are somehow required to give “equal time” to wacko opposing views.

This is why you see so much garbage prominently surfaced from nutcases like Alex Jones — who believes the U.S. government blew up the World Trade Center buildings — or Donald Trump and his insane birther attacks on Obama, which Trump used to jump-start his presidential campaign. It doesn’t take more than half a brain to know that such statements are hogwash.

To be sure, it’s difficult to really know whether such perpetually lying creatures actually believe what they’re saying — or are simply saying outrageous things as leverage for publicity. In the final analysis though, it doesn’t much matter what their motives really are, since the damage done publicly is largely equivalent either way.

The same can be said for the wide variety of fake postings and fake news sites that increasingly pollute the Net. Do they believe what they say, or are they simply churning out lies on a twisted “the end justifies the means” basis? Or are they just sick individuals promoting hate (often racism and antisemitism) and chaos? No doubt all of the above apply somewhere across the rapidly growing range of offenders, some of whom are domestic, and some of whom are obviously operating under the orders of foreign leaders such as Russia’s Putin.

Facebook, Twitter, and other social media posts are continually promulgating outright lies about individuals or situations. Via social media personalization and associated posting “surfacing” systems, these lies can reach enormous audiences in a matter of minutes, and even push such completely false materials to the top of otherwise legitimate search engine results.

And once that damage is done, it’s almost impossible to repair. You can virtually never get as many people to see the follow-ups that expose the lying posts as saw the original lies in the first place.

Facebook’s Mark Zuckerberg is publicly denying that Facebook has a significant role in the promotion of lies. He denies that Facebook’s algorithms for controlling which postings users see create echo chambers where users see only what they already believe, causing lies and distortions to spread ever more widely without truth having a chance to penetrate those chambers. But Facebook’s own research tells a very different story, because Facebook insists that exactly those kinds of controlling effects occur to the benefit of Facebook’s advertisers.

Yet this certainly isn’t just a Facebook problem. It covers the gamut of social media and search.

And the status quo can no longer be tolerated.

So where do we go from here?

Censorship is not a solution, of course. Even the looniest of lies, assuming the associated postings are not actually violating laws, should not be banned from visibility.

But there is a world of difference between a lying post merely existing and the active, widespread promotion of those lies by search and social media.

That is, the mere fact that a lying post is being viewed by many users is no excuse for firms’ algorithms to promote that post to featured or otherwise highly visible status, creating a false equivalency of legitimacy by presenting such lies in close proximity to actual facts.

This problem becomes particularly insidious when combined with personalization filter bubbles, because the true facts are prevented from penetrating users’ hermetically sealed social media worlds that have been filled with false postings.

And it gets worse. Mainstream media in a 24/7 news cycle is hungry for news, and all too often now the lies that germinate in those filter bubbles are picked up by conventional media and mainstream news sites as if they were established facts. And given the twin realities of reduced budgets and the pressure to beat other venues to the punch, such lies are frequently broadcast by those sites without any significant prior fact-checking at all.

So little by little, our sense of what is actually real — the differences between truth and lies — becomes distorted and diluted.

Again, censorship is not the answer.

My view is that more information — not less information — is the path toward reducing the severity of these problems.

Outright lies must not continue to be given the same untarnished prominence as truth in search results and in widely seen social media postings.

There are multiple ways to achieve this result.

Lying sites in search results can be visibly and prominently tagged as such in those results, be downranked, or both. Similar principles can apply to widely shared social media posts that currently are featured and promoted by social media sites primarily by virtue of the number of persons already viewing them. Because — let’s face it — people love to view trash. Lots of users viewing and sharing a post does not make it any less of a lie.
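
To make that concrete, here is a minimal sketch (in Python) of how such a ranking pipeline might behave. Every name, weight, and threshold in it is a hypothetical assumption for illustration only, not a description of how any actual search or social media system works. The point is simply that flagged items remain visible; they are labeled and stop rising to the top on raw engagement alone.

    # A minimal, purely illustrative sketch of a ranking pipeline that labels
    # and downranks items judged false instead of ranking purely on raw
    # engagement. All names, weights, and thresholds are hypothetical
    # assumptions, not a description of any existing system.

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        title: str
        engagement: float          # normalized views/shares, 0.0 to 1.0
        false_score: float = 0.0   # 0.0 = no evidence of falsehood, 1.0 = thoroughly debunked
        labels: list = field(default_factory=list)
        score: float = 0.0         # final display-ranking score

    FALSE_THRESHOLD = 0.7   # hypothetical cutoff for applying a "disputed" label
    DOWNRANK_FACTOR = 0.2   # hypothetical penalty applied to flagged items

    def rank(items):
        """Order items for display: engagement still matters, but items judged
        false are labeled and pushed down rather than hidden outright."""
        for item in items:
            if item.false_score >= FALSE_THRESHOLD:
                item.labels.append("Disputed by independent fact-checkers")
                item.score = item.engagement * DOWNRANK_FACTOR
            else:
                item.score = item.engagement
        return sorted(items, key=lambda i: i.score, reverse=True)

    if __name__ == "__main__":
        feed = [
            Item("Legitimate news report", engagement=0.6),
            Item("Viral fabricated story", engagement=0.9, false_score=0.95),
        ]
        for item in rank(feed):
            tag = f" [{', '.join(item.labels)}]" if item.labels else ""
            print(f"{item.score:.2f}  {item.title}{tag}")

Note that nothing is removed in this scheme: the lie remains reachable, it simply loses the unearned prominence that raw popularity would otherwise confer.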

As always, the devil is in the details.

This will be an enormously complex undertaking, involving technology, policy, public relations, and the law. I won’t even begin to delve into the details of all this here, but I believe that with sufficient effort — effort that we must now put forth — this is a doable concept.

Already, whenever such concepts are brought up, you quickly tend to hear the refrain: “Who are you to say what’s a fact and what’s a lie?”

To which I reply: “To hell with false equivalences!”

Man landed on the moon. Obama was born in Hawaii. Terrorists destroyed the World Trade Center with jet aircraft. Hillary Clinton never said she was going to abolish the Second Amendment. Donald Trump did say that he supported the war in Iraq. Denzel Washington did not say that he supported Trump. On and on and on.

There is a virtually endless array of stated facts that reasonable people will agree are correct. And if the nutcases want to promote their own twisted views on these subjects, that’s also fine — but those postings should be clearly labeled for what they are, not featured and promoted. As the saying goes, they’re free to have their own opinions, but not their own facts.

Obviously, this leaves an enormous range of genuinely disputed issues where the facts are not necessarily clear, often where only opinion and/or philosophy really apply. That’s fine too. They’re out of scope for these discussions or efforts.

But the outright Internet liars must be crushed. They shouldn’t be censored, but they must no longer be permitted to distort and contaminate reality by being treated on an equal footing with truth by major search and social media firms.

We built the Internet juggernaut. Now it’s our job to fix it where it’s broken.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!


3 thoughts on “Crushing the Internet Liars”

  1. One of the contributing factors was the reduction in the number (and breadth) of political parties. We used to have four main ones in Canada, from left to right the NDP, the Liberals, the Conservatives, and the Reform. None was super narrow, all had strong views, and considering the Reform view and the Liberal view to be the opposites was silly. Everyone knew the opposites were NDP and Reform, and there were at least two other credible views in between.

    Extremists on the right (a certain local Mayor in particular) tried to paint all their opposition as “dippers” (NDP), but everyone knew there were Conservatives and Liberals on council, and they were super easy to tell apart. All you had to do was listen to them.

    Net result? We had bubbles, but they were bigger, they leaked, and they often overlapped like Venn diagrams. Much healthier than now!

  2. As you note, the devil will be in the details. There are two ways to mark content as a lie: (a) by having an authority do it, or (b) by crowdsourcing. Each has its drawbacks. In the first case, do I really want Mark Zuckerberg or Julian Assange determining truth on my behalf? In the second case, it is easy for a determined group to game a crowdsourcing system. Certainly you could adopt a FiveThirtyEight-type aggregation solution, but we know how well THAT worked out. Are there other possibilities that I’m missing?

    1. Those are possible approaches — there are numerous others. I think the key is that a “one size fits all” approach wouldn’t work very well. This is all going to need to be very carefully tailored to different classes of the problem.
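
    As a purely illustrative sketch (every name, weight, and threshold below is a hypothetical assumption, not a proposal for any particular system), the two signals could also be combined rather than relying on either alone, with crowd reports weighted by reporter reputation to blunt coordinated gaming, and with opinion or genuinely disputed material excluded from scoring entirely:

        # Illustrative only: combine independent fact-check verdicts with
        # reputation-weighted crowd flags; opinions are never scored.

        def falsehood_score(factcheck_verdicts, crowd_flags, content_class):
            """
            factcheck_verdicts: floats in [0, 1] from independent fact-checking
                                organizations (1.0 = judged false).
            crowd_flags:        (flagged, reporter_reputation) pairs; reputation
                                weighting blunts coordinated brigading.
            content_class:      e.g. "factual_claim" vs. "opinion".
            """
            if content_class != "factual_claim":
                return 0.0  # opinions and genuinely disputed questions stay out of scope

            authority = (sum(factcheck_verdicts) / len(factcheck_verdicts)
                         if factcheck_verdicts else 0.0)

            total_rep = sum(rep for _, rep in crowd_flags)
            crowd = (sum(rep for flagged, rep in crowd_flags if flagged) / total_rep
                     if total_rep > 0 else 0.0)

            # Weight independent fact-checks more heavily than the crowd signal.
            return 0.7 * authority + 0.3 * crowd

        # Two fact-checkers judge a claim false; low-reputation accounts try to
        # vote it back to respectability, but carry little weight.
        score = falsehood_score([1.0, 0.9],
                                [(False, 0.1), (False, 0.1), (True, 0.9)],
                                "factual_claim")
        print(round(score, 2))  # 0.91 -> label and downrank, don't remove

    The thresholds at which such a score would trigger labeling or downranking would then differ by class of content, which is exactly where the careful tailoring comes in.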
