In last month’s post, I wrote about how it’s impossible to determine if the news you’re getting is true, and part of the reason is that news profits from getting views. Let’s explore that further.
Let’s get the obvious out of the way. A news website gets paid a fraction of a penny each time someone sees an ad on it. (I had originally planned to explain all the technical details, but it turned out to be wordy and dry, sorta like my sense of humor.) So sometimes a news website, even an honorable one, might run an article (or title) of questionable validity in order to make ends meet for the month. It’s clickbait.
What news website do you know that provides mostly fact and little fiction, or vice versa? News organizations out on either end are usually less popular. It’s not very engaging to read dry facts with no excitement every so often, and it’s not much fun to read lies all the time either. Being deep on either end of the spectrum isn’t as sustainable a business model.
Now that we’ve established that news organizations will create fake news for money, we begin to wonder: is there some sort of barrier or counter to this?
Short answer: no.
Long answer: nooooooooooooooo.
For one, where exactly would you put this barrier? Would you put it on platforms like Facebook, Twitter or Reddit? Well, no, because these websites are platforms, not publishers; they aren’t in the business of dictating which news articles are true and which aren’t.
Mike Rowe wrote a post about this point recently. It’s rather long but it’s a very easy read.
If you read only one thing linked here, make it that post from Mike Rowe. He’s much more articulate than I am.
What if you had a unified body of people that vetted all news articles before they were posted online and discarded the ones that weren’t true?
Well, that would be:
- Biased
- Impossible to manage, given the sheer amount of content published every second
- Easily manipulated
- Pretty much the definition of censorship
Bringing a machine-learning algorithm in as a solution would only address the second issue, and would likely make the first and third worse. Pff. Can you imagine if someone taught the all-encompassing censorship algorithm that nobody is allowed to post anything bad about Joe Biden or Donald Trump?
What it comes down to is this: we have to do our own research on the news we receive. We have to carefully scrutinize each bit of information to determine for ourselves whether it’s true or false, and we can’t rely on a machine or another person to do this for us.
I fully realize that I’m shouting into the wind here. People are already failing at this. People are happier to yell within their echo chambers than to think critically and objectively about the things they read and hear around them.
The future is bleak, and we’re living in it.
There will be a part 3 to this post, because there has to be.