Looking back on the Web’s evolution, many consequential design decisions by Silicon Valley have contributed to the spread of today’s disinformation, misinformation, “fake news” and foreign influence. Among the most important for the spread of digital falsehoods is the decision by social media platforms to give everyone a voice on every topic, rather than limiting users to narrow fields of expertise as the journalism and academic worlds do. What would happen if social media enforced such gatekeeping rules? Would more control over who speaks on social media eliminate the problem of digital falsehoods?
One of the most consequential decisions of modern social media when it comes to the spread of digital falsehoods and foreign influence was the idea that anyone anywhere could publish on anything. While this gave voice to the formerly voiceless, it also gave rise to a world in which everyone viewed themselves as experts on everything.
Even within academia, a poetry professor who has never opened a physics book sees nothing strange about passing social media judgment on a deeply technical new physics discovery, while an American computer science professor feels compelled to offer esteemed advice via tweetstorm on how to solve intricate local tribal conflicts in Libya, despite being unable to locate Libya on a map or name a single fact about it.
Social media encourages uninformed opinions and reckless disregard for evidence.
Reading online commentary is often extraordinarily painful, even when it comes from prominent figures in the academic and technological communities writing outside their areas of expertise. An eminent professor with every award in their field and a 1,000-page vita can cause considerable damage when commenting on areas beyond their own understanding, unwittingly using the prestige of their institution to spread falsehoods. Their academic standing and institutional affiliation can make those falsehoods take on lives of their own, persisting long after fact checkers and their peers have thoroughly debunked them.
The technology community relies heavily on online communities to share information, yet these platforms are so awash in false information and in uninformed opinions masquerading as accepted fact through sheer volume that it is a miracle Silicon Valley manages to produce anything of use.
The problem is that on social media, everyone is an expert.
In contrast, within its own publication venues the academic world strictly enforces areas of expertise. A poetry professor’s belief that eating avocados will cure cancer or a computer science professor’s opinion that having sugared toast for breakfast prevents obesity may find audiences of millions on social media, but within the scholarly world, gatekeepers guard publication venues, requiring evidence-based research that comports with accepted academic standards and authors with demonstrated experience in the field in which they write.
The social media world has no such gatekeepers shielding it from lunatic ravings. A random citizen with no medical experience of any kind who submits an article to a major medical journal promoting a “one weird trick” for curing all disease, based solely on a vision they had in a dream, will likely find their missive rejected without the courtesy of review. On social media, they can reach the planet with a mouse click.
This ability of anyone anywhere to comment on anything has created a Web that is awash with falsehoods.
In turn, this is one of the reasons that today’s deep learning systems have so many problems, from bias to comically bad answers. They are learning from a Web that is at times little better than a digital garbage heap.
Imagine if the Apollo 11 moon landing happened today. NASA officials, scientific experts and live documentation would compete with skeptics claiming the entire expedition was being faked on a Hollywood soundstage. Every word from NASA would be countered by a hundred words from those claiming it was staged.
Without gatekeepers to ensure high quality information, the moon landing would have devolved into a mess of conspiracy theories and scientific falsehoods, with the loudest voices winning.
This raises the question of whether social media should be limited to experts writing in their areas of expertise.
Should users be required to have a medical degree to post about medical issues? A law degree to post legal commentary?
Imagine if every social media user was required to submit a vita that documented their experience, degrees and areas of expertise, along with school transcripts and other materials verifying each entry. Much like journalists and academics are assigned topical beats they have little flexibility to veer from, so too would each social media user be assigned a set of topics they could write about with authority and knowledge.
Of course, this would merely turn social media back into the gatekept mediums it replaced, but it would eliminate much of the digital falsehood that plagues today’s social platforms.
What about the lived experiences and eyewitness accounts that were a great selling point of the social revolution?
Perhaps social platforms could allow anyone to post an unverified “experience” or “opinion,” while verified experts would carry specialized checkmarks beside their names when posting on their assigned topics, signaling that a post by a Mayo Clinic doctor about a medical condition in their area of expertise should be considered more trustworthy than a post by a non-medical professional. A post by that same doctor on a topic outside their expertise would lack the checkmark, indicating they had strayed from their field and ensuring that institutional affiliations do not lend undue credibility to falsehoods.
Putting this all together, what if the early Web had been designed to enforce the same gatekeepers that managed the flow of information in the offline world? What if to post online required proof of expertise in a given area and every Internet user was assigned a set of topical areas they could not stray beyond? Could such an approach, modeled after journalism and academia, have prevented today’s deluge of digital falsehoods?
In the end, it is clear that whatever the answer to this question, today’s free-for-all in which everyone everywhere is an expert on everything is leading to a toxified digital world filled with hate, horror, falsehoods and foreign influence at every turn.
Could gatekeepers finally restore order to our digital anarchy?