Generative AI's Dark Side: Amplifying Misinformation and Dumbing Down Society

Reliance on Generative AI via Large Language Models (LLMs) is destined to make society even dumber.

LLMs generate responses based on patterns and information in the data they were trained on: i.e., the entire detritus of the internet, which includes a multitude of unreliable sources. This indiscriminate training, combined with a complete lack of verification, bakes in the incorrect assumption that every written word is a fact, and it is hugely accelerating the propagation of misconceptions and false beliefs.

There appears to be no scientific rigour applied at all before they make confident statements of ‘fact’. They even provide references, which on the surface would seem to add weight and make their claims appear valid; but follow those references and, more often than not, you find baseless claims by some unqualified author who simply made them up in order to sell you something.

Example from ChatGPT-4:

The two references provided are from https://beebuzzhive.com/facts-about-beeswax and https://superbee.me/beeswax - both of which make the same unfounded and patently untrue claim that beeswax candles release negative ions. Neither website cites any scientific study to support the claim.

A widely held belief is not a fact.

Accepting a statement as true, especially from an online source, should involve evaluating the credibility and reliability of that source. These LLMs appear to apply no such credibility evaluation. Sources such as SuperBee and BeeBuzzHive focus specifically on topics related to bees, beekeeping, and bee products. To an LLM, this specialisation might suggest a higher likelihood of accurate and detailed information in their area of expertise, but it is a very poor assumption - especially with online sources, and especially when someone is trying to sell you a related product.

LLMs make the mistake of assuming that because multiple independent sources provide similar information, the information is more credible. It isn’t: there are innumerable common misconceptions repeated across the web, and Generative AI, when assumed to be an all-knowing authority, simply reinforces and amplifies them. For example: “According to a series of papers published by Krueger et al. between 1957 and 1963, negative ions help the airways in the lungs to clear. However, in 1971, Andersen’s book Mucociliary Function in Trachea Exposed to Ionized and Non-Ionized Air proved these claims to be false. Not only did he carefully identify the flaws in the earlier studies, he performed a large experimental study under very controlled conditions that demonstrated that there is no relationship between ion concentration/polarity and the performance of the airways of the lungs. Despite the earlier papers being debunked, they are still referenced to this day in a show of stunning confirmation bias.” - https://www.rs20.net/w/2013/12/do-beeswax-candles-produce-negative-ionisation-ions/

Nor does there appear to be any contextual cross-checking of claims against established, known facts.

Another issue compounding the problem: if no widespread or authoritative source contradicts the information in an LLM’s training data, the very absence of contradiction adds to its perceived credibility.

The scientific evidence

The specific claim that beeswax candles emit a significant number of negative ions that purify the air is not supported by any dedicated scientific research. The claims routinely made by beeswax candle sellers are not backed by empirical studies or the scientific literature.

Conservation of Charge: A core principle in physics is the conservation of charge: the creation of a negative charge must be balanced by the creation of an equal positive charge somewhere. If beeswax candles were releasing negative ions into the air, then either an equivalent number of positive ions would also be released (neutralising any effect of the negative ions), or the candle itself would accumulate an ever-increasing positive charge. Both scenarios contradict the claim that beeswax candles act as a net source of negative ionisation in the surrounding environment.
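
To make the bookkeeping explicit, the argument can be written out as a simple charge ledger (an illustrative sketch of my own; Q_candle, Q_air, and n_- are labels invented for this argument, not quantities from any cited study):

```latex
% Illustrative charge ledger for the candle-plus-room-air system.
% Q_total is fixed by conservation of charge; n_-(t) is the cumulative number of
% net negative elementary charges e supposedly emitted into the air as negative ions.
\[
  Q_{\text{candle}}(t) + Q_{\text{air}}(t) = Q_{\text{total}} = \text{const.}
\]
\[
  Q_{\text{air}}(t) = -\,n_{-}(t)\,e < 0
  \quad\Longrightarrow\quad
  Q_{\text{candle}}(t) = Q_{\text{total}} + n_{-}(t)\,e > 0 \ \text{(and growing)}.
\]
```

A candle that steadily charged itself positive as it burned would be trivially easy to detect with an electrometer, which underlines how implausible the claim is.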

Temperature of Candle Flames: Significant thermal ionisation typically occurs in flames that reach temperatures over 1500°C, whereas a candle flame generally burns at around 1100°C - insufficient for significant ionisation. Additionally, any ions the flame does produce would likely recombine shortly after formation, diminishing any potential impact on air quality.
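
To get a feel for how steeply thermal ionisation falls with temperature, here is a back-of-the-envelope Saha-equation calculation (my own illustration, not from any cited source; the potassium-like trace species, its 4.34 eV ionisation energy, and the number density are assumed values chosen purely for the sketch):

```python
# Back-of-the-envelope Saha-equation sketch: how the thermally ionised fraction
# of an assumed trace species compares at ~1100 C (candle flame) vs ~1500 C.

import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837e-31     # electron mass, kg
H   = 6.62607015e-34    # Planck constant, J*s
EV  = 1.602176634e-19   # 1 eV in joules

def ionisation_fraction(temp_k, e_ion_ev=4.34, n_total=1e20, g_ratio=1.0):
    """Fraction of atoms thermally ionised, from the Saha equation.

    Solves x^2/(1-x) = S(T)/n_total, where S(T) is the Saha right-hand side,
    assuming quasi-neutrality (n_e = n_ion) and a single ionisation stage.
    """
    saha = (2.0 * g_ratio
            * (2.0 * math.pi * M_E * K_B * temp_k / H**2) ** 1.5
            * math.exp(-e_ion_ev * EV / (K_B * temp_k)))
    a = saha / n_total
    # positive root of x^2 + a*x - a = 0
    return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0

for celsius in (1100, 1500):
    t = celsius + 273.15
    print(f"{celsius} C ({t:.0f} K): ionisation fraction ~ {ionisation_fraction(t):.2e}")
```

Even under these generous assumptions, the exponential term dominates: the ionised fraction at candle-flame temperature is tiny, and far below what the same species would reach at 1500°C.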

Studies on Ionisation and Air Quality: The claim that negative ions significantly improve air quality by binding to pollutants and allergens lacks robust scientific support. Studies dating back to the 1930s indicate that the attachment coefficients for negative and positive ions attaching to aerosol particles are almost identical. The result, in typical environmental conditions, is a balanced distribution of negatively, positively, and neutrally charged aerosol particles. At high aerosol concentrations, neutral particles may even predominate, as there are not enough ions to charge all the particles.
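
A toy Monte Carlo model makes the consequence concrete (my own illustration, not from the 1930s studies; the capture count and the near-unity coefficient ratio are assumed values): when particles capture positive and negative ions at nearly equal rates, the population ends up with roughly as many positive as negative particles and essentially no net charge.

```python
# Toy charging model: aerosol particles repeatedly capture positive or negative
# ions with nearly equal attachment coefficients; the population's net charge
# stays near zero, so there is no "negative ionisation" of the aerosol.

import random
from collections import Counter

random.seed(42)
BETA_RATIO = 1.0     # assumed beta(-)/beta(+) attachment ratio (~1 per the studies)
N_PARTICLES = 100_000
N_CAPTURES = 20      # ion captures per particle over the simulated interval

p_negative = BETA_RATIO / (1.0 + BETA_RATIO)
charges = []
for _ in range(N_PARTICLES):
    q = 0
    for _ in range(N_CAPTURES):
        q += -1 if random.random() < p_negative else +1
    charges.append(q)

counts = Counter("neg" if q < 0 else "pos" if q > 0 else "neutral" for q in charges)
mean_q = sum(charges) / len(charges)
print(counts, f"mean charge per particle: {mean_q:+.3f}e")
```

The model ignores charge-dependent attachment and is deliberately crude, but it shows the point: equal attachment coefficients produce a symmetric charge distribution, not a negatively ionised atmosphere.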

After presenting it with these scientific facts, I asked ChatGPT to reconsider its statement of ‘fact’:

But Generative AI doesn’t LEARN

I would argue that referring to LLMs as ‘Artificial Intelligence’ is misleading and patently dishonest. Intelligence is formally defined as ‘the ability to think, to learn from experience, to solve problems, and to adapt to new situations’. Today’s LLMs are trained on a fixed dataset and learn nothing from their interactions or from the submission of new information. Even though I successfully taught ChatGPT, within our conversation, that beeswax candles do not emit negative ions, it will confidently make the very same false claim to the next person who asks.

It is a huge flaw in the product that it has no capability to remember past interactions with users or to update its knowledge based on them - especially when a user corrects false statements with scientifically supported evidence to the contrary.
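
A minimal sketch makes the statelessness plain, assuming an OpenAI-style chat-completions client (the model name, prompts, and canned first reply here are illustrative, not a transcript of my exchange):

```python
# Each request is stateless: the model sees only the messages sent in that
# request, and its weights are frozen, so a correction made in one session
# does not exist in the next.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
QUESTION = "Do beeswax candles emit negative ions that purify the air?"

# Session 1: the user corrects the model within the conversation.
session_1 = [
    {"role": "user", "content": QUESTION},
    {"role": "assistant", "content": "Yes, beeswax candles release negative ions..."},
    {"role": "user", "content": "That claim has been debunked; please reconsider."},
]
reply_1 = client.chat.completions.create(model="gpt-4", messages=session_1)
print(reply_1.choices[0].message.content)  # the model typically concedes here

# Session 2: a brand-new conversation. Nothing from session 1 is carried over,
# so the model answers from its frozen training data and repeats the claim.
session_2 = [{"role": "user", "content": QUESTION}]
reply_2 = client.chat.completions.create(model="gpt-4", messages=session_2)
print(reply_2.choices[0].message.content)
```

The correction lives only in one request’s message list; the weights that produced the false claim remain unchanged for every subsequent user.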

Trust

The notion that AI reinforces common misconceptions, instead of debunking them with scientific evidence, is a serious concern for those like me who advocate for the responsible development and deployment of AI technologies.

Similarly, unchecked hallucinations only serve to drive a general mistrust in AI products. (See: Reducing Hallucinations in GPT-4 Responses - A Comprehensive Guide for Professionals)

When providing information to the public, it is crucial to apply rigorous standards and to distinguish between what is a widely held belief and what is an established fact supported by empirical evidence. That distinction is fundamental, and not exclusive to scientific discourse: a belief, no matter how widespread, does not become a fact without empirical evidence and scientific validation. This principle is essential to accuracy and reliability in information dissemination.

Simply stating that “users of AI technology should critically assess the information provided by these systems and consult authoritative sources for confirmation, particularly in areas where accuracy is crucial” is just not good enough.

I’ll leave you with this shining example of how a Generative AI will lie like a cheap watch:


About the Author

Billy Lindon is an accomplished expert in digital marketing and technology, with a particular focus on the impact of generative AI on businesses and society. He brings over 30 years of experience in utilising internet technologies to create competitive advantages. His proven skills in eCommerce, Search Engine Optimisation, and Conversion Rate Optimisation across diverse platforms, including Shopify, YouTube, Amazon, WordPress, and Squarespace, are well-recognised in the industry. Billy's extensive tenure at Nokia, where he served as a technology manager and as the global head of product marketing in the software and services division, underscores his deep knowledge and expertise. His keen insights into digital marketing strategies, combined with practical experience in technology management, establish him as an authoritative voice on the transformative role of AI in the realms of digital marketing and technology.