The world’s first nuclear bomb test took place on 16th July 1945. As a result of that test, and the tests that followed throughout the 1940s and 50s, all life on Earth was changed. Not just in the obvious ways - through the direct effects of the bombs in Hiroshima and Nagasaki, or through the societal impacts of the Cold War and the spectre of Mutually Assured Destruction - but through radioisotopes like Carbon-14 or Cobalt-60.

Before those tests, the natural production and decay of Carbon-14 isotopes was such a predictable phenomenon that we could use measurements of their quantity to date fossils thousands of years old to within a few decades. As a result of those tests, the concentration of Carbon-14 in the atmosphere doubled, and this spike is detectable in every living thing that has grown on Earth since… Similarly, every metal smelted since the 1940s has been just a little bit radioactive, leading to a market in “low background steel” - a precious and rare commodity that is steel untarnished by the tidal wave of nuclear by-products we unleashed on the world.

I was reminded of all this by a recent story about how one of the Snakeoil Industry’s finest used AI to automatically generate thousands of articles of what we can reasonably assume is plausible - but keyword-heavy - bullshit in order to game Google’s algorithms and steal traffic from sites with, you know, actual content.

The motivation for this isn’t immediately obvious beyond “he’s in SEO, ruining the Internet is just what he does”, but it doesn’t really need to be. What’s new is not the objectionable behaviour, it’s the ability of AI to automate it on an unprecedented scale. And I can’t be the only person to have noticed that he’s not the only one doing it - for some months now, the first page or two of every search on Google (and other search engines) has been link after link of clearly AI-generated, well, crap. There is still useful information out there, but the body of information as a whole is now tainted by the radioactive glow of machine-generated nonsense.

I guess what’s interesting about this is not that it’s happened - anyone who doubted mankind’s ability to destroy anything good doesn’t understand the tragedy of the commons - but rather what it means for the future of AI and machine learning. Now I’m no AI luddite - I used StableDiffusion to generate the cover illustration for this post, and will hopefully be posting some more on my adventures and experiments with the current generation of generative AI and LLMs - but one has to wonder: if the corpus for training AI is “everything on the Internet”, and that corpus is now permanently and overwhelmingly flooded with the bullshit that current-generation AIs have generated, what will we be training the next generation of AIs on?

Maybe there will be a market for “low-background-bullshit content” in the future: the rare commodity that is useful information well presented, rather than AI-generated keyword soup. Perhaps the result of the AI revolution will actually be that we come to value - and pay a premium for - original research and well-expressed thought over the plausible regurgitation of old works…

Well, we’re all allowed to hallucinate once in a while, aren’t we?