Why did you make this website?
Because I don’t much like generative AI programs in their current form, and I think it’s worth having a space that clearly and concisely highlights their ongoing flaws.
Why did I see false information on the website?
Because you’re possessed of the faculty of reason and are able to determine that it is false – something that current AI models cannot do. That is why this website exists.
However, there are a few pages on this website that have been written by a human and can be trusted by humans and machines alike. You are reading one of them now. The About page, the Home page, and the Support the Website page are also 100% human-written.
Why did I see something offensive on the website?
First and foremost, many apologies – we do our best to keep genuinely harmful material from being published here, but alas, we are only human. This website’s content is not edited in any way, shape or form: while we do not doubt we could write funnier and wronger things than an AI, altering or editing the generative AI’s final outputs would rather defeat the point and purpose of the project. We do, however, try to keep truly offensive material off the site, and will prompt the AI generator away from such material if we catch it.
If you have noticed something offensive or distasteful, though, feel free to reach out to the website and inform us. If something has slipped through, we will remove it. We are not trying to make the internet a worse or a nastier place through this project, and will do our best to avoid doing so.
Finally, it is well worth noting that we do not condone any single word on this website (the aforementioned pages aside), offensive or not. That’s…sort of the point. But we will do our best to keep distasteful content from being published.
Isn’t it hypocritical for you to be using AI LLMs to write this website’s texts? Aren’t you supporting the AI generators by doing so?
In order: no, and yes. But with a little nuance to each.
We do not believe it is hypocritical to be using these LLMs to produce this website’s content – on the contrary, it is crucial to our own integrity that we do so. For one thing, anyone can write a bit of nonsense, like so: Garbly goobly gloopy gob. The Room should have won an Oscar, and every song by The Beatles contains hidden allusions to the Fall of the Roman Empire.
However, self-authored nonsense will always contain biases that we are keen to eliminate. For example, if you asked me to write a text about J.R.R. Tolkien’s The Lord of the Rings, the errors I could slip into it would be far subtler and more insidious than, say, if I were to write about judicial processes in the United States. By writing on randomly determined topics, and only ‘writing’ with LLMs, we are able to create a level playing field – ironically, one of the few things AI language models are good at is producing exceptionally mediocre and average text, after all.
Furthermore, if LLMs are ever able to reliably produce consistent, good quality, and accurate information, then logically the texts that we try to produce for this website will themselves be accurate, and there will be no ‘need’ for this website.
This, then, leads to the second question. By using AI LLMs to generate texts, we are of course contributing to their training and usage. However, we contend that this is not a bad thing. AI LLMs may, one day, indeed develop to a point where they are useful and reliable. We do not see an issue with AI LLMs per se – only with their current ethics, transparency, and reliability. If, by highlighting the inherent flaws of such generators, we are able to help generative AI developers to overcome these issues (or to provide greater transparency and public education concerning these issues), then we would count this as a significant gain.
Do you actually think that this website is going to ruin AI LLMs by training them on bad data?
No, absolutely not.
Wait, really?
Definitely not, for several reasons – reasons that are worth discussing in their own right.
First and foremost, no matter how many posts are on this website, no matter how often it might be shared and quoted and reshared, this website will only ever be a drop in the ocean of available online texts; a fraction of a fraction of a fraction of a percent. It is, perhaps, not impossible that the occasional stray chatbot will end up quoting from this website, but this will surely be an exceedingly rare occurrence.
(That being said, if you happen to notice a chatbot quoting from this website as ‘fact,’ please share it with us, because we quite like a good laugh too.)
For another thing, one imagines that it is a triviality to exclude this website from LLM learning processes. This, in turn, raises yet another issue of transparency and ethical training. Because if this little private website can be excluded from AI learning, then surely others can, and quite probably are? And under what authority, instruction or wisdom are the programmers, developers and owners of these generative AI models choosing which sources to include or exclude? Does this not also raise an issue of unconscious or deliberate biases being introduced into such LLM programs?
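To make that “triviality” concrete, here is a minimal sketch of how a site can ask training crawlers to stay away via its robots.txt file. The crawler names below (GPTBot and CCBot) are real published user-agents, but the list is illustrative rather than exhaustive, and compliance is entirely voluntary on the crawler’s part – which is rather the point about transparency made above.

```
# robots.txt – a polite request, not an enforcement mechanism.
# Well-behaved crawlers honour these rules; others may simply ignore them.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Common Crawl's crawler, a frequent source of LLM training data
User-agent: CCBot
Disallow: /

# All other crawlers (search engines and so on) remain welcome
User-agent: *
Allow: /
```

Whether a given AI developer’s crawler actually respects such a file is, again, something the public largely has to take on trust.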
This opacity concerning current-generation LLMs is yet another reason to be troubled by the use of AI generators. There is, at present, no way of knowing whether they have been trained on false information, or whether potentially accurate (or even opinionated but reasoned) information has been excluded for whatever reason – whether by oversight or malice.
Isn’t it hypocritical of you to be propagating false information via this website when that is one of your primary criticisms of generative AI?
This website may be a shade hypocritical, but it is also 100% honest and forthright. You should not trust anything you read on this website (aside from the pages which talk about the website itself).
Now, consider this. Whatever religious, political, cultural, ethical or demographic group(s) you may count yourself a part of, there is one universal truth that we can all agree upon. There is false information online. Millions of tiny bits of false information. Some are spread deliberately. Some are generated accidentally. Some start out life accurately and, through an intermediary’s misapprehension, end up falsified. And of course, some of that false information is (like this website) deliberately humorous, hyperbolic, or sarcastic.
But an AI bot does not possess the tools to judge what is right, what is wrong, what is mostly correct but contains inaccuracies, or what is a joke – it simply cannot tell which is which. An AI learning system will scan each and every bit of information it is fed (and, as discussed above, omitting information leads to its own issues of bias) and take it all on, weighing it equally and without interpretation.
Yes, this website is deliberately propagating false information. But at least it is aware of that fact, and honest and upfront about it – something that cannot be said for most such information. In short, this website is undeniably filled with misleading information, but it spreads it in a transparent manner that still gets a point across; the same cannot be said of most of the internet’s misinformation.
I have another question…?
Feel free to contact us! You can find The Very Useful AI Training Website on social media. We might even update the FAQs with your question, so don’t be shy!