Why AI Models Turn Out Bad: It's Our Fault
Recently, there was a story about an AI model that learned its behavior from 4chan. Needless to say, the resulting AI was pretty horrible.
Prior to that, there was another story about an AI trained on Reddit. It, too, turned out to be pretty terrible.
Even before that, Microsoft created a bot called Tay which it had trained on Twitter. Unfortunately, it, too, turned out awful.
One wonders why all of these bots are so terrible. After all, each was trained on human communication, harvesting the data and activity from its source platform: the 4chan bot was built by ingesting the 4chan corpus, the Reddit bot the Reddit corpus, and Tay the Twitter corpus. This “corpus” is a database of all the interactions between humans on the platform.
What does this prove? Well, if you listen to the media, it confirms that people, in general, are pretty horrible to each other. However, are they genuinely horrible to each other, or are they only horrible to each other online, in spaces that offer anywhere from partial to complete anonymity? Are online spaces more indicative of our true feelings precisely because we’d have qualms saying these things directly to people’s faces, where we’d have to deal with the reaction – which could range from love to rage to violence?
I have another theory, and it’s all based on attention.
One of the things that we know is that the human animal pays more attention to negative stimuli than positive stimuli. This is instinct; it goes way back to the days we wandered the savannah, constantly looking for threats to ourselves and our families. This is one of the reasons why the news always plays up the negative angle – if we see things as a threat, we pay more attention to them.
Enter the internet. In the early days, it wasn’t too crowded. People could contribute and post things across the internet and be reasonably sure they would be seen. Every new platform – blogs, LinkedIn, podcasts, Facebook, etc. – started out small, easily allowing people to build audiences. But then we hit scale issues; there was simply too much out there, and the platforms had to introduce curation.
At first, you didn’t need to shout to be heard. Then you had to shout to be heard. Now not only do you have to shout to be heard, but you also have to shout loud, long, and in line with what the platforms allow you to say. This led to our current attention economy.
All humans crave attention. When they can’t get it, they get increasingly extreme in their attention-getting efforts until they are heard. So what do humans do?
We communicate with each other online in the most base, horrible way, trying to outdo each other to grab that little sliver of attention. This creates a corpus of communication where people are just being horrible to each other. And we wonder why AIs trained on these corpora turn out so bad – garbage in, garbage out.
In the end, it’s our fault. Instead of filling the internet with attention-craving, harmful communications, we should treat it more like our child – because, like it or not, we are the “parents” of the internet. Thousands of companies use our public communications to train their AIs, and they are all coming up short because we treat each other so poorly online. If we want this trend to change, we need to reintroduce common courtesy online and talk to other humans as if they were standing directly in front of us. I’m not saying that we need to censor ourselves – quite the opposite – we need to treat other humans on the internet the way we wish to be treated.
Once we do this, our AIs might also end up better, trained on a more polite, genteel internet instead of the roiling insult-fest it has become.
One can dream.