Blenderbot 3: Trump fan, anti-Semite, climate change denier

In the movie “Her”, a man falls in love with the voice of his computer’s operating system, until he realizes that this woman, so educated and understanding, is serving thousands of others besides him and has fallen in love with another operating system. With today’s artificial intelligence you cannot yet have conversations as deep as in the movie, but the systems are getting better and better at it. In fact, they are already so good that some people are firmly convinced there is consciousness in the bits and bytes, like the Google employee who was fired after insisting that one of the company’s chat programs had become sentient.

Now another automated chat program, a so-called chatbot named Blenderbot 3, has caused a stir. On Twitter, where words are fired off before the thinking starts, users are upset that the bot expresses anti-Semitic attitudes and denies both Trump’s responsibility for the storming of the Capitol and climate change. They are right, but this is exactly what was to be expected, and the developers expected nothing else.

The developers of Blenderbot are in a difficult position. If you really want to know what happens when you unleash your bot on Americans (it is not available in other countries), you have to actually do it. Problems only become apparent when the software chats with users in large numbers.

So it happened as it had to happen. When software learns from its conversations and from the internet in a country as divided as the United States, it picks up not only useful things but also extreme opinions and stupid rumors that many people would not dare to trumpet to the world themselves.

As if only uneducated children were throwing out nonsense

Meta’s reaction to the expected fiasco is not particularly satisfying, and that is the real problem. The company points out that much has been done to ensure safety: registration is only open to people over the age of 18, as if only uneducated children spouted nonsense. It also notes that the bot may make false or even offensive claims, and that users should not goad the bot into giving inappropriate answers. In fact, one purpose of the bot is to observe how people try to tempt it into sending hate messages, in order to prevent this in the future.

But that certainly won’t stop an enraged citizen or a troll, whether paid or simply convinced, from spreading garbage. What Meta and other tech companies need to do is create transparency: how exactly do they intend to stop their AI from going off the rails? The fact that researchers outside the company are to be given access to the data is a step in the right direction. The public has a right to know about systems with such explosive power.

After all, a great deal of damage has already been done when it comes to dividing society, and social networks have played their part in that. Meta claims that the bot is for research purposes only. That is commendable, especially since the research is meant to find ways of preventing AI from spreading even more untruths and hatred around the world. But when a company like Meta presents an innovation of this magnitude, caution is always warranted. At Meta, profit has ultimately always triumphed over concerns. The problem remains that, for a commercially oriented company, truth and morality are not the top priority.

However, all justified criticism must not overlook one thing: the fact that software like Blenderbot can converse on almost any topic is a huge achievement, one that required an equally tremendous effort. Only someone with enormous resources can pull this off: money, data, and excellent, which means extremely expensive, personnel. That automatically rules out companies that cannot afford it, and this is a problem in itself.
