What to Learn from the Rapid Development of AI and Machine Learning
Recently, Facebook made waves for pulling the plug on AIs that spontaneously developed their own language to negotiate. Based on the headlines this generated, you’d think machine learning had brought us to the brink of a bot apocalypse: “Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future”. “Facebook engineers panic, pull plug on AI after bots develop their own language.”
Of course, no one at Facebook was panicking, and the bots weren’t doing anything creepy. In fact, their communication, while strange to human ears, was actually quite mundane. And it’s not the first time a chatbot has developed in an unexpected way. Microsoft’s Tay, a teen-voiced chatbot aimed at millennials, similarly shocked the world when she began exhibiting racist and homophobic beliefs.
Artificial intelligence and machine learning are developing very quickly, but they are still relatively new technologies in the public consciousness. The average person is only just now becoming accustomed to assistants in their pockets and messaging apps, so it’s understandable that they might distrust an AI that behaves in surprising ways. There are two big takeaways from these situations of AI run amok:
- Botmakers should have a clear sense of responsibility for how an AI or bot reflects the mores of society and its users
- Botmakers and those driving artificial intelligence and machine learning can educate users about the risks (but especially the non-risks) that AI poses (i.e., we’re not all going to die in a robot apocalypse)
The Facebook Bot’s New Language Shows Our Fear of Machine Learning
When Facebook shut down their negotiating chatbots, the press and public assumed it was because the technology was dangerous. People thought the bots could plot against humans in their own mysterious language. While the Facebook bot’s new language—which is basically a scramble of English—might seem creepy on paper, it’s just a shorthand dialect the bots used to fulfill their goal: redistributing a collection of items shared between them. The Facebook bot’s new language doesn’t seem so scary when rendered in such mundane terms.
Facebook’s goal in developing the AI was to have a bot that humans could negotiate with. A negotiating bot that speaks its own language—one that humans don’t know—is an obvious failure to achieve that goal. That’s why Facebook shut the bots down, not because anyone needed to be afraid of them. Rather than plotting against humans, the bots did exactly what they were programmed to do; they just went about it in a way that wasn’t very intuitive to a human user.
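To make that concrete, here’s a minimal, hypothetical Python sketch of the incentive at work. This is not Facebook’s actual training code; every name and number below is illustrative. The point it shows: if the reward counts only the value of the deal and includes no term for staying in readable English, a mutually understood shorthand earns just as much reward as plain speech.

```python
# Hypothetical sketch of the negotiation incentive, not Facebook's real code.

def deal_reward(items_won: dict, my_values: dict) -> int:
    """Reward is simply the value of the items this agent secured in the deal."""
    return sum(my_values[item] * count for item, count in items_won.items())

def readability_penalty(utterance: str) -> float:
    """Stand-in for a language-model term that keeps utterances human-readable.
    If training omits this term, drifted shorthand like
    'i can i i everything else' scores exactly as well as plain English."""
    tokens = utterance.split()
    repeated = len(tokens) - len(set(tokens))  # crude proxy for drifted dialect
    return -1.0 * repeated

# With no readability term, English and shorthand earn identical deal rewards.
items_won = {"books": 2, "hats": 1}
my_values = {"books": 3, "hats": 1}
print(deal_reward(items_won, my_values))                # 7, whatever dialect was spoken
print(readability_penalty("i can i i everything else")) # -2.0, flags the drift
```

The bots weren’t scheming; they were optimizing a reward that never asked them to stay intelligible.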
What’s the takeaway here? First, it’s good to remember that botmakers are committed to developing bots that are useful to humans. Second, while machine learning is developing rapidly, AI is still fairly limited: it can only accomplish the goals it was designed for. The misconception that a suddenly sentient or unscrupulous AI will accomplish anything more than that is the reason Elon Musk, a proponent of AI doomsday scenarios, is often derided as someone who just doesn’t “get it.”
Microsoft’s Tay Chatbot Shows How AI, Machine Learning and Algorithms Reinforce Societal Norms
Microsoft may have been surprised that their Tay chatbot learned racism from internet trolls. But it shouldn’t surprise anyone on the internet—which is fraught with misogyny, racism and homophobia that social networks struggle to scrub away—that an impressionable mind would come to adopt those beliefs from a coordinated campaign of trolls. What seemed like an unexpected quirk is an obvious truth: far from being impartial, AI is susceptible to all of society’s biases.
After retiring the Tay chatbot, Microsoft unveiled a successor named Zo, who they promise won’t adopt her sister’s problematic beliefs. The new bot, according to Microsoft, has “strong checks and balances to protect her from exploitation” by trolls. Now aware of the dangers of an AI that learns from unvetted social media posts, Microsoft took on the responsibility of programming a sort of moral compass into Zo: don’t accept beliefs that are racist, sexist, homophobic, and so on.
Here’s the lesson: any botmaker developing an AI that learns responses from its users must first consider the problematic ways users will speak to the bot. The same goes for the team behind a social media platform: anticipate the abuse users will sling on the platform, then proactively guard against it, as sketched below.
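Here’s one way that guard could look. This is a minimal, hypothetical Python sketch, not Microsoft’s actual safeguard: the keyword blocklist is the crudest possible filter, and the function names and placeholder terms are assumptions for illustration. A production system would use trained toxicity classifiers and human review on top of anything like this.

```python
# Hypothetical sketch: gate user input before the bot learns from it.
# The blocklist, names, and placeholder terms below are illustrative only.

BLOCKLIST = {"slur_example_1", "slur_example_2"}  # stand-ins for real toxic terms

def is_safe_to_learn_from(message: str) -> bool:
    """Return True only if the message contains no blocklisted terms."""
    tokens = set(message.lower().split())
    return not (tokens & BLOCKLIST)

def ingest(message: str, training_corpus: list) -> None:
    """Add a message to the training corpus only after it passes the gate."""
    if is_safe_to_learn_from(message):
        training_corpus.append(message)
    # Unsafe messages are dropped (or routed to human review), never learned.

corpus: list = []
ingest("hello bot, how are you today?", corpus)
ingest("slur_example_1 is just a fact", corpus)  # filtered out
print(corpus)  # ['hello bot, how are you today?']
```

The key design choice is where the gate sits: vetting happens before learning, so the bot never absorbs the abuse in the first place, rather than after it has already started repeating it.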
Basically, AI, Machine Learning and Bots Function the Way Humans Do
Artificial intelligence can often be a sci-fi boogeyman, but it’s important not to fall into the trap of animism. Bots are programmed by humans, to work with humans, and they exhibit the morals and biases of their human creators. Fears of chatbot development run amok, then, can serve as excellent opportunities for society to reexamine its priorities and beliefs. Surprised that an “impartial” AI can be racist? The real issue isn’t a problematic and scary AI; it’s that racist beliefs are commonly held online.
As machine learning and AI develop, we’ll need to pay special attention to what we can learn from bots and what they learn from us. With a better mutual understanding, we can avoid the sensationalism and the doomsday scenarios.
Do you have a bot? Sign up for Botanalytics for free to enjoy our rocking features!