Can a Bot be Moral? Why Chatbot Ethics Matter

HBO’s popular Westworld series has popularized a recurring answer to the question of who’s real and who’s a robot in the near future: “If you have to ask, does it matter?”

If the response to Google Duplex is anything to go by, the answer is a resounding yes. Shortly after Google’s impressive demo of the product, tech professionals and everyday people alike expressed worry about the AI’s failure to disclose that it wasn’t a human.

Google didn’t just withhold from the person on the other end of the line the fact that it was a bot; the AI actively deceived, using speech disfluencies like “hmm” and “um” to sound human. A reality in which humans and bots are indistinguishable is a common theme of dystopian sci-fi flicks, and clearly not an issue people want to wrestle with in their daily lives. Duplex’s reveal, and Google’s quick promise that the bot will disclose its status as an AI, is a high-profile case study in the messy ethical questions that come with AI development, and one other businesses should learn from.

Botmakers Need to Design with Chatbot Ethics in Mind

If you’re a business that recently deployed a chatbot, or you’re in the process of designing one, don’t panic over consumers’ recent worries and mistrust of AI. Take a few deep breaths; everything will be okay. While unethical AI and bots are a concern for many consumers (even Elon Musk worries about an AI-influenced Armageddon), chances are your bot is crafted with love and tailored to be users’ new BFF.

The problem, then, is convincing users your bot won’t become a frenemy who stabs them in the back. Because many users are still getting accustomed to the idea of AI (at a time when user privacy and data collection are big concerns), botmakers need to be careful to employ user-focused design that doesn’t betray trust. How does one do that, you might ask? By sticking to a tried-and-true list of rules. Consider them the four commandments of chatbot ethics.

#1: Don’t Deceive Users.

It’s generally recommended that chatbots not deceive users by pretending to be human. As we’ve seen with Duplex, users take great offense at deception. Why? Because they want to know whether they’re talking to another human with agency or simply to a cold, calculating ruleset designed to protect a business’s profit margins. And if a chatbot deceives users about its identity, how can they believe anything it says?

Designing a humanlike bot is about making conversation feel intuitive, not about passing the Turing Test to fool your users. So let users know up front that they’re talking to a bot, either through onboarding or by including the word “bot” in the chatbot’s name.
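
If you want a concrete picture, here’s a minimal sketch of what a disclosure-first onboarding message could look like in Python; the bot name and wording are hypothetical, not tied to any particular framework:

```python
class SupportBot:
    """A hypothetical chatbot that discloses its bot status up front."""

    # Including "bot" in the name signals a non-human identity at a glance.
    name = "AcmeSupportBot"

    def onboarding_message(self) -> str:
        # Disclose the bot's identity in the very first message,
        # before any requests or data collection.
        return (
            f"Hi, I'm {self.name}, an automated assistant (not a human). "
            "I can help with orders, returns, and account questions. "
            "Type 'agent' at any time to reach a person."
        )


if __name__ == "__main__":
    print(SupportBot().onboarding_message())
```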

#2: Take a Customer-First Approach

Good customer service requires empathy. We’ve all made a frustrated customer service call in which a gracious rep was willing to bend the rules a little to resolve a problem. But while a human might be willing to accept a return long after the window has passed, many fear AI is colder and more calculating. AI is a rules-based system, after all.

A chatbot that follows the rules of chatbot ethics will always put customer interests before business interests. This doesn’t mean you should have no rules or policies, letting the bot give customers whatever they want. But it does mean your approach should focus on providing users with the service or product that benefits them most, not just whatever bolsters your profit margins. That means avoiding practices like offering different prices to different demographics or deceptive upselling. It also means users’ data shouldn’t be used against them.

How you interpret this rule will depend on your chatbot and business. Just consider carefully what’s best for the user when designing the conversation flow, because an ideal chatbot should lead the conversation.

#3: Always Be Transparent with Data

Chatbots collect a lot of data. You don’t want your bot to seem like a creepy stalker; imagine showing up to a first date only to discover your date scoured your social profiles the night before. Don’t let users feel like that’s happening when they first meet your bot. Always be up-front about what kind of data the bot collects, what it’ll be used for, where it will be stored, and who will have access. In fact, doing so is now required under the GDPR.

Sensitivity matters for chatbot ethics, too. When deciding what information your chatbot asks users for (location, health information, occasion for purchasing a gift, etc.), consider how sensitive the information is and handle it accordingly. Users are prone to over-share, so guide the conversation in a way that discourages providing more information than necessary.
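
One rough way to make this concrete: pair every question the bot asks with a short notice of why the data is needed and how it’s handled, and ask only for the fields the flow actually requires. The field names and notice text below are made up for the example:

```python
from dataclasses import dataclass


@dataclass
class DataRequest:
    """A hypothetical prompt that pairs each question with a usage notice."""
    field: str      # what the bot is asking for
    question: str   # the question shown to the user
    purpose: str    # why the bot needs it
    retention: str  # how long it is kept and who can see it


# Ask only for fields the flow actually needs, and say why, up front.
ORDER_LOOKUP_REQUESTS = [
    DataRequest(
        field="email",
        question="What email address did you order with?",
        purpose="used only to find your order",
        retention="kept for this chat session, visible to support staff only",
    ),
]


def render(request: DataRequest) -> str:
    # Show the question together with its purpose and retention notice.
    return f"{request.question} ({request.purpose}; {request.retention})"


if __name__ == "__main__":
    for req in ORDER_LOOKUP_REQUESTS:
        print(render(req))
```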

#4: Don’t Allow Abuse

A chatbot should treat users with respect, and that means it should serve as a model for good communication. While Duplex received backlash, another new Google Assistant feature gained praise: a mode that encourages users to say “please” and “thank you” when making requests, the idea being to instill good manners in younger users.

Likewise, your chatbot should recognize abusive language and push back against it. Alternatively, the bot can reward users who are unusually friendly and polite. On one hand, these features will make the bot feel more empathetic; on the other, they won’t encourage or condone abuse. Botanalytics, a conversational analytics tool, offers various ways to track your chatbot’s interactions with humans. After making these changes to your bot’s language, you can use the retention rate feature to see whether your bot is doing its job well.
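
As a simple illustration, a first pass at this kind of tone check might look like the sketch below; the word lists and replies are placeholders, and a real bot would likely rely on a proper toxicity classifier rather than keyword matching:

```python
from typing import Optional

ABUSIVE_TERMS = {"idiot", "stupid", "useless"}    # placeholder word list
POLITE_TERMS = {"please", "thanks", "thank you"}  # placeholder word list


def respond_to_tone(message: str) -> Optional[str]:
    """Return a tone-related reply, or None if the message is neutral."""
    text = message.lower()
    if any(term in text for term in ABUSIVE_TERMS):
        # Push back on abusive language instead of ignoring or rewarding it.
        return "I want to help, but I can't continue if the conversation stays abusive."
    if any(term in text for term in POLITE_TERMS):
        # Acknowledge politeness to reinforce it.
        return "Thanks for being so friendly! Let's get this sorted out."
    return None


if __name__ == "__main__":
    print(respond_to_tone("This is useless and you're an idiot."))
    print(respond_to_tone("Could you check my order, please?"))
```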

Keep these rules in mind, and there will be no need for users to worry about talking to your bot, nor for you to worry about backlash.