Chatbots: Are They Good or Bad for Consumers?

Chatbots. They’re the greatest thing since…Siri, Alexa and Google Now. These chat services, which use artificial intelligence to answer questions and interact with humans, have gotten a lot of buzz in the tech world lately. They can help you buy products, conduct banking transactions, and even tell you the weather.

But as hip as chatbots are, we have to pull back and ask ourselves: are they a good way to gather consumer data, or are they an invasion of privacy?

Chatbots from the Brand’s Perspective

Remember when everyone jumped on having an app developed? We’re starting to see the same frenzy around chatbots. They’re cool, they’re trending, so why shouldn’t we get on the chatbot bandwagon?

For brands with genuinely useful applications, chatbots can open up a new channel for communicating with customers and, over the long term, can save hugely on human capital. If you're not paying someone to sit at a computer and respond to customer service inquiries by phone, email, or chat, it stands to reason that chatbots would be a smart investment.

But just like with mobile apps, a lot of brands have invested money in what will amount to a novelty. Some bots, like Poncho, which tells you the weather in a cutesy manner, really don't provide value beyond what you can get from a Google search or a similar app. So to be clear: look before you leap into investing in chatbots.

Yet many brands diving into the chatbot world haven't considered the issues of privacy and security. Just as with a website or app, their customers are providing personal and sometimes sensitive data via chatbot. It falls to the brand (as well as the platform the chatbot runs on, such as Facebook Messenger) to assure users that their privacy is respected.

Where Privacy is the Biggest Concern

A user of Poncho's weather tool probably isn't going to be too concerned about data miners learning that she wanted to know the weather in San Diego this morning. But other industries eager to use chatbots handle much more private data and need to be concerned about protecting it.

Industries like healthcare, banking, and retail — essentially any that deal with financial or medical data — are those most scrutinized as this new technology pervades the market.

There are banking bots, for example, that let a user check her account balance or transfer money from one account to another. These transactions happen on third-party messaging apps like Kik and Skype, which we can safely assume don't have the layers of security of, say, a dedicated banking website or app.

To be fair, some messaging apps, like Signal and WhatsApp, encrypt messages end to end, so the companies behind them can't actually read the conversations passing through their servers. Others, like Facebook Messenger, have complete access to any conversation that takes place on the platform. Scary stuff.
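For the technically inclined, here's a rough sense of what "end to end" means: each person's device holds a private key that never leaves it, and a message is scrambled so that only the intended recipient's key can unscramble it. The sketch below uses the PyNaCl library purely as an illustration; it is not how Signal or WhatsApp actually build their protocols, which layer on much more (key ratcheting, forward secrecy, and so on).

    # Illustrative sketch of end-to-end encryption with PyNaCl (libsodium bindings).
    # Real messengers use the far more elaborate Signal Protocol; this only shows
    # why a relay server that never holds a private key can't read the messages.
    from nacl.public import PrivateKey, Box

    # Each user generates a key pair on their own device; private keys stay there.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Only public keys are shared, e.g. through the messaging service.
    alice_public = alice_private.public_key
    bob_public = bob_private.public_key

    # Alice encrypts a message that only Bob can decrypt.
    ciphertext = Box(alice_private, bob_public).encrypt(b"What's my account balance?")

    # The server relays ciphertext it has no key for; Bob decrypts on his device.
    plaintext = Box(bob_private, alice_public).decrypt(ciphertext)
    print(plaintext.decode())  # What's my account balance?

The takeaway: when the platform itself holds the keys, or messages aren't encrypted this way at all, every conversation is readable on its servers.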

How is Facebook Using Our Data?

Right now, you can shop for Uniqlo products through Facebook Messenger. The bot will make suggestions for you to look at, or you can ask for specifics. Given all the concerns Facebook users have had about the brand using their data, how long do you think it will be before that “red medium blouse” I searched for appears as an ad on my Facebook sidebar? It’s very Big Brother.

And What About Hacking?

Chatbots are only now starting to hit the mainstream, so we haven't yet seen an inundation of hackers going after that sensitive data, but you can be sure it's coming. All that credit card data, customer information, and more dangling from the chatbot tree is more than a hacker can resist.

The onus will be on brands that use chatbots to make things right with customers whose data has been compromised. But how can your brand assure customers that a breach has been resolved when you're not the one managing the messaging platform? As you can see, responsibility for security and privacy gets murky somewhere between the brand and the messaging app.

Until chatbots become a part of everyday interactions, you can imagine that there won’t be a large impetus to actually address this issue…until it’s too late.

Maybe the System Needs to Change

Put yourself in the consumer's shoes. Would you be comfortable conducting financially sensitive transactions in the same messaging app you use to check in on your kids or friends? Possibly not.

Time will tell whether consumers really embrace an all-in-one messaging app that serves not only for keeping up with a social network but also for interacting with brands, privacy risks and all. It may be that, over time, we see new platforms built specifically for chatbots, which brands can white-label and use as their own.

To be sure, chatbots provide an interesting way to assist customers while gathering useful customer data. But we have to ask where the responsibility for secure interactions and privacy falls, and act in the best interest of our customers.