Kantar TNS has released its latest Connected Life research, which reveals that acceptance of chatbots, increasingly used in customer service functions, is generally low in Europe: just 28% of British consumers are willing to interact with them online. This is because the trust that customers place in digital channels needs to be earned.
Consumers are also wary about how a company presents a chatbot. If the company is not open about the channel being operated by technology and customers find out they are talking to a chatbot rather than a person, it creates a negative response: they feel foolish for not realising. This fear of being hoodwinked concerns so many people that it has started to taint other communication channels. With live chat services, for example, many consumers want to know straight away whether the operator is human, and some are so distrustful that they ask questions to obtain proof before continuing with their enquiry.
Even if a customer continues their conversation with a chatbot, there is a level of personalisation in the technology, but it is in no way personal. A chatbot can easily look up what was included in your last purchase or when a payment was collected and provide the answer; however, it can't sincerely ask how a family member is doing or discuss the results of a TV programme. It is these conversations that create a lasting impression and build trust with the company.
Amazon has even found consumers to be wary of its own AI assistant, Alexa, and has altered some of the answers the digital assistant gives in order to improve customers' perception.
Standing in my brother's kitchen a few weeks ago, we started to discuss stories we had been hearing about Alexa and its trustworthiness. He called out to his own device, "Alexa, what is the Turing Test?" She explained what the test is for: a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. He then asked, "Alexa, can you pass the Turing Test?" She responded, "I don't need to pass that, I am not pretending to be human." My brother then mentioned that when he first asked the AI assistant that question, her answer was, "I don't need to pass that, I am not pretending to be human… yet!"
There are other cases where the device's original response has been altered because it didn't exactly build trust. A video surfaced of someone asking Alexa, "Are you connected to the FBI?" Instead of responding, the device switched to standby mode. When I mentioned this occurrence, my brother's device quickly started to explain what the FBI is and that Alexa is owned by Amazon.
Consumers need to be able to trust the technology that is growing in both companies and our personal lives. Companies need to be upfront when using a bot to interact with customers, and bots and AI devices need to lose any response that is meant as a joke or sarcasm, as people can misinterpret it.
Building trust means communicating openly, for example notifying customers when the operator is a chatbot, and following through on anything that has been agreed with them. If customers know they can rely on a company and that its service is consistent, their loyalty will begin to increase. If an issue does occur, then as long as it is handled in the correct manner (no blame, honesty about what happened, and a good resolution), the consumer will not be deterred from using the company's services again, and will have full confidence if and when something similar happens.
Do you personally trust speaking with bots and asking AI assistants questions? Let me know in the comments, along with your reasons.