Pegasystems study highlights the need for greater empathy in artificial intelligence systems
Consumers lack trust in artificial intelligence (AI) and don’t understand the extent to which it can make their interactions with businesses better and more efficient, according to new research from Pegasystems, the software company empowering digital transformation at the world’s leading enterprises.
The study, which was conducted by research firm Savanta and unveiled at PegaWorld in Las Vegas, surveyed 5,000 consumers around the world on their views on AI, morality, ethical behavior, and empathy.
Despite AI delivering the types of customized, relevant experiences people demand, many consumers still aren’t sold on the benefits. With many businesses turning to AI to improve the customer experience, it’s important for organizations to understand their customers’ perceptions, concerns, and preferences. Key findings of the study included:
• Consumers are cynical about the companies they do business with: Sixty-eight percent of respondents said that organizations have an obligation to do what is morally right for the customer, beyond what is legally required. Despite this, 65 percent of respondents don’t trust that companies have their best interests at heart, raising significant questions about how much trust they have in the technology businesses use to interact with them. In a world that purports to be customer centric, consumers do not believe businesses actually care about them or show enough empathy for their individual situations.
• There are serious trust issues with AI: Less than half (40 percent) of respondents agreed that AI has the potential to improve the customer service of businesses they interact with, while less than one third (30 percent) felt comfortable with businesses using AI to interact with them. Just nine percent said they were ‘very comfortable’ with the idea. At the same time, one third of all respondents said they were concerned about machines taking their jobs, with more than one quarter (27 percent) also citing the ‘rise of the robots and enslavement of humanity’ as a concern.
• Many believe that AI is unable to make unbiased decisions: Over half (53 percent) of respondents said it’s possible for AI to show bias in the way it makes decisions. The same proportion (53 percent) also felt that AI will always make decisions based on the biases of the person who created its initial instructions, regardless of how much time has passed.
• People still prefer the human touch: Seventy percent of respondents still prefer to speak to a human rather than to an AI system or a chatbot when dealing with customer service, and 69 percent of respondents agree they would be more inclined to tell the truth to a human than to an AI system. And when it comes to making life-and-death decisions, an overwhelming 86 percent of people said they trust humans more than AI.
• Most believe that AI lacks morality and empathy: Only 12 percent of consumers agreed that AI can tell the difference between good and evil, while over half (56 percent) of customers don’t believe it is possible to develop machines that behave morally. Just 12 percent believe they have ever interacted with a machine that has shown empathy.
One of the critical ways organizations can increase customer trust and satisfaction is to use all the tools at their disposal and demonstrate more empathy in their interactions. But empathy is not a common corporate trait – especially when trying to maximize profitability. As AI becomes increasingly important in driving customer engagement, companies need to think about how to combine AI-based insights with human-supplied ethical considerations.
“Our study found that only 25 percent of consumers would trust a decision made by an AI system over that of a person regarding their qualification for a bank loan,” said Dr Rob Walker, vice president, decisioning and analytics at Pega. “Consumers likely prefer speaking to people because they have a greater degree of trust in them and believe it’s possible to influence the decision, when that’s far from the case. What’s needed is the ability for AI systems to help companies make ethical decisions. To use the same example, in addition to a bank following regulatory processes before making an offer of a loan to an individual, it should also be able to determine whether or not it’s the right thing to do ethically.
“An important part of the evolution of artificial intelligence will be the addition of guidelines that put ethical considerations on top of machine learning. This will allow AI systems to make decisions within the context of customer engagement that would be seen as empathetic if made by a person. AI shouldn’t be the sole arbiter of empathy in any organization, and it’s not going to help customers trust organizations overnight. However, by building a culture of empathy within a business, AI can be used as a powerful tool to help differentiate companies from their competition.”
To help improve empathy in AI systems, Pega has announced the launch of its Customer Empathy Advisor.