Chatbots: Too Good to Be True? (They Are, Here’s Why)

Chatbots have exploded in popularity as a multi-industry enterprise solution over the last few years. But despite the widespread attention, most companies that have adopted chatbot technology find that their users are unsatisfied with the experience. There’s a hard technological truth behind why these solutions fall short, and understanding it can help financial institutions cut through the noise, say goodbye to chatbots, and find a conversational solution that actually works.

So what is that technological truth? It’s actually quite simple: human language is one of the most difficult tasks for an AI to master. According to research conducted at the University of Arizona, the average person uses around 16,000 words a day. Speech is so natural that we often forget how complicated it is, and how remarkable our brains are for enabling us to do it with such ease. Renowned computer scientist Donald Knuth articulated this when he said, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do without thinking.”

When evaluating an AI-powered virtual assistant for financial services, it’s important to consider the vernacular of the people who will ultimately use that technology. Will they ask, “how much money do I have in my personal checking account?” or rather, “hey, do I got any cheddar left?”

Even if you’ve never used the word “cheddar” to refer to money, you probably still understood what I meant. But that’s not all you did in the millisecond it took you to read and comprehend that sentence. You also noted that “left” meant in the bank, and that “do I got” meant “do I have”. In that small sentence, grammar rules were eschewed, conversational maxims were ignored, and word choice was non-traditional. Yet you still understood its meaning.

This is precisely what is so difficult for an AI to master: context, irregular word choice, improper grammar. For us, it’s simple, because we’ve evolved to manipulate language and understand each other’s meaning. It’s not that the AI is deficient for being confused; it’s that we’re incredibly intelligent for understanding one another in the first place.

An AI chatbot or “voice recognition platform” would associate “bread”, “cheddar”, and “gravy” with their explicit definitions; in this case, food. Therefore, if a user were to ask a financial AI “how much bread do I have?” it would tell the user they have none: an alarming response when asking an “intelligent” assistant about your finances.

So why not simply change those explicit definitions to teach the AI common financial slang? Well, that’s exactly what nearly every other conversational AI does. It’s called the rule-based approach, and even if that approach involves deep learning, the AI isn’t going to understand like a human; it merely memorizes the synonyms and syntactic structures it’s taught.
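To make the rule-based approach concrete, here is a minimal sketch. The slang list and intent name are illustrative assumptions, not any vendor’s actual rules; the point is that the moment a user strays outside the taught synonym list, the bot falls through to “unknown”:

```python
# A minimal sketch of the rule-based approach: intents are matched against a
# hand-maintained synonym list, so any phrasing outside that list falls through.
# The slang terms and intent names below are illustrative, not from any product.

SYNONYMS_FOR_MONEY = {"money", "balance", "funds", "cash", "bread", "cheddar"}

def classify(utterance: str) -> str:
    """Return an intent label using naive keyword matching."""
    words = {w.strip("?.!,'").lower() for w in utterance.split()}
    if words & SYNONYMS_FOR_MONEY:
        return "check_balance"
    return "unknown"

print(classify("How much cheddar do I have?"))  # taught slang: matched
print(classify("Am I still in the black?"))     # untaught phrasing: breaks
```

Teaching the bot a new synonym patches one hole, but every untaught phrasing opens another.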

Why is this insufficient? Because our language is constantly evolving. According to the Global Language Monitor, 5,400 new words are created every year. And even if you taught the AI all of those new words, it’s unpredictable how people will actually use them. Setting aside the fact that merely training an AI on new synonyms doesn’t enable it to decipher context, it still takes only one irregular turn of phrase to break an AI built on the rule-based approach. And it only takes one bad experience with a chatbot to ruin a brand’s reputation. According to a DigitasLBi study,

73% of Americans will not use a company’s chatbot again after having one bad experience with it.

Meanwhile, the team at Clinc, which boasts 2 professors and 9 PhDs, has spent years researching and developing a conversational assistant that is capable of understanding language, specifically financial language, like a real human. Clinc employs natural language understanding, slot-value extraction, word-context analysis, and long-term/short-term memory techniques to build an AI that doesn’t just act human, but learns like a human too. The conversational abilities of this technology are brand new, and its existence is pushing the limits of what artificial intelligence can really do.
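For readers unfamiliar with the term, “slot-value extraction” means pulling structured values (slots) out of free-form text. The pattern-based sketch below only illustrates the task itself; the slot names and patterns are assumptions for this example, and a production system like Clinc’s uses neural models rather than hand-written rules:

```python
import re

# Illustrative only: hand-written patterns showing what slot-value extraction
# produces. Slot names and regexes here are assumptions for the example.

def extract_slots(utterance: str) -> dict:
    slots = {}
    # Slot 1: account type, matched from a small closed vocabulary.
    account = re.search(r"\b(checking|savings|credit)\b", utterance, re.IGNORECASE)
    if account:
        slots["account_type"] = account.group(1).lower()
    # Slot 2: a dollar amount, with optional "$", thousands separators, cents.
    amount = re.search(r"\$?(\d+(?:,\d{3})*(?:\.\d{2})?)", utterance)
    if amount:
        slots["amount"] = amount.group(1)
    return slots

print(extract_slots("transfer 200 dollars to my savings account"))
```

The downstream banking logic then acts on the structured slots rather than the raw sentence.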

The engineers at Clinc are pioneering research in conversational AI in meaningful, tangible ways. They have created a deep neural-network architecture that mimics the learning capabilities of the human brain. Clinc’s team doesn’t teach its AI synonyms; it teaches it how to learn language. So you can ask “how much cheddar’s left?” and it won’t break; instead, it will tell you “1,000 dollars” without blinking an artificial eye. Clinc’s expertise is not a marketing gimmick; it’s the product of years of scientific research, conducted by the founders themselves.

If the chatbot you’re looking into sounds too good to be true, it probably is. If the team behind the company you’re interested in hasn’t specialized its research in conversational AI, it is simply replicating features that already exist and marketing them to appeal to you. So is that FinTech promising its AI is “Siri for Banks” telling the truth? Sure, if you consider that only a third of smartphone users actually use their virtual assistants.

If you want 73% of your customers to abandon your product after one use, go with a rules-based chatbot. If you want to amaze all of your customers, 100% of the time, choose Clinc.
