The holy grail of AI is to fully understand human language in all its nuances. To do that, a system must be able to assess, extract, and evaluate information from textual data. Where are we now in 2020?
Chatbots have come a long way since one of the first of them, MIT's ELIZA, born in the 1960s, which recognized cue words or phrases and responded according to a set of carefully crafted rules.
Siri and Alexa are gradually getting smarter, but when was the last time you had a satisfying conversation with them or with any other existing virtual assistant? Google claims its new chatbot, called Meena, can converse with people better than any other bot out there.
It is based on a conversational neural network with 2.6 billion parameters and was trained end-to-end on 40 billion words using a seq2seq model.
The current state of the art in language processing rests on "Transformers", a type of neural network architecture; "attention", a technique for weighting the relevance of specific words; and new ways of representing words, such as "Embeddings from Language Models" (ELMo). The seq2seq model behind Meena is a variation of Google's Transformer, a neural network that compares the words in a paragraph to each other to understand the relationships between them. Meena is a milestone in the history of Natural Language Understanding. It goes a step further than another Transformer-based model, Microsoft's Turing-NLG, which took massive deep learning language models (such as BERT and GPT-2) to the next level. A metric called "Sensibleness and Specificity Average" (SSA), which measures an interlocutor's ability to make sense and be specific, assigns Meena 79%, 23 percentage points higher than Mitsuku and Cleverbot.
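The core idea of attention, comparing every word in a sequence to every other word and mixing information according to those comparisons, can be sketched in a few lines of NumPy. This is a toy illustration of scaled dot-product attention, not Meena's actual implementation; the vectors and sizes are made up for the example:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy version of the attention step inside a Transformer:
    every query vector is scored against every key vector, the
    scores are normalized with a softmax, and the value vectors
    are averaged according to those weights."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise word-to-word similarity
    # numerically stable softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Self-attention over 3 "words", each a 4-dimensional embedding
# (random numbers, purely illustrative).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)
print(w.round(2))  # each row sums to 1: how strongly each word attends to the others
```

Each row of the weight matrix shows how much one word "pays attention" to every word in the sequence, which is the relationship-comparison the paragraph above describes.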
Here is an example of Meena in conversation:
- Human: Hi!
- Meena: Hey there! What's up?
- Human: Same old, Monday blues...what are you up to?
- Meena: I'm procrastinating on a paper I'm supposed to be writing.
- Human: are you thinking of pulling an all nighter?
- Meena: Possibly. What's your paper on?