Analogy and machine learning: how AI can better understand human language

While deep learning neural networks have made progress in word representation through Word2vec models, they still reveal flaws in how they learn. In particular, algorithms fed with mathematical correlations fail to grasp semantic associations.
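To make this concrete, here is a minimal sketch (assuming the gensim library and its downloadable pretrained GloVe vectors, a close cousin of Word2vec; none of this is specific to the original argument) of what such models actually compute: vector arithmetic over co-occurrence statistics, with no narrative understanding behind the numbers.

```python
# Minimal sketch: assumes gensim is installed; api.load downloads
# ~66 MB of pretrained GloVe vectors (a close cousin of Word2vec).
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")

# Vector arithmetic captures distributional correlations:
# king - man + woman lands near "queen".
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# But similarity here is only co-occurrence, not comprehension: the model
# has no notion of *why* an apple relates to juice (e.g. that squeezing
# an apple produces juice).
print(wv.similarity("apple", "juice"))
```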

According to Hofstadter and Sander in Surfaces and Essences, these machines lack the human sense of analogy. By contrast, human speakers can apply words or expressions in very different contexts and extract very different meanings from them. Words don’t belong to rigid categories but easily shift and evolve.

This great plasticity of meaning fuels language with many stories. These narratives, otherwise known as “common sense” (e.g. that an apple is eaten by biting it, cutting it, or squeezing it into juice), are missing from AI systems, and their absence prevents machines from reaching this deep understanding.

Here are four kinds of semantic elements that machines could be taught to spot and understand.


What is the semantic basis of analogies?

We use words every day to express ourselves, but how do we learn them and make sense of them? It all starts with how readily we take a word and stretch its concept to associate it with other meanings.

For example, a child will first understand that his “mother” is the one who feeds and cares for him. Then, little by little, he will understand by analogy that the male equivalent of his mother is not a mother but his “father”. He will also apply the word to the women who are kind to him in the park, understanding later on that they are not “his mothers” but sometimes mothers “for other children”. This notion of the mother then serves as a metaphor for understanding other notions that are similar yet different or opposite (grandmother, sister, brother…).

As such, children are great producers of analogies. Adults are too, but they learn to subsume these analogies into more precise categories (a woman, a girl, a daughter…) or to push them even further from their original meaning (“the American Revolution is the mother of the French Revolution”). This allows them to build new concepts and understand a situation from a new angle.

These analogies also give sense to sentences, which convey familiar and meaningful situations that arise in life. For example, phrases such as “What’s new?” or “Are you out of your mind?” gain their full meaning from the associations they suggest and the context in which they are used. They are taken in a broader-than-literal sense and imply a richer meaning by analogy. “What’s new?” does not only ask for news; it also means, more generally, “How do you feel?” or “Do you want to talk?”. These meanings are not explicit, yet they derive by analogy from the primary meaning of asking for “news”.

Machines should learn to locate and use these numerous analogical structures to make sense of language.


Proverbs, fables, and common sense analogies

Other semantic elements, such as proverbs and fables, also help us give personal meaning to life situations.

Phrases such as “You can’t judge a book by its cover” or “The grass is always greener” are not just bland assertions. They express a whole world of meaning in which speakers place themselves. By enriching our everyday language, they gather new narratives around a common way of experiencing life. For example, speakers will use “the grass is always greener” for different situations: the dilemma of wanting to leave a job they like but that doesn’t pay enough, or of leaving a city they love but which is becoming more and more expensive to live in. The proverb creates a new narrative, that of cognitive dissonance.

Fables are even deeper expressions of the “common world” that speakers live by. For example, Aesop’s fable “The Fox and the Grapes” tells the story of a fox who cannot reach the grapes he craves. Out of frustration, he concludes that the grapes are sour anyway. The lesson of this fable is a gold mine for speakers, who draw from it a new way of framing personal situations. In fact, the fable has been so successful that it has inspired fundamental concepts: hypocrisy, bad faith, and more recently the concept of rationalization in decision-making.

This is why each language has its own best ways of framing particular ways of experiencing and perceiving life. Speakers of one language may have no problem suggesting a particular feeling yet have great difficulty translating it into another language. It is easy to see, for example, how French speakers are at ease with words related to fashion and society, so much so that English speakers do not hesitate to borrow them (“chic”, “haute couture”, “dernier cri”).

In order to understand how speakers use their language, machines need to be able to consider these narratives closely. They would then grasp the common sense assumed by a particular language, which defines the sense of its commonly used concepts.


How machines can learn language abstractions

When speakers of a language use a word, they give it a different sense depending on the level of abstraction. A word like “man” does not mean the same thing in “the universal rights of man” and “the men’s basketball team”. In the first case, the word carries a universal sense that includes women. In the second, it is taken in a restricted sense, that of male adults. These abstract distinctions are commonplace in every language.

While each word is marked differently depending on the situation, speakers have an innate sense of which level of abstraction is being referred to. For example, someone might say they are going to “have coffee” with their friends and then order tea from the waiter at the restaurant. The word “coffee” is here taken in a broad sense, covering all hot beverages. However, when that same person orders “a coffee for me, and two macchiatos for my friends”, the word is taken in a restricted sense (the regular coffee we generally assume).
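As an illustrative sketch (assuming the Hugging Face transformers library and the bert-base-uncased model, chosen here for illustration), contextual embeddings already show that the same word receives a different vector at each level of abstraction:

```python
# Sketch: compare the contextual vector of "coffee" in a broad use
# ("have coffee" = any hot drink) and a narrow use (the specific drink).
# Assumes: pip install torch transformers
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    # Return the contextual embedding of the first occurrence of `word`.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

broad = word_vector("let us go have coffee, i will order a tea", "coffee")
narrow = word_vector("a coffee for me and two macchiatos please", "coffee")

# The two vectors differ: the model marks the level of abstraction
# through context, even though the surface word is identical.
sim = torch.nn.functional.cosine_similarity(broad, narrow, dim=0)
print(f"cosine similarity between the two uses of 'coffee': {sim:.2f}")
```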

What is the benefit of this kind of semantic abstraction? Great flexibility in the formulations we choose, without fear that our interlocutor will misunderstand what we are referring to.

What’s more, these abstractions broaden our thinking by helping us discover more systemic analogies in our common realities. This is what physicists found behind the universal concept of the “wave”. Originally taken in the sense of the water wave, it was gradually adopted by physicists to explain the propagation of sound, and then of light, by analogy.

Just as a water wave travels by oscillation, sound propagates with a certain frequency. Sound, however, travels as a longitudinal wave: the oscillation runs along the direction of motion rather than across it. Yet it is still recognized as a “wave” all the same.

This gives an abstraction at two levels: the wave in the restricted sense of a ripple that slides over water, and the wave in a deeper sense, a periodic disturbance with a certain wavelength that can explain many physical phenomena. Without such an ability to abstract, humans would not be able to recognize these deeper patterns connecting the phenomena around them.
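As a short mathematical sketch of this shared abstraction: water ripples, sound, and light all satisfy the same one-dimensional wave equation; only the oscillating quantity u and the propagation speed c change from one phenomenon to the next.

```latex
% One abstract pattern, many instantiations: u(x,t) is the oscillating
% quantity (water surface height, air pressure, electromagnetic field),
% c the medium-dependent propagation speed.
\[
  \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2},
  \qquad
  u(x,t) = A \sin(kx - \omega t), \quad c = \frac{\omega}{k}
\]
```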

This is what machines have to acquire: the ability to handle different layers of abstraction behind the same word.


Machine translation and analogical associations

But how do speakers manage to choose one level of abstraction or association among others? It comes from their singular ability to spot expressions that are more “familiar” and “aesthetic” than others.

There are actually countless analogies that can be drawn from a given situation. For example, the letter-string problem “abc changes to abd; what does iijjkk change to?” admits more than one solution (iijjkl, iijjkd, iijjdd). But the human brain tends to opt for the most symmetrical and aesthetically pleasing solution, namely iijjll.
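A minimal sketch in Python (the rule names and decomposition are my own framing for illustration, not Hofstadter’s Copycat program itself) shows the difference between the shallow, letter-level reading of the rule and the group-level reading humans prefer:

```python
# Two readings of the rule behind "abc -> abd", applied to "iijjkk".
from itertools import groupby

def successor(ch: str) -> str:
    # Next letter in the alphabet ("k" -> "l").
    return chr(ord(ch) + 1)

def rule_letter(s: str) -> str:
    # Shallow, literal rule: replace the last *letter* with its successor.
    return s[:-1] + successor(s[-1])

def rule_group(s: str) -> str:
    # Abstract rule: replace the last *group* of repeated letters
    # with its successor group, preserving the symmetry of the string.
    parts = ["".join(g) for _, g in groupby(s)]
    parts[-1] = successor(parts[-1][0]) * len(parts[-1])
    return "".join(parts)

print(rule_letter("iijjkk"))  # iijjkl -- correct but clumsy
print(rule_group("iijjkk"))   # iijjll -- the answer humans find elegant
```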

Faced with these problems, humans have an innate sense of which association sounds best to their ears. This is also what gives speakers the ability to translate sentences into another language in a meaningful way.

Even though deep learning has made progress in translation, it still does little more than literal association, reproducing the word or expression most regularly correlated between the two languages.

The advantage of human translators is that they understand that not all expressions work the same way, depending on how they sound to the speaker’s ear. For example, words that seem to be a perfect match, such as “sky” in English and “ciel” in French, do not necessarily mean the same thing. The word “ciel” in French can be used as readily to talk about the weather as to suggest fate. In English, on the other hand, “sky” does not really suggest the notion of destiny, or does so by different means. Thus, literally translating a sentence such as “Le ciel nous a imposé son ordre” with the word “sky” does not make much sense (word for word: “the sky has forced its order upon us”). One would rather use “fate” or “heaven” to be understood.
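A toy sketch (the dictionaries below are invented for illustration, not a real translation resource) of what sense selection adds over literal word mapping:

```python
# Literal mapping: always the single most correlated word.
LITERAL = {"ciel": "sky"}

# Context-sensitive senses: the same word maps differently depending
# on the world of meaning the sentence assumes.
SENSES = {
    "ciel": {"weather": "sky", "destiny": "fate", "religion": "heaven"},
}

def translate(word: str, context: str) -> str:
    # Prefer the contextual sense; fall back to the literal mapping.
    return SENSES.get(word, {}).get(context, LITERAL.get(word, word))

print(translate("ciel", "weather"))  # sky
print(translate("ciel", "destiny"))  # fate -- "Le ciel nous a imposé son ordre"
```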

To achieve much better translations, AI must also learn to choose the right association according to the context, meaning, and setting assumed by the sentence. In short, machines should take into account the social and cultural elements that define the world surrounding the speaker.


Here are the takeaways: AI can achieve a deeper understanding of language by treating it not only as literal association but also as a game of analogy. It needs to know how to make analogies, how to move between different levels of abstraction, and how to assess the context and the surrounding “world of meaning” that speakers assume when they use words.
