
Some Thoughts on Artificial Neural Networks

The impact of Artificial Intelligence on Society


There has been a lot of talk about AI recently and its impact on society, especially with regard
to employment. Some believe that many jobs will be replaced by robots - those famous algorithms
- in the not-so-distant future. Others are already proposing a universal income for those left
behind by technology.

It is true that the risk exists and that AI could well disrupt society on many levels, but we
should not forget that it could also create jobs, some of which we do not expect. We must
therefore prepare for this today. The industrial revolution of the 19th century caused
many upheavals in society, but it also created new jobs that had not been anticipated.

Without going into these details, I would like to make one important point: even if major
companies such as Microsoft, Google and Amazon already offer training in AI and its progeny
- Machine Learning, Deep Learning and Co. - and even if continuing education sessions are
available to everyone in the IT field, we must start by educating youngsters in
anticipation of a future potentially invaded by AI and its avatars. Personally, I think
we should even undertake a comprehensive review of the school curriculum in some
countries, such as Lebanon where I live, so as not to miss the train that is inexorably
leading us to a future society potentially governed by AI. In this respect, it is especially
necessary to have a curriculum that encourages children's imagination, a curriculum
based on the Humanities and IT, with Science as a catalyst and a dash of entrepreneurship.
Universities should then take over and train young people with a solid background
for this coming future, acting as a bridge between schools and the business world.

We must not panic. Just as the birth of writing relieved our memory of a heavy burden -
to the great displeasure of the Ancients of Antiquity - and just as the Internet caused
upheaval on many levels, so too will AI be useful to us on many levels. This is a new shift
in the cognitive paradigm.

The problem is that, currently, a tiny part of society is acting in the right direction while the
other part is panicking and does not appreciate the impact of AI on society. We have the
following situation:

An education system, still traditional, is forming a society already overwhelmed by
sophisticated technology, and there is no feedback from this new situation to that
education system that would let it change course adequately. We are not aware of this
evolving situation. We continue to apply the same methods that are two centuries old and,
inevitably, we are not progressing as we should.
Action must be taken immediately, first by learning (about) AI and then by taking
appropriate measures in order not to be overtaken. This also goes for the world of
business.

Businesses today have all the tools they need to implement AI. Despite these capabilities,
few of them take the time to do so, and they keep failing to make the most of AI once they
realize that their strategy was little more than tools without guidance. I am not expert
enough to give them advice on this issue; there are numerous consultants for that. I'll
adopt another, more general point of view. I think businesses should adopt the
following approach:

Managing from the Future: Preparing for the future


It is clear that AI is bringing about a complete transformation of our lives in all their aspects.
AI's dazzling speed tends to destabilize us and make us see a bleak future ahead, especially
since the prophets of doom keep outbidding one another on this subject: basically, robots will
replace us everywhere — this is already the case in some European countries, where supermarkets
have replaced human cashiers with robot cashiers — and we won't know what to do with
ourselves. Worse, the machines will outgrow us entirely — the scenario known as the
Singularity, popularized by Ray Kurzweil.

According to neurobiologists, after a certain density of information and interconnections
in the human brain, reflective thought arises: the brain knows that it knows. (Is that when
language came up?)

It seems that the same scenario could apply to our AI-powered cyber-machines: through the
sheer density of information that we humans are generating, we are creating a living cyber-
organism that will, at some point, realize that it exists. Habeo data, ergo sum. We must
thus expect the emergence of a system that thinks about itself. We are therefore setting our own
trap, according to François de Closets, because we are currently building a future world that we
are able neither to conceive of nor to control. But, according to de Closets, we are condemned to
progress, and thus to the future; and the two are closely linked, as the future is shaped by
progress.

Faced with this situation that threatens humankind, proposals to manage our future are
well underway. Some, such as techno-guru Ray Kurzweil, propose transhumanism for a
better future for human beings: a radical movement proposing to use the advances of
artificial intelligence, biology and nano- and bio-technologies to abolish old age, disease
and death, and promising the emergence of a new humanity. Others, such as Dr Eric Topol
for medicine, advocate a synergy between man and machine, a convergence of human
and artificial intelligences that would improve the situation as a whole.
Joël de Rosnay, a scientist and futurist, had already predicted the current technological
revolutions in his essay Le Macroscope published in 1975. In 1995, in his book L’Homme
Symbiotique (Seuil, Paris), he announced a scenario of man-machine hybridization or
symbiosis — a prefiguration of today's enhanced human — and named it the Cybionte (a
French neologism formed from the terms cybernetics and biology), a metaphorical super-
planetary organism that he conceptualized (not forgetting that the American engineer
J.C.R. Licklider had already envisioned a human-machine symbiosis in the 1960s). This new
form of collective life is a hybrid, biological, mechanical and cybernetic macro-organism,
linking people, computers and intelligent agents via a global computer network. The
Cybionte represents both a form of collective intelligence and an ecosystem.

Nowadays, with the rise of AI, de Rosnay advocates what he calls hyperhumanism — more
humanity in an increasingly technologized world — without panicking and by “merging”
ourselves with AI and its avatars. But we remain humans.

Thus, in his book entitled Je cherche à comprendre: Les codes cachés de la nature, he
predicts the emergence of an enhanced collective intelligence that will generate
hyperhumanism which, unlike the elitist, selfish and narcissistic transhumanism
addressing the individual and his dream of immortality, speaks to society and can lead to a
better organized, respectful community, capable of creating a new humanity. (This is close
to what the anarchist movement is proposing for a better society).

De Rosnay goes even further in his latest book La symphonie du vivant (LLL, 2019): he
proposes to apply the principles of epigenetics to contemporary society and has named
it epimemetics. Hereunder are some explanations (translated excerpts from his above-
mentioned essay):

Epigenetics shows that our behaviours influence the expression of our genes. It includes
properties that form a genetic metaprogram that each of us has inherited. Thanks to our
behaviour — diet, lifestyle, etc. — we can amplify or inhibit certain genes. According to de
Rosnay, we can thus act on the societal DNA by modifying our behaviour, this societal
DNA being composed of the digital-informational ecosystem, in which the Internet is
integrated, and which would be made more complex by the individual and massive
interventions of Internet users.

Thus, based on the genetic/memetic principle, he proposes to establish the
epigenetic/epimemetic relationship: "I define epimemetics as all the modifications of the
expression of the societal DNA's memes brought about by the behaviour of individuals in a
society, company or any form of human organization."
(Note: It should be recalled that the meme is an element of culture that can be considered
as transmitted by non-genetic means, in particular by imitation. The term was coined
by Richard Dawkins in his 1976 book The Selfish Gene.)

Apart from the scenarios proposed by the above-mentioned scientists, I propose a scenario
called Managing from the Future, which consists in establishing a compelling goal that
draws organizations out of their comfort zone by standing in the new future and
undertaking a series of steps, not in order to get there some day, but as if you were there
already. The task, therefore, involves removing whatever obstacles remain in the way of
reaching that future fully. This discipline begins with this mental shift. It whets organizations'
appetite for disequilibrium and provides the compelling goal that draws them
toward the edge of chaos, a state of complex systems characterized by the opposition
between conditions that favour entropy (a disordered state) and negentropy (increased
order). It is a transition space of constant interaction between order and disorder that
paradoxically creates a dynamic equilibrium. It is a condition of destabilization that is a
source of creativity, innovation and adaptability. Managing from the Future is a business
method that appeared in the early 2000s but, unfortunately, does not seem to have
flourished. (R. Pascale, M. Millemann, and L. Gioja, Surfing the Edge of Chaos: How the
Smartest Companies Use the New Science to Stay Ahead, Crown Business, December 2001).

Managing from the future can effectively change our vision of the world. We come to
believe that we are part of a broader context that has revolutionary potential. The vision of
the future — the attractor, in the terms of the complexity sciences — acts like a magnetic or
gravitational field, drawing many small day-to-day contributions of collective intelligence
into a constellation of concerted actions. "Being it now" causes belief in the future to fuel
daily activity. A good way to achieve this is through the method of back-casting. It is a
planning method that starts with defining a desirable future and then works backwards to
identify policies and programs that will connect that specified future to the present
(Wikipedia). It is about establishing the description of a very precise and very specific
situation in the future, then making an imaginary journey back in time, in as many
steps as necessary, from that future to the present, in order to reveal the mechanism by
which this particular future could be achieved from the present.
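The back-casting idea can be sketched as a toy backward-chaining plan. The milestones and the `backcast` helper below are invented purely for illustration:

```python
# Toy back-casting sketch: start from a declared future state and walk
# backwards to the present, then read the chain forwards as a plan.

def backcast(goal, present, predecessor_of):
    """Walk from `goal` back to `present`, then reverse into a forward plan."""
    chain = [goal]
    while chain[-1] != present:
        chain.append(predecessor_of[chain[-1]])
    chain.reverse()  # read the plan from today toward the future
    return chain

# Hypothetical milestones for an organization adopting AI.
predecessor_of = {
    "AI-driven organization": "staff trained in ML",
    "staff trained in ML": "curriculum reviewed",
    "curriculum reviewed": "today",
}

plan = backcast("AI-driven organization", "today", predecessor_of)
print(" -> ".join(plan))
```

The point of the exercise is that each step is defined by the state it must lead to, not by extrapolating from the present.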

The bar must be set high enough — this is the constraint — because the sought goal, which
must be concrete, tangible, attractive, audacious and worthwhile, must not be achievable
without extraordinary effort, yet must be worth a try. This attraction towards a specific
future derives its power from feelings, passions and aspirations, and these factors combine
to alter how the present occurs. Managing from the future helps us to discover that which
is latent within us and seeks fuller expression.
In view of the above, it is up to us to choose what kind of future we want, and then to work
to achieve it by remaining awake, enlightened and effective, especially concerning the
deep functioning of AI, as what is called the black-box interpretability issue in Deep
Learning is worrying some people.

Deep Learning Black Box – The problem of interpretability

Opaque systems
The results provided by deep artificial neural networks - those famous deep learning
algorithms - are extremely satisfying and are at the origin of extraordinary progress in
artificial intelligence. But the fact remains that the way in which the deep layers of these
networks achieve those results is still opaque, even to their designers: it is impossible
for them to explain how these deep layers work. They know the inputs, they know the
outputs, but what happens in between remains a mystery. This is the black-box effect
known from cybernetics (and from its eldest daughter, Systems Thinking).
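As a toy illustration of this black-box effect, the sketch below (assuming NumPy is available) trains a minimal network on XOR. Every learned weight is fully inspectable, yet the weights do not read as a human explanation of how the problem is solved:

```python
import numpy as np

# A minimal 2-4-1 network trained on XOR by plain gradient descent.
# Inputs known, outputs known; the learned weights in between are visible
# but not self-explanatory. Architecture and hyperparameters are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations: the "black box"
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

for _ in range(5000):                  # full-batch backpropagation
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
# W1 and W2 can be printed in full, but reading an "explanation" of XOR
# out of them is another matter entirely.
```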

Let me give two concrete examples:

1. Olivier Bousquet, head of Machine Learning at Google's Zurich research lab,
describes how Google Translate works:
It is a huge neural network that has taught itself to switch from one language to another.
In some cases, it succeeds in being better than human translators. The other surprising
thing is that it was only taught a few pairs of languages, and it deduced the others from
those pairs. It created a kind of Esperanto of its own. But we still can't decipher it properly.
(In Le talon d'Achille de l'intelligence artificielle – Benoît Georges, pdf document).

2. In August 2017, several media outlets published alarming articles claiming that
Facebook researchers had urgently "disconnected" an AI program that had invented its
own language without being trained to do so. The program in question, described in a
research article from FAIR, Facebook's artificial intelligence laboratory, concerned the
creation of two chatbots - artificial intelligence programs designed for dialogue - capable
of negotiating. To do this, the program was "trained" on many examples of human-to-human
negotiations. It provided such satisfactory and effective results that it succeeded
in fooling humans who thought they were talking to one of their peers. It also
managed to conduct tough negotiations with humans, and in some cases it even
"pretended" to be interested in an object in order to concede it later for tactical
purposes. We can thus say that it passed the famous Turing test with flying colours.
However, this program gradually "invented" an English-based language by modifying
English, because it "was not rewarded for respecting the structures of English"; it
was rewarded for its ability to negotiate. But the program was not disconnected.
Moreover, such cases of machines inventing languages of their own are not new in the
world of AI.
These two examples of "cyber-linguistics" immediately reminded me of
Chomsky's Universal Grammar, with its two components: deep structure and surface
structure. Chomsky's hypothesis asserts that the reason children master the
complex operations of language so easily is that they have an innate knowledge of certain
principles that guide them in developing the grammar of their language. In other words,
Chomsky's theory is that language learning is facilitated by a predisposition of our brains
for certain structures of language. But, for Chomsky's theory to hold true, all the
languages of the world must share certain structural properties. Chomsky and other
linguists of the generativist sphere of the 1960s and 70s managed to show that the few
thousand languages of the planet, despite their very different grammars, have in common
a set of basic syntactic rules and principles. This "universal grammar" is believed to be
innate and embedded in the neural circuitry of the human brain.
(http://www.lecerveau.mcgill.ca/flash/capsules/outil_rouge06.html)

(N.B.: More recently, Chomsky has quietly scaled back his claims about universal grammar.)

This concept of universal grammar dates back to the observations of Roger Bacon, a 13th-
century Franciscan friar and philosopher, according to whom all the world's languages
share a common grammar. According to Chomsky, the deep structure consists of innate
and universal principles of grammar on which the languages of the world are based,
despite the great differences in their surface structures. Remember that the deep structure
of a language is realized in surface structure through a series of transformations giving rise
to comprehensible sentences.

Thus arises the problem of the interpretability of these black boxes, which would possess
a deep structure that is not understandable by humans (which is rather normal, as
computers' machine language is just as incomprehensible to most humans). We could
consider these black boxes as the equivalent of the human subconscious which, in
creativity, problem-solving or decision-making processes, would involve
combinations of ideas that collide and interact in such a way that, without the
individual's knowledge, the best of them selectively combine and lead to the Eureka moment.

Some programs are already underway at organizations involved in this field, including
DARPA's Explainable Artificial Intelligence (XAI) program, which aims to create
machine learning technologies that produce more explainable models while maintaining a
high level of performance, enabling humans to understand, appropriately trust and
effectively manage the emerging generation of AI tools.
(In Le talon d'Achille de l'intelligence artificielle – Benoît Georges, pdf document).
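The kind of explanation such programs aim at can be hinted at with one simple, well-known post-hoc technique, permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The "model" and data below are stand-ins invented for the sketch, not anything from the XAI program itself:

```python
import numpy as np

# Permutation importance sketch: the model uses feature 0 heavily,
# feature 1 slightly, and ignores feature 2 entirely.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
model = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 1]   # hypothetical black box
y = model(X)

def permutation_importance(model, X, y, rng):
    base = np.mean((model(X) - y) ** 2)           # error on intact data
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])      # break feature j only
        scores.append(float(np.mean((model(Xp) - y) ** 2) - base))
    return scores

scores = permutation_importance(model, X, y, rng)
# Feature 0 should matter most; feature 2 should not matter at all.
```

This explains nothing about the model's internals, which is exactly the distinction between post-hoc explanation and true interpretability.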

In view of this thin analogy between DL black boxes and the human subconscious,
the question I have concerning the opacity of these networks is the following:
would it not be possible to create a Deep Meta-Cognition process for each category of
Deep Learning network - facial recognition, object recognition, machine translation,
autonomous cars, etc. - and feed it with the methods of each of these networks in order
to identify a common pattern and thus try to understand the deep functioning of these
deep networks?

This is not a new idea: we can trace it back to 1979, with Donald Maudsley's work on
meta-learning, a process by which learners become aware and in control of their
habits of perception, learning and reflection. John Biggs said much the same in 1985. In the
context of AI, meta-learning would be the machine's ability to acquire versatility in the
process of acquiring knowledge. Meta-learning methods already exist in AI. It would
therefore be wise to use them to feed a Deep Digger that tries to understand how the black
box of each DL variant works, and so achieve good interpretability.
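One existing tool that points in the direction of such cross-network comparison is linear Centered Kernel Alignment (CKA), a representation-similarity measure from the machine learning literature. The two "networks" below are just random feature maps standing in for real hidden layers:

```python
import numpy as np

# Linear CKA: a similarity score (0..1) between two activation matrices
# recorded on the same inputs. A sketch, not a full meta-cognition system.
def linear_cka(A, B):
    """A, B: (samples x units) activation matrices on identical inputs."""
    A = A - A.mean(0)
    B = B - B.mean(0)
    hsic = np.linalg.norm(A.T @ B, "fro") ** 2
    return hsic / (np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro"))

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 10))                  # shared inputs
net1 = np.tanh(X @ rng.normal(size=(10, 32)))   # "hidden layer" of net 1
net2 = np.tanh(X @ rng.normal(size=(10, 32)))   # "hidden layer" of net 2

same = linear_cka(net1, net1)    # a representation agrees with itself fully
cross = linear_cka(net1, net2)   # two unrelated nets agree far less
```

Measures like this let one ask whether different networks trained on different tasks have converged on a common internal pattern, which is close in spirit to the Deep Meta-Cognition question above.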

This idea of creating a Deep Meta-Cognition comes from Systems Thinking, which deals
with complex systems. Systems Thinking is the daughter of Cybernetics, and it is the kind
of program that could be used to reach one or more solutions concerning the interpretability
of DL black boxes. There is always an advantage in referring to the Ancients to improve the
present and, therefore, the future.

Now, apart from this meta-learning process, would it be possible to create smarter neural
networks? Could we build

A New Kind of Artificial Neural Network?


This question came to my mind as I reviewed the literature concerning the types of ANN.
I am not an expert in this field, but by dint of intensive documentation and reflection on
the challenges posed by AI, the idea suddenly emerged. I therefore present this idea
briefly; maybe I'll develop it more broadly later.

Without going into the details of this vast field - its literature is abundant - it seems that
we could create a new type of ANN that would proceed through an unveiling process. A
more adequate word would be eduction/educing (from the Latin exduco), which better
translates the French term supplétion. This type of ANN would proceed by unveiling what
is missing or hidden in any context. As the saying goes, an image is worth a thousand
words, so consider the classic illusory-contour figures (such as the Kanizsa triangle): the
"formed" contours in white do not exist. They are created by your brain, because the brain
needs to simplify what it perceives in order to make that perception more accessible.
The brain "sees" what is easiest to understand. The percept must make sense, and that is
why the brain treats these images as units of meaning. It "fills" the void with something
known (an analogical process); otherwise, it imagines something else. The brain is
designed to see meaningful images: faces or animals in clouds, shapes in smoke volutes
or in carpet patterns, etc.

The basic perceptual processes of our brain operate in accordance with a series of
principles describing how we organize chunks of information. These principles were
established at the beginning of the 20th century by a group of German psychologists
studying patterns. They put forward a series of important principles that apply to
visual and auditory stimuli, known as the Gestalt laws of organization.
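The law of proximity, for instance, can be sketched computationally as grouping points whose mutual distance falls under a threshold. The helper below is a naive illustration of the grouping idea, not a model of perception; the points and threshold are invented:

```python
import math

# Gestalt law of proximity, caricatured: points that lie close together
# are merged into one "unit of meaning".
def group_by_proximity(points, threshold):
    groups = []
    for p in points:
        home = None
        for g in groups:
            # join the first group containing a sufficiently близ... close member
            if any(math.dist(p, q) < threshold for q in g):
                home = g
                break
        if home is None:
            groups.append([p])    # no near neighbour: start a new group
        else:
            home.append(p)
    return groups

# Two well-separated clusters of dots on a "page".
points = [(0, 0), (0.1, 0.1), (0.2, 0), (5, 5), (5.1, 4.9)]
groups = group_by_proximity(points, threshold=1.0)
```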

The German word Gestalt could be translated into English as form or shape, but it is
much more complex than that, and no word in any other language translates this
German term accurately. All languages therefore use the term Gestalt.
The German verb gestalten can be translated as "to format", i.e. to put in shape, to give a
meaningful structure. The result - the gestalt - is then a structured, complete, whole and
meaningful form for our brain. The notion of form was theorized by Christian von
Ehrenfels in 1890 in an article entitled Über Gestaltqualitäten. He explains that in the act
of perception, we do not merely juxtapose a host of details; we perceive holistic forms
that bring the elements together in such a way as to produce sense. We could trace
Gestalt theory back to Goethe's theory of morphogenesis.

Let us take an example from a field other than the visual. In rhetoric, we have the enthymeme.
In general, the enthymeme is a syllogism with an unstated premise - i.e. one where a
stage of the reasoning is omitted because it is considered certain. The conclusion may
also be missing. For Aristotle, the enthymeme is reserved for deductions "drawn from
plausibilities and clues" that the listener (or reader) supplies, if they are known. Thus, in
the following statement - which is an enthymeme - the first premise ("Bad children get
punished") is educed by the listener:

You've been a bad child.

You're going to get punished.
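This eduction of the missing premise can be sketched in code: given the stated minor premise and the conclusion, recover the unstated general rule. The tuple format and the `educe_major_premise` helper are inventions for illustration only; real natural-language abduction is far harder:

```python
# Toy eduction of an enthymeme's missing major premise.
def educe_major_premise(minor, conclusion):
    """minor: (subject, property), conclusion: (subject, outcome).
    Educe the general rule the listener silently supplies."""
    subj_m, prop = minor          # e.g. ("you", "a bad child")
    subj_c, outcome = conclusion  # e.g. ("you", "get punished")
    if subj_m != subj_c:
        raise ValueError("premise and conclusion must share a subject")
    return f"whoever is {prop} will {outcome}"

rule = educe_major_premise(("you", "a bad child"), ("you", "get punished"))
print(rule)
```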

Examples could be multiplied by extrapolating to other areas such as music (jazz in
particular), criminal investigations, medical diagnoses, etc. The question is: what would
be the purpose of such an educing ANN? It should be noted that this type of ANN would
not reconstruct the missing parts of something, as a convolutional ANN would do, for
example, for a painting with erased parts. This type of ANN would work much more like
an ANN with unsupervised learning.

To begin with, this type of ANN could provide results with Small Data instead of being
fed Big Data - and that seems to be where Deep Learning is heading. It could also be
very useful in machine translation, as it could detect pragmatic devices such as irony,
lies and innuendo, and thus tackle the translation of literature (novels, etc.). It could
possibly detect or infer something different from what is being observed, something we
would often be unable to observe directly.

In other words, it would proceed through an abductive process, connecting knowledge
- an analogical process - while combining imagination, perception and memory with
reasoning. By combining abductive and analogical processes - in particular D. Gentner's
structure-mapping - it could then detect simplex systems (systems whose complexity is
compressed to retain only the essential, in the spirit of Ockham's razor, without their
complexity being lost). It could also detect hidden patterns, especially in unstructured or
semi-structured data, but also in the complex systems that are so difficult to model. And
last but not least, it could anticipate the process of emergence of a complex system
before it arises.
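A minimal sketch of this unveiling behaviour, under the simplifying assumption that the hidden element is a single masked numeric feature: a least-squares model learns the pattern from context and then "educes" the masked value in a new example. The data-generating rule is invented for the sketch:

```python
import numpy as np

# "Eduction" reduced to its simplest form: recover a hidden value from
# its context, once the underlying pattern has been learned.
rng = np.random.default_rng(3)
context = rng.normal(size=(200, 2))                     # visible features
hidden = context @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=200)

# Learn the context -> hidden pattern by ordinary least squares.
coef, *_ = np.linalg.lstsq(context, hidden, rcond=None)

new_context = np.array([0.5, -1.0])    # the hidden value is masked here
educed = float(new_context @ coef)     # should be close to 0.5 + 2*(-1.0)
```

A real educing ANN would of course have to handle far richer, non-numeric contexts; this only shows the fill-in-the-blank principle at its smallest scale.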

For sure, I have not covered the whole issue, as the technical part and many other
details are missing, but it seems to me that such a type of ANN would be a little more
subtle and smarter than its colleagues on the cognitive level. And this could be applied to
chatbots, as they seem to be the next big thing in AI for businesses. Let's go into

Science-Fiction: Building a Universal Personal Assistant


The chatbot phenomenon is on the rise. Companies are being urged to create them, not
only to build customer loyalty but also to solve business problems: boosting marketing,
improving brand image, hiring, etc. Chatbots seem to be the next big revolution in the
way people engage with organisations online, in what will drive the next wave of
digital commerce and, above all, in what concerns a real virtual personal assistant.
With the new generation of chatbots, we have moved from stilted binary conversations to
the understanding not only of a spoken or typed word, but also of the context in which it
is used. The great progress of NLP technology currently allows chatbots to conduct
almost human conversations. They can even be assigned a personality that matches the
brand's needs while enriching the customer's experience. Many companies give their
chatbots a unique character that attracts customers and strengthens relationships with
them. If chatbots are destined for such a bright future, it is because they will converse
with us in an increasingly sophisticated way, which will make life a little easier for us:
we will no longer have to type anything on the keyboard; we'll just ask James and it will
serve us on the spot.

The human being is endowed with the instinct of conversation (especially women); we
are made to speak, discourse, argue, chat, etc. We love stories, fables, tales, myths,
legends and fiction. So, if we have a companion that wakes us up in the morning, recites
our personal agenda, reminds us of a particular date, appointment or detail, reads us a
book, tells us a story, gives us the news, translates a particular text for us, sings us a
song, etc., it can really be very useful and pleasant. And if it bothers us, we just
deactivate it.

These chatbots - fed on Machine Learning with an NLP sauce and a touch of voice
recognition and speech synthesis - will obviously be connected to the Internet, social
networks, the IoT, etc., but also to our own personal environment, so that they could
quickly become our artificial alter egos.
But to get to this point, Deep Learning's processes and its avatars may not be enough. It
will be necessary to move up a gear with a higher technique. Biomimicry will have to be
applied with subtlety: our chatbot must be endowed with human-like functionalities.
Our chatbot will therefore be equipped with three interdependent systems - cognitive,
emotional and enteric - which will have been previously modeled to be operational. The
interaction between the cognitive system and the enteric system will be similar to that of
the human vagus nerve. As a result, our chatbot, in close connection with us all day long,
would gradually transform into an artificial clone of us and would develop, in no time at
all, an artificial consciousness.
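Purely as a thought experiment, the three systems could be caricatured in code. Every class name, signal and threshold below is invented for illustration; the "vagus" link is simply the enteric signal feeding the emotional appraisal before cognition decides:

```python
# Toy three-system chatbot: cognitive, emotional, enteric, with a
# vagus-like channel from gut to appraisal. Illustration only.
class EntericSystem:
    def __init__(self):
        self.gut_signal = 0.0               # a crude "gut feeling" scalar

    def digest(self, stimulus):
        self.gut_signal = 0.8 if "threat" in stimulus else 0.1

class EmotionalSystem:
    def appraise(self, gut_signal):
        return "anxious" if gut_signal > 0.5 else "calm"

class CognitiveSystem:
    def decide(self, stimulus, mood):
        if mood == "anxious":
            return f"respond cautiously to '{stimulus}'"
        return f"respond openly to '{stimulus}'"

class Chatbot:
    def __init__(self):
        self.enteric = EntericSystem()
        self.emotional = EmotionalSystem()
        self.cognitive = CognitiveSystem()

    def react(self, stimulus):
        self.enteric.digest(stimulus)                             # gut first
        mood = self.emotional.appraise(self.enteric.gut_signal)   # vagus-like link
        return self.cognitive.decide(stimulus, mood)              # then thought

bot = Chatbot()
```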

Imagine the final result of this whole network of interconnected individual artificial
consciousnesses.

My point, after laying out these thoughts, is that we need to go beyond Machine Learning
and Deep Learning. We need to build a real Artificial Cognitive System that will supplant
the operating systems on our computers. We need to move forward with a really big step.

Now, after all this long talk, I would like to finish on a somewhat funny note by asking a
question: what would happen if we built a Pavlovian neural network?
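As it happens, Pavlovian conditioning already has a standard computational description, the Rescorla-Wagner rule, in which the association strength between a stimulus and a reward grows with each prediction error. A minimal sketch, with parameter values chosen arbitrarily:

```python
# Rescorla-Wagner learning rule: V grows toward the reward magnitude,
# driven by the prediction error (reward - V) on each trial.
def rescorla_wagner(trials, alpha=0.3, reward=1.0):
    V = 0.0          # association strength between bell and food, say
    history = []
    for _ in range(trials):
        V += alpha * (reward - V)   # learn from the prediction error
        history.append(V)
    return history

history = rescorla_wagner(20)
# Association strength rises trial by trial and approaches the reward value.
```

So a Pavlovian neural network is less of a joke than it sounds: the bell-and-salivation curve is a one-line update rule.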
