Bravo to the movie "Green Book"!
It is natural that the movie won the Oscar for Best Picture.
Perhaps people are moved by the movie not because it is a moving story of the past, but because it digs up problems that still exist in our current world.
Racism and discrimination have never disappeared, and it is a sad fact that AI technologies are said to have the potential to amplify them.
As an AI researcher, I would like to say that this is not the fault of AI; rather, the big data accumulated from the internet, processed by the latest statistical methods that are often called AI technologies, have begun to reveal the hidden existence of racism and discrimination.
We, the old AI researchers, have continued to study what intelligence is, and we know that intelligence is, of course, far more than the statistical analysis of data.
The statistical methods developed in the current third AI boom have made great progress in filling the gap between data and information.
As a result, unfortunately, we have obtained the information that racism and discrimination are universally embedded in big data.
However, we know that data and information are only a small part of intelligence.
We, the old AI researchers, have struggled to deal with knowledge
and wisdom beyond data and information.
The fact that most people who watch the movie "Green Book" are moved shows that people have a natural feeling that fighting against racism and discrimination is the right way.
If this natural feeling is not well embedded in big data, we, the AI researchers, should develop AI technologies that can gather it.
I dare say that this is not such a difficult problem.
I do not think that we, the AI researchers, should directly embed any moral judgement into AI by design, but I believe that we can build AI systems that incorporate the democratic assembly of human knowledge and wisdom.
The AI systems may not behave in the way we designed them to, because AI can change its behavior based on the results of learning.
If racism and discrimination are hidden in big data, what people should do is continuously feed our knowledge and wisdom into AI systems to counterbalance the big data.
We can do this by verbalizing our knowledge and wisdom. AI systems will be able to learn how to make correct decisions by applying mostly the same methods used for learning from data to this verbalized form of our knowledge and wisdom.
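To make this concrete, here is a minimal sketch of what "the same methods of learning from data" might look like when applied to verbalized statements. It is only an illustration, not a description of any particular system, and every sentence and label in it is an invented placeholder.

```python
# Illustrative sketch only: one and the same statistical pipeline consumes
# both fragments mined from big data and statements that people have
# deliberately verbalized. All texts and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fragments found in big data (hypothetical); label 1 = fair judgement, 0 = discriminatory.
mined = [
    ("they should not be hired because of where they come from", 0),
    ("qualifications, not origin, should decide who is hired", 1),
]

# Knowledge and wisdom that people have verbalized on purpose (hypothetical).
verbalized = [
    ("judging a person by skin color is wrong", 1),
    ("everyone deserves equal respect regardless of background", 1),
    ("some groups of people are naturally less capable", 0),
]

texts, labels = zip(*(mined + verbalized))

# The verbalized fragments enter the very same learning method as the mined data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["promote people based only on their ability"]))
```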
To sum up, whether AI can be moral is not a problem of AI but a problem of our society. We should verbalize our knowledge and wisdom.
The verbalized knowledge and wisdom need not be compactly structured.
Current AI technologies are good at dealing with high-dimensional big data. Hence, we should simply accumulate an enormous number of fragments of our knowledge and wisdom and feed them continuously into AI systems.
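As a purely illustrative sketch of such continuous feeding (again with invented fragments, not any existing system), loosely structured text can be mapped into a fixed high-dimensional space and the model updated incrementally each time a new batch of verbalized fragments arrives:

```python
# Illustrative sketch of continuous feeding: a hashing vectorizer maps loosely
# structured text fragments into a fixed high-dimensional space, and the model
# is updated incrementally with each new batch of verbalized fragments.
# All fragments and labels are invented placeholders.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # high-dimensional, needs no fixed vocabulary
classifier = SGDClassifier()                      # supports incremental updates via partial_fit
CLASSES = [0, 1]                                  # 0 = discriminatory judgement, 1 = fair judgement

def feed(fragments, labels):
    """Update the model with one more batch of verbalized fragments."""
    X = vectorizer.transform(fragments)
    classifier.partial_fit(X, labels, classes=CLASSES)

# Fragments are fed in continuously, batch by batch, as people verbalize them.
feed(["judging people by their origin is wrong"], [1])
feed(["only members of our own group deserve service"], [0])
```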
I believe that the AI systems embedded in our society should enhance the democratic aggregation of our knowledge and wisdom.
In other words, we, the AI researchers, should build AI systems to solve the problems caused by AI systems, as I wrote in "Toward AI-embedded Society where AI is Not Recognized as AI."
The system we have built, which has a knowledge base of ethical discourse, is a first small step in this direction.
© 2019 Koichi Hori