Perceptions of Artificial in Artificial Intelligence – Professor Göte Nyman

You may have noticed how the proponents of Artificial Intelligence (AI), especially those with no background in the study of human intelligence, love to forget the meaning of the first word of the term, Artificial, and instead try to make AI sound as human as possible and/or suggest that it will be a threat to us humans (e.g. the recent comments by Elon Musk and Stephen Hawking), will overpower us and will become conscious by the year 20xy.

There are at least two curious forms of underestimation in the current hype around AI: the underestimation of what it is to be a full human being and what it means to live, work and prosper as a community.

A famous neurosurgeon, whom some claim to be the best in his field, Juha Hernesniemi, who worked at the Helsinki University Central Hospital, has told in interviews (in Helsingin Sanomat, for example) how he has performed more than 16,000 operations. He has the reputation of being the most skilled and also the fastest brain surgeon in the world. To some this sounds like the promising future of robot surgeons, which (not ‘who’) can be tireless and accurate, and can learn to be better than any human being. If a human can do it, a robot can do it, even better and more.

But Hernesniemi also tells how he reacts to mistakes and failures: how he always wants to face the family and loved ones of his patients directly and immediately after the operation, even and especially when something has gone wrong.

He remembers both his professional actions and the situations with his patients and their families. He has spent his whole life learning all this. No robot can ever replace what he does in doing so. He cares, makes a human connection, takes responsibility and evaluates his own state of mind and body every day (he is 70 now). Still, I would guess he would not be the first to oppose the use of robot surgeons, but he is certainly a perfect, fully human reference point for the quality demands of any ambitious robot project.

Interestingly, the development of the theory and practice of AI has itself been an endeavor of extensive human collaboration and co-operation. However, the way an individual’s intelligence (IQ) is measured today is the perfect opposite of this: in a typical decision-making or intelligence-testing situation the subject is isolated and is not allowed to ask for help, pray, discuss, or ask for more time to solve the problems at hand. Why is it that when a system as complex as AI is developed, it is natural to rely on free, often slow human communication and collaboration, while the way to measure intelligent performance is to isolate the individual from this useful and valuable community?

It is somewhat of a surprise that it is worth a Nobel prize to show how limited isolated human beings are in making rational decisions, working alone and separated from their natural communities, not to mention from their best colleagues and friends. Had we taken a similar approach and kept ourselves distanced from each other, we would not have the present AI systems in the first place, or even Elon Musk’s wonderful projects and innovations such as Tesla cars, batteries or SpaceX.

Not every researcher of human intelligence has seen us human beings as isolated, cognitive-emotional, self-sufficient specimens.

One of my favorite scientists has been Elinor Ostrom, who won the Nobel prize for economics in 2009 (https://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2009/ostrom-lecture.html).

She was known, for example, for her studies on how communities can successfully manage finite common resources such as grazing land, water and forests. She showed that we can be better together than alone when we trust each other, negotiate the rules, interact, have ways to deal with conflicts, and learn to benefit from cultural heuristics of problem solving.

The future of AI may well learn from her thinking, especially when AI is socially and technologically implemented in modern factories, firms and other communities. One is tempted to think that we will see AI implementations based on some, if not all, of the principles Ostrom suggested for the way we should take care of the commons.

I’ve chosen only three of her eight principles here as an example and as inspiration for the participatory implementation of AI:

  1. Define clear group boundaries. In production and services there are typically several units, groups or teams working together, each with their own local and other interests, competing for limited resources and for their role in the organization. When AI is introduced, conflicts of interest will emerge, so that, following Ostrom’s principle, it is necessary to identify and define these boundaries of interest at an early stage, and to see where the problems might occur. Then it is possible to start negotiating and building rules for progress. The worst-case scenario is that AI is introduced top-down in the organization and the emerging conflicts remain invisible until it becomes necessary to solve them forcefully. This will probably happen in many future organizations, but it is costly and inefficient, and especially in Finland and other European countries it can lead to serious problems with the unions.
  2. Ensure that those affected by the rules for governing the use of common goods can participate in modifying the rules. In short, this is an encouragement for firms to rely on participatory development and on creating the rules of AI use together when AI is adopted, for example at an industrial plant or in a service environment. But this is a new challenge: how do you define the rules for working with an extensive AI system that has a major impact on how work and production are organized, for example? With my engineering colleague Ossi Kuittinen, co-founder of SimAnalytics, I have analyzed these phenomena, followed these dynamics closely in real industrial life, and become convinced of their impact.
  3. Make sure the rule-making rights of community members are respected by outside authorities. This is a direct demand, concerning especially management, owners and other influential stakeholders, to let the system evolve without unwise interference. It is a matter of trust and often difficult to achieve, especially in the complex and fast-developing environments where AI is typically implemented.

Why Elinor Ostrom? You have probably seen, from the above principles, the importance of trust in the abilities of the working and collaborating community to deal with demanding decision-making and resource-sharing problems. Of course, this does not happen automatically, and a new type of management competence is required to make it happen. When Artificial Intelligence is introduced, it is possible to see it not as a competitor to human and social competences, but as an associate and a partner that becomes almost like a member of the community. But there is a long way to go before AI can be a real community member. It is still Very Artificial Intelligence, but we are moving in that direction, and it is time to become better prepared.

  • Professor of Psychology, University of Helsinki, Finland
  • Columnist for FinnishNews; https://www.finnishnews.fi
  • His blog: http://gotepoem.wordpress.com/
  • His latest book, Perceptions of a Camino (available from Amazon and as a Kindle edition), received a 5-star Readers’ Favorite review (USA).
