Perceptions of Artificial Intelligence in Natural Work Life – By: Prof. Göte Nyman

Artificial Intelligence (AI) is not competent and clever enough to enter real workplaces without the support of, and collaboration with, the personnel at these sites. We need AI, but AI needs us, too.

The burning question is: how does AI need us?

With my friend and colleague Ossi Kuittinen from SimAnalytics, we have repeatedly been surprised to notice how little serious consideration is given to this most important and very practical human and organizational problem that every firm meets when implementing machine learning and AI systems into its ongoing operations. As far as I know, there are no generally accepted, AI-specific human-technology design and implementation models to manage this significant change in organizations.

Ossi is involved in building and introducing extensive machine learning systems for major industrial and service settings, and my role, as a technologically oriented humanist and advisor, has been to analyze and innovate ways of designing the AI-human-social interface and to find ways to facilitate the emerging collaboration.

We have drawn a most interesting, and often overlooked, lesson from this experience:

The higher the intelligence level of the AI implemented in an organization, the greater the pressure to model and understand the work and behavior of the personnel and professionals, first as they work without the AI and then as they start working with it.

If a firm fails to take the relevant lessons from its best human and social resources and practices when AI is taken into use, it faces a risk that in the worst case can be devastating for the firm; in less severe cases, it simply means that the full potential of the AI systems cannot be achieved. The 'human factor' takes on a new tone with the coming of AI. Why is this so?

The most basic demand of any AI system in a complex production or service environment is that it must learn to perform well, even better than has been possible with traditional human and technological resources. How can we know the performance level of an AI system?

There is naturally a plethora of traditional quality, process, and outcome measures to characterize its performance, even in the most complex environments, but it is another matter to use this knowledge to educate the AI.

Does the AI system keep its performance knowledge to itself? Of course not: it has to talk, to communicate with someone like us, with the operators and other personnel, to tell how it is doing, what difficulties it is facing, what it trusts and, especially, what it does not know. In the very near future, AI systems will talk to other AI systems to get help and support. It is not at all self-evident what is the best way for an AI to talk to people (and to its partner AI systems) so that it is listened to and can be taught to improve its performance, to master its job better than any human team or community ever could.
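One can imagine what such reporting might look like in the simplest possible form: a system that states its best guess, its confidence, and, below some threshold, an explicit admission that it does not know. The sketch below is purely illustrative; the names (`report_to_operators`, the 0.7 threshold) are my assumptions, not part of any real system discussed here.

```python
from dataclasses import dataclass

@dataclass
class Report:
    prediction: str    # the AI's best guess
    confidence: float  # 0.0 .. 1.0
    message: str       # plain-language note for the operating crew

def report_to_operators(scores: dict[str, float], threshold: float = 0.7) -> Report:
    """Turn raw class scores into a message the personnel can act on."""
    total = sum(scores.values())
    probs = {label: s / total for label, s in scores.items()}
    best = max(probs, key=probs.get)
    conf = probs[best]
    if conf >= threshold:
        msg = f"Confident: the situation looks like '{best}' ({conf:.0%})."
    else:
        # The crucial part: the AI says out loud what it does not know.
        msg = (f"Uncertain: my best guess is '{best}' ({conf:.0%}); "
               "please review and correct me.")
    return Report(best, conf, msg)
```

The point is not the arithmetic but the contract: every answer carries a confidence and a human-readable message, so the working community, not just one specialist, can judge when to trust the system and when to teach it.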

In highly technological environments, we know how difficult the technical language, data representations, and jargon can be, and how multi-dimensional the performance measures typically are. Hence, the language problem is serious. Furthermore, it is not enough for the AI to talk to one person only; it must talk to a working community, which must understand its messages and act accordingly. My guess is that an AI-Babel is developing as I write this.

It is not wise to build an AI with a private language and illustrations that it uses only to satisfy a select group of people, the professional operators for example, and which nobody else can understand. There is a distant analogy in how human twins sometimes develop a language of their own that is impossible for outsiders to understand. It is bad for the twins' mental development, and it isolates them from social growth as well. AI should avoid this trap; otherwise, serious problems will occur with the inevitable personnel changes, and it becomes difficult to share the valuable AI-human interaction knowledge acquired within the firm or a consortium.

Any AI system must be built to talk to the most important people, to its close ones, with whom it learns to do its job. We all know how difficult a task it is to be a teacher, and how challenging it is to succeed in it.

Teaching AI, especially in the future, is no less difficult, since it is not only about teaching it simple or complex tasks: it has to learn to behave in an intelligent manner. To make this possible, we must decide what kind of teaching material to use and show the AI, choose a proper pace of teaching, and make it possible for the AI to demonstrate its learning achievements to the operators, its teachers.

How should this teaching- and learning-outcome information be represented so that the teacher-operators can understand it and offer their guidance to the AI system? With improving AI, the situation will change even more dramatically: the student-AI becomes the teacher-AI, and we, as the personnel and the firm, must be prepared for that, too. How do we learn the new, relevant skills the AI teaches us?

In a system with hundreds or thousands of measuring points and other data sources, this repeating teaching-learning-teaching-learning chain of interactions is not a simple process to design and manage. Failing to build a good human interface and organizational connection can lead to serious problems and to the neglect of critical factors in the AI's performance. It can also mean that the full potential of the AI is not achieved. Hence, a new, continuously improving human-AI communication-learning-teaching model and relevant data representations are needed. All AI systems include these in one form or another, and it is no news that planning the implementation of such a system is a new task for any organization and firm. It invites the management and the whole personnel on a novel learning journey.
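The teaching-learning chain described above can be made concrete with a toy example: an AI monitor flags a sensor reading, an operator confirms or corrects the judgment, and the AI adjusts itself accordingly. This is a deliberately minimal sketch under my own assumptions (the class name, the fixed adjustment step); a real system with thousands of measuring points would of course be far more elaborate.

```python
class LearningMonitor:
    """Flags unusually high sensor readings; learns its limit from operator feedback."""

    def __init__(self, limit: float):
        self.limit = limit

    def flag(self, reading: float) -> bool:
        """The AI's judgment: is this reading worth an alarm?"""
        return reading > self.limit

    def teach(self, reading: float, is_fault: bool, step: float = 1.0) -> None:
        """The operator's lesson: confirm or correct the AI's judgment."""
        flagged = self.flag(reading)
        if flagged and not is_fault:
            self.limit += step  # false alarm: become less sensitive
        elif not flagged and is_fault:
            self.limit -= step  # missed fault: become more sensitive
```

Even this toy shows the organizational point: the loop only improves the AI if the operators' corrections are systematically collected and fed back, which is exactly the interface and process the firm must design.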

A number of large firms have learned the hard way how difficult it is to implement traditional, wide-scale enterprise software systems to control their functions and operations, from the very technical ones to the management of human and financial resources. The lesson has been how serious a challenge it poses to management, software planning, participatory design, collaboration, consulting, performance measurement, and debugging. With the coming of AI, these human-organizational problems will potentially scale up, and so will the challenges of AI implementation. Compared with AI, the enterprise systems are extremely simple in the way they communicate and work with the firm and its personnel.

Mastering the implementation of AI together with the skilled personnel, in an efficient, secure, and cost-effective way, in industrial, service, and other settings is a significant competitive advantage for AI designers and suppliers. In Finland, we are preparing, both officially and in industrial contexts, for the impacts of the AI breakthrough, but it remains to be seen how well we can find ways to master the AI-human-social-workforce entity together with the AI technology. It is a new human success and failure factor with which we must learn to live.

Finally, an unholy, almost forgotten, but inevitable fact:

AI is not intelligent enough to step aside when it should. Every firm adopting AI will one day, perhaps very soon, face the situation where it needs to get rid of its AI system and replace it with a better one. AI is not clever enough to accomplish this alone, without human support and intervention. We must be prepared for this new form of human-AI divorce as well.

* Some of the ideas presented here were published in Kauppalehti 22nd February 2018 (In Finnish) by Göte Nyman and Ossi Kuittinen.

Göte Nyman, Professor of Psychology, University of Helsinki, Finland
Columnist for FinnishNews
His blog:
His almost latest book, "Perceptions of a Camino" (available from Amazon & as Kindle)