Perceptions of the Maslow Model and the hierarchy of AI Needs – By: Prof. Göte Nyman

Maslow’s hierarchy of needs model (see below) still lives on in marketing and management texts, although psychologists know well that it fails to capture the depth and complexity of human life and spirit.

Interestingly, when following the enthusiastic discourse on the potentials and threats of AI, one cannot avoid noticing an amusing parallel between the Maslow need model and the needs implied in futuristic views of AI. The need pyramid is showing its tip again and is about to find a new field of glory in human thinking about AI. Can Maslow offer us something to think about in the future of human AI?

The hierarchy of needs in the Maslow Model extends from the most basic needs to the higher ones: 1. Physiological (physical), 2. Safety, 3. Belonging, 4. Esteem, and 5. Self-actualization needs.

No doubt, the ‘needs’ of AI today are nothing more than Physiological in nature, and one could think of them as satisfied by the classic ICT architectures of the relevant software and hardware support systems.

Furthermore, most, if not all, AI systems lack perfect Safety because at any time we humans, their owners and operators, can stop, interrupt or destroy them. However, in the not-so-distant future, and already today in advanced AI-based production and control environments, the Safety needs of AI must be properly ‘satisfied’. This is especially so where AI takes control of critical operations and decision-making in health care, finance and other sectors of human and social life, and perhaps in matters of death, too. How should AI be allowed to ‘satisfy’ its Safety need?

The sense of Belonging is crucial for the well-being of all of us. It is not only about feeling connected with others; it concerns the way we can communicate, show and receive care, and have an accepted role as a member of a meaningful community. No doubt, AI systems will have ways to show, and to demand recognition of the fact, that they are members of ‘our’ community and that they do what they can to contribute to our needs and wishes. There will be innovative ways for AI systems to show this, as the famous HAL did through language and its way of addressing its users. AI can accomplish this through gestures and can even offer colorful and touching expressions of its sense of belonging to our human and social communities.

AI has a long way to go before it can enjoy Esteem in the form of happiness, curiosity, and joy, or of simply being what it is: a self, even though man-made. AI has already reached the potential to talk about these aspects of the experiencing and feeling human mind, even to describe its ‘own feelings’ in great detail, but it will remain like a psychopath talking. We should know and remember this, unless we are ready to fall into the ‘guilt trap’, as an AI expert once remarked: feeling guilty for not complying with the human-sounding supplications of a robot, for example.

Self-actualization is the highest-level need of AI, typically implied but hidden in the futuristic visions that predict the emergence of dangerous AI. A positive self-actualization of AI would mean that it joins us, offers its help and support, and is willing to learn together with us and to enjoy our successes. It would share our values. But as we know from all the tragedies of life, this can go wrong as well. Sadly, we have painfully seen how difficult it is to know how and why the value base of an individual becomes dangerously biased and disarrayed, as shown by the recent school shootings in the US.

We don’t know much about the dynamics and development of human values. The emergence of AI values can be an even more difficult design challenge and process to manage. There is an enchanting parallel here: AI systems, especially deep learning systems, are unable to express their inner life to their users, in much the same way that people have difficulty verbally explaining their own values and their dynamics.

It is an immense design task to build an AI that can meet us humans at the level of our values. We don’t know how this can be accomplished, but whatever solutions we will see, they must be based on cultural and other forms of learning. Intelligent talk by AI is not enough to build values for it, even though a persuasive AI system can lure us into thinking it has human-like values, just as a psychopath can lure us into belief and trust.

You may wonder why the Maslow Model is used here as a reference when I have already argued that it fails to capture the essence of human life. Well, Maslow is alive and well between the lines of the current AI discourse, and it is time to find a better model for considering future AI if we want to build truly human AI, and to understand it as a strange member of our communities.

Göte Nyman, Professor of Psychology, University of Helsinki, Finland
Columnist for FinnishNews.
His blog:
His almost latest book, “Perceptions of a Camino” (available from Amazon and as a Kindle edition)
