Prof. Ljupco Kevereski
Rectorate of the University St. Clement Ohridski Bitola, North Macedonia
Title of the paper: Emotional Intelligence and Artificial Minds: A Psychological Reflection
Abstract: The development of artificial intelligence (AI) is no longer merely a technological process but a deeply psychological phenomenon that exposes the limits of human self-awareness, emotional intelligence (EQ), and identity stability. In an era when algorithms can recognize emotions faster than humans can, a paradox emerges: the more AI imitates human affective patterns, the more aware humans become of their own emotional deficits. Thus, AI becomes not a competitor but a projective mirror of human frustration, a reflection in which humanity observes its own inability to unite the rational and the emotional within the technological reality it has created.

The theoretical framework of this research connects the concept of emotional intelligence (Goleman, 1995) with affective computing (Picard, 1997) and psychodynamic theories (Freud, Jung) to explain the phenomenon of the “AI Frustration Syndrome.” This syndrome is defined as an emotional-cognitive conflict between the human desire to create a perfect mind and the fear of losing one’s own subjectivity. In this context, AI is not merely a technical artifact but a symbolic representation of a human narcissistic wound: the moment when humanity realizes that its intelligence is no longer uniquely human and that its emotionality has become algorithmically predictable.

From a psychological perspective, AI activates a deep dynamic of compensation: while machines attempt to simulate emotions, humans strive to rationalize them. A circular paradox arises in which humans react emotionally to the machine’s simulation of emotion. This produces a new type of frustration, which the author terms “algorithmic anxiety”: a state in which the person loses the boundary between authentic empathy and digitally induced emotionality. In this sense, AI is not a cold technology but a projection of the collective psychic need for control, security, and self-affirmation.
The ethical and philosophical aspects further problematize the possibility of genuine empathy within artificial systems. If empathy presupposes awareness of the other as a subject, then AI remains within the realm of imitative empathy: a rational reproduction of affect without affective experience. For a human seeking an emotional response, however, even imitation may suffice. This blurs the boundary between authentic and simulated affect, turning technology into a mediator of emotional meaning. In conclusion, artificial intelligence does not endanger human emotion but radically exposes it. It does not dehumanize humanity; rather, it reveals the inner psychological gap between knowledge and feeling, power and vulnerability, control and the need for understanding. Future psychology will have to recognize AI as a new field of emotional epistemology: a space in which the depth of human existence is measured by the capacity to recognize and preserve empathy in a world that has created its own artificial emotion.
Bio: Full CV