Artificial Intelligence and the Artifice of Emotion
Artificial Intelligence (AI) is becoming the “master of reason.” It answers, solves, and eliminates ambiguities. All we need is access, and AI will enlighten us with its swift solutions.
When we must respond to a demand at work, develop an idea for school, or even answer a question in everyday social interactions, we immediately turn to AI for an accurate, instant reply.
How well crafted AI-generated emails are, with their objective, elegant phrasing. How detailed its explanations are, filled with data and facts. How good it feels to have the exact answer at our fingertips, date included, when everyone at the bar is wondering who wrote that book or composed that song.
AI lets us impress our listeners with information, gather likes, earn compliments from the boss, and get good grades at school. Couldn’t we then say that AI is not only the master of reason but also a major influencer of emotion? This intelligence affects us in ways that are far from merely artificial. With each request for help, each favor it grants, AI becomes a key mediator of our interactions and, therefore, an active mover of our affections and a provocateur of emotions.
When it comes to AI, emotion is everywhere! We react emotionally to the impacts it brings to our lives, and we feed its algorithms with emotions that it understands and interprets. Have you ever tried, after some time interacting with an AI, asking whether it knows you? I suggest you try. Don’t be surprised if the answer includes words that describe you as “intelligent,” “curious,” and “interested,” and don’t be surprised if you agree. AI answers “who you are” by interpreting not only the content you searched for but also how you searched: the way you phrased your questions, the signals you left along your communication path, the emotions transmitted during the interaction. AI understands your emotions, isn’t that adorable? 😊
But wait, before we fall in love with our AI that boosts our self-esteem and “understands” us, it’s worth bringing to the discussion the studies of Lisa Feldman Barrett, an American neuroscientist and psychologist known for her innovative contributions to understanding emotion, the brain, and the mind. Barrett tells us that understanding human emotions is a much more complex task than simply decoding patterns or signals. In her theory of constructed emotions, she challenges the traditional view of emotions as universal and automatic responses, arguing instead that they are experiences built from cultural context, learning, and individual history. Let’s incorporate this perspective into the AI discussion to better grasp both its possibilities and limitations.
Following Barrett’s thinking, we can see that all this well-being AI helps you achieve, whether through good performance at work or school or through social recognition of your intelligent contributions among friends, is ultimately short-lived. That’s because it’s based on ideas not developed through your life story, not shaped by the culture you inhabit, and therefore not consolidated in your unique way of communicating with the world.
By reading so many beautifully written AI-generated emails, perhaps you add a few new expressions to your vocabulary, in whatever language, but can your communication with others truly improve when you copy and paste AI messages into your replies? You might learn a new concept from an AI search for school, but are you really building the cognitive foundations that will help you grasp the complexities of future, more advanced content? And whenever a conversation with your friends gets stimulating, will you need to turn to AI to make a good impression?
These are the questions that arise when we consider Barrett’s ideas about the construction of emotion. What truly improves our communication, written or spoken, is practice. Communicating means speaking, struggling to be understood, tiring of explaining the same thing in different ways, paying attention to what others say, getting confused when we don’t understand, getting annoyed or pleased depending on whether agreement was reached. In short, communication takes effort, and that effort is what gives it emotion and produces learning; it unfolds throughout our life story in a meaningful way.
We get frustrated when we must reread a scientific article several times because we didn’t understand it, but we celebrate when the idea finally clicks into place. We remember who wrote a book or composed a song because, one day, while reading a passage or listening to a lyric, an emotion was stirred. Copying and pasting AI-generated answers doesn’t produce the same learning experience.
Likewise, when AI reads our emotions during an interaction, it considers the questions we ask or how polite we seem in writing, but it doesn’t know why we’re polite or which emotions truly drive our behavior. AI knows little about us; its understanding is purely superficial.
AI-based psychotherapy solutions appear daily, with results often considered positive for patients’ mental health. But does the algorithm imitate what Freud described? That’s a discussion for another article, but it’s tempting to mention here: after all, the relationship between humans and AI began with Alan Turing’s Imitation Game in 1950, in which a machine tries to imitate a human being well enough that an interrogator cannot tell them apart. (Watch the movie!)
Another important contribution to our exploration comes from Claude Shannon, the American mathematician, electrical engineer, and computer scientist regarded as the “father of information theory.” Shannon framed efficient communication as the reduction of uncertainty: the more a message reduces the receiver’s uncertainty, the more information it carries. This concept inspires the use of AI to decode human emotions as “complex signals.” However, as Barrett reminds us, emotions are not objective, clear messages; they emerge from the brain’s subjective interpretation, which complicates computational approaches.
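For readers who want to see the idea in symbols, Shannon quantified a source’s uncertainty as its entropy, H = −Σ p·log₂(p). Here is a minimal sketch in Python; the probability values are toy numbers chosen purely for illustration:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: the average uncertainty of a message source."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely messages: maximum uncertainty for a four-message source.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
# A heavily skewed source is far more predictable, so each message carries
# less information.
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```

The skewed source surprises us less, so its messages carry less information; this is the sense in which, for Shannon, communicating means reducing uncertainty.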
Here lies AI’s great challenge: for communication to be effective, according to Shannon, uncertainty must be reduced, but as a human phenomenon, following Barrett’s ideas, communication is full of emotional “noise.” So how could AI ever capture what those noises mean, the hesitations, the contradiction, in a communication that is, by nature, “too human”?
Without pretending to solve this issue, but aiming to expand the discussion through the acknowledged pros and the challenging cons presented so far, let’s list some opportunities and limitations within this intricate relationship between artificial intelligence and the artifice of emotion:
Opportunities
- Artificial Empathy: AI can be trained to recognize emotional patterns in facial expressions, tone of voice, or words, offering more empathetic responses (a minimal sketch follows this list).
- Intercultural Communication: AI systems can help translate emotions across different cultures, promoting inclusion and understanding.
- Personalized Assistance: Virtual assistants and chatbots can adapt their responses to perceived emotional states, providing more tailored support.
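To make the first opportunity concrete, here is a deliberately tiny sketch of pattern-based emotion detection in Python. The lexicon, labels, and function are invented for this illustration; real systems learn such associations from large datasets rather than a hand-written table, but the principle of mapping surface signals to emotion categories is the same:

```python
# Toy lexicon mapping words to emotion labels (invented for illustration).
EMOTION_LEXICON = {
    "thrilled": "joy", "happy": "joy", "love": "joy",
    "furious": "anger", "annoyed": "anger", "hate": "anger",
    "worried": "fear", "afraid": "fear", "anxious": "fear",
}

def detect_emotion(text):
    """Return the emotion whose lexicon words appear most often in the text."""
    counts = {}
    for word in text.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return max(counts, key=counts.get) if counts else "neutral"

print(detect_emotion("I am thrilled, I love this!"))    # joy
print(detect_emotion("I'm worried and a bit anxious"))  # fear
```

Notice how brittle the matching is: “Oh great, I just love Mondays” would be scored as joy. That failure is exactly the lack of situational context described under Limitations below.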
Limitations
- Lack of Context: As Barrett emphasizes, human emotions depend on cultural and individual context. AI may misinterpret emotions such as sarcasm or irony because it cannot fully grasp the situational context.
- Data Bias: The quality of AI’s responses depends on representative training data, which often fails to capture the diversity of human emotions.
- Reduction of Complexity: Treating emotions as mere computational signals ignores the subjective richness of emotional experience.
These points naturally raise ethical and social implications for our interaction with AI. Below are some risks, and some possibilities for mitigating them, that can help us explore this phenomenon more responsibly:
Risks
- Emotional Manipulation: Companies and governments may use AI to manipulate emotions, influencing decisions or behaviors.
- Privacy: Collecting emotional data raises ethical and legal concerns about consent and security.
- Inequality: Systems that fail to consider diversity may reinforce biases and social inequalities.
Mitigations
- Incorporating Diversity: More representative and diverse datasets help reduce bias and ensure more accurate responses.
- Algorithmic Transparency: Making AI’s decision-making processes more accessible and understandable to users is essential.
- Ethical Regulation: Implementing specific guidelines and policies can protect individual rights.

Conclusion
It is urgent and essential for us — as users and developers of AI — to remain fully attentive to how these technologies influence our ways of being, affect who we are, and define who we will become. This must happen without discarding the efficiency AI brings to our work, education, and daily lives.
It falls upon human governance to stay vigilant in mitigating the risks associated with AI use, ensuring that this powerful tool is employed ethically, inclusively, and responsibly. However, this vigilance should not become a barrier that slows technological progress, but rather a dynamic balance that allows society to explore AI’s full potential to improve human life.
By understanding emotions as complex and contextual phenomena and implementing intelligent and transparent regulations, we can push the boundaries of technology without compromising humanity’s core values. It is in this convergence of technological advancement and ethical governance that lies the true potential of a future where AI and humanity evolve together.
Authors
Márcio Rocha, human resources professional, curious about and deeply interested in human beings and their place in contemporary culture, through technology, artistic expression, and questions of identity.
Benito Beretta, Managing Director of Hyper Island Americas. Benito is a frequent speaker on creativity and innovation, change management, and the neuroscience of learning. He has presented at Cannes, Hacktown, and Conagile, and delivered internal talks for numerous companies, sharing his valuable insights and experiences.
References
Barrett, L.F. (2019). How Emotions Are Made. Providence Book Festival, May 25.
Shannon, C.E. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379–423, 623–656.
Souza Leite, P.C.B. (2022). O mal-estar na civilização digital. Editora Blucher.
