Emotional Artificial Intelligence by Simulating Neurotransmitters in the Brain
Posted on Thu 07 May 2020 in Post
Based on a paper from Kazan Federal University, Russia
Emotional Model: Lövheim's cube and Tomkins
While there is no perfect, agreed-upon model of human emotions, a promising model that is simple enough to implement is Lövheim's cube of emotions. The cube explains eight of the nine emotional states (affects) defined in Tomkins' affect theory, which lists distinct emotions such as joy, surprise and disgust. Lövheim's cube uses a three-dimensional vector to produce a resultant emotional state, where the three dimensions are given by the levels of neuromodulators (or neurotransmitters) in a person's brain.

The theory builds on research into the three monoamine neuromodulators dopamine, serotonin and noradrenaline, which are central to behavioural control in humans and in other animals from lobsters to mice. Lövheim points to the evolutionary conservation of this behavioural control system as a strong sign of its importance, since "[t]he environment an organism encounters is very complex, therefore, a system of behavioural control cannot be specific to every possible situation." In other words, organisms need behavioural control that generalises, and the neuromodulators provide an effective generalised system for it. Lövheim also notes that behavioural drugs targeting these neuromodulators, such as antidepressants and antipsychotics, are effective, which further supports the system's central role.
In the cube, the quantities of the three neuromodulators form a vector from one vertex, and that vector points towards one of the eight vertices, each of which corresponds to an emotional state or affect. It may then be possible to quantify the strength of an emotion, and to represent a mixture of several emotions, as a point within the cube. The representation also allows the emotional state to change freely in any direction, becoming stronger, weaker, or shifting towards a different affect. The model is also simple to apply in a basic AI, where having distinct emotional states is useful for controlling tone and facial posture in a conversation.
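As a rough illustration, here is a minimal sketch of reading an affect off the cube. The low/high assignment of serotonin, dopamine and noradrenaline at each vertex follows the commonly cited table in Lövheim (2012); the 0–1 scale, the 0.5 threshold and the intensity measure are illustrative assumptions rather than anything specified in the papers.

```python
from math import sqrt

# Affect at each cube vertex, keyed by whether serotonin, dopamine and
# noradrenaline are high, following the commonly cited table in Lövheim (2012).
AFFECTS = {
    (False, False, False): "shame/humiliation",
    (False, False, True):  "distress/anguish",
    (False, True,  False): "fear/terror",
    (False, True,  True):  "anger/rage",
    (True,  False, False): "contempt/disgust",
    (True,  False, True):  "surprise",
    (True,  True,  False): "enjoyment/joy",
    (True,  True,  True):  "interest/excitement",
}

def classify(serotonin: float, dopamine: float, noradrenaline: float):
    """Map neuromodulator levels (assumed scaled to 0..1) to the nearest
    vertex affect and a rough intensity (1 = at the vertex, 0 = cube centre)."""
    levels = (serotonin, dopamine, noradrenaline)
    key = tuple(level >= 0.5 for level in levels)          # snap each axis to low/high
    vertex = tuple(1.0 if high else 0.0 for high in key)   # coordinates of that vertex
    distance = sqrt(sum((l - v) ** 2 for l, v in zip(levels, vertex)))
    max_distance = sqrt(3) / 2   # distance from the cube centre to any vertex
    return AFFECTS[key], 1.0 - distance / max_distance

print(classify(0.9, 0.8, 0.2))   # high serotonin and dopamine, low noradrenaline -> enjoyment/joy
print(classify(0.1, 0.2, 0.9))   # low serotonin and dopamine, high noradrenaline -> distress/anguish
```

Because the state is just a point in the cube, blending emotions or letting them drift over time reduces to ordinary vector arithmetic, which is part of what makes the model attractive for programming.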

Source: "A new three-dimensional model for emotions and monoamine neurotransmitters" - Hugo Lovheim in Medical Hypotheses 78 (2012) 341–348
Making the AI
Voice assistants such as Amazon Alexa and Apple's Siri already exist; they combine speech recognition and speech synthesis components layered over a simple question-and-answer system. Current voice assistants are limited to simple requests and informational questions, alongside a few pre-programmed responses. The next level in developing voice assistants, and AI such as the one described in the paper, is to add features including context awareness and emotional reaction. An effective AI assistant would be able to recognise emotions in the user's voice, which makes conversation feel more natural and may enable the AI to interpret things like sarcasm. Speech recognition and synthesis are already sufficiently developed, as seen in current voice assistants; the next step is adding emotion to both recognition and synthesis, which requires emotional states to be built into the AI and its response generation.
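To make that concrete, here is a hypothetical sketch of where a neuromodulator-style state could sit in an assistant's reply loop. The component names and the nudge values are invented for illustration, not an API from the paper; only the idea of an internal state expressed as three neuromodulator levels comes from the model above.

```python
from dataclasses import dataclass

def _clamp(x: float) -> float:
    return max(0.0, min(1.0, x))

@dataclass
class EmotionalState:
    serotonin: float = 0.5
    dopamine: float = 0.5
    noradrenaline: float = 0.5

    def nudge(self, ds: float, dd: float, dn: float, decay: float = 0.1) -> None:
        # Move toward the stimulus while drifting back toward a neutral baseline.
        self.serotonin = _clamp(self.serotonin + ds - decay * (self.serotonin - 0.5))
        self.dopamine = _clamp(self.dopamine + dd - decay * (self.dopamine - 0.5))
        self.noradrenaline = _clamp(self.noradrenaline + dn - decay * (self.noradrenaline - 0.5))

# Illustrative guesses at how a detected user affect might shift the
# assistant's own state (e.g. user anger raises the assistant's noradrenaline).
USER_AFFECT_TO_NUDGE = {
    "joy": (0.2, 0.2, 0.0),
    "anger": (-0.1, 0.0, 0.3),
    "distress": (-0.2, -0.1, 0.2),
}

def respond(state: EmotionalState, detected_user_affect: str, answer_text: str) -> str:
    """Colour a plain answer with a tone chosen from the assistant's current state."""
    state.nudge(*USER_AFFECT_TO_NUDGE.get(detected_user_affect, (0.0, 0.0, 0.0)))
    if state.noradrenaline > 0.7:
        tone = "calm and reassuring"
    elif state.serotonin > 0.6 and state.dopamine > 0.6:
        tone = "upbeat"
    else:
        tone = "neutral"
    # A real assistant would hand the tone to the speech synthesiser and face.
    return f"[{tone}] {answer_text}"

state = EmotionalState()
print(respond(state, "anger", "Your meeting has been moved to 3 pm."))
```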
This next level can potentially be reached by replicating the cognitive processes that people use in conversation: memory, decision making, perception, understanding, judgement, language and emotion. Development of these features is steady, but they are not the only roadblocks to more realistic human-machine interfaces. It could also be very useful to give the AI a facial interface to help convey emotions, while tracking the user's face to detect theirs. Using the eight Tomkins affects covered by Lövheim's cube, it is possible to present a face for the AI that is capable of emoting, but the major remaining barrier is the uncanny valley. The uncanny valley, by definition, is when artificial animation is so close to real that it is difficult to pinpoint exactly what is unconvincing, yet it remains unconvincing nonetheless. The problem this creates for AI is the creepy feeling users experience and their inability to trust an agent that sits in the uncanny valley. Answers to this problem are yet to be found, and the researchers state that more research into the uncanny valley's effects on users is needed.
Conclusion
Ultimately the paper is very brief, which may suggest that the model with emotions was not a significant improvement. However, its conclusion about what needs to be done to improve AI, namely adding emotions to the input and output as well as to the AI's thought process, seems a fair subject for further development. It remains the case that scientific research into human emotions, in terms of both psychology and physiology, is inconclusive; attempts to model AIs on human emotions could be effective, but the models may need to become very complicated, or wait for some new and more promising model of emotions. Lövheim's cube, using a vector of three chemicals in the brain, is very attractive for programming and seems like an effective start, but it might be too simple for an AI to climb out of the uncanny valley.
References
Subject Paper: "Anthropomorphic Artificial Social Agent with Simulated Emotions and its Implementation", Vlada Kugurakova, Maxim Talanov, Nadir Manakhov, and Denis Ivanov, Procedia Computer Science, Volume 71, 2015, Pages 112–118
https://doi.org/10.1016/j.procs.2015.12.217
"A new three-dimensional model for emotions and monoamine neurotransmitters", Hugo Lövheim, Medical Hypotheses 78 (2012) 341–348