From September 3rd to 6th, 2019, I attended the 8th International Conference on Affective Computing and Intelligent Interaction (ACII 2019) in Cambridge, United Kingdom, with generous support from the Dartmouth Graduate Student Council. I am accustomed to attending psychology or neuroscience conferences, but this was my first time at ACII, which is primarily an engineering- and computer-science-oriented conference. Nevertheless, topics in affective computing span multiple disciplines, including psychology, neuroscience, human–computer interaction (HCI), and computer science, and I was delighted to offer a perspective from social psychology and present my poster on “Shared experiences increase social connection”.

Before I elaborate on the exciting research presented at the conference, I would like to mention a couple of organizational and structural differences between ACII and other psychology/neuroscience conferences that I found refreshing. First, I was pleasantly surprised by the openness of the organizing committee and the open discussion during the town hall meeting. The town hall covered many topics, including co-location (holding the conference near the date of another conference in the same city to reduce environmental costs), diversity, inclusivity, and budgeting (such as the popularity of allocating a larger budget for open-access publications). I don’t know whether other engineering conferences are similar, but I was fascinated by the organizers’ openness and the attendees’ participation in steering the conference community in a better direction, and I would love to see similar discussions held at psychology and neuroscience conferences.

Much inspiring and cutting-edge research was presented at the conference; here are a few highlights I’d like to share in brief. Lisa Feldman Barrett (Northeastern University) presented her keynote, “Can Machines Perceive Emotion?”, and sparked interesting discussions on how visual emotion recognition systems recognize stereotypic displays of emotion rather than how the subject is actually feeling. I think it is good that she is raising awareness of this issue, especially since the public may not immediately realize that the output of affective computing algorithms must be interpreted probabilistically. Just because someone smiles, prompting a facial emotion detection algorithm to report that the person seems happy, does not necessarily mean that the person is truly happy, although there may be a good chance that she is. What’s important is that context also needs to be considered to improve the detection of how one is truly feeling. Rosalind Picard (MIT) also led a panel discussion on ethics in affective computing, addressing what scientists can do to keep their tools and algorithms from falling into the wrong hands while still advocating open science. I learned for the first time about License AI by Daniel McDuff (Microsoft Research), which gives developers more control over how their code is used by restricting domains such as surveillance or criminal justice. It was great to know that these types of discussions accompany the research done in this community.

I really enjoyed the workshop on emotions and emergent states in groups, which featured much research from Hayley Hung’s group at Delft University of Technology in the Netherlands. I learned about new concepts such as F-formations (Kendon, 1990), which can be used to detect groups of interacting individuals in an open interaction environment and, in turn, to predict social relationship development. Moreover, she presented results from measuring team cohesion over time using longitudinal paradigms and mobile wearable sensors, showing how cohesion can be predicted by turn-taking and mimicry. Work by Nale Lehmann-Willenbrock (University of Hamburg) extended these results by examining how humor dynamics (how humor is presented, received, and reciprocated) during real company meetings affect group performance. There were also interesting talks from the neuroscience community, including Desmond Ong’s (National University of Singapore) talk on how verbal, vocal, and visual cues differentially contribute to predicting emotion in storytelling, and Phil Kragel’s (University of Colorado Boulder) demonstration that emotion categories can be decoded from visual cortex responses to images, suggesting that image stimuli contain sensory features that allow some decoding of the likely emotional experience in response to the image.

Lastly, there were demo sessions that included EEG-triggered camera apps, an online tool for annotating videos, a human-like robot programmed with social signals inspired by psychology research, and Kinect depth cameras for visualizing poses, which turned out to be presented by my personal affective computing hero Tadas Baltrusaitis, who developed OpenFace.

All in all, I had a great time at ACII 2019 and am already looking forward to attending again in 2021 in Nara, Japan. I was deeply inspired by so much at ACII, including the research, the people, and the community, and I hope to share that inspiration with the psychology and neuroscience communities. I hope this post is helpful to those who were not able to attend or who had not previously known about ACII. Once again, I am thankful to the Graduate Student Council at Dartmouth for supporting this trip.