The first talk, “Get Your Virtual Hands Off Me! – Developing Threatening Agents Using Haptic Feedback”, was presented by Tibor Bosse and described how virtual agents can be used to train people for difficult social situations. For example, virtual agents in a VR setting are used to train tram conductors in Amsterdam on how best to respond to aggressive passengers. A continuing challenge in this research is making the simulation believable. The authors presented evidence that a virtual agent who could raise his voice, and whose pushes could be felt through haptic feedback delivered by a vest, improved this believability.

In the second talk, “Omniscient Debugging for Cognitive Agent Programs”, Vincent Koopman discussed how debugging can be made more efficient in agent-based simulations. A powerful debugging technique is to record the full execution of a program so that it can be replayed and inspected, but this slows execution down dramatically (by about 300x). The proposed solution is to record the program at a higher level of abstraction: not every low-level detail, but only the agent’s mental states. This reduced the recording overhead to only about 1.1x the normal runtime.
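The idea of recording only mental states rather than a full execution trace can be illustrated with a minimal sketch. All class and function names here are illustrative, not taken from the actual system described in the talk; the sketch assumes an agent whose mental state consists of beliefs and goals, snapshotted once per deliberation cycle.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MentalStateSnapshot:
    """One recorded mental state: the agent's beliefs and goals at a cycle."""
    cycle: int
    beliefs: frozenset
    goals: frozenset


class MentalStateRecorder:
    """Record only the agent's mental states per deliberation cycle,
    instead of a full low-level execution trace (the source of the
    ~300x slowdown mentioned in the talk)."""

    def __init__(self):
        self.history = []

    def record(self, cycle, beliefs, goals):
        # Snapshotting two small sets is cheap compared to tracing
        # every instruction, which is why the overhead stays low.
        self.history.append(
            MentalStateSnapshot(cycle, frozenset(beliefs), frozenset(goals))
        )

    def state_at(self, cycle):
        # "Omniscient" lookup: retrieve the mental state at any past cycle.
        for snap in self.history:
            if snap.cycle == cycle:
                return snap
        return None


# Usage: record two cycles, then travel back to cycle 0.
rec = MentalStateRecorder()
rec.record(0, {"door(closed)"}, {"open(door)"})
rec.record(1, {"door(open)"}, set())
past = rec.state_at(0)
```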

The third talk, “An Empathic Agent that Alleviates Stress by Providing Support via Social Media” by Lenin Medeiros, described how virtual agents could function as friends who provide relief in times of stress. The advantage of such a system is that the agents would always be available and would not suffer from empathy burnout. The virtual agents first detected the user’s type of stress, then selected a support strategy and used it to send messages. Users preferred this agent over a baseline that responded randomly.
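The detect-then-select pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions: the stress categories, strategy table, keyword-based detector, and message texts are all hypothetical stand-ins, since the talk summary does not specify the actual classifier or strategies used.

```python
# Hypothetical strategy table: maps a detected stress type to a
# support strategy and a template message (contents are illustrative).
STRATEGIES = {
    "exam": ("emotional support", "Good luck, you have prepared well!"),
    "work": ("practical advice", "Try breaking the task into smaller steps."),
}


def detect_stress_type(message: str) -> str:
    """Placeholder classifier: simple keyword matching stands in for
    whatever detection model the actual system used."""
    text = message.lower()
    if "exam" in text or "test" in text:
        return "exam"
    return "work"


def respond(user_message: str):
    """The pipeline from the talk: detect the stress type, select a
    support strategy, and produce the message to send."""
    stress_type = detect_stress_type(user_message)
    strategy, reply = STRATEGIES[stress_type]
    return strategy, reply


# Usage
strategy, reply = respond("I'm so nervous about my exam tomorrow")
```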

In the final talk, “Expectation Management in Child-Robot Interaction”, Mike Ligthart argued that expectation management is crucial for successful child-robot interaction. For example, children may believe that robots actually understand them, or that robots can help with delicate tasks such as drawing blood. When the robot turns out to be unable to do so, the child is disappointed. The authors suggested that, to prevent this, it is important to identify children’s expectations of robots; eventually, the robots themselves could help manage those expectations by pointing out what they can and cannot do.