Micol Spitale is an Assistant Professor at the Department of Electronics, Information and Bioengineering at the Politecnico di Milano (Polimi), as well as a Visiting Affiliated Researcher at the University of Cambridge. In recent years, her research has focused on Social Robotics, Human-Robot Interaction, and Affective Computing, exploring ways to develop robots that are socio-emotionally adaptive and provide ‘coaching’ to promote wellbeing. In this interview, Micol tells us more about her work in this area.

I’m Micol. I’m now an Assistant Professor at the Politecnico di Milano in Italy. I’m also a Visiting Affiliated Researcher with the University of Cambridge because I’m still working with Professor Hatice Gunes, who leads the Affective Intelligence and Robotics Laboratory at the University of Cambridge, where I did my postdoctorate. She won a very big EPSRC grant, and with her I am working on adaptive robotics for mental well-being. I was hired for that project, where I started to explore and focus my research mostly on robotics for mental well-being. Before this, I got my PhD in Information Technology, and I also did a visiting period at the University of Southern California in Los Angeles.

My work on promoting mental well-being using technologies, specifically robots, is all about exploring ways to develop robots that are socio-emotionally adaptive and provide ‘coaching’ to promote well-being. During my journey, I have done numerous studies in collaboration with different PhD students at the University of Cambridge, and now I’m continuing some of those projects with them. I’m also starting my own research, always with an interest in the same topic. I think there is a lot of potential, room for improvement, and room to explore in this research field.

There are actually plenty of challenges. First of all, all the populations we have been working with until recently were healthy populations, because we want to use this technology to promote mental well-being. The idea is that this tool can be beneficial for healthy people, but any time you try to use this kind of technology with a more vulnerable population, a lot of ethical concerns can arise.

Also, you do not have full control of the robot most of the time. Especially with the latest advances in large language models (LLMs) that can be embedded in this technology, you do not have exact control over what the robot is going to say. I would say it is a little bit more controversial if you let a robot interact using such a model with a vulnerable person. As researchers, we have to deploy this technology in a very responsible way, so that we avoid any issues that can arise and consider all the different ethical concerns.

There are multiple aspects we need to be careful about when designing, and especially when deploying, such technology with vulnerable populations: being privacy compliant, and avoiding the robot saying something inappropriate or giving advice that it is not the robot’s place to give.

We can actually intervene at different levels. It could be in the design process, where you can design this technology responsibly, for example, by involving the final users from the beginning and adopting what the human–computer interaction community calls participatory design. This basically means involving the end users and trying to understand together how we can design this technology for them and with them. With this, we are trying to give people a voice and consider all their concerns, and I think in this way we can design the technology better.

There is also the other, more technological core side: the algorithms you are going to use during the interaction. One of the goals in artificial intelligence is to guarantee, for example, fairness. So, if we deploy algorithms that minimize biases when detecting people’s facial expressions during the interaction, and ensure the robot avoids any gender bias when replying to the person, we can try to safeguard the interaction a little bit more.

Also, the new LLMs are black-box models, so you cannot really have control inside the model. But you can build techniques on top of them, like adversarial testing, or put ethical guidelines in place to keep the LLM from going completely off topic or giving advice that was not asked for. So, there are a lot of ways we can try to cope with these challenges at different levels, not only from an algorithmic perspective but also from a design point of view.

Oh yeah, definitely. So right now, I am starting new work with my PhD student, and we are working on group child–robot interaction in a hospital context. We are going to start the studies soon, but the main idea here is that we are doing something that can really have a positive impact on children’s lives, for example, when they are in hospital.

This is super rewarding because with children, everything is more exciting. They are more enthusiastic than the general adult population. They are also very genuine in expressing themselves, so it’s really a beautiful experience to see and have their feedback during the interaction, because they really get excited about it, of course. So, I think working with children is for me one of the best things in my research work, because you can really see how beneficial the work you’ve done is, and you get really enthusiastic feedback from them.

I think there are multiple areas, actually. Something that I want to get back to is my research during my PhD, where we focused on using robots for therapeutic activities for children, for example, with autism. During my PhD, I discovered that robots are a very effective tool because they reduce the complexity of human–human interaction, which for children with autism is very difficult to interpret and process. When they interact with a technology like a robot, it is a lot easier for them. So, you can use this tool to facilitate the learning process, or to help rehabilitate some of the skills they have more difficulty with, like emotional skills, social skills, or linguistic skills.

I do believe that there is a lot of potential in that area. I hope that in the future it could become part of standard therapeutic practice: technology adopted by professionals, such as therapists, to help children rehabilitate their skills more effectively and perhaps faster.

That’s a big thing. I think social media, and the media in general, are communicating misinformation about it. There is such a hype around the topic, and so when people do not fully understand it or know it, they get very scared. Whenever, for example, I run a study and mention that we use a model based on this kind of artificial intelligence, you can see that they are scared. So, I think some literacy on the topic, especially given the boom of artificial intelligence and the hype around it, is very, very important. When you deal with people who are very well educated on the topic, this is not an issue, but whenever you deal with the wider public, that is something I find concerning. You have to convince people of something they have their own opinions on, based on media and society, rather than on knowledge and information grounded in scientific proof or evidence.

Published by Portland Press Limited under the Creative Commons Attribution License 4.0 (CC BY-NC-ND)