Tom Carlson is Professor of Assistive Robotics at University College London. Here, we interview Professor Carlson to learn more about his background, his areas of research and the future of robotics.
Could you give us a little background about yourself and the journey to your current role as Professor of Assistive Robotics?
I started off with an MEng degree in Electrical & Electronic Engineering at Imperial College London, UK. Having worked on projects with the Royal National Institute of Blind People (RNIB) and the Royal College of Music to support blind musicians with Braille music notation, towards the end of my degree I was pondering how else we might use electronics to help people. After discussions with a few potential supervisors, I decided to stay at Imperial to pursue a PhD in Intelligent Robotics under the supervision of Yiannis Demiris, specifically developing a smart wheelchair to help those who could not use traditional powered wheelchairs. I was then fortunate to be offered a postdoc position at EPFL in Switzerland, working with José del R. Millán to develop a non-invasive brain-controlled wheelchair, with a particular focus on how we share control between the user and the ‘intelligent’ roboticized wheelchair.
In Switzerland, I divided my time between the robot lab, where I ran experiments with healthy, able-bodied participants, and, increasingly, a rehabilitation centre 100 km away, where I worked with patients and clinicians. There, I worked on small-scale clinical trials to investigate the transition of the technology out of the lab and towards the real intended end-users. When a lectureship in ‘disability science’ opened up at University College London, in collaboration with the Aspire Charity and the Royal National Orthopaedic Hospital, I jumped at the opportunity—especially because all these activities could now be co-located on one site. After a quick re-brand with my new-found colleagues, Anne Vanhoestenberghe and Rui Loureiro, UCL Aspire Create—the Centre for Rehabilitation Engineering and Assistive Technology—was born and has continued to grow steadily ever since! As part of the new Centre, we developed a bespoke MSc programme in Rehabilitation Engineering and Assistive Technologies, including my specialist optional module on advanced human–machine interfaces, which focusses in particular on brain–machine interfaces.
What are the key areas of your current research?
My research revolves around the notion of ‘shared control’ and how to optimize the collaboration between humans and machines, especially as applied to the healthcare sector. I have completed a few large European projects that have enhanced the capabilities of our fleet of smart wheelchair prototypes, and I have some smaller ongoing projects, such as working with charities to develop bespoke assistive robotic devices and working with clinical partners to develop wearable devices to support rehabilitation. However, my group’s main research focus is currently very much on improving the usability of the brain–machine interface itself, to open the door to a whole array of assistive technologies, including devices for motor substitution and communication aids.
In collaboration with my UCL colleagues Youngjun Cho and Hubin Zhao, we are working on non-invasive brain–computer interfaces (BCIs) using electroencephalography and functional near-infrared spectroscopy. We use these complementary technologies to record signals from the brain and then develop advanced signal processing and machine learning pipelines to support a range of real-time protocols, from motor and speech imagery to decoding cognitive states such as workload, fatigue and error-related potentials. This often involves starting with neuropsychology-style experiments and then translating the results into closed-loop control systems, with the aim of enabling real-world BCI use.
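To give a concrete flavour of what such a pipeline can involve, the sketch below shows one classical motor-imagery decoding stage—common spatial patterns followed by a linear classifier—built with MNE-Python and scikit-learn. It runs on synthetic data and is purely illustrative: the channel counts, epoch lengths and parameters are hypothetical, not those of our actual system.

# Purely illustrative sketch (synthetic data, hypothetical parameters):
# common spatial patterns (CSP) for feature extraction, followed by a
# linear discriminant classifier, evaluated with cross-validation.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 16, 250        # e.g. 1 s epochs at 250 Hz
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)              # left- vs right-hand imagery labels
X[y == 1, 0, :] *= 3.0                             # make the synthetic classes separable

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),        # spatial filters -> log-variance features
    ("lda", LinearDiscriminantAnalysis()),         # linear decision on those features
])
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")

In practice, a stage like this sits inside a much larger real-time, closed-loop system, with calibration, artefact handling and continuous feedback to the user.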
The field of assistive robotics and human–machine interfaces clearly opens up a wealth of opportunities, but does it present any challenges?
So far, one of the most difficult—but perhaps also the most interesting—challenges is working with humans! Every human is different, and the brain exhibits neuroplasticity (essentially the ability to learn), which means, by definition, that the signals we are chasing are constantly evolving. Even with the recent explosion of artificial intelligence, most machine learning techniques still do not cope particularly well with this sort of data, and generally work better when there is a lot of (more homogeneous) data available. However, it is really boring for a user to sit down for hours on end, imagining moving one hand or the other, just to provide enough training data for their BCI classifier. Furthermore, as the user transitions from the carefully controlled lab/training environment to operating real devices in real-world settings, they have to deal with a multitude of external stimuli, and consequently their performance usually declines rapidly.
Therefore, we have taken a two-pronged approach to improve the user’s BCI experience, speed up the training process and improve the final performance. First, we have gamified the training and introduced Virtual Reality, along with personalized dynamic assistance. This has already shown some improvement in engagement and motivation, as well as increasing the robustness of the resulting classifier performance (as can be seen in our recent ICRA paper). Second, we are improving our machine learning pipeline to cope better with the uncertainty and evolving features that are so typical of BCI applications.
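As a purely illustrative example of that second prong (with hypothetical threshold and window values, not our deployed pipeline), one simple way to handle uncertainty and drifting signals is to withhold a command whenever the classifier is not confident enough, and to refit it on a sliding window of recent trials:

# Illustrative sketch only (hypothetical parameters): reject low-confidence
# predictions and refit on a window of recent trials so the decoder can
# follow slowly drifting brain signals.
from collections import deque
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CONFIDENCE_THRESHOLD = 0.7     # hypothetical value; tuned per user in practice
WINDOW = 100                   # number of recent labelled trials to keep

clf = LinearDiscriminantAnalysis()
recent_X = deque(maxlen=WINDOW)
recent_y = deque(maxlen=WINDOW)

def update(features, label):
    """Store a newly labelled trial and refit on the recent window."""
    recent_X.append(features)
    recent_y.append(label)
    if len(set(recent_y)) > 1:                     # need both classes before fitting
        clf.fit(np.array(recent_X), np.array(recent_y))

def decode(features):
    """Return a class decision, or None if the classifier is unsure."""
    proba = clf.predict_proba(np.asarray(features).reshape(1, -1))[0]
    if proba.max() < CONFIDENCE_THRESHOLD:
        return None                                # withhold the command
    return int(proba.argmax())

The key design choice in a sketch like this is that ‘no command’ is treated as a perfectly acceptable output: it is usually better for an assistive device to pause than to act on a guess.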
Your research has the ability to change lives, providing independence and improved quality of life. Are there areas that continue to surprise you as to what can be achieved through human–machine interfaces?
While of course our research strives towards the vision of improving independence and quality of life, we must be cautious and recognize that it usually takes a long time to translate this type of research into something that can be made widely available as a product. That said, I am always delighted when I witness a new participant, or indeed one of our students, learn to master our BCI. It is still awe-inspiring to see someone controlling a device through thought alone, be it moving a simple cursor on a screen, controlling our Virtual Reality wheelchair simulator, or even driving a physical smart wheelchair.
There are the headline-grabbers where BCIs are used as part of a complex system with the aim of enabling a paraplegic to walk again, be it in a rehabilitative or assistive context. Of course, each of these represents an important step forward for the BCI community, but ultimately there are many broader applications and challenges to be addressed. Indeed, in our lab, we are also very much focussed on improving functional abilities through the use of BCIs; however, we should not underestimate the additional psychosocial impact they can have. Colleagues around the world have demonstrated very exciting results in the creative arts through applications such as ‘brain painting’ (using a BCI to create digital art) and enabling a DJ with amyotrophic lateral sclerosis (ALS) to control a robotic avatar.
What excites you the most about the future of robotics and human–machine interfaces?
One of the challenges for any control problem is ‘closing the loop’, i.e., getting feedback so you can make the required adjustments to the control signals. The majority of brain–computer interfaces do this through some form of visual feedback, although there have been a number of studies looking at alternative modalities of stimulation, from auditory to vibrotactile and even electro-tactile. Some of the most exciting work that I have read recently in this arena aims to directly stimulate the brain to elicit proprioceptive and cutaneous sensations. This is a really important step in closing the BCI loop without overloading other channels (like vision and hearing), which can instead focus on the actual task at hand.
I guess the ultimate dream is for people to have options: to be able to choose the interface they want and pair it with the robotic technology they want, whether that is an invasive or non-invasive brain–computer interface, and whether it is used to control a smartphone, a wheelchair, an exoskeleton, a bionic limb or something else entirely.
As consumers, we are becoming increasingly used to simply picking up the latest tech on the shelf and using it straight away (often without reading any instructions). However, BCI is not like this, and you need to develop BCI skills, a bit like learning to ride a bike or drive a car. It takes time, patience and resilience. I am excited because, in our lab, we are making significant advances in improving the BCI learning process and, in doing so, hope to open up the world of BCI to many more potential users.