
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Kate Candon is a PhD student at Yale University interested in understanding how we can create interactive agents that are more effectively able to help people. We spoke to Kate to find out more about how she is leveraging explicit and implicit feedback in human-robot interactions.
Could you start by giving us a quick introduction to the topic of your research?
I study human-robot interaction. Specifically, I'm interested in how we can get robots to learn better from humans in the way that they naturally teach. Often, a lot of work in robot learning is with a human teacher who is just tasked with giving explicit feedback to the robot, but they're not necessarily engaged in the task. So, for example, you might have a button for "good job" and "bad job". But we know that humans give a lot of other signals, things like facial expressions and reactions to what the robot's doing, maybe gestures like scratching their head. It could even be something like moving an object to the side that a robot hands them – that's implicitly saying that it was the wrong thing to hand them at that time, because they're not using it right now. These implicit cues are trickier, they need interpretation. However, they're a way to get extra information without adding any burden to the human user. In the past, I've looked at these two streams (implicit and explicit feedback) separately, but my current and future research is about combining them. Right now, we have a framework, which we're working on improving, where we can combine the implicit and explicit feedback.
In terms of picking up on the implicit feedback, how are you doing that, what's the mechanism? Because it sounds incredibly difficult.
It can be really hard to interpret implicit cues. People respond differently, from person to person, culture to culture, etc. And so it's hard to know exactly which facial response means good versus which facial response means bad.
So right now, the first version of our framework just uses human actions. Seeing what the human is doing in the task can give clues about what the robot should do. The human and the robot have different action spaces, but we can find an abstraction so that, if a human takes an action, we know what the similar actions would be that the robot could take. That's the implicit feedback right now. Then, this summer, we want to extend that to using visual cues, looking at facial reactions and gestures.
So what kind of scenarios have you been testing it on?
For our current project, we use a pizza-making setup. Personally, I really like cooking as an example because it's a setting where it's easy to imagine why these things would matter. I also like that cooking has this element of recipes and formulas, but there's also room for personal preferences. For example, somebody likes to put their cheese on top of the pizza, so it gets really crispy, whereas other people like to put it under the meat and veggies, so that maybe it's more melty instead of crispy. Or some people clean up as they go, versus others who wait until the end to deal with all the dishes. Another thing that I'm really excited about is that cooking can be social. Right now, we're just working in dyadic human-robot interactions, where it's one person and one robot, but another extension that we want to work on in the coming year is extending this to group interactions. So if we have multiple people, maybe the robot can learn not only from the person reacting to the robot, but also from a person reacting to another person, extrapolating what that might mean for them in the collaboration.
Could you say a bit about how the work that you did earlier in your PhD has led you to this point?
When I first started my PhD, I was really interested in implicit feedback. And I thought that I wanted to focus on learning only from implicit feedback. One of my current lab mates was focused on the EMPATHIC framework, and was looking into learning from implicit human feedback, and I really liked that work and thought it was the direction that I wanted to go in.
However, that first summer of my PhD was during COVID, so we couldn't really have people come into the lab to interact with robots. So instead I ran an online study where I had people play a game with a robot. We recorded their face while they were playing, and then we tried to see if we could predict, based on just facial reactions, gaze, and head orientation, what behaviors they preferred for the agent that they were playing with. We found that we could actually predict decently well which of the behaviors they preferred.
The thing that was really cool was we found how much context matters. And I think this is something that's really important for going from a purely teacher-learner paradigm to a collaboration – context really matters. What we found is that sometimes people would have really big reactions, but it wasn't necessarily to what the agent was doing, it was to something that they'd done in the game. For example, there's this clip that I always use in talks about this. This person's playing and she has this really noticeably confused, upset look. And so at first you might think that's negative feedback – whatever the robot did, the robot shouldn't have done that. But if you actually look at the context, we see that it was the first time that she lost a life in this game. For the game we made a multiplayer version of Space Invaders, and she got hit by one of the aliens and her spaceship disappeared. And so based on the context, when a human looks at that, we can actually say she was just confused about what happened to her. We want to filter that out and not consider it when reasoning about the human's behavior. I think that was really exciting. After that, we realized that using implicit feedback alone was just so hard. That's why I've taken this pivot, and now I'm more interested in combining the implicit and explicit feedback together.
You mentioned the explicit element would be more binary, like good feedback, bad feedback. Would the person-in-the-loop press a button, or would the feedback be given through speech?
Right now we just have a button for good job, bad job. In an HRI paper we looked at explicit feedback only. We had the same Space Invaders game, but we had people come into the lab, with a little Nao robot – a little humanoid robot – sitting on the table next to them playing the game. We made it so that the person could give positive or negative feedback to the robot during the game, so that it would hopefully learn better helping behavior in the collaboration. But we found that people wouldn't actually give that much feedback, because they were focused on just trying to play the game.
And so in this work we looked at whether there are different ways we can remind the person to give feedback. You don't want to do it all the time, because it'll annoy the person and maybe make them worse at the game if you're distracting them. And you don't necessarily always want feedback anyway, you just want it at useful points. The two conditions we looked at were: 1) should the robot remind someone to give feedback before or after it tries a new behavior? 2) should it use an "I" versus "we" framing? For example, "remember to give feedback so I can be a better teammate" versus "remember to give feedback so we can be a better team", things like that. And we found that the "we" framing didn't actually make people give more feedback, but it made them feel better about the feedback they gave. They felt like it was more helpful, a kind of camaraderie building. That was explicit feedback only, but we now want to see whether, if we combine that with a reaction from someone, maybe that point would be a good time to ask for explicit feedback.
You've already touched on this, but could you tell us about the future steps you have planned for the project?
The big thing motivating a lot of my work is that I want to make it easier for robots to adapt to humans with these subjective preferences. I think in terms of objective things, like being able to pick something up and move it from here to there, we'll get to a point where robots are pretty good. But it's these subjective preferences that are exciting. For example, I like to cook, so I want the robot to not do too much, just maybe do my dishes whilst I'm cooking. But somebody who hates to cook might want the robot to do all of the cooking. These are things that, even if you have the perfect robot, it can't necessarily know. And so it has to be able to adapt. And a lot of the current preference learning work is so data hungry that you have to interact with it tons and tons of times for it to be able to learn. I just don't think that's practical for people who actually have a robot in the home. If after three days you're still telling it "no, when you help me clean up the living room, the blankets go on the couch, not the chair" or something, you're going to stop using the robot. I'm hoping that this combination of explicit and implicit feedback will help it be more naturalistic. You don't necessarily have to know exactly the right way to give explicit feedback to get the robot to do what you want it to do. Hopefully, through all of these different signals, the robot will be able to home in a bit faster.
I think a big future step (though not necessarily in the near future) is incorporating language. It's very exciting how large language models have gotten so much better, but there are also a lot of interesting questions. Up until now, I haven't really incorporated natural language. Part of it is because I'm not entirely sure where it fits in the implicit versus explicit delineation. On the one hand, you might say "good job robot", but the way you say it can mean different things – the tone is important. For example, if you say it with a sarcastic tone, it doesn't necessarily mean that the robot actually did a good job. So, language doesn't fit neatly into one of the buckets, and I'm interested in future work to think more about that. I think it's a really rich space, and it's a way for humans to be much more granular and specific in their feedback in a natural way.
What was it that inspired you to go into this area?
Honestly, it was a bit accidental. I studied math and computer science in undergrad. After that, I worked in consulting for a couple of years and then in the public healthcare sector, for the Massachusetts Medicaid office. I decided I wanted to go back to academia and get into AI. At the time, I wanted to combine AI with healthcare, so I was initially interested in clinical machine learning. I'm at Yale, and there was only one person doing that at the time, so I was looking at the rest of the department, and then I found Scaz (Brian Scassellati), who does a lot of work with robots for people with autism and is now moving more into robots for people with behavioral health challenges, things like dementia or anxiety. I thought his work was super interesting. I didn't even realize that that kind of work was an option. He was working with Marynel Vázquez, a professor at Yale who was also doing human-robot interaction. She didn't have any healthcare projects, but I interviewed with her and the questions that she was interested in were exactly what I wanted to work on. I also really wanted to work with her. So, I accidentally stumbled into it, but I feel very grateful, because I think it's a way better fit for me than clinical machine learning would necessarily have been. It combines a lot of what I'm interested in, and it also allows me to flex back and forth between the mathy, more technical work and the human element, which is also super interesting and exciting to me.
Have you got any advice you'd give to someone thinking of doing a PhD in the field? Your perspective is probably particularly interesting because you've worked outside of academia and then come back to start your PhD.
One thing is that, I mean it's kind of cliché, but it's not too late to start. I was hesitant because I'd been out of the field for a while, but I think if you can find the right mentor, it can be a really good experience. I think the biggest thing is finding a good advisor who you think is working on interesting questions, but also someone that you want to learn from. I feel very lucky with Marynel, she's been a wonderful advisor. I've worked pretty closely with Scaz as well, and they both foster this excitement about the work, but also care about me as a person. I'm not just a cog in the research machine.
The other thing I'd say is to find a lab where you have flexibility in case your interests change, because it's a long time to be working on a set of projects.
For our final question, have you got an interesting non-AI-related fact about yourself?
My main summertime hobby is playing golf. My whole family is into it – for my grandma's 100th birthday celebration we had a family golf outing with about 40 of us golfing. And actually, the summer before, when my grandma was 99, she got a par on one of the par threes – she's my golfing role model!
About Kate
Kate Candon is a PhD candidate at Yale University in the Computer Science Department, advised by Professor Marynel Vázquez. She studies human-robot interaction, and is particularly interested in enabling robots to better learn from natural human feedback so that they can become better collaborators. She was selected for the AAMAS Doctoral Consortium in 2023 and HRI Pioneers in 2024. Before starting in human-robot interaction, she received her B.S. in Mathematics with Computer Science from MIT, and then worked in consulting and in government healthcare.
AIhub
is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.


Lucy Smith
is Managing Editor for AIhub.

