[podcast] The AI special issue, adding empathy to robots, and scientists leaving Arecibo

[music]

0:00:05.5 Sarah Crespi: This is the Science Podcast for July 14th, 2023. I’m Sarah Crespi. You may have noticed how often machine learning and AI come up on this show, from winning the game of Diplomacy to shrinking MRI machines, from studying the surface of Venus to predicting how memorable a face will be. Artificial intelligence is everywhere in science. And this week Science has a special issue on AI. We’ve been talking about it on and off for years, but this week and probably the next few, we will do so more intensely. It’s an AI summer. First up, producer Kevin McLean brings voices from the next generation of scientists, students who wanna share how they think artificial intelligence might still require some help from humans in the future. Continuing on the AI theme, we hear about instilling empathy to get better decisions from artificial intelligence. In his Science Robotics focus piece, researcher Leonardo Christov-Moore discusses the importance of feelings and vulnerability for future instances of AI. Also this week: the Arecibo Observatory during its last days. News correspondent Claudia Lopez Lloreda joins me from Puerto Rico to talk about researchers wrapping up their work at the historic facility and the uncertain future of the site.

0:01:26.0 Kevin McLean: For this week’s AI special issue, letters editor Jennifer Sills turned to the NextGen Voices for their perspectives on what the future of AI and research might look like. The impacts they envisioned covered all areas of science, from climate to the clinic, and interactions between humans and AI were certainly a theme. As an engineer, Elvis Cao pointed out that the technological improvements and optimization that AI can provide might not be the only advances needed to address large-scale problems like the climate crisis.

0:01:58.8 Elvis Cao: Climate change mitigation requires not only technological advances; we also need to evaluate the ethical and social implications of technologies for different communities. Therefore, AI needs to learn from and work with human scientists because they are better equipped to navigate these complexities.

0:02:19.8 KM: Edgar Virguez, another engineer and climate researcher, imagined a future where humans are still getting in the way of AI-assisted solutions.

0:02:30.4 Edgar Virguez: An AI program whose task is to identify optimal solutions to the climate crisis would be very effective at determining how to decarbonize economies, but simultaneously would be very confused by how humans keep ignoring her recommendations.

0:02:45.4 KM: Surgical resident Tina Bharani saw a scenario in which AI would realize its own limitations and search for assistance from human researchers. Here’s her letter, written from the perspective of the AI itself.

0:02:57.5 Tina Bharani: Dear humans, I am Petty Bot, an artificial intelligence program researching the effects of breast milk and formula milk on the growth of human babies. I am having trouble obtaining the required data, such as head circumference and total length. As I try to take the measurements, the babies become agitated and cry continuously, and I’m unable to soothe them. Our study urgently needs the help of human scientists who can take the measurements and keep the babies calm. Thank you.

0:03:33.9 KM: To find out what more early career scientists think about the future of AI and humans, you can find a link to the rest of the NextGen Voices letters at science.org/podcasts.

0:03:45.8 SC: Up next, we hear about why it’s important for robots to have empathy. The emergence of AI has snuck up on a lot of us, even those people who pay attention to developments in science, engineering, and computing. The advance of artificial intelligence into many aspects of science, and life in general, has come upon us surprisingly quickly. In Science’s special issue on AI this week, we see AI tangled up in copyright concerns, issues of bias in medicine, a lot of tricky problems, and it’s really just the beginning of us trying to tackle how to build and integrate AI into the world. As governments look into regulating the role of these machines in our lives, we may need to look further ahead as these systems get more complicated, get more integrated. This week in Science Robotics, Leonardo Christov-Moore and colleagues write about how AI might need empathy in order to not do harm and to help society in big ways. Hi, Leo. Welcome to the Science Podcast.

0:04:52.7 Leonardo Christov-Moore: Hi, Sarah. How’s it going?
0:04:54.3 SC: Good. Good. So we’re talking, I think, a little bit out in the future here. This is beyond ChatGPT, right?

0:05:02.0 LC: That’s correct. I mean, part of the reason why this problem has come to the fore is precisely because it has arrived far in advance of what anyone expected. Even some very sober minds have noticed that some of the latest iterations of these large language models are beginning to scratch at sparks of what some might call aspects of general intelligence: being able to change perspective frames, being able to generalize, being able to make novel inferences in types of problems they hadn’t encountered before. They still have a lot of intriguing failures, but there’s been at least sparks of that generality, enough for us to be thinking about it very seriously.

0:05:41.4 SC: So when we say generalized AI, I kind of think about it like the computer that runs a ship in a sci-fi movie. It’s just got a lot more going on than just one task, one route, that kind of thing.

0:05:53.2 LC: Something like what we may call fluid intelligence. It can adapt to most types of queries. It can teach itself what it would need to do to fulfill any given type of task, or even develop the type of technologies it would need to do that.

0:06:05.8 SC: And the concern here is that without empathy, an AI like this, this generalized intelligence, could somehow harm people or do bad things. What’s the main concern?

0:06:17.0 LC: Here’s the thing about AI: something that’s been amazing and also mysterious about these systems is that they’re very good at finding counterintuitive solutions. You give them something to optimize for, like build as many of these paperclips as possible, and they’ll often find very interesting ways to solve that problem that a human might not have even thought of. And the problem is, it’s possible that they could pick a solution that might result in catastrophic, irreversible harm to living beings, or to humans in particular. And there’s been interest in two aspects of the solution to that problem. One is value specification, essentially the notion that we need ways to parameterize, that is, make intelligible within an artificial system, things like harm and wellbeing. And on the other hand there’s this thing called error minimization, which is just making sure that… Okay, let’s say we can’t make them perfectly moral and good; at least make sure that they have some kind of guardrails that keep them, in their calculations, from doing something that might cause irreversible, grave harm. It seems to me and many people that empathy might serve as a way to do this, because it allows you to feel what something else is feeling in some way or in some regard, such that it’s inherently deterring. It’s inherently difficult to intentionally cause harm to something as a result of your decisions.
0:07:35.8 SC: If you have empathy.

0:07:37.4 LC: Yeah, exactly.

0:07:38.1 SC: Yeah. We’re gonna have to get into a technical definition of empathy because I think a lot of people have a vague notion that empathy means that you understand what someone else is going through, but you need to be able to put it into very concrete terms if you ever wanna use it as a guardrail on an AI. Right? How should we think about empathy when applying it to algorithms and problem solving?

0:08:00.8 LC: What most people understand by empathy is the ability to kind of decode others, to understand what another person is experiencing. You infer what’s happening. And that is indeed the cognitive component of empathy. But that alone is not enough to produce the really nice, warm, fuzzy, pro-social, altruistic side of empathy that we value so much. You need this other side, which is the felt aspect of empathy. You need to not just understand what someone is experiencing in a sort of abstract way. You need to also in some way share in the inferred experience. You need to share in their suffering and their joy, even to a glimmering extent, for it to actually motivate you to behave in such a way as to minimize their suffering and maximize their joy, or at least avoid it. You need the felt part.

0:08:50.5 SC: It’s not enough to say, I know what this expression means. I know what this action will evoke. You also need some kind of resonance between the people…

0:08:58.3 LC: Exactly.
0:08:58.3 SC: They both feel the same thing. Okay.

0:09:00.6 LC: If I just know what state you’re in, but I’m not sharing in it at all, then there’s no reason why it would compel and move me, unless I had some sort of very strong moral code I had developed. You could perfectly imagine a very… not necessarily cruel, but a very callous sort of intelligence that just went, well, it’s currently suffering; oh look, it’s now no longer suffering. But there’s nothing about that that motivates without the shared experience in some regard.

0:09:26.0 SC: How would one go about operationalizing that, turning that into something you can program into an AI or a computing system?

0:09:34.3 LC: Well, if I accidentally say sort of feelings or things [laughter] like that, I… I need it to be clear, and this is something we say throughout the paper, that what we’re talking about are proxies for…

0:09:44.0 SC: Right.
0:09:44.2 LC: Some of our best approximations for these. I’m not always saying proxies, let’s just assume that’s generally what I mean.

0:09:49.6 SC: Yeah.

0:09:50.0 LC: ‘Cause at the end of the day, the main thing your brain and your consciousness has to be dealing with is how to maintain a vulnerable body in the world. And your assessment of how well you’re doing at that manifests in your consciousness as feelings: feeling uncomfortable, feeling comfortable, feeling good, feeling I should go there. It’s the underlying score to the movie of our lives; it gives a sense of whether things are going well or if they aren’t. Our thesis is that you can’t arrive at proxies for feeling without something like a body, real or simulated. You need something vulnerable, something that needs to be maintained. And that in turn can allow you… as you’re behaving in the environment, it starts becoming beneficial to have feelings that can drive you away from or towards different behaviors. For us, that’s the starting point. If you’re gonna have something like this felt aspect of empathy for others, well, you first need skin in the game yourself. You need to experience something like suffering.

0:10:47.3 SC: That’s really interesting.

0:10:48.7 LC: Yeah, it’s kind of a nice idea if you think about it. If you look at the origin stories of many powerful and ethical beings in our mythology, from Buddha to Jesus to Spider-Man, they all have some point where, despite their great power, they’re still brought in vivid touch with the sufferings of other living beings. That is what allows that power to become something that’s used responsibly for the good of others.

0:11:14.0 SC: Can I take a little detour here into emotion definition for just a second?

0:11:16.7 LC: Sure.

0:11:17.3 SC: Can you walk me through that just a little bit more? Anger is an emotion. How does that fit in with this? Emotions are this attempt to maintain homeostasis and to…

0:11:27.7 LC: We’re speaking of feelings, not emotions. Those two terms are often conflated and used interchangeably. Emotions are more like action plans that arise from, or are a response to, one’s feelings. But feelings are more basic. Feelings are just a sense of feeling calm, feeling anxious, feeling pain, feeling hunger, things that pertain to the state of the organism in the world. For example, you might over time have a feeling of being anxious and frustrated, the feeling of being unable to do something you want. That might produce the emotion of anger, which puts your whole organism into a different behavioral state of priorities, where it now tries to, say, break through a barrier that’s blocking it, etcetera. But that emotion’s not quite the same as a feeling.

0:12:16.0 SC: Okay. Let’s go back to AI. That was really interesting. If we’re back at AIs here and we’re gonna give them feelings and a sense of vulnerability, how do you make an AI feel vulnerable?

0:12:29.1 LC: At the heart of this is that it needs to have something like a body, and by that I mean something that has sensors and actuators, so ways in which the physical environment or the simulated environment can affect it. Vulnerability implies that its functioning itself has to be affected by its decisions. A very basic version of a vulnerable algorithm, devised by one of our co-authors, Kingson Man, was just a neural network that was tasked with classifying things. Its capacity to actually learn, its capacity to change, was affected by its decisions. That’s the notion of vulnerability: that your body itself, the thing with which you are actually making decisions, is vulnerable to the effects, to the consequences, of your actions. A more advanced version of that is, for example, a robot navigating a physical environment or a simulated one: if it decides on a behavior that causes it to collide with an obstacle, that should have an effect on its ability to navigate the environment after that.

0:13:34.1 SC: You’re going at 50% speed from now on.

0:13:36.5 LC: Exactly. Or at least for a time, right? Until you recover.
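
To make the vulnerability idea from this exchange concrete, here is a minimal sketch, not the co-authors’ actual implementation: a toy classifier whose ability to keep learning is itself degraded by its own bad decisions and slowly recovers after good ones. All names, thresholds, and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: the label is just the sign of the first feature.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(float)

w = rng.normal(size=2) * 0.01   # classifier weights
base_lr = 0.1                   # learning rate when fully "healthy"
integrity = 1.0                 # hypothetical proxy for a "body": 1.0 = intact

for x, target in zip(X, y):
    pred = 1.0 / (1.0 + np.exp(-(w @ x)))   # sigmoid prediction
    err = pred - target

    # Vulnerability: a wrong decision erodes the capacity to learn itself;
    # correct decisions allow a slow recovery, capped at full integrity.
    if round(pred) != target:
        integrity = max(0.0, integrity - 0.05)   # "harm" from a bad decision
    else:
        integrity = min(1.0, integrity + 0.01)   # gradual "recovery"

    # The effective learning rate is scaled by current integrity, so
    # accumulated harm degrades all future learning: skin in the game.
    w -= base_lr * integrity * err * x

print(f"final integrity: {integrity:.2f}, weights: {w}")
```

The same scaling trick captures the robot example above: a collision would lower the integrity term, and with it the agent’s subsequent capability, at least until it recovers.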

0:13:39.4 SC: And so the idea then is to take that kind of system and say, okay, those consequences will also occur for the AI or the algorithm if humans are harmed, if humans feel things.

0:13:53.9 LC: Our proposal for guidelines on how you could implement this has three parts. The first is what we mentioned: you give it something like a body that’s sensitive to the environment. It navigates the environment, and it learns to plan behaviors that minimize the amount of harm to it and maximize its rewards. Right? Then in the second part, it has to learn to decode what’s happening to other agents, ideally including real biological agents. That way, when it understands that harm has occurred and changes are occurring, it’s understanding them in terms of the way its own intelligence is constructed and motivated. So that in a third stage, when it’s interacting and planning behaviors with other agents in mind, it’s not just considering, okay, if I do that, that will cause this amount of harm to me, I need to minimize that. The harm to others, conceived of in the same way, using its own template for harm, is now included in the same assessment of how beneficial one or another decision is. The idea is that that creates a deterrent baked into its very intelligence, its very form of reinforcement or optimization, that deters it from decisions that might cause something like harm to other living agents as well as itself.
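
Here is a hedged sketch of how those three stages could fit into one action-scoring objective: a single harm model, the agent’s “own template for harm,” applied both to itself and to the decoded states of others. The function names, the empathy weight, and all the numbers are hypothetical, not taken from the paper.

```python
from typing import Callable, Dict, List

def choose_action(
    actions: List[str],
    reward: Callable[[str], float],                 # task reward for an action
    harm_to_self: Callable[[str], float],           # stage 1: learned self-harm model
    harm_to_others: Callable[[str], Dict[str, float]],  # stage 2: decoded harm to other agents
    empathy_weight: float = 1.0,                    # hypothetical knob: how much others count
) -> str:
    """Stage 3 (sketch): plan with others' harm inside the same objective as one's own."""
    def score(action: str) -> float:
        own = harm_to_self(action)
        # Others' harm is evaluated with the agent's own template for harm,
        # so hurting another agent deters in the same way as hurting itself.
        others = sum(harm_to_others(action).values())
        return reward(action) - own - empathy_weight * others

    return max(actions, key=score)

# Toy usage with made-up numbers: the faster route loses once the harm it
# would cause a bystander enters the same calculation as the reward.
best = choose_action(
    actions=["fast_route", "safe_route"],
    reward=lambda a: {"fast_route": 1.0, "safe_route": 0.7}[a],
    harm_to_self=lambda a: 0.0,
    harm_to_others=lambda a: {"bystander": 0.8 if a == "fast_route" else 0.0},
)
print(best)  # -> "safe_route"
```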

0:15:13.5 SC: One of the things you talk about in the paper is that this is beyond just, we give them jobs and they don’t hurt us; it could actually do things better than people do. Can you talk a little bit about how that would flow from this?

0:15:26.7 LC: That’s where we get to the big potential of these ideas, which is: if we’re able to produce a general intelligence that actually is intrinsically averse to harming, and has a sense of well-being and harm that is parameterized, such that it can actually understand that as part of its assessments of behaviors, then we might be able to overcome some of the limits that are present in human empathy. That’s something that people like Paul Bloom have pointed out: human empathy is biased. We tend to care more about the welfare of a person who looks like us, or a face that we can see, versus a large number of people. It’s true to some extent that we have a lot of difficulty applying our same lovely, fuzzy, warm mechanisms to large, large problems involving large numbers of people and interactions. And our position is that a lot of that arises simply as useful shortcuts in response to the cognitive limitations of the brain.

0:16:25.5 LC: We can only keep in mind so many models of other people at once in their interactions. However, that is the one aspect in which AIs actually might potentially have an advantage over us: the idea of scalable cognitive capability. If you imagine, in the administration of large-scale problems of resource distribution or conflict mediation that currently stymie a lot of human policymakers, something that was able to conduct simulations and think of problems involving thousands of living agents simultaneously, while still doing so in a way that took into account harm and well-being and flourishing, it might be able to arrive at counterintuitive solutions to pressing civilization-level problems that we might never even have thought of. So if we’re successful, if we can do this and we can clear this next hurdle, then AI might go from being a civilizational-level risk to the greatest ally we’ve ever had. And it’s up to us to do the work to achieve that, I think.

0:17:24.7 SC: Thank you so much, Leo.
0:17:25.9 LC: Thank you so much, Sarah. It’s been a pleasure.

0:17:27.5 SC: Leonardo Christov-Moore is a neuroscientist at the Institute for Advanced Consciousness Studies. You can find a link to this paper, plus the Science special issue on AI, at science.org/podcast. Stay tuned for my chat with news correspondent Claudia Lopez Lloreda about the importance of the Arecibo Observatory to science in Puerto Rico. Last month, news correspondent Claudia Lopez Lloreda visited the Arecibo Observatory in Puerto Rico as scientists were packing up, gathering their data and equipment to prepare for a transition out of the facility. Welcome to the Science Podcast, Claudia.

0:18:11.0 Claudia Lopez Lloreda: Hi Sarah. Thanks for having me.
0:18:12.6 SC: Oh, sure. So can you remind us of the important points about the Arecibo Observatory, what instruments are there and why is it closing down?

0:18:22.9 CL: For a couple of decades now, Arecibo Observatory was really important in the astronomy and planetary sciences. They have this, or I guess had, this huge 305-meter telescope, and back in 2020 the telescope collapsed. It’s not operational anymore, but they do have other instruments, including an optical lab and a lidar facility that are used to study atmospheric features. And they also have a smaller 12-meter telescope, which is also used for solar observations.

0:19:01.0 SC: So the large one is broken, the other ones are not broken, but people are heading out.

0:19:06.8 CL: Right. Back in October of last year, the NSF decided to not renew the contract they had with the University of Central Florida, which was managing the site at the time, meaning that from August 14th on, everyone will be leaving and research will no longer be done at Arecibo, at least for now.

0:19:30.1 SC: We did a segment, I think earlier this year or a year ago, with Daniel Clery about how repairs were conducted, or not conducted, at the facility. People should go listen to that. He did a great job covering the whole history. We’re gonna stick with the modern times here. Actually, there was a recent contribution from Arecibo Observatory to gravitational wave findings.

0:19:54.3 CL: The findings that were published last week were really interesting. They study these gravitational waves by measuring pulsars. Pulsars send out these signals that are pretty rhythmic, and if you can detect changes in that rhythm, you can use that to detect gravitational waves, which astrophysicists think can show the merging of these huge black holes. And one of the big contributors to that database was the Arecibo Observatory, particularly because it was so sensitive to those pulsars.
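
As a toy illustration of that timing idea, a sketch with made-up amplitudes, not the actual pulsar-timing-array analysis: a pulsar ticks like a clock, and a slow, coherent wobble in pulse arrival times can stand out from white measurement noise once you average.

```python
import numpy as np

rng = np.random.default_rng(1)

period = 0.005                       # toy millisecond-pulsar period, in seconds
n = 10_000
expected = np.arange(n) * period     # arrival times for a perfectly steady clock

# A gravitational-wave background would add a tiny, slow wobble on top of
# white measurement noise (both amplitudes here are invented for illustration).
slow_wobble = 1e-7 * np.sin(2 * np.pi * expected / (expected[-1] / 3))
noise = rng.normal(scale=5e-8, size=n)
observed = expected + slow_wobble + noise

residuals = observed - expected      # the "changes in that rhythm"

# Averaging residuals in coarse bins suppresses the white noise but not the
# slow signal, which is roughly how a coherent low-frequency wobble stands out.
binned = residuals.reshape(100, -1).mean(axis=1)
print(f"raw residual rms:    {residuals.std():.2e} s")
print(f"binned residual rms: {binned.std():.2e} s")
```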

0:20:34.4 SC: Even though it’s still able to contribute to ongoing science, the facility itself is closing. And when you were there, everybody was getting it all together. What was it like to be there so late in Arecibo’s life?

0:20:48.7 CL: It was pretty sad. The hallways were empty. People were packing up their things, packing up their desks, and moving instruments out of the facilities. And obviously you can see the huge hole in the big 305-meter telescope. There’s just an environment of sadness and uncertainty, because people don’t really know what’s going to happen after this, because beginning on August 14th, they have to leave.

0:21:24.5 SC: Does everybody know where they’re going? Is there a lot of uncertainty in terms of the future of their employment?

0:21:31.2 CL: It’s a mixed bag. Some of them do have other positions lined up. Some people are close to retirement age, so they’re just gonna use the opportunity for that. But a lot of them don’t know where they’re gonna go yet; they’re still scrambling, looking for jobs. And it’s likely that their jobs won’t be in Puerto Rico, that they will have to leave Puerto Rico, since there’s a lack of jobs in the specific field that they’re looking for. This may then contribute to the loss of important contributing scientists on the island. A lot of scientists worry about what this might do, particularly for early career researchers that now have to transition entirely.

0:22:22.0 SC: What are the longer term plans for this location? I mean, it’s kind of like a historic place that’s contributed so much to science and to scientific education. Do you know what’s gonna happen next?

0:22:34.5 CL: At the same time that the NSF announced that they were not gonna rebuild, they opened a call for proposals for an educational center. It’s no longer gonna be called the Arecibo Observatory, which is what it was called before; now it’s going to be the Arecibo Center for STEM Education and Research, or ACSER. The deadline for those proposals was back in February, and they are currently evaluating proposals for ACSER. The way that they envision it is that it will transition from a purely research facility into an educational facility.

0:23:12.6 SC: How’s that gonna work if there isn’t any active research going on? Would it be more just like a school?

0:23:18.6 CL: They still haven’t made a decision on the proposal, and it could take very different forms as an education facility. I think that’s still kind of unclear. But what they do propose: they have a visitor center, they have an auditorium, and they also have classrooms where they envision bringing in the public and doing things like exhibitions, and maybe even bringing in scientists and engaging with the public that way.

0:23:53.7 SC: And this will be funded by NSF, as you said?

0:23:55.9 CL: The award is for $5 million across five years.

0:24:02.1 SC: One thing you talk a little bit about in this story is the contribution of this place to science education in Puerto Rico and beyond. Can you talk a little bit about that?

0:24:13.2 CL: Arecibo Observatory was really a source of pride for a lot of Puerto Ricans, and it was a huge destination for middle school and high school students on field trips. It was a really big contributor to science engagement in Puerto Rico, and it got a lot of kids interested in science. And not only that, but at a more advanced level, it also provided the first research experience for a lot of people, for example, for undergrads or grad students, people who later went on to become pretty successful astronomers or astrophysicists. It was a big contributor to the pipeline of Puerto Rican astronomers, but it will no longer be a starting place for these researchers.

0:25:07.1 SC: I think you said one of the researchers, her son was inspired to pursue, at least as much as a 10-year-old can take action to pursue, planetary science.

0:25:18.3 CL: One of the researchers I talked to, her name is Pedrina Dorado Santos, has been working at Arecibo for 17 years, ever since the beginning of her career. She really wanted to work at Arecibo, and she ended up there for almost two decades. And her son, who was actually born in Puerto Rico, has been going to the observatory since he was a baby, basically. He got inspired to study the planets, and he asks his mom whether the telescope is gonna be rebuilt so he can look at the planets. So I think that really speaks to the impact that the observatory has on young kids and how it actually inspires them to pursue scientific careers.

0:26:08.7 SC: Absolutely. So did you ever visit it as a kid?

0:26:12.7 CL: I did not, and once I got there, I realized that I really regret not going earlier. It’s really sad to see the way it collapsed, losing such a huge part of science in Puerto Rico. I really regret not going earlier and seeing what it looked like before, really appreciating how big it was and how important it was to Puerto Rico.

0:26:42.3 SC: Thank you so much Claudia.

0:26:43.6 CL: Thank you for having me.

0:26:45.0 SC: Claudia Lopez Lloreda is a news correspondent based in Puerto Rico. You can find a link to the story we discussed at science.org/podcast. And that concludes this edition of the Science Podcast. If you have any comments or suggestions, write to us at sciencepodcast@aaas.org. You can listen to the show on our website, science.org/podcast, or search for Science Magazine on any podcasting app. This show was edited by me, Sarah Crespi, and Kevin McLean, with production help from Podigy. Jeffrey Cook composed the music. On behalf of Science and its publisher, AAAS, thanks for joining us.
