Researchers establish a standard for robotic ethics

Nobody wants a robot apocalypse. The whole enslaving-and-slaughtering-humans-en-masse thing would be something of a bummer. Thankfully, the British Standards Institute has been hard at work on a standard of ethics. The hope is that if these rules are implemented consistently, we can avoid unnecessary suffering.

Alan Winfield, a robotics professor at the University of the West of England, said these standards were “the first step towards embedding ethical values into robotics and AI.”

The BSI presented the list, titled BS 8611: Robots and robotic devices, at the Social Robotics and AI conference at Oxford. The rules are written in a dry, regulatory style, but their implications could be massive.

The first tenet, “Robots should not be designed solely or primarily to kill or harm humans,” would, if followed, prevent the kinds of kill-bots the US Army and other government agencies have been developing for years. Some of the others, namely that humans are always responsible for the actions of robots and that it should be possible to trace who designed or programmed any given bot, should help guarantee accountability if something fails.

If all this sounds familiar, you’ve probably heard of Isaac Asimov’s three laws of robotics, a few basic maxims coded into the positronic brains of Asimov’s fictional automata. But if you’re at all familiar with the author’s work, you’ll also recall that many of his robots behaved in unpredictable ways as a consequence.

These standards aren’t infallible, either. Computer code is especially tough to police because programmers often recycle bits of code from other projects. It is possible, however unlikely, that a chunk of code written by someone not at all involved in the creation of a robot could be at fault. Given the risk level, though, checking and rechecking every facet of a bot is probably a good idea.

Speaking with The Guardian, Winfield said his biggest concern is the potential for deep learning machines to absorb racist or sexist attitudes. These systems gather and collate information from the internet so they can make their own decisions, then simulate possible outcomes to arrive at ideal strategies based on the data they’ve collected. (The sketch at the end of this article illustrates how that absorption can happen.)

BSI’s standards warn that deep learning bots may exhibit a “lack of respect for cultural diversity or pluralism,” and that’s a big problem.

“Deep learning systems are quite literally using the whole of the data on the internet to train on, and the problem is that data is biased,” Winfield said. “These systems tend to favor white, middle-aged men, which is clearly a disaster. All the human prejudices tend to be absorbed, or there’s a danger of that.”

As AI continues its rapid advance, it has become clearer and clearer that we are dealing with some of the most dangerous technology we have ever developed. AI systems could well be considered the children of the human race, and if we aren’t careful, we risk imprinting on them the worst of ourselves.

The robot apocalypse, then, may not look like the complete extinction of humanity that sci-fi authors have long imagined. It may be subtler: with the kind of prejudice that exists all over the net, it’s not hard to imagine robots internalizing racist and sexist ideals and carrying out one of the most efficient and insidious attacks yet on women, people of color, or LGBTQ folks.

The BSI says these standards aren’t final; it will modify them and keep them current as AI develops.
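To make the bias mechanism Winfield describes concrete, here is a minimal, invented sketch. Nothing in it comes from the BSI standard or Winfield’s remarks: the “hiring” data, the group labels, and the scoring rule are all hypothetical, chosen only to show how a model that is never told to discriminate can still learn discrimination from skewed examples.

```python
# Toy illustration (hypothetical data): a model "trained" on a deliberately
# skewed hiring record. It is never given a rule like "prefer group A",
# yet it reproduces the skew baked into its training data.

from collections import Counter

# Invented training data: (candidate description, was_hired) pairs
# drawn from a biased historical record.
history = [
    ("group_a engineer", True), ("group_a engineer", True),
    ("group_a engineer", False),
    ("group_b engineer", True), ("group_b engineer", False),
    ("group_b engineer", False),
]

# "Training": count how often each word co-occurs with a positive outcome.
pos = Counter()
total = Counter()
for text, hired in history:
    for word in text.split():
        total[word] += 1
        if hired:
            pos[word] += 1

def score(text):
    """Average positive-outcome rate of the words in a candidate description."""
    words = text.split()
    return sum(pos[w] / total[w] for w in words if total[w]) / len(words)

# Two candidates identical except for the group label: the model
# scores them differently, mirroring the bias in its training data.
print(score("group_a engineer"))  # ~0.58
print(score("group_b engineer"))  # ~0.42
```

The point of the sketch is that the bias lives in the data, not in any explicit instruction: scale the six invented records up to “the whole of the data on the internet,” as Winfield puts it, and the same mechanism quietly imports whatever prejudices that data contains.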

