Saturday, August 26, 2017

Dear Elon–Forget Killer Robots. Here’s What You Should Really Worry About
Panicking about killer robots is foolish when you have many more immediate problems, writes columnist Caroline Sinders.
By Caroline Sinders

Dear Elon,

We’ve never met, but I feel confident telling you this: It’s going to be okay, I promise. Put the coffee down, and the whiskey. The killer robots aren’t coming for us, despite your warnings last month (and, more recently, your call for an outright ban). The singularity is never going to happen, and the only winters people should be concerned about are the longer, harsher ones brought on by climate change, not an AI winter. I, Robot was a farce of a book and a bad movie, and it’s never going to be our future.

But.

There are plenty of actual things to worry about when it comes to machine learning and artificial intelligence. And you should worry. You really should.

You should worry about (and read) the AI Now Report from 2016. You should worry about the themes its authors highlight–what happens to labor, healthcare, equality, and ethics when artificial intelligence embeds itself into our daily lives. You should worry about who has a say over the future–the entire world isn’t D.C. or Silicon Valley–so how do we design for everyone?

You should think about how machine learning is changing how we work, and the kind of work we do. You should worry about the impact of artificial intelligence on job losses and job creation. You should worry about training and education for new jobs, and about what kind of benefits or income redistribution should result when AI systems displace jobs. You should think about automation, not just from a robotic standpoint but from an infrastructural one: how will the shipping and transportation industries be affected by machine learning? Spoiler alert: They already are, and there are lots of jobs that will be lost to automation. Here’s some good reading on that, too. In fact, you should watch these videos from AI Now’s 2017 symposium. Here are some personal favorites. But if you can only watch one, watch this one.

You should worry about all of the articles ProPublica publishes on machine bias, especially this one on how predictive policing software falsely flags black people as future criminals far more often than white people. You should worry about this other ProPublica article reporting that certain insurance providers charge people of color more for coverage. It isn’t clear why that happens, but ProPublica‘s theory is that predatory algorithms favor predominantly white neighborhoods over others. You should worry about that data set, and the negligence behind it. You should worry about the obvious flaws in software we are already using, from facial recognition to sentiment and emotion analysis to the two specific examples I’ve just highlighted. These aren’t standalone cases–these are systems being implemented at a large scale.
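To make that concrete, here is a minimal sketch in Python of the kind of check ProPublica ran. All of the numbers below are invented; the point is that a risk score can look reasonable overall while making one-sided mistakes, labeling people who never reoffend as high risk far more often in one group than in another.

```python
# Minimal sketch with invented numbers (not ProPublica's data) of the
# disparity at issue: among people who did NOT reoffend, how often did
# the tool label each group "high risk"? A fair score keeps these
# false positive rates similar across groups.

# Hypothetical records: (group, labeled_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True),
    ("white", False, False), ("white", False, False), ("white", True, False),
    ("white", True, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` wrongly labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
# Toy output: black 67%, white 33%. ProPublica found the same one-sided
# pattern in the real COMPAS scores (roughly 45% vs. 23%).
```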

You should worry about where the data comes from to train these algorithms. You should be upset when some of that data is taken without consent, as demonstrated in a recent Verge article on how images documenting trans people’s transitions were used, without permission, in a research project on AI and facial recognition. You shouldn’t just worry; you should be outraged at how ethically bankrupt that is.

I feel like I’m lecturing you, and I am. I unequivocally am lecturing you. You deserve to be lectured right now. You’re a person in a position of power who could effect real change, and instead, you’re creating more noise and fear about a thing that’s never going to happen. Don’t incite panic unless it’s necessary to panic. Panicking about killer robots is foolish when you have better things to panic about.

Elon, you in particular should worry about this article from the MIT Technology Review on how the biggest impediment to self-driving cars is security flaws that leave them susceptible to hackers. You should worry about the fact that one of your cars had a hard time distinguishing a truck from a brightly lit sky, and someone died. You should worry about how driverless cars will share the road with cars driven by people.

I’m not trying to be an alarmist here, but a realist. All of us–consumers, creators, and technologists–should worry. The series ProPublica published on machine bias isn’t just about the problematic biases within machine learning; it also highlights how fallible machine learning is, and how readily users trust it and take its results as truth without questioning them. In its reporting, ProPublica found that the “most likely to recommit a crime” scores produced by predictive policing algorithms were used to reinforce harsher sentences. Judges looked to the scores for support because they were seen as “algorithmic proof,” the output of a new kind of tech tool aiding the justice system. That is one of the biggest things we should worry about.

What happens when a user gets stuck in a series of automated systems with no way out? What happens when a chatbot gets stuck, or some data is wrong, and there’s no person around to let the user re-enter the information or override it? Machines make mistakes, period. So this prompts the question: How well are these systems designed to handle mistakes?

People are good at navigating nebulous situations–we have that kind of sideways, investigative intelligence. Have you ever shown up to an event, a car rental counter, or a doctor’s appointment and found your reservation lost? A person can walk you through it. Part of what makes customer service great, especially in a deep bureaucracy like the DMV, is being able to talk through, and solve, difficult problems. AI systems are not nearly as good at thinking on their feet. No designer or engineer wants to build a frustrating system, but a system designed for one specific task, no matter how well trained, will be inflexible with edge cases.
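Here is a hedged sketch of the design fix that implies: count the failed turns and hand the conversation to a person instead of looping forever. Everything in it is hypothetical; `classify_intent` stands in for whatever language-understanding model a real bot would use.

```python
# Hypothetical chatbot turn handler: instead of trapping the user when
# the model fails, cap the retries and escalate to a human agent.

MAX_RETRIES = 2

def classify_intent(text):
    """Stand-in for a real NLU model; returns None when it can't parse."""
    return "reservation" if "reservation" in text.lower() else None

def handle_turn(text, retries):
    """One chatbot turn. Returns (reply, updated retry count)."""
    intent = classify_intent(text)
    if intent is not None:
        return f"Okay, let's sort out your {intent}.", 0  # understood: reset
    if retries < MAX_RETRIES:
        return "Sorry, I didn't get that. Could you rephrase?", retries + 1
    # The machine is out of its depth; this is where a person takes over.
    return "I'm connecting you with a human agent now.", 0

retries = 0
for message in ("my resrvation is gone", "it is GONE", "hello???"):
    reply, retries = handle_turn(message, retries)
    print(reply)
# Turn 1: a typo defeats the toy model, so the bot asks for a rephrase.
# Turn 3: retries are exhausted, so the bot hands off instead of looping.
```

The point is the escape hatch, not the model: no matter how good `classify_intent` gets, the system still needs a path to a person for the cases it was never trained on.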


Elon, you should worry about computer vision not recognizing black skin. You should worry about products that are “color blind” in the worst sense–their algorithms can’t see darker skin, so soap dispensers and automatic faucets won’t work for black users. You should worry about cameras that suggest Asian users have “blinked” because the data sets were trained on predominantly Caucasian eyes, and about the same problem leading a passport website to reject an Asian man’s photo for passport renewal. You should worry because in these scenarios users can’t intervene and fix the problem: the problem is autonomous, the problem is in the code, the problem is in the design. You should worry about products alienating users, and not working for all users. You’re a businessman. I imagine this would concern you.
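A minimal sketch, with invented numbers, of the test that catches this class of failure before shipping: report the detection rate for each skin-tone group instead of one overall number, which lets the majority group mask a minority failure.

```python
# Invented evaluation results: group -> (faces detected, faces tested).
results = {"lighter skin": (96, 100), "darker skin": (62, 100)}

# A single aggregate number hides the failure...
detected = sum(d for d, _ in results.values())
tested = sum(t for _, t in results.values())
print(f"overall detection rate: {detected / tested:.0%}")  # 79%

# ...so report each group separately and gate the release on the worst one.
for group, (d, t) in results.items():
    rate = d / t
    flag = "  <- the product fails these users" if rate < 0.90 else ""
    print(f"{group}: {rate:.0%}{flag}")

worst = min(d / t for d, t in results.values())
# In this toy run the assertion fires, and that is the point: don't ship.
assert worst >= 0.90, "detection rate too low for some users"
```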

Robots are never going to “think” like humans. There are so many actual things to worry about in the present; why are you focusing on the super-not-very-close-hella-never-happening-far-away science fiction future?

There’s a line between constructing algorithms that analyze patterns the way human brains analyze patterns, and machines that think and infer the way people do. The former is specifically about analysis; the latter, the one you’re concerned about, is about how original thought manifests. You shouldn’t worry about flying cars or machines being too smart. I get that those are cool, fun things to worry about. But they aren’t real, and they are never going to be. You shouldn’t worry about machines becoming humanistic any more than you should worry about whether we really just live in the Matrix. You should worry about how systems hurt people–everyday people, the people who use all of the products you invest in and the products you make. You should worry–always worry–about how your products will affect your users.

Okay? Glad we had this chat.

Keep calm,

Caroline
