When you’re having a conversation with a digital customer assistant in 2018, it’s likely to feel exactly like you’re speaking with a human. And by 2029, robots will probably be smarter than us.
While the idea of AI is exciting, we still have no idea exactly how much of an impact artificial intelligence will have on our daily lives. Ask one person, and they’ll be full of optimism about a life of leisure while the robots do all the hard work.
Ask another, and they’ll tell you darkly that robots are destined to enslave us.
For many people, the fear of the unknown (and all those Terminator movies) is a convincing argument for slowing down our quest for free-thinking robots. But if you’ve ever used a self-checkout, spoken to Siri, or watched Netflix, you’ve already interacted with AI. And self-driving cars, automated drones, and elder-care robots will soon be just another normal part of life.
The moment when the intelligence of machines exceeds our puny human intelligence is known as the singularity. But what happens if robots decide they don’t need humans around?
In Futurescape with James Woods, episode two begins with a robot walking towards a polling booth. Surrounded by protestors, her eyes scan a man wearing a badge which says “No soul means no vote!”
Woods begins the narration: “Ridiculous, right? A robot voting. Well you know they said the same thing about blacks in 1870 and about women in the early 1900s. But you wouldn’t give a programmable robot the full rights of citizenship now, would you? I mean they can’t think like us, they don’t feel like us, they are fundamentally different from us. For now. But the lines are already blurring. And if you doubt that, ask your dad to turn in his pacemaker, or your grandma to live without her artificial hip. Every day we’re getting closer and closer to the day when man and machine truly meld and redefine what it means to be human.”
The difference between humans and machines? Machines are programmable, and repairs are mechanical. We also believe that humans have a mind, a soul, and free will. But some scientists argue that one day we’ll find the wiring that shows us how the mind truly works- eliminating the last barriers between humans and robots.
It could be that these patterns are very simple, and research has already begun in this direction at the University of California, Los Angeles. Dr. Arthur Toga is mapping the connection between neurons in the human brain, which would give us a total picture of the brain. While this will help us identify miswiring in humans, it could also unlock that special something that makes us uniquely human.
In order to feel emotion, robots must be able to become conscious.
Futurist Ray Kurzweil believes that robots will gain consciousness by 2029, and he points to the current studies of the human brain which show inter-neural connections firing in real time. Since scientists can see our brains creating our thoughts, it’s not unreasonable to imagine that they’ll be able to replicate this in robots. For this reason, the idea of “robot morality” is attracting scholars from theology, law, psychology, ethics, human rights, and philosophy.
Elon Musk, co-founder of Tesla Motors, recently pledged $10 million towards research that would help ensure “friendly AI.”
Stephen Hawking has also spoken about AI, warning that super-intelligent AI may spell the end of the human race, and the list of concerned experts continues to grow, with Steve Wozniak and Bill Gates cautioning that robots could threaten our existence.

When robots achieve singularity and become smarter than us, they could slip out of our control, and if they’re programmed without morality, they may simply decide to do away with us. This leads to all sorts of questions. How do you program morality? Is morality in the eye of the programmer? If robots need an ethical code, who will decide what that code is?
One answer is to program behavioral rules directly into the robot’s software. These could be concrete rules (think the Ten Commandments) or theoretical frameworks such as utilitarian ethics. The important thing is that the robots are given hard-coded guidelines, which they could then use as the basis for all of their decision-making.
This sounds great in theory, but it could make it too difficult for robots to make quick decisions in the real world.
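As a rough illustration, a top-down system might hard-code a short list of prohibitions and reject any candidate action that violates one. Everything below, the rule names, the action format, the example actions, is a hypothetical sketch, not any real robot’s software:

```python
# Hypothetical sketch of a top-down, rule-based ethics filter.
# The rules and the action format are invented for illustration.

def causes_harm(action):
    """Rule 1: forbid actions that harm the user."""
    return action.get("harm", 0) > 0

def violates_autonomy(action):
    """Rule 2: forbid actions that override the user's choice."""
    return action.get("overrides_user", False)

RULES = [causes_harm, violates_autonomy]  # hard-coded prohibitions

def permitted(action):
    """An action is allowed only if it breaks none of the rules."""
    return not any(rule(action) for rule in RULES)

actions = [
    {"name": "insist on medication", "overrides_user": True},
    {"name": "remind again later"},
]
allowed = [a["name"] for a in actions if permitted(a)]
print(allowed)  # ['remind again later']
```

The catch is that a real robot would have to evaluate every rule against every candidate action in real time, which is exactly where quick decisions become difficult.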
Another idea is a bottom-up approach, with the machines learning from experience instead of being spoon-fed the necessary rules. This puts less strain on the robot’s computers, since it won’t need to consider the repercussions of each and every action, and can simply use habitual responses, leading to the organic development of ethical and moral behavior.
If we can develop conscious robots, would they then be people? And in that case, shouldn’t they have the same rights as any other person?
Scientists at Stanford University are currently working on Neurogrid, a piece of hardware that can mimic the activity of the brain in real time. This supercomputer is the size of an iPad and may one day become a machine that can think for itself.
Neurogrid even replicates the circuitry of the human brain. The main thing separating it from a brain that thinks like ours is scale: vastly more neural pathways. For many researchers, that makes it only a matter of time before robots are able to think for themselves.
But if robots become self-aware, what will separate them from us?
A 58cm humanoid robot called Nao can already tell stories, walk, and make decisions. But that’s not all it can do. Nao is also being programmed to have ethics and can weigh all of the options in any given situation, before choosing how to act. Nao uses a series of true/false statements to recognize harm, benefit, and the pros and cons of each option.
When programmed to remind someone to take medication, Nao has some choices to make. If the patient refuses her medication, the robot must decide whether to allow her to skip her dose (and potentially cause harm) or insist that she takes her medicine (impinging on her autonomy).
The creators taught Nao how to navigate these types of problems by giving it examples of conflicts that bioethicists had resolved, each involving autonomy, benefit, and harm to patients. Nao’s learning algorithms sort through these cases for patterns that help it make a decision in a new situation. When the woman refused the aspirin, for example, Nao accepted her choice and reminded her again later.
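The article doesn’t spell out Nao’s actual algorithm, but a toy version of this case-based approach might score each situation for harm, benefit, and autonomy cost, then borrow the decision from the most similar previously resolved case. The cases and scores below are invented for illustration:

```python
# Toy case-based reasoner: a hypothetical sketch, not Nao's real algorithm.
# Each past case scores a situation as (harm, benefit, autonomy_cost)
# and records the decision a bioethicist made for it.

CASES = [
    ((3, 1, 0), "insist"),   # skipping the dose is dangerous: insist
    ((0, 1, 2), "accept"),   # little harm, high autonomy cost: accept refusal
    ((2, 2, 1), "insist"),
    ((1, 1, 2), "accept"),
]

def decide(situation):
    """Pick the decision from the most similar past case (1-nearest neighbor)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, decision = min(CASES, key=lambda case: dist(case[0], situation))
    return decision

# The woman refuses an aspirin: minor harm, strong autonomy interest.
print(decide((1, 1, 2)))  # accept
```

New situations never need an explicit rule; the decision emerges from the pattern of past, expert-resolved cases, which is the bottom-up idea in miniature.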
Humans bond with each other by establishing a rapport and the survival of our species depends on this bonding, which is also dependent on emotion. We recognize these emotions in other humans and express our own from birth.
Nao is able to download new behaviors, make eye contact, recognize faces, and respond to users. This makes it invaluable for children who are autistic, overstimulated, or nonverbal, and the robot can help them learn to identify emotions. Research has even shown that computers are actually better at reading expressions than we are, meaning it’s going to be increasingly difficult for us to lie to robots.
Aldebaran, the creators of Nao, released Pepper last year. This robot is the first designed to live with humans and is able to mirror the emotional state of its user- something that humans begin to do when they’re babies. This works both ways, and as robots mirror our emotions, we instinctively mirror theirs, feeling a bond with them and assisting them with tasks.
While robots can’t yet feel emotions, they’re already able to appear as if they do. In 2014, a chatbot named Eugene Goostman passed the Turing test. Devised by computing pioneer Alan Turing, the test is essentially an imitation game designed to judge whether a computer can pass as “intelligent.” A human judge sits at a computer screen and holds conversations with several humans and one chatbot that has been built to trick the judge into thinking it’s human.
Turing tests have been running for more than 20 years, and Eugene (programmed as a 13-year-old boy) convinced 33% of the judges that he was human, showing that chatbots still have a long way to go but are already intelligent enough to fool a third of the humans they talk to.
While robots are becoming excellent at pretending to feel, the ultimate goal for many companies is to ensure that they can one day actually think and feel- just like a human.
The European parliament has voted for the drafting of regulations which would govern the creation and use of artificial intelligence and robots, including “electronic personhood”, which would give robots rights and responsibilities.
Under current laws in the United States, corporations are persons. So are ships. And municipalities. So why couldn’t a robot or cyborg also be a person?
This will have huge implications for the companies creating robots. If they create a robot, and that robot then creates a product that can be patented, who would own the patent? The robot? Or the corporation?
More importantly, a large part of being a “person” is the right to vote. If robots have to take responsibility for their actions, many argue that they should also have the right to vote or sit on a jury.
But in order to vote, robots would need to have a very complex understanding of right and wrong.
Another key right that humans have is the right to reproduce. Since robots would be able to make infinite copies of themselves, allowing them to vote would effectively be allowing them to be in charge. It’s for this reason that many ethicists argue that robots should either have the right to reproduce or the right to vote, but not both.
Ryan Calo is a law professor specializing in cyber law and robotics at the University of Washington, and he serves on the advisory committee of the CITRIS People and Robots initiative. He calls this dilemma the “copy-or-vote paradox.”
If an artificially intelligent machine one day claims that it has achieved personhood, it’s sentient, experiences both pain and joy, has dignity, and we’re unable to disprove it, we may not be able to justify denying it the same human rights that we enjoy.
If that machine then claims that it should be able to procreate and vote, and then copies itself indefinitely, democracy would be completely undone.
Once robots have become “people” and enjoy the same rights that humans do, it makes sense that they must also be subject to the same responsibilities. This would mean that robots could be sued for causing damage or making mistakes, and could be punished under the legal system.
Another problem? While the EU may be ready to grant robots personhood, and the United States is encouraging ethical guidelines for AI, who’s to say that other countries will do the same?
Autonomous robots making life or death decisions may soon be a common feature of warfare, as countries around the world begin harnessing artificial intelligence. The Chinese and Russian governments are already working to gain an edge over other countries, launching a new-age arms race.
Professor Toby Walsh, from the University of New South Wales in Australia, has already spoken to the United Nations on a number of occasions to encourage the formation of an international body which would prevent killer robots from becoming another weapon of war.
There’s little doubt that the ethical implications of AI will be one of the biggest challenges that humans face over the next fifty years. Ideally, we’ll eventually be able to utilize the strength of both robots and humans and integrate them ethically, although there are sure to be some bumps in the road along the way.
One thing’s for sure, we’ll never look at our Roombas the same way again.