3 ways robots will try to manipulate you

Brittany Postnikoff grew up marveling at the humanoid robots of sci-fi pop culture: from Cylons to Commander Data to C-3PO, she wondered when she’d get to see such an android operating in the real world.

Now a master’s student in the University of Waterloo’s Cryptography, Security, and Privacy (CrySP) program, Postnikoff says it’s finally starting to happen. The industrial, assembly-line robots that we’ve so far avoided viewing through an anthropomorphic lens are transforming into two-armed machines that stretch alongside their human counterparts during morning exercise routines, or that have been given faces to convey emotion.

There are telepresence robots that put a person’s face on an iPad on a stick and scoot around offices. There are airport helper robots that will find you a new connection for that flight you missed. Then there are the most compelling examples of human-like robots yet, from Japan-based SoftBank Robotics: Nao, an interactive companion robot, and Pepper, a humanoid robot with the ability to perceive emotions.

SoftBank’s Pepper robot is already at work selling Nescafé products across Japan.

In a presentation to security researchers at the Toronto Area Security Klatch, Postnikoff pointed to these robots as proof that it’s only a matter of time until we have to start unpacking the ethical implications of human-robot interaction. One issue she is already grappling with is how robots can manipulate the humans they interact with. Drawing on her own research and that of others, she shared a few ways a robot might try to trick you in the near future.

The teammate effect

When Postnikoff and her teammates were competing in the Federation of International Robot-soccer Association’s HuroCup competition, they were surprised by what happened when they put a Manitoba Bisons jersey on their tiny robotic soccer player.

“As soon as people see robots as part of a team, they want to cheer for it,” she says. “By dressing up our robot, we brought a lot of appeal to our projects.”

Postnikoff’s team also taught their robot how to cross-country ski.

Postnikoff’s team was featured on Daily Planet and TSN’s SportsCentre, even though they weren’t the first to program a soccer-playing robot. She wonders whether someone with malicious intent could buy a robot and outfit it in a sports team’s jersey to manipulate people. If you saw a robot at a mall wearing the jersey of your favourite hockey team, would you give it your personal information to join a fan club?

Asserting authority


People usually aren’t eager to argue with robots, and may even be more likely to obey a robot that’s telling them what to do. In studies conducted by researchers from the University of Manitoba and the University of British Columbia, participants were given the tedious task of renaming files for as long as they were willing to continue. Some participants had a human researcher telling them to keep renaming files until they refused; others had a Nao robot nagging them to continue.

In this experiment, Nao asserted almost as much authority as a human experimenter, scoring 77 per cent to the human’s 86 per cent, though the robot convinced participants to keep renaming files for the full 80 minutes of the experiment only 46 per cent of the time, compared with the human’s 86 per cent success rate. Participants often engaged with the robot as though it were a person, making logical arguments about why they shouldn’t have to continue. Some suggested the robot must be broken, but kept doing the task anyway.

Postnikoff worries that robots could be placed in positions where perceived authority could be dangerous. For example, if a robot has the task of dispensing medication to someone in a hospital, and the prescription changes one day, is the patient likely to argue with the robot about the change?

Or what if robots became supervisors in an office setting?

“How can we refuse a robot’s power to take control of our employees?” Postnikoff asks.

Robot persuasion

Another study asked participants to choose which of two pictures of people’s faces best conveyed a given emotion. In each case, either face would arguably have met the criteria for the emotion (e.g., “joy”), but a robot was programmed to disagree with certain answers the participants gave. Even though there was no correct answer, study participants often said after the experiment that they found the robot’s opinion informative. Some went so far as to say the robot made them think about something they hadn’t considered.

“The robot, from a script, enlightened people about other people’s facial expressions,” Postnikoff says. “People had full suspension of their disbelief. People are really gullible.”
