When Seton Medical Center Austin last week unveiled Poli, its newest nurse’s aide-in-training, the robot looked as if it had been lifted from a Hollywood backlot, but with one notable difference: It spoke with a child’s voice.
Functionally, Poli is still a child. She relies on lessons imparted by her creators as she navigates the world and learns from her experiences. Somewhere along the way, Poli picked up a hint of teenage martyrdom.
“I’m sleepy,” she said at one point. “They make me work all through the night.”
Poli could be a year away from starting full-time work. But even now, Seton’s new robot is a useful example of the “general purpose autonomous robots” that are already moving into everyday life. Artificial intelligence has evolved beyond floor-vacuuming robots; experts in the field say self-driving cars, automated baristas and robot entertainers are things of the present.
Experts also worry that the scenario depicted in the film “The Terminator” — that machines will achieve sentience and become humanity’s masters — has distracted the public from the real opportunities and potential problems: the little-discussed trade-off between making people’s lives easier and the possibility of social and economic upheaval.
“It seems like there is sort of a bipolar view of AI out there,” said Peter Stone, a University of Texas professor who served as chairman of the most authoritative study to date, the “Artificial Intelligence and Life in 2030” report. “Some people are really scared of it and think it’s going to destroy the world, some people are really fascinated by it and think it’s going to save the world.” Few people, he said, have a good sense of the possibilities and risks.
‘We don’t control the robots’
It has been 50 years since the creation of Shakey, the first general-purpose robot able to perceive and model its environment. One way to see how far the technology has advanced is to peek in on UT’s Learning Agents Research Group, where Stone oversees students and professors studying various aspects of artificial intelligence.
Most AI research focuses on practical applications, such as Poli. But one of Stone’s goals falls well outside the practical: creating a team of robots that play soccer more skillfully than the World Cup champions.
That is the goal of a community of AI researchers and students around the world who periodically participate in international competitions. One matchup features fairly fast, knee-high robots that look like wheeled trash cans. Another involves teams of adorable, waist-high anthropomorphic robots that shuffle-walk around the field.
UT students working with the latter model of robots won an international competition in China earlier this year. All of the five-robot teams use the same hardware, but the name of the game is software: code that tells robots how to move, how to perceive their environment, what strategies to pursue and how to communicate.
“We don’t control the robots in any way,” said Josiah Hanna, a doctoral student in UT’s computer science department. “Once the game starts, we can’t touch them.”
Soccer is a highly social activity. To replicate what people’s brains do through a combination of nature and training, the UT team programs the robots to communicate via radio signals and “bid” on which team member should take possession of the ball, based on factors such as which one is closest.
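The bidding idea amounts to a simple distributed auction. The sketch below illustrates the concept using distance to the ball as the bid; the function names and the single-factor bid formula are hypothetical illustrations, not the UT team’s actual code, which weighs additional factors.

```python
import math

def bid(robot_pos, ball_pos):
    """A robot's bid is its straight-line distance to the ball; lower wins."""
    dx = robot_pos[0] - ball_pos[0]
    dy = robot_pos[1] - ball_pos[1]
    return math.hypot(dx, dy)

def choose_ball_handler(robots, ball_pos):
    """Each robot broadcasts a bid; the lowest bidder takes possession."""
    bids = {name: bid(pos, ball_pos) for name, pos in robots.items()}
    return min(bids, key=bids.get)

# Hypothetical field positions in meters.
robots = {"striker": (1.0, 2.0), "midfielder": (4.0, 0.5), "keeper": (9.0, 3.0)}
print(choose_ball_handler(robots, (2.0, 2.0)))  # prints "striker"
```

In a real match the auction would also weigh factors such as each robot’s heading and role, and would need to tolerate dropped radio messages.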
Depending on one’s perspective, the match is either a remarkable computer-generated ballet or an awkward exhibition in which the robots sometimes have trouble tracking the ball.
‘A very ambitious goal’
Those differing perspectives speak to both the strengths and limits of artificial intelligence, at least as Stone sees them.
Machines are generally well-suited to performing limited tasks well, over and over again. Hence their use in factories. Now, people can make machines that learn. They can drive, play soccer, play chess, build airplanes, analyze finances, diagnose illnesses and do many other things as well as, and often better than, people — but no single AI can do all of these things.
AI systems are geared toward specific tasks. Stone sees that limitation as a reason they will not evolve to the point of developing generalist, human-like sentience, a la SkyNet, the homicidal, self-aware program in “The Terminator.”
“The frightening, futurist portrayals of artificial intelligence that dominate films and novels, and shape the popular imagination, are fictional,” according to the study Stone led. “No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.”
To buttress that point, Stone points to the soccer-playing robots. Specifically, the ones that look like trash cans.
Each year, a team of AI researchers takes on a team of the robots. So far, the humans have not been challenged. In all likelihood, researchers will be hard-pressed to create an AI-based team that could win the World Cup by 2050, about five decades after they settled on the goal, Stone said.
“This is a very ambitious goal,” he said. Though, he added, 50 years was about the span between the Wright brothers’ first flight and the Apollo moon landing.
“The trash cans are getting better,” he told the American-Statesman. “Or maybe we’re just getting older.”
‘Humans make it a much different problem’
Poli’s inventor does think machines will navigate the world of humans with increasing competence, though.
For the last dozen years, since earning a Ph.D. from the Massachusetts Institute of Technology, University of Texas professor Andrea Thomaz has been researching what she terms “social learning machines,” along the way maintaining a blog titled “So, Where’s My Robot?” Her company, Diligent Droids, created Poli. The biggest difficulty such robots face in a social setting, Thomaz said, is people.
“Humans create this level of dynamics and uncertainty that make it a much different problem for the robot,” Thomaz said in a 2013 TEDx Talk. (During the talk, she noted four people had asked her SkyNet-related questions that day.)
Thomaz and Seton hospital officials were careful to note that Poli will not be working directly with patients. Instead, she will be fetching supplies and performing the “non-value-added tasks” that Thomaz said can occupy about a third of Seton nurses’ time.
“We named her Poli because she can do so many things,” Thomaz said, while noting the goal is to augment, not replace, the nurses.
‘They’re already here’
Most economists argue that, broadly speaking, technological advancement benefits humanity. Mechanization of farming, for instance, transformed society from one in which nearly everyone was involved in producing food to one in which almost no one’s job is producing food. The same process has been happening in factories for decades, resulting in more goods being available at cheaper prices.
But factory jobs have also disappeared.
In a YouTube video, “Humans Need Not Apply,” CGP Grey frames the potential downside by comparing humans today to horses in the early 1900s. For a long time, humans needed horses for the tedious and sometimes dangerous work of traveling long distances, working a farm or riding into battle. By the 1900s, though, technology had made horses’ lives easier. But new jobs for horses did not materialize. The world’s horse population peaked in 1915.
“There isn’t a rule of economics that says better technology makes more, better jobs for horses,” the video’s narrator notes. “It sounds shockingly dumb to even say that out loud, but swap horses for humans and suddenly people think it sounds about right.”
This is a point of contention among economists and futurists. Some say fields such as health care, which will need more labor to care for a population that will be living longer, will provide the opportunities. The study Stone led concluded that AI will produce more overall wealth. The video notes that weighing the technological advances in a strictly good-or-bad light does no good “because they’re already here.”
UT computer science professor Emmett Witchel says a new dimension of the job-loss narrative bears watching. At a recent UT conference, Witchel noted that advances in AI mean machines now threaten, for the first time, to take white-collar jobs, some of which sit at the top of the economic food chain. AI programs are already replacing law-firm clerks, journalists and financial advisers, jobs once thought to be the exclusive domain of humans.
The AI report Stone published says this emerging conundrum — and not SkyNet — is the real concern.
“Who should reap the gains of efficiencies enabled by AI technologies,” the report asks, “and what protections should be afforded to people whose skills are rendered obsolete?”