You’ve heard it before. Artificial intelligence is here to take your jobs and steal your privacy.
With the advent of facial recognition technology in products like iPhones, and recent data breaches at major companies such as Facebook, concern over the effects of AI in daily human life is as strong as ever.
A survey released this week by the Washington-based Brookings Institution shows America’s fear of automation is alive and well.
The national study, which polled more than 1,500 adults aged 18 or older, showed serious concerns about AI, a technology that has a growing presence in a number of industries.
Of those polled, 39 percent expressed worry about the overall impact of AI, while 38 percent said AI will lead to fewer jobs for humans.
Much of the apprehension, however, came from worries about data security, with 49 percent of respondents saying AI will lead to diminished privacy.
Brookings conducted the survey from May 9-11 using Google Surveys. Respondents were asked for their gender, age and the U.S. region in which they live. Some of those polled didn’t answer certain questions or said they didn’t know.
Those surveyed from Southern U.S. states, including Texas, expressed somewhat lower but still substantial concern about AI. About 35 percent of respondents in the region said they’re worried about AI, and 46 percent answered that AI will reduce privacy.
AI ‘is taking off’
The doubts about where AI might be taking us are sparked in part by the swift growth of the technology.
“Artificial intelligence is taking off in a variety of areas,” said Darrell West, author of the study.
This includes industries like finance and health care, with many stocks now being traded automatically and health care providers using AI to help analyze medical images such as CT scans. At some restaurants and automated stores, such as the new Amazon Go store in Seattle, customers can check out on tablets or their phones. Daily use of AI is accelerating rapidly as competition within different industries surges.
“Evidence of automation is kicking in, and people are starting to wonder what that means to them,” West said.
The evolving use of AI has come with its vulnerabilities, but experts say it has also led to a renewed focus on cybersecurity, with tech hubs like Austin helping to lead the way.
Charlie Burgoyne, founder of Austin-based AI consultancy Valkyrie, said continual study and research of new AI systems has become an industry standard. At Valkyrie, Burgoyne said, a quarter of the company’s time is devoted to monitoring how AI is changing. Valkyrie has developed predictive models and other AI technology for clients in industries such as investment and telecommunications.
“As AI advances, more simple tasks will be automated,” Burgoyne said. “AI will be able to better recognize patterns.”
Valkyrie is one of numerous Austin-based companies delving into the AI space. The metro area has become a hub for online security startups, with the market featuring companies such as SailPoint Technologies, SpyCloud, Factom and SparkCognition, to name a few.
‘Just a tool’
Many in the tech industry also say worries about AI are overblown.
They point to products such as Amazon’s Echo home assistant, which features some of the newest AI technology that, while impressive, is far from apocalyptic.
There is a big difference between the long-term version of AI seen in some movies or television shows and the real AI that exists now in people’s phones, computers and cars, said Anita Schjøll Brede, an AI expert on the faculty of tech think tank Singularity University.
“There is still a lot of confusion as to what AI is, and isn’t,” she said. “AI can be used for good or bad. It is just a tool, and we use it as we want.
“Do we need to discuss it? Yes. Do we need to worry about it? Not to the level media is portraying.”
Still, concerns about AI’s role have led to calls for stronger government regulation, particularly around data privacy on the internet. In the Brookings study, only 17 percent of respondents said the government should not regulate AI.
Both federal and state lawmakers have taken some action in that respect.
Weeks ago, U.S. senators questioned Facebook CEO Mark Zuckerberg in an open hearing about the inner workings of his social media company.
At the hearing, Zuckerberg said federal regulation of internet companies, many of which use AI at the core of their work, is “inevitable.” For the time being, most private data protection in the U.S. centers on personal health and financial records, and records of children.
Experts say increased regulation around AI is likely to first be seen in industries that are responsible for financial transactions, or in assuring that AI systems aren’t built to discriminate.
Some AI proponents, such as Burgoyne, the Valkyrie founder, say they are optimistic that younger generations will adapt to AI more easily.
But if the Brookings survey is any indication, that’s not guaranteed. On many of the questions asked, younger respondents expressed levels of concern similar to those of older ones, at times even showing more anxiety about the future.
“In the aftermath of the Facebook hearings, people are becoming more aware of how their data is being used,” West, the Brookings study author, said.
“People are seeing how AI is being applied, and they are starting to worry.”