ARTIFICIAL INTELLIGENCE

Austin tech leaders launch effort to promote, guide future of artificial intelligence

Posted March 30, 2017

Bryan Johnson looked across the ballroom at the hundreds of South by Southwest Interactive enthusiasts listening to him and wondered why they’d settled for something so inefficient.

With such a collection of brainpower and knowledge in one room, he mused, why should everyone listen only to one person at a time? What might happen if, with a neural-interface chip implanted in their brains, the entire crowd could engage simultaneously in the same conversation?

“Before, we were subject to our environment,” said Johnson, who is chasing that sort of augmented intelligence and cognition at his startup, Kernel. “Now we’re in a place where we can program almost any kind of world we want, in biological and computational forms.”

Johnson's vision and comments drilled directly to the core of the excitement — and the concern — that swirled around the heavily attended SXSW sessions on artificial intelligence, robotics and automation.

For all the promising capabilities a brain chip or the myriad other AI-related advances might produce, it’s not hard to imagine dangerous or dystopian scenarios they could also bring. At SXSW, virtually every amazing possibility came paired with a statement of caution — a caveat about the need to discuss an ethical, transparent and beneficial approach to AI development.

A new Austin-based initiative joined that chorus in March, becoming the latest in a global emergence of organizations designed to chart that responsible path forward. The local effort, called AI Austin, was formed and partially funded by a trio of the region’s tech-business heavyweights — Michael Stewart, Manoj Saxena and Tom Meredith — who hope to bring a more human-oriented and Austin-flavored perspective to the conversation.

Manoj Saxena is chairman of Austin-based Cognitive Scale and former general manager of IBM Watson. (Photo: Cognitive Scale)

AI Austin organizers hope to raise the profile of local artificial intelligence work and build more connections with researchers, businesses and governments around the world. To that end, they brought in Kay Firth-Butterfield, one of the world’s top experts on the legal and ethical considerations of AI and autonomous systems, to lead the effort.

But as she, Stewart and Saxena noted in interviews after SXSW, AI Austin also plans to launch local initiatives that use artificial intelligence to enhance the education, health and welfare of the Austin community.

“AI is going to change everybody’s lives, and we definitely want to change everybody’s lives for the better,” Firth-Butterfield said. “We’re all passionate evangelists for the ability of AI to really help humanity over the next problems that we will be facing, for example in climate change, health care issues, some of the issues we find in the developing world.”

At the outset, AI Austin will focus on four primary topics. The first, a law and ethics piece, will reside at the University of Texas Robert S. Strauss Center, where Firth-Butterfield is a senior fellow. The group also will look for ways AI can improve education, health care and social justice.

“If we grow it correctly and responsibly — and I’ll keep using the word ‘responsible’ — then we can lift everybody’s boats at the same time,” Firth-Butterfield said.

The timing for the emergence of AI Austin and of similar groups in Silicon Valley and at leading universities around the world is vital, local and visiting experts said throughout SXSW. Already, narrow uses of artificial intelligence have pervaded everyday life through smartphone apps, self-driving vehicles, digital personal assistants and other increasingly mainstream technologies.

Even absent an existential threat posed by a super-intelligent machine — and that notion has its share of skeptics, even among AI scientists — uses of artificial intelligence will become only more commonplace in the future. So the discussions needed to guide that development and shape a future that’s beneficial to humanity need to happen today, experts say.

“Artificial intelligence is going to be a technology that will be as pervasive if not more pervasive than the internet was, and even more than electricity was,” said Saxena, chairman of Austin-based Cognitive Scale. “It has a lot of power to do good, but also a lot of power to do harm.”

AI in Austin

As a former general manager of IBM Watson, Saxena has seen the ups and downs of artificial intelligence. During one SXSW session, he quipped that AI stands for two things — “absolutely incredible” or “artificially inflated.”

Artificial intelligence has a decades-long history of big hype cycles followed by “AI winters,” when investment and interest all but froze. Today, however, AI applications have embedded themselves deeper into everyday life — whether in the algorithm Netflix uses to learn your preferences and offer better movie suggestions, or in Google’s self-driving cars cruising Austin’s streets.

This latest wave of AI is changing how we work, how we live and how we connect with others, Saxena said. And because of that influence, he said, AI Austin will focus more on the human perspective.

“I think we need to keep Austin AI weird,” he said. “Most cities are coming at it from a technology lens, building companies, standards, whatever. Austin can add a human lens to AI.”

Austin can approach AI’s challenges from this different angle because it comes from a different mindset — one that emerges from its blend of a diverse creative community, a collaborative spirit and a mature high-tech industry infrastructure, Saxena said.

Yet Central Texas also has a robust, if often under-the-radar, history of technical expertise in artificial intelligence. The UT computer science department, for example, is ranked among the top 10 in the country, and a significant portion of IBM’s Watson initiatives are based here.

However, few locals have been at this game longer than Doug Lenat, who started the Cyc project in 1984 and later spun it out as Cycorp. Lenat and his Austin-based firm have spent decades developing a huge knowledge base that instills machines with something akin to common sense — the ability to understand, without an exhaustive set of instructions, that a full-grown blue whale won’t fit in your backyard pool.

Stewart said he got to know Lenat and his work early on, when both were affiliated with the Microelectronics and Computer Technology Corporation (MCC), a significant precursor to Austin’s high-tech boom.

“It was obvious to me that what (Lenat) was working on was not just a form of AI that might be interesting … but actually working on how humans think with knowledge, inference and the common sense we develop over the years,” Stewart said.

Stewart hoped to help commercialize Cycorp’s technology under a newer firm he created, called Lucid. Alongside that effort, Lucid developed one of the AI industry’s first ethics teams to contemplate future development and use of its technologies.

The deal never came to fruition, and Stewart and his colleagues realized they would never attract broad industry participation in ethical deliberations if they kept that team in-house.

“We would not get competitors to cooperate if we did it in house,” he said. “It needed to be pluralistic, to include the whole society.”

The idea of AI Austin was born. Meredith and Saxena jumped on board last fall, and USAA signed on as the first major corporate sponsor a short time later. Less than six months after the effort began, they held an informal launch party at Meredith’s home on March 9.

‘AI for good’

USAA might not be the first company to come to mind when one mentions artificial intelligence, but insurance and risk assessment could exert a powerful influence on future AI deployments.

One of the key challenges today is understanding how an AI system reaches its conclusions. For example, advanced machines can pick the image of a cat out of a video, but they can’t explain how or why they managed to do it.

So how does an insurer assess risk in a self-driving car controlled by artificial intelligence? If such a vehicle gets into an untenable position and steers itself into a fatal accident, there’s no way to know why it made that decision. Perhaps it chose that unfortunate path to avoid a worse fate; perhaps it simply erred.

Finding a way to gain more insight into the AI black box remains a key concern for many in the industry, and transparency came up in multiple panel discussions at SXSW.

AI Austin won’t shy away from the complex technical issues, but the founders have seeded a more humanistic perspective, as well.

“Don’t we want AI to be transparent, fair, honest and with just outcomes?” Meredith asked during a SXSW panel. “Those words have values behind them, and we have to realize there are going to be bad actors. So we have to make sure the leaders of companies and a strong community are behind ethical use.”

Meredith said he sees the initiative supporting practical and ethical AI applications that help bridge divides, promote equity, combat climate change and help workers prepare for a future in which machines replace humans in more and more jobs.

Education and job training could have some of the most immediate applications. A recent Oxford University study found that artificial intelligence, robotics and other automation technologies could replace as much as 47 percent of current U.S. jobs in the next 20 years.

Because of the pervasive economic, societal and personal changes that could arise — and, in many cases, are already underway — AI Austin and similar groups hope to collectively produce what Saxena calls “AI for good.”

Rather than machines simply replacing humans, they hope to design a future in which machines augment and scale human intelligence and capability — much as Johnson envisioned with the brain implants he described to the crowd at SXSW.

And that human side, Saxena said, is where Austin should lead.

“Berlin today is known for design in the IT world,” he said. “Silicon Valley is known for tech innovation. Austin’s AI can be known for enabling humanity. … That’s a wide open space, and frankly that’s where 90 percent of the AI opportunity lies.”
