Is there a future for humans?

Lurking beneath the fear that artificial intelligence and automation threaten people’s jobs lies a far more profound threat. Do artificial intelligence and automation imperil humanity itself?

Those predicting a dystopian future include Elon Musk, Bill Gates, Stephen Hawking, and many others. For some of them, it’s only a matter of time before the prophecy of Yuval Noah Harari’s great book, Homo Deus: A Brief History of Tomorrow, comes to pass. The bleak vision: a world where a small group of humans control machines, which in turn control the rest of humanity.

Meanwhile, there are others who, even while feeling blindsided by the rapid development of AI, see the potential for a bright future. “The revolution in deep nets has been very profound, it definitely surprised me, even though I was sitting right there,” said Google cofounder Sergey Brin at the World Economic Forum in January. “What can these things do? We don’t really know the limits,” he said. “It has incredible possibilities. I think it’s impossible to forecast accurately.”

And there’s plenty to be optimistic about. Already AI, automation, and other digital technologies are helping realize everything from medical breakthroughs to increased economic productivity to self-driving cars. Yet to alarmists, these advances are rendering humans the metaphorical equivalent of frogs inside a pot of water on the stove, unaware that the water is slowly heating up.

Wading into this controversy are Stephen Wolfram, founder of Wolfram Research, and Irwin Gotlieb, chairman of GroupM. Speaking with VentureBeat editor in chief Blaise Zerega onstage at Collision in New Orleans, the pair voiced carefully reasoned, but very different, approaches to the issue in a session titled “Is there a future for humans?”

Wolfram explained that the current kerfuffle around AI is really just a continuation of the way technology helps humans by taking on tasks so that we no longer have to do them. “If there’s one thing that has advanced throughout history, it’s technology,” he said. “The question then is: What is it that humans still have to do?” In his view, we’re rapidly getting to a place where humans will be setting goals and then turning to technology to achieve them, as automatically as possible.

Above: Stephen Wolfram, founder of Wolfram Research, asked, “What is it that humans still have to do?”

“When people ask what’s the space left for the humans,” Wolfram said, “the figuring out of what to do is the kind of quintessential human piece.”

Gotlieb agreed with Wolfram on this principle — to a point. “I am much more fearful than Stephen,” he countered. “All of a sudden, problems that we thought we had two decades to deal with, we’re going to be facing them much, much more quickly.” He explained that the nerd in him welcomed the advances wrought by AI, but at the same time, “there’s a little voice in the back of my head that’s saying the dystopian outcome is perhaps more likely.”

Above: Irwin Gotlieb, chairman of GroupM, said, “There’s a little voice in the back of my head that’s saying the dystopian outcome is perhaps more likely.”

The two discussed ways rapid technological advances are accelerating income inequality, societal changes, and job losses, as well as the need for collective action to better understand and even regulate the ways AI might be used in the future. For instance, science fiction writer Isaac Asimov’s Three Laws of Robotics have held up so far, and more recently, Wolfram has described the need for an AI Constitution.

And yet when the conversation took a turn toward ethics and humanity’s general quest for meaning, the importance of human judgment rose to the surface. Gotlieb raised the scenario of one AI-driven car carrying a single passenger and another carrying several; if only one vehicle could be saved, how would an AI system decide which?

“At the moment there isn’t one solution for the world, and different parties will put different rule sets against it, with different objectives,” Gotlieb said.

“This question of ‘Can we invent one perfect set of mathematical principles that will determine the AIs for all eternity?’ — the answer, I think, is no,” Wolfram said. “In the longer future, we’re being asked to look at ourselves and ask: What is the essence of humanity? If we could define an AI Constitution, what would we want it to say?”

On this they agreed: The future for humans is up to, well, us humans.
