
Computer scientist and AI researcher Stuart Russell of UC Berkeley says there is a fundamental, potentially civilization-ending shortcoming in the “Standard Model” of AI as it is taught today.

In his new book, Human Compatible: Artificial Intelligence and the Problem of Control, Dr. Russell argues that unless we rethink the building blocks of AI, the arrival of superhuman AI may become the “last event in human history.”

Though this may sound a little crazy, Human Compatible is a carefully written explanation of the concepts underlying AI as well as the history of their development.

If you want to understand how fast AI is developing and why the technology is so dangerous, Human Compatible is your guide, literally starting with Aristotle and closing with OpenAI Five’s Dota 2 triumph.

Russell’s aim is to help non-technologists grasp why AI systems must be designed not merely to fulfill the “objectives” assigned to them (the so-called “Standard Model” of AI development today), but to operate so “that machines will necessarily defer to humans: they will ask permission, they will accept correction, and they will allow themselves to be switched off.”

Why did you write Human Compatible?

Dean’s Society – October 23, 2006; Stuart Russell

Dr. Russell: I’ve been thinking about this problem – what if we succeed with AI? – on and off since the early 90s. The more I thought about it, the more I saw that the path we were on doesn’t end well.

(AI researchers) had mostly just been doing toy stuff in the lab, or games, none of which represented any threat to anyone.

It’s a little like a physicist playing with tiny bits of uranium. Nothing happens, right? So we’ll just make more of it, and everything will be fine.

But it just doesn’t work that way. When you start crossing over to systems that are more intelligent, operating on a global scale and having real-world impact, like trading algorithms, for example, or social media content selection, then all of a sudden you are having a big impact on the real world, and it’s hard to control. It’s hard to undo. And that’s just going to get worse and worse and worse.

Who should read Human Compatible?

Dr. Russell: I think everyone, because everyone is going to be affected by this. As progress occurs towards human-level (AI), each big step is going to magnify the impact by another factor of 10, or another factor of 100.

Everyone’s life is going to be radically affected by this. People need to understand it. More specifically, it would be policymakers, the people who run large companies like Google and Amazon, and people in AI and related disciplines, like control theory, cognitive science, and so on.

My basic view was that so much of this debate is going on without any understanding of what AI is. It’s just this magic potion that will make things intelligent. And in these debates, people don’t understand the building blocks, how it fits together, how it works, how you make an intelligent system.


So chapter two (of Human Compatible) was sort of mammoth, and some people said, “Oh, this is too much to get through,” and others said, “No, you absolutely have to keep it.” So I compromised and put the pedagogical stuff in the appendices.