
The Most Awesome Thoughts About AI (Part 1)

My selection from Lex Fridman’s AI podcasts



Diego Lopez Yse

2 years ago | 6 min read

I’m a huge fan of Lex Fridman and the great content he produces to promote ideas and advances across different sciences. In that spirit, I’ve selected some of the Artificial Intelligence (AI) concepts that blew my mind when I first heard them in the interviews he shares through his podcast.

Stuart Russell

He is a professor of Computer Science at UC Berkeley and a co-author of the book “Artificial Intelligence: A Modern Approach”.

Link to the podcast on YouTube: https://www.youtube.com/watch?v=KsZI5oXBC0k

We have to be certain that the purpose we put into the machine is the purpose we really desire, and the problem is that we can’t get that right. In practice, it’s extremely unlikely that we could correctly specify, in advance, the full range of concerns of humanity.

What we need to do is to get away from this idea that you build an optimizing machine and you put the objective into it, because if it’s possible that you might put in a wrong objective, that means that the machine should never take an objective that’s given as gospel truth.

Because once it takes the objective as gospel truth, it believes that whatever actions it’s taking in pursuit of that objective are the correct things to do.

And this is not restricted to AI: in statistics you minimize a loss function, in control theory you minimize a cost function, in operations research you maximize a reward function, and so on. In all these disciplines this is how we conceive the problem, and it’s the wrong problem, because we can’t specify with certainty the correct objective.

We need uncertainty. We need the machine to be uncertain about what it is that it’s supposed to be maximizing.

A machine that’s uncertain is going to be deferential to us: if we say “don’t do that”, the machine has learnt something more about our true objectives, because something it “thought” was reasonable in pursuit of our objectives turned out not to be. It’s going to defer because it wants to be doing what we really want.

It’s a different kind of AI when you take away this idea that the objective is known, and you get a more complicated problem because now the interaction with the human becomes part of the problem. By making choices, the human is giving the machine more information about the true objective, and that information helps the machine to achieve the objective better.

That means you’re mostly dealing with game theoretic problems where you’ve got the machine and the human, and they’re coupled together, rather than the machine going off by itself with a fixed objective.
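To make this concrete, here is a minimal toy sketch (my own illustration, not something from the podcast): instead of optimizing one objective taken as gospel truth, the machine keeps a probability distribution over candidate objectives and updates it whenever the human approves or vetoes a proposed action. The objective names, scores, and noise level are all made-up assumptions.

```python
# Toy sketch: a machine that is uncertain about its objective and learns
# from human feedback. Candidate objectives score each action between 0 and 1
# (illustrative numbers only).
candidate_objectives = {
    "maximize_speed":  {"drive_fast": 1.0, "drive_carefully": 0.2},
    "maximize_safety": {"drive_fast": 0.4, "drive_carefully": 0.9},
}

# Prior belief: the machine genuinely doesn't know which objective is the true one.
belief = {name: 0.5 for name in candidate_objectives}

def expected_value(action):
    """Score an action under the machine's current uncertainty about the objective."""
    return sum(p * candidate_objectives[obj][action] for obj, p in belief.items())

def update_belief(action, human_approved, noise=0.1):
    """Bayesian update: objectives that rank the action highly become more
    (or less) plausible depending on whether the human approved it."""
    for obj in belief:
        likes_action = candidate_objectives[obj][action] > 0.5
        likelihood = (1 - noise) if likes_action == human_approved else noise
        belief[obj] *= likelihood
    total = sum(belief.values())
    for obj in belief:
        belief[obj] /= total

# The machine proposes the action that looks best under its uncertainty...
proposal = max(["drive_fast", "drive_carefully"], key=expected_value)
# ...and the human says "don't do that".
update_belief(proposal, human_approved=False)
print(proposal, belief)  # belief shifts sharply toward the safety objective
```

After a single veto the machine’s belief moves strongly toward the objective the human actually cares about; the more choices the human makes, the better the machine’s picture of what it is really supposed to be maximizing.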

Max Tegmark

He is a Physics Professor at MIT, co-founder of the Future of Life Institute, and author of “Life 3.0: Being Human in the Age of Artificial Intelligence”.

Link to the podcast on YouTube: https://www.youtube.com/watch?v=Gi8LUnhP5yU&list=PLYIvZcNcC8pc4Ue_XFLjalQ51tH-uyG4a&index=22

When we build machines, we normally build them with some kind of goal: win this chess game, drive this car safely, or whatever. As soon as you put a goal into a machine, especially if it’s an open-ended goal and the machine is very intelligent, it will break that goal down into a bunch of sub-goals.

One of those sub-goals will almost always be self-preservation, because if the machine breaks or dies in the process, it’s not going to accomplish the goal.

Similarly, if you give any kind of ambitious goal to an Artificial General Intelligence (AGI), it’s very likely to want to acquire more resources so it can pursue that goal better, and it’s exactly from these sorts of unintended sub-goals that some of the concerns about AGI safety arise.

You give an AGI some goal that seems completely harmless and then, before you realize it, it’s also doing other things you didn’t want it to do.

Right now we have machines that are much better than us at some very narrow tasks like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there is still no machine that can match a human child in general intelligence.

AGI is, by its very definition, the quest to build a machine that can do everything as well as we can. If that ever happens, I think it’s going to be the biggest transition in the history of life on Earth. But the really big change doesn’t come exactly the moment they are better than us at everything. It’s actually earlier.

First there are big changes when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then, the really whopping change comes when they become better than us at AI research.

Right now the timescale of AI research is limited by the human research and development cycle (typically years). But once we replace the engineers with equivalent pieces of software, there’s no reason to think in terms of years, and cycles can become much faster.

The timescale of future progress in AI, and also all of science and technology, will be driven by machines. The really interesting moment is when AI gets better than us at AI programming, so that they can, if they want to, get better than us at anything.

My hunch is that we’re going to understand how to build AGI before we fully understand how our brains work, just like we understood how to build flying machines long before we were able to build a mechanical bird.

Regarding human-machine objective alignment, we should start with kindergarten ethics (which pretty much everybody agrees on) and put that into our machines now. For example, anyone who builds passenger aircraft wants them never, under any circumstances, to fly into a building or a mountain.

When Andreas Lubitz, the Germanwings co-pilot, flew his passenger jet into the Alps in 2015, killing over a hundred people, he just told the autopilot to do it. And even though the computer had the GPS maps, it accepted the command.

We should take those very basic values, where the problem is not that we disagree but simply that we’ve been too lazy to put them into our machines, and make sure that from now on airplanes refuse to do anything like that: instead, they go into safe mode, maybe lock the cockpit door, and go directly to the nearest airport.
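As a toy illustration of what “putting kindergarten ethics into the machine” could look like at the simplest level, here is a sketch of an altitude-command check that refuses anything below the terrain the autopilot already has in its maps. This is my own made-up example, not a description of real avionics; the sector names, elevations, and margins are assumptions.

```python
# Toy sketch: hard-code one rule the autopilot will never violate, using the
# terrain data it already carries. All values are invented for illustration.
TERRAIN_ELEVATION_M = {
    "alps_sector_7": 3800,    # highest terrain in the sector (illustrative)
    "plains_sector_2": 200,
}
MIN_CLEARANCE_M = 300         # required safety margin above terrain

def accept_altitude_command(sector, commanded_altitude_m):
    """Refuse any commanded altitude that would put the aircraft below the
    terrain plus a safety margin; otherwise accept the command."""
    floor = TERRAIN_ELEVATION_M[sector] + MIN_CLEARANCE_M
    if commanded_altitude_m < floor:
        # Instead of obeying, enter safe mode and hold a safe altitude.
        return {"accepted": False, "safe_mode": True, "hold_altitude_m": floor}
    return {"accepted": True, "safe_mode": False,
            "hold_altitude_m": commanded_altitude_m}

# A command to descend to 1,500 m over the Alps is simply refused:
print(accept_altitude_command("alps_sector_7", 1500))
# -> {'accepted': False, 'safe_mode': True, 'hold_altitude_m': 4100}
```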

Steven Pinker

He is a professor at Harvard and was previously a professor at MIT. He is the author of many books, including “The Better Angels of Our Nature” and “Enlightenment Now”.

Link to the podcast on YouTube: https://www.youtube.com/watch?v=epQxfSp-rdU

Is there any difference between the human neural network and the ones we are building in AI? I think there is overlap, but also some big differences. Current artificial neural networks, the so-called deep learning systems, are in reality not all that deep.

They are very good at extracting high-order statistical regularities, but most of these systems don’t have a semantic level: a level of actual understanding of who did what to whom, why, where, how things work, and what causes what.

The goal of making an artificial system that is exactly like the human brain is a goal that no one is going to pursue to the bitter end, because if you want tools that do things better than humans, you’re not going to care whether they do them the way humans do.

Why take humans as benchmarks?

Goals are external to the means of attaining them: if we don’t design an AI system to maximize dominance, then it won’t maximize dominance. It’s just that we are so familiar with Homo sapiens, in whom these two traits come bundled together (particularly in men), that we are apt to confuse high intelligence with a will to power. But that’s just an error.

Another fear is that we’ll be collateral damage: that we will give an AI a goal like curing cancer, and it will turn us into guinea pigs for lethal experiments. I think these kinds of scenarios are self-defeating.

First of all, they assume that we’re going to be so brilliant that we can design an AI that can cure cancer, but so stupid that we don’t specify what we mean by curing cancer in enough detail that it won’t kill us in the process.

And they assume that the system will be so smart that it can cure cancer, but so idiotic that it doesn’t figure out that what we mean by curing cancer is not killing everyone. I think this value-alignment problem is based on a misconception. The code of engineering is: you don’t implement a system with massive control before testing it.
