The most ridiculously awesome thoughts about AI (part 2)

My selection from Lex Fridman’s AI podcasts



Diego Lopez Yse


I’m a huge fan of Lex Fridman and the awesome content he produces to promote ideas and advances in different sciences. In this regard, I want to share some of the concepts that blew my mind when I first heard them in his podcasts.

Christof Koch

He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at Caltech. His work has been cited more than 105,000 times, and he is the author of several books, including “Consciousness: Confessions of a Romantic Reductionist”.

Link to the podcast on YouTube: https://www.youtube.com/watch?v=piHkfmeU7Wo

Consciousness is any experience. It feels like something to be a bat, to be an American, or to be angry, sad or in love, or to have pain. And that is what experience is. It could be as mundane as sitting on a chair, or as exalted as having a mystical moment in deep meditation.

There is a concept of intelligence (natural or artificial), and there is a concept of consciousness experience (natural or artificial). And those are very different things. Historically we associate consciousness with intelligence, but now we confront a world where we are beginning to engineer intelligence, and it’s radically unclear whether that intelligence we’re engineering has anything to do with consciousness, and whether it can experience anything.

Intelligence is about function. It’s about adapting to new environments, being able to learn, to understand quickly, and to predict what will happen next. Consciousness is not about function. It’s about being.

Why is consciousness a hard problem? Because it’s subjective: only I have it, and only I know it directly. I have direct experience of my own consciousness, but no direct experience of yours.

In humans, intelligence and consciousness go hand in hand. In artificial systems, particularly digital machines, they don’t go together. Systems may simulate the behaviours associated with consciousness, but simulating is not the same as having conscious experiences. Just as it doesn’t get wet inside a computer when it simulates a rainstorm, a simulation of consciousness is not consciousness: to have artificial consciousness, you would have to give the machine the same causal powers as a human brain.

Yoshua Bengio

He is considered one of the three people most responsible for the advancement of deep learning through the 1990s, 2000s, and today. Cited more than 139,000 times, he has been integral to some of the biggest breakthroughs in AI over the past three decades.

Link to the podcast on YouTube: https://www.youtube.com/watch?v=azOmzumh0vQ

Instead of learning separately from images and videos on one hand, and from text on the other, we need to do a better job of jointly learning about language and about the world it refers to, so that each side can help the other.
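
Bengio doesn’t spell out an architecture here, but one concrete way to let language and vision “help each other” is to train a joint embedding with a contrastive objective over paired images and captions (in the spirit of CLIP). The sketch below is a minimal, illustrative PyTorch version: the feature dimensions, module names, and the random tensors standing in for real encoders are all assumptions of mine, not anything proposed in the podcast.

```python
# Minimal sketch: joint image-text learning via a contrastive loss.
# Matched image-caption pairs are pulled together in a shared space;
# mismatched pairs are pushed apart, so each modality informs the other.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedder(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, shared_dim=256):
        super().__init__()
        # Project each modality into one shared space.
        self.img_proj = nn.Linear(img_dim, shared_dim)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)

    def forward(self, img_feats, txt_feats):
        z_img = F.normalize(self.img_proj(img_feats), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return z_img, z_txt

def contrastive_loss(z_img, z_txt, temperature=0.07):
    # Matched pairs sit on the diagonal of the similarity matrix;
    # every off-diagonal entry is treated as a negative.
    logits = z_img @ z_txt.t() / temperature
    targets = torch.arange(len(z_img))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: random features stand in for real image/text encoders.
model = JointEmbedder()
z_img, z_txt = model(torch.randn(8, 2048), torch.randn(8, 768))
loss = contrastive_loss(z_img, z_txt)
loss.backward()
```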

I don’t think that adding more depth to artificial neural networks (e.g. ten thousand layers instead of a hundred) is going to solve our learning problem. Engineers, companies, labs and grad students will continue to tune architectures and explore tweaks to make the current state of the art slightly better, but I don’t think that’s going to be nearly enough. I think we need some fairly drastic changes in how we approach learning if these learners are to understand, in a deep way, the environments in which they observe and act.

Our state-of-the-art deep learning methods fail to learn models that understand even very simple environments. Where humans might need just dozens of examples, these systems need millions, even for very simple tasks.

So I think there’s an opportunity for academics to do really important research to advance the state of the art in training frameworks, learning models, and agent learning, even in synthetic environments that seem trivial but in which current machine learning fails.

For machines, the hardest part of any conversation is everything to do with the non-linguistic knowledge you implicitly need in order to make sense of sentences, for example sentences that are semantically ambiguous. Take “the trophy didn’t fit in the suitcase because it was too big”: deciding what “it” refers to takes knowledge of how objects fit inside containers, not grammar. You need to understand enough about the world to interpret those sentences properly.

I think these are interesting challenges for machine learning, because they point in the direction of building systems that both understand how the world works, including its causal relationships, and associate that knowledge with how to express it in language, for reading or writing.

Jürgen Schmidhuber

He is the co-creator of long short-term memory networks (LSTMs) which are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed a lot of interesting, out-of-the-box ideas in artificial intelligence including a formal theory of creativity.

Link to the podcast on YouTube: https://www.youtube.com/watch?v=3FIo6evmweo

There are significant differences between the ways systems can learn. Let’s take the example of a deep neural network that has learned to classify images, trained on 100 different image databases. Now a new database comes along, and you want the network to quickly learn the new task as well.

One simple way of doing that is to take the network that already knows the 100 databases, take its top layer, and retrain it using the new labeled data from the new image database.

Then it can quickly learn that too. The neural network has already learned so much about computer vision that it can reuse that knowledge to solve the new task; it only needs a little adjustment at the top. That is transfer learning.
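
As a concrete sketch of that recipe, the snippet below freezes a pretrained torchvision ResNet-18 (standing in for the network that already knows 100 databases) and retrains only a fresh top layer. The 10-class output size is an arbitrary assumption for the new database.

```python
# Minimal transfer-learning sketch: reuse the pretrained feature
# extractor, replace and retrain only the top layer.
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on a large image corpus (ImageNet).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything the network already knows about vision.
for param in net.parameters():
    param.requires_grad = False

# Swap in a fresh top layer sized for the new database's classes
# (10 is an assumed class count for the new dataset).
net.fc = nn.Linear(net.fc.in_features, 10)

# Only the new layer's parameters get trained; the rest is reused.
optimizer = torch.optim.SGD(net.fc.parameters(), lr=1e-3, momentum=0.9)
```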

On the other hand, true meta-learning is about having the learning algorithm itself open to introspection by the system that is using it.

And also open to modification, so that the learning system can modify any part of its own learning algorithm, evaluate the consequences of that modification, and learn from them to build a better learning algorithm, recursively.
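
Schmidhuber’s full vision is a learner that can rewrite any part of its own code (his Gödel machine); the toy loop below captures only the shape of the idea. One piece of the learning algorithm (here just the learning rate) is exposed to modification; the system proposes a change, evaluates its consequence on a stand-in task, and keeps the change only if learning actually improves. The task and all numbers are illustrative assumptions of mine.

```python
# Toy "meta-learning" loop: the system modifies part of its own
# learning algorithm and keeps the modifications that help.
import random

def train_and_evaluate(lr, steps=100):
    # Inner learner: minimize f(w) = w^2 by gradient descent.
    # The final loss is the "consequence" of using this lr.
    w = 5.0
    for _ in range(steps):
        w -= lr * 2 * w          # gradient of w^2 is 2w
    return w * w

lr = 0.001
best_loss = train_and_evaluate(lr)
for _ in range(50):
    # Propose a modification to the learning algorithm itself.
    candidate = lr * random.uniform(0.5, 2.0)
    loss = train_and_evaluate(candidate)
    if loss < best_loss:         # keep modifications that help
        lr, best_loss = candidate, loss
print(f"self-tuned learning rate: {lr:.4f}, final loss: {best_loss:.6f}")
```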

I think that in the near future we will, for the first time, have robots that learn like kids: robots that, by seeing and hearing us guide them, will try to do something with their own actuators, which are different from ours, and they will understand that difference. They will learn to imitate us, but not in the supervised way where a teacher gives target signals for all muscles all the time.

They will learn high-level imitation, where they first have to imitate us and then interpret the additional noises coming from our mouths (our voices) as helpful signals for doing tasks better. Then, by themselves, they will come up with faster and more efficient ways of doing the same things we taught them.

At the moment this is not possible, but we already see how we are going to get there. To the extent that this works economically, it’s going to change everything. Almost all our production will be affected, and a much bigger AI wave than the one we are witnessing is coming: an era of active machines that shape data through the actions they execute, learning to do that in a good way.
