This AI Milestone Should Kill Traditional Schooling
Abstract Thinking Will Dominate Future Work
Timothy Dasey, PhD
I've been worried about schools since my college teaching stint almost thirty years ago. The students were great at memorizing facts and processes. But I was teaching computer programming, a subject loaded with abstract concepts. Many of them struggled. The students separated into two groups, with no overlap. Those who got the concepts aced the work, while the rest bombed the assignments and tests. I wondered why; after all, some of the concepts were things I've seen elementary school kids learn. (I fully admit it could have been the teacher's fault!)
In the workplace, I've had the pleasure of leading some of the brightest minds in the country. And yet, some (not many) of those straight-A graduates can get lost in the proverbial forest. Deep expertise doesn't necessarily make one a strong, broad-based conceptual thinker.
I had an advantage in leading and developing Artificial Intelligence (AI) throughout my career. AI design gave me visibility into users, workers, and businesses across many industries. Plus, AI is a learning science, and my interdisciplinary background (my Ph.D. is in Biomedical Engineering) allowed me to relate AI training methods to neuroscience and psychology.
I observed my daughters' schooling closely. My eyes would roll at the heaps of time spent on repetitive problem-solving. I knew they'd forget the stuff quickly, and most of the students would never have cause to remember any of it - ever. The rest would Google it when needed. I was saddened by the progression from excited elementary kids to stress-filled high schoolers. Some of that's a teenage staple, but we've unnecessarily amplified it. I knew conceptual learning - the kind that leads to a self-learner - doesn't come from the teaching process in most classrooms.
Anyway, this isn't about me, but I do have an amalgam of many perspectives. I've been in the vicinity of the education community. Close enough that I know a crisis when I see one. It's time I speak up.
My simmering school concern elevated to alarm a couple of years ago. I attended a seminar by a Google scientist. They were building AI that builds AI, and it was working! It's better than human-built AI. And not just Google: the so-called AutoML (Automated Machine Learning) industry has sprouted.
AI developing AI? When I mention it, people give me bug eyes, like it shouldn't be possible. Brains analyze brains and software analyzes software. It shouldn't be shocking that AI will develop AI.
It's probably inevitable since all of the ingredients are present. There are best practices in AI design. There are clear ways to measure AI performance. And at this point, there are many built AI solutions. That means auto-AI algorithms have training examples. Plus, engineers often have to follow a trial-and-error process. The design automation can try a ton more options.
Until that seminar, I thought future workers needed lots of STEM expertise (Science, Technology, Engineering, and Math), though with more modern focuses.
That day I realized it's not about STEM; it's about being able to think big picture.
AI that can develop AI changes everything. Here's why.
It's easiest to think of AI as a decision-maker. In some realms it has beaten human performance, but only in niche tasks. At other times it is so much cheaper that it can replace people despite inferior results.
AI is like an autistic savant, who can be brilliant in one specialty, like playing an instrument or doing math, yet struggle to navigate a basic conversation. Each AI is also good at only one thing.
As many as half of all job tasks could automate with existing AI methods. That won't happen overnight. Lack of decision examples to train AI will limit the pace of rollout. For instance, AI language translation should be far better with English-to-Spanish than Swahili-to-Maori. That's because there are more examples to give AI for the more common languages.
Plus, AI has a massive talent shortage. Until the Google seminar, I assumed the labor supply would be the biggest limiter. A 2017 estimate was that industry could use 10 million AI developers, but the supply was 300,000 - worldwide. A new B.S. graduate with AI skills can earn a six-figure salary. For a PhD, it can be several hundred thousand dollars. Finding good AI workers is a combat sport.
Companies have targeted big markets like translation, image analysis, voice assistants, chatbots, and content search. But the decision business is long-tail. That's the term for problems with lots of small markets. In AI's case, the ultimate market size is one person. Every job has its drudgery to automate. Every person has weaknesses to mitigate. And each data set has its quirks. To amplify my productivity, I should ideally have the AI designed just for me.
Long-tail markets can become attractive to companies only in certain circumstances. For one, the technology must be adaptable. AI will cover the long-tail when non-expert workers can customize it on their own. There won't be enough AI-skilled people to do it for them, especially since the uses for AI will also keep growing.
It takes a lot of skill now to develop AI. The engineers must choose the method for a given data set and the desired AI decision. That was the topic of the seminar I saw - AI that would try many methods and pick the best one. Each technique needs other choices, many of which require experience. Recent research has automated those decisions too.
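The seminar's system was proprietary, and the snippet below is my own toy illustration rather than anything Google described. Still, the core loop - generate candidate designs, score each one, keep the best - can be sketched in a few lines. Here the "designs" are just candidate slopes for a one-parameter model, with the data invented for the example:

```python
# Toy sketch of design-by-search: propose many candidate models,
# score each against the data, and keep the best performer. AutoML
# applies this same try-and-measure loop to whole AI designs.
import random

random.seed(0)

# Synthetic data from a "true" relationship y = 3x, plus noise.
data = [(x, 3 * x + random.gauss(0, 0.5)) for x in range(20)]
train, test = data[:15], data[15:]

def error(slope, points):
    """Mean squared error of the candidate model y = slope * x."""
    return sum((y - slope * x) ** 2 for x, y in points) / len(points)

# Trial and error: automation can try far more candidates than a person.
candidates = [s / 10 for s in range(61)]   # slopes 0.0, 0.1, ..., 6.0
best = min(candidates, key=lambda s: error(s, train))

print(f"best slope: {best}, held-out error: {error(best, test):.3f}")
```

A real AutoML system searches over whole model families and their settings rather than a single slope, and it scores candidates more carefully, but the select-by-measured-performance principle is the same.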
Can you see why AI that can develop AI is so impactful? It will democratize AI. Remember the PC revolution? It accelerated when PCs became easier to use - when one didn't have to be a geek.
You will be in charge of eliminating significant aspects of your job. The increased productivity will make you more valuable. If you don't, the other guy or gal will, and you'll be on your way to becoming a fossil. AI that develops AI will come to workers like a rough-cut statue. It'll be trained for a canonical task, such as an information search. But the best workers will morph the rough cut into their masterpiece by overlaying custom data, priorities, and context onto the design.
If AI couldn't serve the long tail, then some workers could get away with being AI-ignorant; their tasks wouldn't get automated. But auto-AI makes the long tail reachable. Every person will have the onus to make soup from the AI pieces. Abstract thinking will be on everyone's plate.
The Remaining Jobs Need Abstract Thinking
AI isn't going to take over the world anytime soon and make people irrelevant. That day may come, but there are a lot of big challenges to solve between now and then. Our frontal lobe functions aren't automated, and we don't have good methods for automating them. Also, many decisions use knowledge unavailable to AI. It'll have to move toward general intelligence to show even basic common sense.
But it will impact every job - a lot.
AI will take the concrete thinking, puzzle-solving tasks. All jobs have routine decisions. Those are the ones that will automate. AI can even create novel solutions within the bounds of a clearly defined problem. Just ask Go and chess competitors, who are learning from superhuman AI. Some real-life problems aren't that different from AI's game-playing feats. Production work will be AI's job, in the office as well as the factory.
Workers will have to decide what automation applies to their jobs. That's harder than it sounds. I've talked to many workers about uses for AI, across many industries. People often don't see the possibilities once they become good at their work. Their decisions are automatic. They can have a hard time anticipating how tech could change the work process.
Workers will also need judgment skills to understand how to interpret AI output. They'll consider the pros and cons of AI products, choosing them like employees. Each will have a personality of sorts. Technology 'empathy' will be a gotta-have.
Abstract thinking will rule other job aspects too. We'll make the tough judgments that consider incomplete and uncertain information. We'll tackle novel problems. And we'll handle open-ended situations, where problem statements and goals intertwine and evolve. Out-of-the-box creativity will still be ours. Oh, and interpersonal relations.
Judgments, creativity, and relations. If you're not good at those, then you'll be on the sidelines - irrelevant to the economy.
Society already lacks sufficient abstract thinking. Employers complain they can't find enough creative thinkers, team players, and critical thinkers. It's hard to go a day without ample evidence that many people are bad at relationships. Creativity measures have declined over the past few decades. So has empathy.
When AI democratizes, workers will need to be wise, creative, AI managers who relate well to people. Can you see why I'm worried?
Schooling Dedicated to Abstractions
Many of our children are in trouble without a complete school overhaul. Abstract thinking will be the job currency in the AI era. But typical schools are concrete thinking factories - precisely the opposite of what's needed.
Schools have to take on this enormous challenge. I'm talking K-12, not college, though both need work. By the time students get to college, they do need to dive into a job area.
We have to rethink the base assumptions of schooling.
Abstract thought is conceptual. Educators know concept-based learning has a completely different paradigm from knowledge teaching. Everything revolves around the concept. All deep-dives into details support the concept-learning goal. Unfortunately, most schooling is the other way around, with a knowledge-acquisition emphasis. Teachers do care that students take away the concepts, but that's not going so well. Even MIT physics students don't learn concepts well when taught in a lecture format with an analytical focus.
Status quo school paradigms hurt abstract thinking. The robust concepts we need will sprout from variety, not repetition. Abstractions arise by connecting distinct concepts across many knowledge domains. That happens via analogy, periodic work/rest cycles, and play. Concepts about self and others are more important than the periodic table. Not to pick on chemistry, which I love, but this engineer thinks psychology is now the most important science. The arts are critical too. They teach mental patterns that get reused in unrelated situations.
Even the course structure needs rethinking. Why have everyone take a course full of details most will never use? Instead, imagine courses designed around transferable concepts - the ones that are most broadly useful.
For example, imagine a class on 'power,' in its many forms. It could cover electricity and historical clashes between powerful foes. Literature could highlight the corrupting influence and terrible misuses of power. Sociology might discuss soft power instead of overt clout. Computer and math models could tie the disparate examples together to show canonical power dynamics.
Those examples are far-flung, but concepts get more transferable when the examples are more varied, and the analogies are more abstract. The 'power' course would engage everyone and would put abstract concepts front-and-center.
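As one hypothetical example of the kind of computer model such a course might use (my own invention, not from any existing curriculum): a few lines of simulation can show the canonical "power begets power" dynamic, where each round a faction gains influence with probability proportional to the influence it already holds - a rich-get-richer process that echoes the historical, literary, and sociological examples above.

```python
# Toy "power begets power" simulation: each round, a faction gains one
# unit of influence with probability proportional to its current share.
# Small early advantages tend to snowball - a pattern students could
# compare against examples from history, economics, and literature.
import random

random.seed(42)

influence = {"faction_a": 1, "faction_b": 1}   # hypothetical rivals
for _ in range(1000):
    total = sum(influence.values())
    draw = random.uniform(0, total)
    winner = "faction_a" if draw < influence["faction_a"] else "faction_b"
    influence[winner] += 1

total = sum(influence.values())
for name, units in influence.items():
    print(f"{name}: {units / total:.1%} of total influence")
```

Re-running with different random seeds makes the point vividly: the same even starting conditions can produce very different winners, yet lopsided outcomes are the norm.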
We need to create great abstract thinkers, so schooling must get a major redesign. That will take a huge national commitment, but I can think of few more critical problems.
Curriculum reform and accompanying structural changes are long overdue. Educational researchers are on the right track, with 21st-Century Skill paradigms emphasizing soft skills like creativity, critical thinking, and teamwork. But most of the great learning research hasn't made it to the classroom. Kids are still bored, they're learning material more appropriate for the moon launch era, and employers are increasingly having a hard time filling high-skill jobs. Being on the right track isn't good enough. The world keeps changing faster than school.
There's urgency. It takes fifteen years to school a new worker. AI democratization won't take that long.
Timothy Dasey, PhD
Scientist, writer, educational enthusiast, parent, envelope-pusher