One has to feel rather sorry for liberal arts college students these days. With tremendous advances in artificial intelligence, robotics, and other technologies, many commentators have been quick to speculate about a coming contraction in the supply of jobs and its impact on world economies, social inequality, and even the prospects of personal fulfillment in an automated world. Concerned parents and their children ask: how do we “future-proof” our increasingly expensive and time-consuming educations?
One of many emerging technologies we hear discussed is “Artificial General Intelligence,” which is distinct from the more narrow, and already prevalent, “Artificial Intelligence” (AI). Artificial General Intelligence, or AGI, would be an AI with a wide range of intelligent capabilities: it would not only beat humans at chess or Go, but would be “generally intelligent,” able to adapt its problem-solving skills to other tasks, like resolving disputes. Currently, only humans can really do this: we think, feel, and act flexibly and responsively. And generally speaking, the better educated we are, the better we are at doing these things. We go to university in order to “upskill” that thinking.
While no one can currently say what AGI would look like in practice, extremely capable scientists across the world are working on creating one. Such an AGI would likely be far better than humans at carrying out a variety of cognitive functions. This means that things that were previously the prerogative of human thought, like structuring meaningful, contextualized commands, or drawing on a range of skills to carry out a task, could soon be completed more efficiently by a machine. Owing to the boundless potential of machine learning, such AGIs would be able to “self-improve,” enhancing their capabilities at a rate that human observers would struggle to comprehend.
We’ve already seen the displacement of human thinking in cases of what are now “old” technologies: internet searches from smart devices increasingly negate the need for independent memory recall of dates, names, and events; Google Maps on mobile devices means fewer of us have logistical command of our local areas, let alone our regional road systems. If this is what existing technology has already done to thinking and learning, it’s likely that future developments in AI will cause an even greater re-evaluation of the kind of knowledge we seek and value in society.
Having machines carry out cognitive functions on our behalf raises the fatalistic and depressing prospect that we simply won’t need to think anymore. Machines will “know” our needs and wants better than we do ourselves. We can already glimpse this prospect in the “Big Data” phenomenon that has reshaped our lives. We are told what we should buy, do, or think based on algorithmic profiling of our previous actions, and those of people in the same social, economic, racial, or other group as us. The scary thing is that these technologies can be remarkably accurate, even as they are laced with various kinds of bias. Nonetheless, and partly because of this, they raise powerful ethical, social, legal, and philosophical questions.
A colleague of mine recently pointed out that these transformative technologies will increasingly mean that machines become better than us at “common sense.” Sense-making will be passed on to machines so that, like the Apple Watches that currently tell us when we need to stand up or take a call, we will become more reliant on technology for basic day-to-day tasks and decision-making. As an AGI or enhanced AI emerges, we face the prospect that it won’t just be common sense that is replaced. Information, knowledge, and perhaps wisdom will all be swept up as we humans face the anxiety-inducing possibility that everything we do could be done better, now or someday, by an inanimate machine.
If that comes to pass, those expensive, hard-won degrees from ivory-towered universities will quickly look like they are worth less than their ticket price.
Even computer scientists can’t say what this emerging world will look like. There’s a huge amount of speculation out there, which makes an already complex and frustrating field of study even less clear to those seeking answers to “what’s next?”
Regardless of what happens in these fields in the coming years, emerging technology has stunningly important repercussions for the kind of education we value in society. The kneejerk response, if you’re looking to “future-proof” your education, would be to look up “best college majors for return on investment” online. Do so, and it quickly becomes evident that so-called “STEM” subjects (that is, Science, Technology, Engineering, and Math) offer significantly better career prospects than arts, humanities, and social science degrees. What’s more, STEM subjects seem to relate most directly to the emerging technologies we’ve been talking about. In the race for global dominance, it’s no surprise that the world’s major economies are pumping investment into STEM education, which almost always comes at the expense of funding for subjects outside the hard sciences.
But this simple binary between relevant and not relevant, future-proof and antiquated, is a gross oversimplification, and one that serves to undermine the value that subjects other than the natural and applied sciences can bring to society.
This doesn’t mean we need to force disciplines like History and Philosophy into being relevant to the world. The kneejerk “make me relevant” impulse can itself be facile and mistaken, and it is particularly evident in the American liberal arts educational system. There’s an intrinsic benefit in studying and researching in a more abstracted way, whether that means researching 16th-century religious movements or, as one acquaintance of mine (in Pure Mathematics) recently described his own work, “developing a new system of number.” (When pressed, he told me that the practical consequences of his research would almost certainly be nil.)
That said, being more flexible and curious about the kind of knowledge we value in society is a practical imperative as we enter what some have dubbed “the Fourth Industrial Age.” In a very real sense, emerging technologies do and will mean that what we know becomes less important, as machines grow ever more able to handle our day-to-day tasks. Instead, we should be teaching people to understand, interrogate, and critique how we know.
This involves understanding how different ways of thinking validate the kinds of decisions we make, and also the values we espouse, as societies. Knowing what you think is relevant, whether that’s the year a certain battle happened or how to run a multivariate linear regression, may well end up having less currency than more abstracted thinking, like moral philosophy or political theory. After all, these meta-level ways of knowing seem the least likely to be automatable, and some of the best for understanding what it is societies can and should want from emerging technologies.
As I have found in my own work, the latter kind of thinking can end up having much more powerful and, ironically, practical importance in understanding exponential change in society today. An immersive and intellectually stimulating degree where the only currency is thought, rather than job market potential, might seem quaint and reserved for the privileged few, but it is not so. Silicon Valley’s recent string of scandals, from deep-set gender inequality to evidence of foreign interference in democratic elections using social media, has shown that an army of engineers cannot save the world.
In fact, an education that encourages people to use reason, interrogate the foundations of that reason, develop substantive and thoughtful arguments, and think reflexively and responsibly seems desperately needed in the sunny Bay Area, the locale that seems to hold sway in defining so much of our technological futures. And, frankly, we’re not doing much better with reasoned debate here on the East Coast.
So, my message to students concerned about the future of work is this: Don’t obsess over making your degree a vocational, skills-based one. If that’s what you’re interested in, fine, but you may well be better off entering the workplace instead of paying for an expensive degree. Several years of what most would call “esoteric” thinking, however, will allow you to think in a more critical, flexible, yet rigorous way about what it is we seek from the society of tomorrow. Shallow, fear-mongering, and poorly nuanced thinking is endemic in current debates about technological change and the human future, and much of that debate focuses on the “what” of thinking rather than the “how.” Approach these issues with humility and intellectual soundness, and you’ll see that the stakes are high, but that some of the most urgent answers to the biggest questions of today may well be found in the seemingly dusty irrelevance of an undergraduate collegiate library.
Harry Begg (@HarryMBegg) is a contributor to the Washington Examiner’s Beltway Confidential blog. He is a writer and researcher at the McChrystal Group, based in Washington, D.C. His opinions are his own.