An Anthropological Time-Capsule of AGI and AI-Progress Takes You Hear a Lot in 2023
A few months ago I read an article about UBS’s acquisition of Credit Suisse, a 167-year-old bank, in the wake of the small banking crisis then unfolding. The article said that UBS planned cuts at Credit Suisse deep enough that the acquisition would be profitable by 2027.
I was struck immediately by my belief that the world of 2027 will be very different from the world of today because of rapid AI progress. Large corporations make multi-year projections like this all the time, but doing so would feel very difficult and precarious to me because of my beliefs about AI progress. Noticing that made me realize again that I’m in a pretty unusual niche of people who think artificial general intelligence is possible, that it’s coming, that it’s quite likely coming soon, and that when it comes it will affect basically everything. These are a pretty specific and high-stakes set of predictions, and I thought it might be interesting to make explicit some of the other immediate follow-on implications of believing that AGI is coming soon that are common among the people who are pretty convinced it is.
Specifically, I think all of what I’m saying here is reasonably common to hear from people working on AI in 2023 who believe AGI is possible soon and that it will be a massive deal when it comes. In some cases, I’m highlighting specific commonly held beliefs because I think they are particularly at risk of not aging well.
Machine consciousness isn’t relevant to the most important questions about AI
My impression is that, in contemporary discussions about AI alignment, points about AIs being conscious usually feel like a distraction. There are a few reasons for this. First, a model wouldn’t have to actually be conscious to demonstrate power-seeking behavior that could be worrying for humanity. Much of AI alignment work is framed around the objective of preserving humanity’s ability to steer the future in the presence of very advanced AI systems, and this goal isn’t affected much by whether or not the models are conscious.
Also, the virtual-assistant LLMs that are the most popular demonstration of the most powerful models currently available belong to a paradigm in which, to a rough approximation, the model acts as a “text simulator”: it predicts and then outputs the most likely next token. So it doesn’t feel like we get much information about whether these models are in fact conscious even when they generate outputs claiming that they are.
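To make the “most likely next token” framing concrete, here is a minimal sketch of greedy decoding (repeatedly appending the single most probable token), assuming the Hugging Face transformers library and the small public gpt2 checkpoint; the prompt and loop length are purely illustrative, and this is not how any particular assistant product actually samples.

```python
# Minimal sketch: greedy next-token decoding with a small language model.
# Assumes the Hugging Face `transformers` library and the public `gpt2`
# checkpoint are available.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("The bank announced that", return_tensors="pt")

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits   # shape (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()   # the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

In practice, deployed systems usually sample from the predicted distribution rather than taking the argmax, but the “predict a distribution over the next token, emit one, repeat” loop is the same.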
The one exception here is people who are specifically worried about so-called “s-risks,” in which a future AI system for some reason simulates lots of suffering beings. My impression is that work to explicitly mitigate this kind of risk isn’t considered a key part of the alignment strategy by most of the leading teams working on the problem.
The basics of deep learning can get us to AGI
This one is pretty self-explanatory. Most people working on AGI don’t seem to think that the basic paradigm of using back-propagation to update a bunch of weights in some kind of network is going to change any time soon. Neural networks of one shape or size or another are here to stay, and can get us all the way to AGI. This is one of those things that seems totally obvious to people working in the field, but that you still occasionally hear objections to from people who are nominally working on AI but in very different industries (e.g. defense).
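For concreteness, here is a minimal sketch of that basic paradigm: a tiny two-layer network whose weights are updated by back-propagating a squared-error loss and taking gradient-descent steps. The toy data, architecture, and hyperparameters are all invented for illustration, not any lab’s actual setup.

```python
# Minimal sketch of the basic paradigm: back-propagation updating the weights
# of a small network via gradient descent. Toy data and hyperparameters are
# illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))    # toy inputs
y = X[:, :1] * X[:, 1:]         # toy target: x1 * x2, shape (64, 1)

W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    loss = ((pred - y) ** 2).mean()

    # Backward pass: back-propagate the squared-error loss.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent weight update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

Everything from GPT-4 on down is, at this level of abstraction, a scaled-up version of this loop: compute a loss, back-propagate gradients, nudge the weights.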
Progress towards AGI will accelerate economic growth in the near-term
Even people who are very scared of the fast pace of AI progress think that it will grow the economy substantially, especially as more effort is poured into research and development of larger models. There is some debate amongst economists who are watching AI closely about whether bottlenecks will appear in parts of the economy whose productivity AI can’t easily increase, but frankly I think the AGI crowd tends to handwave a lot of this away. Most people working toward AGI seem to think that we’re less than four years away from the point where real cutting-edge AI (not just software glossed up as AI) begins to have a noticeable effect on overall GDP in a way that, for instance, cryptocurrency and space technology never have, and that rivals the influence the internet had by the early 2000s.
We won’t have another “AI Winter”
When I first became interested in AI around 2015, it was important to acknowledge that the field had been through several periods of rapid progress and enthusiasm, followed by “winters” of disappointment and doubt about future progress. In this way, the field was a bit like virtual reality or nuclear fusion: going through periodic waves of hype, usually benefitting from advances in other parts of technology, followed by busts and renewed pessimism.
Around 2015 it still seemed unlikely-but-not-impossible that we’d enter another such winter. Given the point above that deep learning seems able to get us all the way to AGI, and the fact that even the models we currently have (e.g. GPT-4) have far more useful capabilities than the industry has yet had time to productize and commercialize, another winter now seems very unlikely.
Knowledge work will be easier to automate than physical labor, although robotics too will be solved quickly after we have something that looks like AGI
I think a lot of people have come around to the idea that the wave of industrial automation introduced by AI will be unusual because it will target white-collar workers, with all of their fancy credentials and knowledge-work pretensions, before it comes for people whose work consists of physical labor in the real world. This was not obvious 5-10 years ago, but it is quickly becoming a core assumption in how the policy response to the technology’s economic implications might be calibrated.
China developing AGI is becoming a bit less scary on the margin, because of its lack of access to the most leading-edge chips and its government’s stricter approach to AI and tech regulation generally
Similarly, back in 2015-2019, with the dramatic rise of Alibaba, Tencent, and Baidu, it seemed like a very real possibility that AGI might be built in China first, and that the balance of cutting-edge AI research might shift over time to be more concentrated in China. As an example, Kai-Fu Lee’s book “AI Superpowers” was dedicated almost entirely to this thesis.
While the development of models in China, especially by a state-led project, is still a major consideration in a lot of conversations about AI strategy, my impression is that, on the margin, people think this is less likely than they thought it was several years ago. China has failed, with substantial American effort aimed at helping it fail, to build an independent leading-edge semiconductor fabrication industry, and it has in fact lost access to some cutting-edge hardware, a core input into training the largest models. Additionally, the Chinese government has seemed more game than the American government to regulate its AI industry aggressively in ways that curtail innovation, and it has in the intervening years decapitated the executive suites of its tech giants generally. Finally, there still isn’t a true frontier lab based in China, which I think would have been surprising to hear about 2023 back in 2018.