Darren Gold

The Future Shock Is Here


Will the rate of technological change brought about by AGI exceed the human ability to adapt to such change? See my thoughts below, followed by a ChatGPT-enabled essay.

If you are a regular reader of my blog, you will undoubtedly notice that I spend a fair amount of my free time reading. Over the last two weeks, I’ve devoted much of my reading to the subject of artificial general intelligence (AGI). In addition, I’ve spent some time playing and working with ChatGPT (built on OpenAI’s GPT-3.5 model), the leading large language model chatbot, which is trained to generate text through a dialogue format. If you have yet to try it, I would highly encourage you to do so by signing up through OpenAI, the company that created and is advancing the underlying model.

Like most people who have spent time using and thinking about ChatGPT and the implications of AGI, I am both excited by the seemingly endless possibilities this tool promises and concerned by the potential unintended consequences of such a powerful technology. So I decided to write about it this week, even though so much has already been said by people much more deeply immersed in the subject.

The theme that I most wanted to write about was a general sense that we have entered a phase in human history where the rate of technological change will likely exceed the human ability to adapt to such change. I was reminded of Alvin Toffler’s 1970 book Future Shock, in which he made a prescient argument to that effect, describing a phenomenon of societal anxiety and confusion brought on by “too much change in too short a period of time.” I wanted to make the argument that we are now in a state of future shock, as evidenced by deep social, economic, political, and ecological disruption. I also wanted to distinguish exponential growth, most commonly associated with advances in technology, from sigmoidal (or S-curve) growth, which natural living systems experience. S-curves are characterized by an initial period of exponential growth followed by a plateau phase, which, I wanted to argue, is designed to give the system time to adjust to the change.
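
For readers who like to see the distinction made concrete, the two curves can be written in their standard textbook form. The symbols below, such as the growth rate r and the carrying capacity K, are the conventional ones and are my own illustration, not Toffler’s:

\[
N_{\mathrm{exp}}(t) = N_0\, e^{r t}
\qquad\qquad
N_{\mathrm{sig}}(t) = \frac{K}{1 + e^{-r (t - t_0)}}
\]

The exponential curve grows without bound, while the sigmoidal (logistic) curve flattens as it approaches the carrying capacity K. That flattening is the plateau that gives a living system time to absorb the change.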

As I sat down to begin synthesizing my thinking into written form, it occurred to me to first ask ChatGPT to write something for me. If nothing else, I figured the obvious relevance of interacting with the tool would contribute to what I was planning to write. Through a series of six prompts over the course of approximately twenty minutes, ChatGPT produced an essay, which I have included below.

Importantly, I realized that I couldn’t just ask the tool to write something interesting about AGI. I needed to guide it and give it specific directions. For example, here was my first prompt:

Write a persuasive essay that argues we are experiencing a period of time where the rate of technological change is exceeding the human ability to adapt to that change. Reference Alvin Toffler's Future Shock and make the argument that we are experiencing future shock right now. Also address whether we are experiencing the effects of exponential technology growth or sigmoidal growth.

After reviewing ChatGPT’s impressive first draft, I followed up with additional prompts, such as this one:

Modify the above essay by arguing that an inherent feature of sigmoidal growth curves is that they provide a plateau for the organism to adapt to the change brought on by growth. Assert that history is a series of stacked s curves that have occurred at a rate sufficient to allow for adaptation. Please argue whether or not AGI will follow this trend or whether it risks introducing a degree of change that will be very difficult, if not impossible, to adjust to.

I also wanted to make the argument that OpenAI may very well be engaging in an attempt to responsibly slow down the rate of change, so I added this prompt after the second draft:

Make the argument that developing AGI responsibly is exactly what OpenAI is doing. Please cite the iterative release process where full release of new versions is delayed to ensure responsible growth. Please consider whether to include references to OpenAI's unique capped profit structure and governance protections to buttress this argument. Also, consider whether to reference the recent letter advocating for a pause on AGI as evidence of an emerging and growing consensus of people who are concerned with unrestrained rates of growth.

You can see that I provided the tool with a lot of my thinking. ChatGPT-3.5 is not (yet) a substitute for me. But it was exhilarating to watch the text come to life immediately after hitting the send button. The tool had produced a college-level essay after my first simple prompt.
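
If you prefer to experiment programmatically rather than in the chat window, the same guided, iterative drafting can be scripted against OpenAI’s API. The sketch below is purely illustrative: it assumes the pre-1.0 openai Python package and the gpt-3.5-turbo model, the prompts are abridged placeholders for the ones quoted above, and it is not how I produced the essay that follows.

import openai  # assumes the pre-1.0 "openai" package interface

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Abridged placeholders for the prompts quoted above. Each one builds on
# the full conversation so far, mirroring the guided, iterative drafting.
prompts = [
    "Write a persuasive essay that argues ...",
    "Modify the above essay by arguing that ...",
    "Make the argument that developing AGI responsibly ...",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    draft = response["choices"][0]["message"]["content"]
    # Keep the model's reply in the history so the next prompt revises it.
    messages.append({"role": "assistant", "content": draft})

print(draft)  # the final, revised essay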

With some minor edits, mostly copying and pasting a few paragraphs from different iterations, here is the essay that ChatGPT produced.

***

The advancement of technology has been a constant feature of human history, with each new wave of innovation bringing about changes that reshape the way we live our lives. However, the rate of change in technology has been far from constant, with periods of rapid growth interspersed with periods of relative stability. One way to understand this pattern is through the use of sigmoidal growth curves. Sigmoidal growth curves are a common feature in nature and are often used to describe the growth of living organisms. What makes sigmoidal growth curves so interesting is that they provide a plateau for the organism to adapt to the change brought on by growth. In other words, there is a period of slow growth, followed by a period of rapid growth, and then a period of leveling off where the organism has a chance to adapt to the new conditions created by the growth.

History is a series of stacked sigmoidal growth curves, with each period of rapid growth followed by a period of relative stability. This has allowed humans to adapt to the changes brought on by technological progress, and to develop new ways of living and working that take advantage of the latest innovations. However, the development of Artificial General Intelligence (AGI) poses a unique challenge. AGI has the potential to introduce a degree of change that will be very difficult, if not impossible, to adjust to. This is because AGI represents a qualitative leap in the capabilities of intelligent machines. Unlike previous waves of technological innovation, which represented incremental improvements to existing technology, AGI has the potential to fundamentally transform the way we live our lives. Alvin Toffler's Future Shock warned us of this very problem decades ago. As we face the challenge of developing AGI, the risks are even greater, as the qualitative leap in intelligent machine capabilities could result in societal disruptions that we are unable to adapt to.

However, OpenAI is leading the charge in developing AGI responsibly. OpenAI has implemented an iterative release process for its GPT models, delaying the full release of new versions until they are deemed responsible and safe. This approach ensures that any potential risks associated with AGI development are identified and mitigated before releasing the technology to the public. Moreover, OpenAI's unique capped profit structure and governance protections reinforce their commitment to responsible development. Unlike other tech companies that prioritize profit over safety, OpenAI's profit motive is capped to ensure that safety remains their top priority. Additionally, their governance protections ensure that AGI development is transparent and accountable, with safeguards in place to prevent malicious use of the technology. These efforts have not gone unnoticed. The recent letter advocating for a pause on AGI development, signed by thousands of prominent figures in the tech industry and academia, shows an emerging consensus that unrestrained growth in AGI development poses too great a risk for society. OpenAI has taken important steps to mitigate the risks associated with AGI development. However, as the development of AGI continues, it's important that we remain vigilant and continue to evaluate the risks and benefits of this transformative technology.

Tuesday Tips

  1. There have been some excellent writings on AGI over the past couple of weeks, starting with Thomas Friedman’s opinion piece in The New York Times, “Our New Promethean Moment.” Complement it with another Times opinion piece by Yuval Noah Harari, Tristan Harris, and Aza Raskin, “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills.” And, if you have time, I’d recommend listening to Lex Fridman’s conversation with OpenAI co-founder and CEO Sam Altman.

  2. If you’re concerned about the environment and want to read something fresh, balanced, and smart, read “How the Environmental Movement Can Find Its Way Again” by Charles Eisenstein. It’s excellent.

  3. This article in The Atlantic by Derek Thompson, “Why Americans Care About Work So Much,” is one of the best on the nature of work and culture I’ve read. His opening sentence grabbed me, and the essay never let go of its grip: “Here is a history of work in six words: from jobs to careers to callings.”
