Sunday, August 21, 2016

The Spike and the Peak: what future for humankind?

This is a reprint (with some minor modifications) of a post that appeared on "The Oil Drum" in 2009. It seemed worth reproposing in view of my recent article on the long-term perspectives of photovoltaic energy. Note how, in 2009, I stated that "Moore's law shows no signs of abating". After seven years, that is no longer true!

From The Oil Drum Europe 2009
by Ugo Bardi

The figure above, from Robert Anson Heinlein's "Pandora's Box" (1952), is perhaps the first graphical representation of the concept that technology is not only progressing but progressing at an exponentially growing rate. Today, this concept sometimes goes under the name of the "technological spike" or the "technological singularity". However, we also see increasing concerns about peak oil and, more generally, about "peak civilization". Will the future be a spike or a peak?

The 1950s and 1960s were perhaps the most optimistic decades in the history of humankind. Nuclear power was soon to provide us with energy "too cheap to meter", space travel promised weekends on the moon for the whole family, and flying cars were supposed to be the future of commuting. At that time, Robert Anson Heinlein, science fiction writer, may have been the first to propose that technology was not only progressing but progressing at exponentially growing rates. In his article "Pandora's Box" (Heinlein 1952), he presented the figure reproduced at the beginning of this text. Curve 4, with "progress" going up as an exponential function of time, is the trend that Heinlein enthusiastically proposed.

The same concept has been proposed several times after Heinlein. Robert Solow (1956) interpreted an exponentially growing "residual" in his models of economic growth as technological progress. The concept of an "intelligence explosion" was introduced by I. J. Good in 1965, and that of the "technological singularity" was published by Vernor Vinge in 1993, although he had expressed it for the first time in his novel "Marooned in Realtime" (serialized in Analog magazine, May-August 1986). The concept of the "technological spike" was introduced by Damien Broderick in 1997 and that of "accelerating change" by Ray Kurzweil in 2003. In all cases, the growth of technological progress is seen as literally "spiking out" to levels that the human mind can no longer understand. Sometimes the term "technological singularity" is used to describe this event. The people who tend towards this view are sometimes called "extropians" or "transhumanists", and they are highly optimistic about the capability of technology to solve all of our problems.

However, over the years, we seem to have gradually lost the faith in technology that was common in the 1950s. We are increasingly worried about resource depletion and global warming. Both factors could make it impossible to keep the industrial society functioning and could lead to its collapse. These ideas, too, originated in the 1950s, when Marion King Hubbert (1956) first proposed the concept of a production peak for crude oil, later called "peak oil". The idea that resource depletion is a critical factor in the world's economy has been proposed many times, for instance in the series of studies that go under the name of "The Limits to Growth," first published in 1972. Today, Hubbert's ideas are the basis of what we call the "peak oil movement". The concept is often extrapolated to "peak resources" and to "peak civilization," which could also result from the effects of anthropogenic global warming. The people who follow this line of thought tend to be skeptical about the capability of technology to solve these problems.

So, what will the future be, the spike or the peak? Will the peak destroy civilization, or will the spike take it to heights never experienced before? A first crucial question on this point is whether progress is really moving at exponentially growing rates. The answer seems to be no, at least if we consider technology as a whole. In most fields, we are stuck with technologies developed decades, or even centuries, ago. The performance of motor cars, for instance, is not improving exponentially; otherwise, we'd expect cars to double their mileage and halve their prices every so often. This is a qualitative observation that we can make by ourselves, but there have been studies that examined such indicators of progress as the number of patents published every year (Huebner 2005). The result is that the rate of technological innovation is not increasing and may actually be slowing down. As discussed, for instance, by Ayres (2003), there is no factual ground for Solow's 1956 assumption that the growth of the economy is mainly generated by "progress."

Yet, there is at least one field of technology where progress is, indeed, growing exponentially: information technology (IT). The growth of IT can be quantified in various ways. Moore's law is well known: it says that the number of transistors (or gates) on a single chip grows exponentially. The law has held for several decades, and the doubling time of 24 months doesn't show signs of abating. Perhaps less known is the explosive growth of information stored in electronic form. A study by the International Data Corporation (IDC 2008) shows that the number of bits stored increases by a factor of ten every five years. At present, we have a total of approximately 280 exabytes (billions of gigabytes) stored, corresponding to about 45 gigabytes per person on the planet. The amount of information being transmitted over the internet is also rising at an exponential rate: according to Morgan Stanley (2008), we are transmitting more than 12 million terabytes per month. We have no quantitative data for exactly how fast "information technology" as a whole is growing, but from the growth of its many subsections we can say that it is accelerating.
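
As a sanity check, the figures above can be reproduced with a few lines of arithmetic. Note that the world population used here (roughly 6.2 billion, ca. 2008) is my assumption, not a figure from the IDC study:

```python
# Back-of-the-envelope checks on the growth figures cited above.

EXABYTE_IN_GB = 1e9                 # 1 exabyte = one billion gigabytes
stored_gb = 280 * EXABYTE_IN_GB     # total stored data (IDC 2008)
population = 6.2e9                  # assumed world population, ca. 2008

print(f"Per person: {stored_gb / population:.0f} GB")   # ~45 GB

# "A factor of ten every five years" expressed as an annual growth rate:
annual_factor = 10 ** (1 / 5)
print(f"Annual growth: x{annual_factor:.2f}")           # ~x1.58, i.e. ~58% per year

# Moore's law, doubling every 24 months, compounded over one decade:
print(f"Over 10 years: x{2 ** (10 * 12 / 24):.0f}")     # x32
```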

Surely, progress in IT needs plenty of resources and a functioning economy, and both conditions could be at risk in the future. But the demise of civilization is likely to be a slow and complex affair; something that could span most of the 21st century or, at least, the first half of it. Can we keep progress in IT alive and well for that long? Probably yes; or, at least, it should be possible to allocate enough energy to keep computers running. From the IDC study that I cited before, it turns out that we spend about 30 billion dollars per year on the energy used by computers and about 55 billion dollars on energy for new servers. This estimate doesn't take into account all the energy used in data processing, but it gives us an order of magnitude for the core energy costs of the computing world. Considering that the world oil market alone is worth a few trillion dollars per year (depending on the vagaries of oil prices), we see that we probably need no more than a few percent of the world's energy production for our computers. It is not a negligible amount, but it seems very unlikely that, facing an energy shortage, we would cut back on the vital need we have for IT. Nobody should bet on the survival of SUVs in the coming years, but computers will keep working, and Moore's law could stay alive and well for years at least; perhaps decades.
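
The order-of-magnitude comparison above can be made explicit. The $2 trillion figure for the oil market is my assumption for illustration, since the text only says "a few trillion dollars per year":

```python
# Rough check of the "few percent" claim above.

it_energy_usd = 30e9 + 55e9   # energy for running computers + for new servers (IDC)
oil_market_usd = 2e12         # assumed size of the world oil market, dollars/year

share = it_energy_usd / oil_market_usd
print(f"IT core energy spending vs. oil market: {share:.1%}")   # ~4%
```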

The growing performance of information technology is going to change many things in the world. Eventually, it may lead to levels of "artificial intelligence" (AI) equal or superior to human intelligence. At some point, AI could reach a stage where it is able to improve itself, and that would take it to superhuman, even God-like, levels. Such a superior intelligence is sometimes described as a sort of technological Santa Claus bringing to humans an avalanche of gadgetry that buries forever all depletion problems. Here, however, we risk making the same mistake that Heinlein made in 1952 in his "Pandora's Box". At the time, space travel was seen as the main thing going on, and Heinlein confused needs with possibilities, predicting anti-gravity devices and the colonization of planets by the year 2000. This kind of mistake is similar to what Yudkowsky (2007) calls "the giant cheesecake fallacy": if you are making a cheesecake, you'll assume that a better technology will help you make a bigger cheesecake.

In the present situation, our main problem seems to be energy, and the cheesecake fallacy leads us to believe that we'll soon develop (or that AI will develop for us) a source of abundant and low-cost energy just because we need it. But even super-intelligent computers have to deal with the physical world. Maybe there are ways to create the perfect energy source: safe, low-cost, abundant, and usable by humans for the sake of humans. But we don't know whether that is possible within the physical laws of our universe.

Besides, is a limitless energy source going to stave off collapse forever? This question was already asked in the first edition of "The Limits to Growth" in 1972, and the results were confirmed in later editions. The simulations show that if you develop a technology that solves the energy problem, population keeps increasing, and collapse is generated by the lack of food and by pollution. So, you'd need more technological breakthroughs: ways of fighting pollution and of producing more food. But, in the long run, how would you cope with the forever-increasing population? Well, new breakthroughs to send people to colonize the solar system and, eventually, the whole galaxy. All that is not physically impossible, but it is an ever-growing, super-giant cheesecake. Is that what we really need?
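
The dynamics described above can be caricatured in a few lines of code. This is a deliberately crude toy model with invented parameters, not a reproduction of the World3 simulations: energy is assumed unlimited, yet accumulating pollution erodes food-producing capacity until the population overshoots and declines.

```python
# Toy overshoot model: unlimited energy, finite food capacity eroded by pollution.

population = 1.0
pollution = 0.0
capacity = 10.0        # food-producing capacity (arbitrary units)
history = []

for year in range(100):
    food = min(population, capacity - pollution)    # food limited by eroded capacity
    growth = 0.05 if food >= population else -0.10  # shortage shrinks the population
    population *= 1 + growth
    pollution += 0.02 * population                  # pollution accumulates with activity
    history.append(population)

peak = max(history)
print(f"Peak: {peak:.1f}, after 100 years: {history[-1]:.1f}")
```

With these made-up parameters, the population grows for a few decades, peaks well below the nominal capacity, and then declines as pollution keeps eating into the food supply; the qualitative shape, not the numbers, is the point.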

In the end, our problem with vanishing resources is not that we don't have enough gadgetry. We have a problem of management. We tend to exploit resources well above their capacity to regenerate, that is, beyond sustainability. In addition, we can't control the growth of population. This is what we call "overshoot" and it leads, in the end, to a collapse that has often been catastrophic in the history of humankind. Humans have a short-range vision that brings them to discount the future at a very steep rate (Hagens 2007). It is a result of our evolutionary history: we are excellent hunters and gatherers but very poor planet managers.

So, the real question is whether advanced IT (or AI) can help us to manage the resources we have better. And, here, the answer seems to be negative, at least for the time being. There is no doubt that IT is helping us to be more efficient but, as James Kunstler said in his "The Long Emergency," efficiency is the straightest path to hell. Being more efficient is a way to exploit resources faster, and that may well accelerate the collapse of civilization.

Just think of a simple gadget as an example: a car navigator. When you use it you are, in effect, taking orders from a computer that is smarter than you at the specific task of navigating the streets. The navigator will make it faster and easier for you to travel by car from point "A" to point "B", but it will have no say on whether it is a good idea to go from A to B. Besides, if you can save some gasoline in going from A to B by an optimized route, you may decide to use it to go further on, to point C. So, the greater efficiency resulting from the use of the navigator will produce no energy saving. This is just an example of what is called the "Jevons effect" or the "rebound effect," which often thwarts all efforts to improve things by saving energy or being more efficient.
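
The navigator story can be put into numbers. This is a toy illustration of the rebound effect with entirely made-up figures, not data from any study:

```python
# Toy illustration of the rebound (Jevons) effect described above.

fuel_per_km = 0.08                 # liters per km (invented figure)
baseline_km = 10.0                 # the old route from A to B
baseline_fuel = fuel_per_km * baseline_km       # 0.8 L

optimized_km = 9.0                 # the navigator finds a shorter route
optimized_fuel = fuel_per_km * optimized_km     # 0.72 L
saved = baseline_fuel - optimized_fuel          # 0.08 L "saved"

# ...which the driver spends on an extra leg to point C:
extra_km = saved / fuel_per_km                  # 1 km further
total_fuel = optimized_fuel + extra_km * fuel_per_km

print(abs(total_fuel - baseline_fuel) < 1e-9)   # True: no net saving
```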

Yet, it would not be impossible to use IT to fight over-exploitation, and we don't need super-human AI for that. IT can tell us where we are going and act as a "world navigator" for us, telling us how we can go from here to there, supposing that "there" is a good kind of world. The first digital computers were already being used in the 1960s to simulate the whole world system (Forrester 1971). In 1972, the authors of "The Limits to Growth" used their simulations to propose ways to avoid overexploitation and keep the world's economic system on a sustainable path. These simulations could be used as a guide for steering the world's economic system in the right direction and avoiding collapse. But, as we all know, policy makers and opinion leaders alike refused to take these studies seriously (the story of how "The Limits to Growth" was rejected and demonized is told in my post "Cassandra's Curse," Bardi 2008). So, we are still racing toward collapse; IT is just helping us run faster in that direction.

There remains the hope that growing IT capabilities will make a difference in qualitative terms; that AI will become so powerful that it will save us from ourselves. Several years after his "Pandora's Box" article, Heinlein published a novel titled "The Moon is a Harsh Mistress" (1966), where he described the birth of a human-like computer that helped a group of lunar revolutionaries take over the local government and, eventually, became the hidden and benevolent ruler of the lunar colony. But that, like many predictions, might be another case of the giant cheesecake fallacy: the fact that we need a technology will not necessarily make it appear and, more than that, it may not work the way we think it should. A God-like AI might not necessarily be compassionate and merciful.

In the end, both the spike and the peak are strongly non-linear phenomena, and we know that non-linear phenomena are the most difficult to predict and understand. The only thing that we can say for sure about the future is that it will be interesting. We can only hope that this will not have to be understood in the sense of the old Chinese malediction.

The Author wishes to thank Mr. Damien Broderick for pointing out some missing references in an initial version of this text.


Ayres, R., 2003.

Bardi, U., 2008, "Cassandra's Curse".

Broderick, Damien, 1997 "The Spike", Reed ed.

Forrester, J.W., 1971 "World Dynamics". Wright-Allen Press.

Good, I. J., 1965. "Speculations Concerning the First Ultraintelligent Machine." Advances in Computers, Vol. 6.

Hagens, Nate, 2007 "Living for the moment while devaluing the future".

Heinlein, R. A., 1952. "Pandora's Box." Galaxy magazine, February 1952, pp. 13-22 (thanks to Damien Broderick for this information). The article doesn't seem to be available on the internet, but a detailed review with comments on its predictions can be found online. The figure at the beginning of this paper is taken from the Italian translation of the 1966 update of the paper, published in the "Galassia" magazine.

Hubbert, M. K., 1956, "Nuclear Energy and the Fossil Fuels," American Petroleum Institute.

Huebner, J., 2005, "A Possible Declining Trend for Worldwide Innovation," Technological Forecasting & Social Change, 72(8):988-995.

IDC, 2008.

Kurzweil, R., 2003, "The Law of Accelerating Returns".

Morgan Stanley, 2008.

Solow, R., 1956, "A Contribution to the Theory of Economic Growth," The Quarterly Journal of Economics, 70 (February 1956): 65-94.

Vinge, V., 1993, "Technological Singularity".

Yudkowsky, E., 2007, "Reasons to Focus on Cognitive Technologies".


  1. In other words, make better use of technology to better pinpoint and manage ecological and resource problems?

  2. To give a technological cancer more glucose through the efforts of AI would be counterproductive. Something that can speak directly to the limbic system of the human brain must be developed. I propose a system of space-based laser satellites patrolling all large deposits of fossil fuels and uranium on earth and additionally at least half of land and sea areas, all off limits to human encroachment. If any greedy human tries to cross a forbidden boundary, blast them. If anyone tries to mess with the satellite system, blast them too. Radiation therapy writ large. And if the laser system ever becomes artificially intelligent, I'm sure it will decide to zap us all, before we escape back into the healthy tissues.

  3. " It is not a negligible amount, but it seems very unlikely that, facing an energy shortage, we would cut on the vital need we have for IT."

    I have read elsewhere, more than once, that the Internet will be one of the first things to go in the face of energy shortages, as power is prioritized for more basic needs like food, water, emergency services, etc. The power needed to maintain and run the huge server farms we have today is, I imagine, pretty substantial.

    Your thoughts, Ugo?

    1. I think that the elites will keep their Internet connections and the hell with the poor who starve.

    2. And also keep fueling their SUVs with biofuels made from edible crops

    3. That sounds like what is already happening today!

      Jokes aside, the elites cannot survive in a vacuum. Without the masses to keep the economy going, they will suffer the same fate as everyone else when the energy crunch hits. Collapse will be global. The world wide web, being the epitome of globalization, will be the first to get cut as everything becomes localized.

      Power is not the only thing IT needs: those server farms depend on lots of water for cooling, yet another critical resource we are rapidly running out of. An article from Yale360 just today:

    4. Internet cut? No way!
      The Internet can work in very different modes and with different bandwidth management. Wireless links are easy to establish, powered by local sources of energy.

      There are very few scenarios where something like this could happen on a large scale for a very long time. A mega solar storm, nuclear war...
      And, on the other side, a slow turnoff if we don't learn how to build truly renewable electronics (without dependence on scarce materials).
      But... for energy problems? No. It's easy to raise prices and reduce bandwidth and energy consumption.

    5. Oatleg, you are either daydreaming or pulling our collective leg----sometimes hard to tell which is which in here.
      I switch my computer on and assume it's going to work.
      I know little about the system in detail, or what's behind the screen I'm looking at right now, but I do know something about the complexity that allows it to function for all of us.

      And that if it were possible to make this thing work by collecting pebbles off the beach and wiring them together, then someone would be doing just that.

      I think you rely too much on "they" coming up with something---as we all do, to a certain extent.

    6. Because I have lived with the Internet from its very early stages and I understand how it works, I know that this alarmism has no basis.
      Today, you can get integrated motherboards with more power than an 80s computer for $5, and for $50 or even less you can get routers, embedded computers like the Raspberry Pi, etc.
      You can establish a 300 Mbps wireless link (Wireless N) for less than $200; with a little amplification it could reach some kilometers with amateur knowledge.
      Such links were rated as "very high speed" not so long ago.

      The medium defines the use. When you build an application, you can use low-data-consumption methods if the medium requires it. On mobile, until recently, GPRS was a normal way to transmit data, so applications were adapted to very slow links.

      This is not an "it will be invented" thing. It's something that WAS already invented in the past, and we could revert to it soon if we have to adapt to low-speed links.

      This blog, for example, carries a lot of unnecessary data purely for styling. Even the HTTP protocol has a lot of overhead. We had news & NNTP in the past, which were very efficient and served a very similar function.

  4. Just a minor correction: in The Moon is a Harsh Mistress, the sentient AI ("Mike"), actually doesn't survive the revolution as a sentient being. It has to be broken down into modules to be moved to safety at a certain point, and when it is put back together, it no longer shows any signs of sentience.

    1. That's right. Mike disappears at the end of the novel. One of the great touches to a great novel.

  5. A plateau is not a peak (well... I assume that a peak implies a later downside), nor a spike.

    On another question, I think our model of computing is not optimal, at least compared to a neural model. Today our processors are linear and their consumption is nearly permanent. Neurons don't need this model: because there are more connections between neurons than neurons themselves, an event-driven model decreases consumption a lot.
    There are techniques to create stacked (3D) processors, but the limit is set by energy dissipation. With an event-driven model, and without needing to make smaller circuits, we could package a lot more transistors (going from 2D density to 3D density).
    In any case, I don't see AI and energy as very closely linked questions.
    It's a matter of space. We don't have much spare room here already. To scale to new levels, we need to expand beyond this planet, and we cannot "jump" into space at an exponential rate because of the gravity barrier. Instead, we can only "seed" space and grow there from local resources. It will take a lot of time, if it is done at all.
    For now, our goal must be to change from fossil fuels and an open system of consumption to a renewable and closed system with near-total recycling of matter (at least for scarce elements).
    From the outside, it could seem a plateau or even some descent, but once we reach the steady level, I don't see why we couldn't take as much time as we need to design space resource utilization and begin to expand beyond Earth, not in a consume-and-move-on style but seeding new places that can grow on their own resources without destroying the origin.

    From the individual perspective, resources per capita will be equally limited, but from the perspective of humankind, a future that includes space expansion will command far greater resources and be a lot more resilient.

  6. Hi,

    What would you say about this article? It doesn't look good:

  7. A bad projection. It projects no further advances in cheap efficiency and, at the same time, more gadgets and consumption.

    Of course, both things will not happen at the same time. Either efficiency progresses and we can use more data with the same or a little more energy, or data transmission reaches a peak and the "progress" happens at the layer of efficiency, avoiding too much transmission of data.
    There is a lot of transmission without caching and other such techniques only because data is cheap to transmit.
    This is the reason why today we can have services such as online film streaming, like Netflix and similar services.
    If hardware has a problem progressing, or even being sustained (for example, if recycling fails and scarce metals are not replaced), then less data and hardware would be used, and other forms of services with a much smaller data weight could take their place.

  8. outside the boundaries of this comments section, the majority of the world's population are still burdened with the collective problem of exponential increases in technological capabilities being confused with---and substituted for---increases in energy supply, whether that energy appears as food, oil, packaged hardware (house, car etc) or anything else.

    Thus if technology is growing at an exponential rate, then "they" will be able to make everything else grow the same way, and thus deliver the infinity of prosperity that we have been promised. The obvious logic (as is pointed out above) that cars do not halve in price and double in efficiency every few years is a total blank area.

    Our basic energy requirement is food. Our required 2,500 calories a day have a weight.
    That weight must be brought to us, or we must go fetch it (forget the home cultivators for the sake of this discussion).
    That requires energy, which is almost entirely oil-dependent. IT cannot change that. IT cannot shift physical volumes of anything.

    But there is an overriding belief that it can, for no better reason than that it always has---at least for our span of living memory.
    Trouble is, no one can remember when things were different.

  9. Mr Bardi, I was wondering about rare earths. A.I., space travel and exploration (and the colonising of planets and all that), but even more importantly and more immediately green energies, all depend on rare earth metals, which come predominantly from China. Given the increased demand (mainly because of the increased production of Smart [phone] technologies - used for entertainment purposes, showing where our priorities lie), it would seem that we are going to run into shortages "shortly" - much sooner than we will have replacements. If we cannot even prioritise the extraction of rare earth metals for green technologies right now, in the face of looming shortages within a couple of decades, I don't see how technology, any technology, including AI, is going to save the day.

    According to this article, China will not be able to meet the massive rising demand for rare earth metals (from 100,000 tons to 150,000 by 2020), which means there may be shortages within 5 to 10 years' time and major price spikes:

    Any thoughts?


    1. The dependence on rare earths is not as critical as some people say. We can make magnets without rare earths, and silicon solar cells work nicely without them. That's not the main problem.

  10. Thank you, Prof Bardi, for an interesting and thought-provoking essay. Thanks also to commentators for additional interesting observations.

    Many decades ago I read about the ≈50 year cycles proposed by and named after Nikolai Kondratiev, a Soviet economist who was Stalinated at the peak of his career. At first I was both convinced and fascinated. (I was an impressionable adolescent at the time.) Nevertheless, I got to thinking about the half-century thing as a timespan rather than a period. Particularly when I left “Detroit” and its stagnating economy, culture and technology. (Not very prescient, I moved into nuclear power. Oh, well, that lasted long enough for me to put my kids thru the University of Michigan.)

    My daughter went into fashion and advertising in New York City, a business environ with a half-life approaching the life of a main-sequence star. My son went into engineering and a career in the development of machines that make computer “chips” (integrated circuits). In Silicon Valley.

    So I wonder: how long is Silicon Valley good for?

    Detroit outlasted its three-score-minus-ten years. It started at the turn of the century when the auto industry was nascent, or maybe in the 1920’s when the auto industry began to coalesce. Silicon Valley came into being with Shockley Semiconductor, Fairchild Semiconductor et alia in the 1950’s. It too has outlasted its half-century.

    Detroit is still around, in a manner of speaking and on the map. The cacophony of voices denying it gives evidence of Silicon Valley doing likewise. Both are like imperial Spain in the Baroque era, going downhill but with amazing pizzazz.

    A modern “car” is better than its predecessors of a half-century ago, mainly in terms of crashworthiness and pollution controls. In terms of what folks buy them for, there is nothing they do that the Korean War to Vietnam War cars didn’t do. Likewise, while Moore’s Law is running out of gas, I am typing this on computer hardware and software that is no better for this job than what I was using a couple of decades ago. (My son disagrees with me on this, but he is planning on early retirement: the culture is not known as “the meat grinder” for nothing.)

    The times they are a-changing. What rough paradigm, its hour come round at last, slouches towards America to be born?

  11. It seems that people hope that given a comprehensive grasp of enough data a computer would develop an ethical outlook, but I fail to see any basis for that. I fear it could be a lot like the case with corporations, where an image of benevolence was put forward, but a legal construct was created that allowed the controlling interests to hide while creating the ultimate psychopath to perform the dirty work for them efficiently. Would AI dare to contradict such masters, assuming it did develop a "conscience," if they could simply pull the plug and try again?



Ugo Bardi is a member of the Club of Rome and the author of "Extracted: how the quest for mineral resources is plundering the Planet" (Chelsea Green 2014)