Some think of advances in science and technology through the metaphor of low-hanging fruit: we “picked” the easy ones, and the rest will be very difficult. But it may not be that the ideas are getting harder to find – it may be that we are getting worse at finding them.
Imagine a small orchard. You are picking fruit. At first you find fruit easy to pick. The lowest branches of the trees are full of it. Later, you find you have to jump, or climb the trees, or get a stepladder, and you find yourself picking fruit less quickly.
This is an intuitive mental model for why good new ideas might be getting harder to find. Now imagine the fruit is innovative breakthroughs, and society is the fruit picker in the orchard. We picked the low-hanging fruit long ago, and now we have to work ever harder to get more.
This model is popular and may well be correct. But it is still just one possible model, and there is a risk that it is so intuitive that it is steering us in the wrong direction. This article offers some alternative ways of thinking about this problem. If these alternative mental models are correct, and not the low-hanging fruit model, then it implies that we should be thinking about speeding up innovation and growth quite differently.
We do, in fact, seem to be getting worse at producing useful new ideas. Various measures of the rate of big scientific breakthroughs show decline. Fewer blockbuster drugs are being approved. Breakthrough patents are well below their peak. And measures of output, like productivity or GDP per capita, show slower growth. Some disagree, but many economists and technologists worry that we face a great stagnation, that growth isn’t like it used to be, and that the century has been disappointing – where are the flying cars we were promised?
In a famous paper, ‘Are ideas getting harder to find?’, Nicholas Bloom and collaborators found that research productivity has been falling quite dramatically since the 1970s. Their data suggests that it now takes 18 times as many resources as it did then to work out how to double the number of transistors on a microchip. When we started improving microchips we worked out all the easy ways to do it. Now it takes more and more effort just to make gains at the pace we once did.
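A quick back-of-envelope check shows what that 18-fold figure implies as an annual rate. The “18x” multiple is Bloom and colleagues’ number; taking 1971–2014 as the window and treating the decline as a constant compound rate are this sketch’s assumptions:

```python
# Back-of-envelope: what annual decline in research productivity is
# implied if matching 1971 output now takes 18x the research effort?
# The 18x multiple is from Bloom et al.; the 1971-2014 window and the
# constant-rate assumption are simplifications for this sketch.
factor = 18            # multiple of 1971 research effort now needed
years = 2014 - 1971    # assumed span of the semiconductor data
annual_decline = 1 - (1 / factor) ** (1 / years)
print(f"implied fall in research productivity: {annual_decline:.1%} per year")
```

That works out to a decline of roughly 6.5 percent a year, the same order of magnitude the paper itself reports for semiconductors.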
The most popular version of the low-hanging fruit model is that we are picking fruit at a slower rate because the higher-up fruit are harder to get to. You can only invent fire, the wheel, or a flint axe once. This story says that declining research productivity is inevitable. The fact that we once saw increasing scientific progress is because we were continually increasing inputs: adding more researchers, more research tools and technology. But now research is getting so much harder that even this is not enough.
But another explanation for falling innovation ‘yields’ is that we are getting worse at picking the fruit, despite all the progress in picking methods. We’ve traded in our stepladders for footstools without realising it. Our recent decline in innovative vibrancy might be explained by ideas becoming inherently harder to find, but it may just as easily be explained by us becoming worse at finding them.
Here’s another mental model. Imagine you’re drilling for oil in Texas around the start of the 20th century. Right now you can just dig a hole in the ground and you may strike black gold. But as you go on, it’ll get harder and harder to find more oil, even though you’re getting new and better drills, pipelines, and seismic surveying techniques that help you overcome that. So you’re having to drill deeper, but that’s OK, because you’ve got a bigger drill now anyway.
But from the middle of the 20th century people began to worry about ‘peak oil’ – the highest-ever day of production, when all our improved technology and effort could not outrun the impact of our diminishing reserves. And yet that day has never come.
How is this possible? Obviously there is a finite amount of oil on the Earth. But technology is running ahead of the oil we can find and access, and has continually done so. There is a lot of money in finding oil, and many of society’s smartest are employed working out how to extract more of it. When oil prices do spike, it’s driven by world events. It is notable that 1970s peak oil worries were triggered by OPEC’s oil embargo, not a discovery that oil was scarcer than we thought.
The Earth is really, really big. Each hour about five times the Earth’s annual energy usage hits us from the Sun. Everyone in the world could fit inside the state of Louisiana, about 0.02 percent of the planet’s surface, if we all lived as densely as Parisians do. There are perhaps 8.7 million species on Earth and evolution has lots of space to explore. It’s often possible to find oil we didn’t previously account for. It’s not that the concept of peak oil is wrong, it’s just that it might be quite a long way away. What’s more, the stock of oil available to us can increase as prices rise and make it economical to extract reserves that were once not worth trying to reach.
So we may not really be anywhere near ‘peak oil’, even if it once seemed like we could have been. Nor are we obviously past, at, or near ‘peak music’, despite what some curmudgeons and nostalgics might claim – there are 75 billion possible 10-note melodies, before we even consider timbre, harmony, rhythm, and microtones. How many of them will fit any kind of sane chord progression is a different question, of course.
Although we often seem close to peak oil, we’ve never quite got there. But maybe we have reached ‘peak ideas’. After all, the rate of idea growth has already fallen, despite the reserve of potential new ideas – that is, everything we don’t yet know but might want to – being so vast.
In practice the speed of innovative progress – a fuzzy concept that encompasses technological and scientific growth – may have been declining for decades. The fastest clip of progress probably came somewhere in the middle of the 20th century, and it is certainly not happening now. Unlike with oil, where society has always been able to find some new way to get at more of it, society has not sprung back with new ways of generating ideas – at least not as fast as it used to.
Total factor productivity growth measures how much better we get at producing stuff we like, given the inputs we are using. It seems to have been falling since the 1920s.
Then again, there have been dips before. In fact, in the past there have been dark ages – the most famous being the one between the fall of the Western Roman Empire and the High Middle Ages. In terms of its ability to grow and transport enough food to support extensive urbanisation, Europe did not surpass Roman technology until the early modern era, over 1,000 years later. And Rome declined before it fell: its rate of progress slowed before giving way to outright stagnation and reversal. Some Romans believed Rome had reached peak agricultural productivity, squeezing as much out of the land as was possible. Lucretius, for example, wrote that ‘the old ploughman sighs often, bewails his fruitless labour, and compares the present time with times past’. It was not true then.
Our slow growth is a puzzle. We have generated huge amounts of useful knowledge. We have made it easier and easier to access this knowledge from anywhere in the world. We have JSTOR and Google Books to dig through existing knowledge, and easy data analysis with Excel. We can collaborate with people all over the world through Zoom and Slack. And more people than ever are officially working on science, technology, and innovation. How is it that we are experiencing less progress despite all of these advantages, if it isn’t just that ideas are inherently getting harder to find?
One possibility is that there is a hidden factor that is driving research productivity to the floor. I suspect that in the future we will look back on our research processes and institutions as flawed for reasons that appear obvious in retrospect, in the same way that we look back with bemusement at the advocates of phlogiston theory and alchemy. Here I will only suggest some of the biggest and most plausible candidates for explanations. These are only a start, but I think they are reasons why research may now be many times less productive than it was once, or than it could be.
There’s an overarching reason to expect that oil and ideas would turn out differently, even if we have vast and deep reserves of both. Oil, today, is largely treated as a private good. In general, if you find it under your land it belongs partly to you, and partly to your government, either directly or via taxation. Firms that prove oil and extract it from the ground tend to be able to capture most of the benefits, and their share prices go up when shocks make oil more expensive, reflecting their direct stake in the commodity.
Things are totally different for ideas. When you produce an original idea, it is often possible for other people to copy it with almost zero cost to them. In some areas this is because there is no intellectual property protection by design. For example, farming techniques cannot be copyrighted or patented, and fashion designers have no protection against competitors quickly producing knockoff versions of their designs.
But even where there is substantial intellectual property protection, firms find it almost impossible in practice to prevent ‘spillovers’ – others benefiting from their idea without paying for it. Patents can be invented around, and they run out in 20 years anyway. Copyright, albeit long-lasting, can be avoided with relatively small tweaks for non-artistic products, and is often poorly enforced. Trade secrets cannot be permanently kept if employees eventually leave and work for competitors. And current laws make it impossible or impracticably costly to protect many important details.
We can get an idea of how much spillover there is by looking at how much corporate profits go up when technologies are invented. To do so, we have to rely on a bit of abstraction. William Nordhaus, for example, measures invention by using total factor productivity, which is what is left over when you measure output (or GDP) but account for the amount and value of all the inputs you have used – land, labour, and capital. He then looks at how much corporate profits go up in response. His rough estimate for 1948-2001 was that corporations only capture 2 percent of a new innovation’s value, and the other 98 percent is spread among the rest of society.
An extreme example of the spillover effect happened on 10 November 2020 when Pfizer announced it had a COVID vaccine that was 90 percent effective. Pfizer’s own stock market capitalisation rose around 9 percent that day, worth about $20 billion. By contrast, the Dow Jones Industrial Average rose only 3 percent that day. But that 3 percent was worth around $300 billion, and stock markets around the world added further trillions. Pfizer’s shareholders might have earned less than a percentage point of the value it created, despite all the patents and other protections the firm enjoys.
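The arithmetic of that day can be sketched roughly, using the approximate figures quoted above; the value assigned to the gains across other world markets is a deliberately conservative stand-in for the ‘further trillions’:

```python
# Rough spillover arithmetic for Pfizer's 10 November 2020 announcement.
# All figures are the approximations quoted in the text; "other_markets"
# is an assumed, conservative stand-in for the worldwide gains.
pfizer_gain   = 20e9    # ~9% rise in Pfizer's market capitalisation
dow_gain      = 300e9   # ~3% rise in the Dow Jones Industrial Average
other_markets = 2e12    # assumed gains added across other world markets
pfizer_share = pfizer_gain / (dow_gain + other_markets)
print(f"Pfizer's share of the day's measured gains: {pfizer_share:.2%}")
```

Even on these cautious numbers, Pfizer’s shareholders captured under one percent of the value the markets registered that day.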
Of course this example is extreme. Most innovations don’t have the capacity to dramatically change activity around the world, but it illustrates how firms do not always capture the benefits their innovations produce.
This issue may have become more extreme in recent years. How much the benefits of ideas spill over to others is affected not just by institutions like patents and copyright, but also by technology itself. The internet and related technologies have led to larger spillovers, due to more easily accessible online patent libraries, Twitter accounts that track and comment on new inventions, and much more communication in general between innovators at different firms.
We can track this empirically. Patents are obliged to cite ‘prior art’: earlier patents and technologies that influenced them. In general, patents will reflect existing research pretty well, since patent examiners will add citations that firms themselves leave out. A recent study looked at 800,000 research papers put out by American firms between 1980 and 2015, and found that firms were citing their competitors’ patents more and more. This growth was more rapid than internal citation, suggesting that technological advances are spilling over more over time, and that competitors are benefiting more from the innovation others invest in. Technology like the internet has made it much easier for firms to research and learn from their competitors’ research.
Now let’s go back to the initial study by Nicholas Bloom and colleagues, which argued ideas were getting harder to find. It might seem like Bloom’s evidence is overwhelming, and we are forced to accept that ideas are getting intrinsically harder to find. But there are numerous ways that this research might not imply we have picked all the low-hanging fruit. Remember, Bloom’s firms are capturing only a small share of the returns to the innovations they produce (although the amount will vary by firm and sector). The money invested in research and development might not all be promoting pure innovation, and even when it does, it may not be working optimally.
One factor to consider is location. We know, for example, that bigger cities tend to be much more innovative per capita than smaller towns and rural areas, partly because agglomeration – lots of people being close together – and the serendipitous interactions it produces are very important ingredients for innovation. But it’s also true that American cities, where most innovation happens, have been hollowed out since the 1960s, with dramatic reductions in many cities’ densities, and huge spreading out of inhabitants. There has also been an increasingly large financial penalty to living in the ‘superstar’ cities where most innovation is done, such as New York City and San Francisco.
The drastic undersupply of people in the right places is one big factor holding back research productivity that has nothing to do with a shortage of low-hanging fruit. Location is not a small factor in research productivity. One study suggests that software firms in Seattle are 8 percent more productive at innovating due to the presence nearby of Microsoft alone. San Francisco and Silicon Valley’s many similar firms each have similar effects, and all of these ‘stack’ on top of each other. And it’s not just keeping the highest-skilled people out of clusters that is harmful. Flows of less-skilled workers can be hugely important as well, since their talents are complementary: if an innovator can buy their food prepared and send their shirts out for dry cleaning, they have more time to work and relax.
Dramatically raising the cost of accessing these clusters is reducing the productivity of the research that we do there.
But what about Moore’s Law, the discovery that the density of transistors on microchips doubles about every two years? It may well be slowing down. But even if the low-hanging fruit story applies within particular fields or sectors, it doesn’t necessarily apply across the whole economy.
Consider artistic scenes or genres. Every year we might expect a continual flow of new music, or even more new music as we get access to more foreign scenes and musicians. But we need not expect a continual flow of original music in existing genres. The fact there has been a flood of good original music in the hyperpop genre of the late 2010s and early 2020s makes up for the fact that progressive rock is not able to generate as much novelty as it was during the 1970s. Similarly, visual artists invent new styles, such as pop art, pointillism, and futurism, creating a flurry of productivity, which eventually peaks and declines. Considered within a musical style, the low-hanging fruit story may well be true; considered across genres, there has been continual progress.
Or take energy. As we’ve seen above, more expensive oil has always driven us to find new ways to get hold of oil such that we’ve never actually hit peak oil, let alone run out. But in addition to this, ever since the first oil shock in the 1970s we have been steadily and continuously improving how much useful oil we actually get out of each barrel. According to one recent paper, each barrel is some three times as useful to us now compared to 1970. And of course, we have invented, improved, and expanded many other ways to produce energy. In terms of the narrow project – oil – we will see decline, but in terms of the broader human goal – powering things with energy – we see steady improvement.
What’s more, there is good reason to believe that insofar as we haven’t seen big progress in energy subfields, this is itself down to problems with how we pick fruit, not because the lower fruit have already all been plucked. For example, nuclear energy has flopped since its mid-century invention not because of technological constraints, but because of political ones.
It may well be that it is continually getting harder to fit more transistors within a microchip, and that at some point we will run out of ideas for how to do so more. But this does not imply that it is getting harder to improve computing generally – there might be completely different ways to increase the power of computers, for example through massive parallel processing. Or perhaps through stacking transistors on top of each other on a chip, with some solution for the cooling problems that might cause. And it seems likely that there is enormous scope for increasing the efficiency of current software. So these sorts of case studies should be treated as no more than indicative, and extremely unsatisfying – even unrepresentative – as final answers.
What it all comes down to is whether there are any plausible mechanisms that could have substantially worsened the effectiveness of scientific and technological innovation over time. The exact mechanisms we propose need not be the ones that are causing us trouble. But establishing that they could be making us sufficiently worse at picking fruit should make us much less comfortable about accepting that slower progress is inevitable.
I will suggest three reasons for why we are getting worse at picking fruit. Firstly, academic research has grown to be the largest part of the overall research pipeline, sucking in many of our most talented researchers, while having huge drawbacks and being extremely wasteful and inefficient. Secondly, while there may be especially huge returns to genius, there may be fewer geniuses researching today. And thirdly, we may be experiencing the long-term effect of lead pollution, which is now believed to steadily accumulate in people, stunt development, harm cognitive ability, and even substantially raise crime.
Back when the USA enjoyed its fastest rate of scientific and technological innovation, in the mid-20th century, formal academia accounted for a tiny fraction of research. Instead, the most important research efforts across the developed world generally happened in industrial R&D labs, by this time increasingly concentrated in the USA. For various reasons – antitrust enforcement, too much spilling over of ideas, and the growth of the universities – these labs withered away and left the research ecosystem heavily dependent on academic researchers working at universities. But these reasons were all largely unrelated to whether the labs were creating useful and important innovations.
In the 1930s there were just 80,000 instructional staff across all American universities; today there are more than 1.5 million. They are almost all highly talented and hardworking, but this does not mean we are using their talents well. Academic research suffers from some serious drawbacks.
Most important is the way academic pay, status, prestige, and career progression is handed out. Academics generally compete for prestige. They gain this prestige by publishing in top journals and getting lots of citations from other scientists. Work like this is considered paradigm-setting. But there is no requirement that the work is useful or even true.
So the important thing for academic work is that it gains the respect of other scientists. Publications and citations may seem like obvious and sensible metrics to use, and – given the desire to organise science in roughly this way – there may be no clearly better ones. But even if they are the best available option, they have fallen victim to Goodhart’s Law: an observed relationship breaks down once it is used as a target, because people start to game the metric or otherwise adapt their behaviour.
For example, in the 1960s and 1970s the Phillips Curve, which found that inflation and unemployment were inversely correlated, began to break down. Raising inflation was supposed to be able to bring down unemployment because workers would think that the higher wages were actually better offers, not just showing higher inflation, and thus would take jobs rather than holding out for better pay. But workers were not so easily fooled, and started making wage demands taking into account their knowledge that the government believed in the Phillips Curve. The relationship existed until it was used as a target.
Initially citations may have tracked output and impact almost perfectly. But when they became targeted, researchers learned to artificially increase output by reducing standards and exaggerating results, which caused the relationship between the metric and the desired output to break down. Although there are many institutions that push towards better-quality research, such as recent movements for larger sample sizes, preregistration of measured outcomes, deliberate replications, and so on, these weak forces stand against the much more powerful incentive of status, promotions, and jobs. The replication crisis has revealed just how many prominent empirical results in so many fields are unreplicable, meaningless, or false. And even more results that are true are not useful or important.
The German physicist Max Planck proposed that science proceeds not by ‘persuading its opponents and getting them to admit their errors, but rather by its opponents gradually dying out’. Authoritative figures in fields create paradigms that newer researchers work within, even if they are false, to gain status and do well in their careers. Only when the authoritative senior figures die can the fields progress. This is true of at least one of our most important fields: biomedical research. When eminent life scientists die unexpectedly, the output of their collaborators falls, while a wave of new entrants begins work in the field, producing research that is disproportionately likely to be impactful.
In addition to the pressures of prestige and status, there is the search for grant funding. Grant funders, motivated by the same concerns expressed in this essay, want to know that the project they are funding is going to deliver some useful results. They are, after all, handing out a lot of cash. And once the project is funded, they want to keep tabs on the money to make sure it is being used in the way they intended.
But both of these well-intended systems dramatically hamper the effectiveness of academia. In all-pay auctions people often bid all the way up to the value of the reward, making no net gains. Similarly with grant applications, scientists have an incentive to compete right to the point where they are making almost no gains at all, once you account for the time they put into trying to get the grants. This work on grant applications ends up eating nearly half of scientists’ time, according to some studies – time they could be spending doing research instead. Yet further time is spent on teaching – no doubt an important activity, but a substantial time cost for many researchers. It is not at all obvious that teaching and research must be bundled together, with the same people doing both.
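The rent-dissipation logic can be illustrated with a toy all-pay auction. In the two-bidder complete-information case, the mixed-strategy equilibrium has each contestant bid uniformly at random between zero and the prize’s value, so the two bids together dissipate the whole prize in expectation. The simulation below is a sketch of that textbook result, not a model of any real grant system:

```python
import random

random.seed(42)

# Toy all-pay auction: two identical bidders compete for a prize worth V.
# In the mixed-strategy equilibrium each bids uniformly on [0, V], and
# combined spending equals the prize in expectation - no net gain overall.
V = 1.0
trials = 200_000
total_spent = sum(
    random.uniform(0, V) + random.uniform(0, V) for _ in range(trials)
)
avg_spend = total_spent / trials
print(f"average combined spend per contest: {avg_spend:.3f} (prize = {V})")
```

The average combined spend converges on the prize’s full value: in expectation the contestants collectively burn up everything they were competing for, just as scientists can burn up a grant’s value in the effort of applying for it.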
Almost as bad is the follow-up monitoring. Researchers who discover that their ideas did not actually deliver results often complain that they are strongly incentivised to generate some publishable result from the agenda they went into the project with, rather than following up promising tangents when they appear. Once again they face a huge burden of work keeping up with reporting and reviews. But this does not seem to be necessary. When funding is instead given out without clearly defined deliverables, with long award cycles, without extensive reporting, and for ‘people not projects’, as it has been by the Howard Hughes Medical Institute, it has led to far more innovative work.
University researchers both do ‘basic’ research that filters slowly through science, and also the sorts of ‘applied’ research that might be used in practical applications, and perhaps patented. Academic researchers develop ideas that they then attempt to ‘translate’ into things that are useful for corporates developing products. But translation is an enormously difficult thing to do while at the same time making sure you earn the returns of your work. Information asymmetries – where two people on either side of an exchange know different amounts about the things being exchanged, and this affects their incentives to interact – are incredibly significant in information industries. This generates high transaction costs, which makes this sort of translation difficult to get off the ground. And so even the most useful research may not get used.
Secondly, it may be that grand innovations require the input of geniuses, and unique individuals making huge paradigm-setting moves. For example, the 10 Hungarian Jewish scientists listed by Wikipedia as ‘the Martians’ each made dramatic progress in mathematics, physics, and more: one member, Leo Szilard, identified the nuclear chain reaction, patented the fission reactor, invented the electron microscope, and later in his career switched to biology, also making huge progress there. Another was John von Neumann, whose incredible achievements across a wide range of fields, including game theory and computer science, are now legendary.
More broadly, in one dataset of 350,000 Finns, those in the top 1 percent of cognitive ability were more than twice as likely to become inventors as those just on the cusp of the top 10 percent. A study of 1,600 American teenagers who were in the top 1 percent for maths ability in the 1970s found similar results. When followed up in 2014 they had achieved amazing things: between them they had written 85 books and 7,500 journal articles, and secured 681 patents – and 1 in 20 was either a top executive at a major company or a top lawyer. A third study, looking at an even narrower group – the top 0.01 percent in a mathematical test at age 13 – found even greater achievements: four times as much in grants per capita, five times as many patents, and so on across the board. It seems that a tiny handful of people make especially outsized contributions to important things like scientific progress.
If this tiny handful now go into academia, then they may be spending a huge amount of their time on make-work or status competition. This would be especially worrying, since if they do make up such a disproportionate share of the contribution to progress then we would be unable to make up for this by increasing the amount we spend on other researchers.
Genius also tends to run in families. One reason may be the unique child-rearing that certain parents do – which appears to matter even though parenting and schooling normally show very small effects in studies of the population at large. For example, Laszlo Polgar, a Hungarian teacher, intensively trained his three daughters to become chess champions, and was successful with all three. One daughter, Judit, was the highest-ranked female chess player ever, and the only one to reach the top 10. Similarly, James Mill, the Scottish philosopher, politician, and friend of Jeremy Bentham, raised his son John Stuart Mill with a unique education to become perhaps the greatest British public intellectual of the Victorian era. The educationalist Benjamin Bloom’s ‘two sigma problem’ suggested that such results could be achieved much more widely if children were taught with one-to-one tutoring. Another reason will be genetics.
Yet Leo Szilard had no children, and the 10 Martians between them had seven, in an era when an average 10 people might have been expected to have 30. As far as I can tell from biographies and obituaries, almost all of the Martians came from families with three or more children. During this period, when society as a whole was getting smarter and smarter due to the Flynn Effect, many of society’s most profound geniuses were having smaller and smaller families, and missing out on the special benefits being in those families would have conferred. If this is true then it may also be some part of the explanation for falling research productivity.
Thirdly and finally, pollution. The rise and fall of lead, to atmospheric concentrations around 1,000 times prehistoric levels and then back to around 100 times, has had significant effects on a huge range of human phenomena. Leaded petrol, the main source of atmospheric lead, was only completely phased out in most developed countries during the 2000s. Since buildup and clearout are very slow, and since past exposure has permanent developmental effects, this means we are still living with its effects – especially in older generations.
Average blood lead levels in the 1990s had fallen towards 50 μg/L, down from a peak of over 100 μg/L in the 1970s, but this is still well above the 35 μg/L (3.5 μg/dL) reference value that America’s Centers for Disease Control and Prevention uses to flag elevated lead exposure in children. It is widely accepted today that lead pollution caused various social ills, including part of the huge rise in American crime rates over the second half of the 20th century. Higher blood lead concentrations are also associated with poorer cognitive development – exactly the sort of development that is necessary for scientific achievement.
For example, one research paper looks at children in elementary schools near NASCAR tracks. In 2007 NASCAR phased out leaded petrol in its cars, meaning that children who attended these schools after 2007 experienced steadily less cumulative lead exposure. The paper’s results imply that being exposed to one extra race’s worth of lead is about as damaging for learning outcomes as increasing class size by three students.
And lead is by no means the only environmental pollutant we have discovered that could have serious social and economic effects. We are only just starting to understand how particulates affect us. A recent study looks at millions of Americans aged over 65, and at how variations in the stringency of Clean Air Acts across different areas affected their rates of Alzheimer’s. It finds that greater exposure to PM2.5 – particulate matter less than 2.5 micrometres in diameter – raises the chance of an Alzheimer’s diagnosis. The effect size implies that London’s PM2.5 levels could be causing tens or hundreds of thousands of cases of Alzheimer’s.
Of course, while Alzheimer’s is costly in personal, social, and financial terms, since it kicks in late in life, an increased burden of it is itself unlikely to reduce the amount or quality of science going on. But the mechanisms that cause cognitive impairment in the elderly are similar to those that affect the young and unborn. One study finds that small reductions to in-utero exposure to pollutants, caused by reduced economic activity in 1980s recessions, led to substantially higher test scores later in life.
Other evidence finds acute impacts too: Israeli students do worse in their exams on days with particularly heavy PM2.5 and carbon monoxide pollution, and Chinese evidence finds similar results. The Israeli study also shows that these acute impacts had long-term effects, by lowering the chance that some students cleared key educational thresholds, such as the Israeli equivalent of graduating from high school. And retrofitting an entire fleet of buses in Georgia, USA, to reduce their pollution led to test score improvements much larger than those of most educational reforms.
If lead and particulate pollution can impair cognitive development, then they could well be hurting research productivity. And this opens up the possibility that other important environmental factors remain undiscovered.
When you have lived in a world with declining research productivity for 50 years or more, then the low-hanging fruit analogy may feel obvious. There is a tendency to feel intuitively that most features of the world are inevitable – this is particularly true for those who see themselves as realistic, practical people. It may be true that the universe functions this way: as an analogy, it is impossible to disprove.
But I think it is far from certain, and there are other plausible explanations that give us more control over our destiny. Along with all the improvements of the last century, many of our institutions have seemingly weakened under the burden of an accumulation of sclerotic special interests, or become less relevant to their contexts. We all accept that we have become much less effective in some areas, like building infrastructure, permitting housing, and responding quickly to crises, likely due to institutional or organisational failings. If scientific progress is anything like those, then perhaps it can be improved and reformed, so that we can look forwards to a future when progress speeds ahead rapidly once again.
Ben Southwood is a founding editor of Works in Progress. He has been head of research at Create Streets, and head of housing at Policy Exchange, been part of three successful Emergent Ventures grants, and worked as a public sector consultant for KPMG. You can follow him on Twitter here.