From family farms to industrial agriculture, from 2,000-ton capacity schooners to 150,000-ton capacity container ships, industrial production and operations have historically moved in one direction: from small to large. In short, bigger seems better for maximizing productivity and lowering per-unit costs.
“‘Does it scale up?’ is the perennial question that investors ask of engineers who are creating new technology and infrastructure,” says Professor Garrett van Ryzin, who for the last few years has been team-teaching a course on the business of sustainable energy with Klaus Lackner of Columbia University’s School of Engineering and Applied Science. “Hearing that question again and again got Klaus thinking about whether it is really necessary to scale up.” The two colleagues’ conversations on that question soon grew into a full-fledged framework for assessing when it is feasible to move from industrial infrastructure that is large in size to infrastructure that is large in number but small in unit size.
Economies of scale have dominated industrial practice and investment decisions for so long because they have made sense, and made profits, in the face of human and technological limitations. As physical plants, means of transportation, and equipment grow larger, capital and operating costs per unit decline, while labor becomes more productive. For example, moving the same total tonnage with 100 ten-ton dump trucks would require ten times as many drivers as a fleet of 10 hundred-ton dump trucks. Further motivations for scaling up include geometric efficiency (such as the increasing ratio of volume to surface area) and spreading the fixed costs of components like control and safety systems across more output.
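The cost logic behind scaling up is often summarized by the classic "six-tenths rule" of engineering economics, under which capital cost grows roughly as capacity raised to the 0.6 power, so per-unit cost falls as plants get bigger. The sketch below is illustrative (the exponent and base figures are the textbook rule of thumb, not numbers from the researchers' paper):

```python
# Illustrative six-tenths rule: capital cost scales roughly as
# capacity**0.6, so cost per unit of capacity falls with size.
def capital_cost(capacity, base_cost=1.0, base_capacity=1.0, exponent=0.6):
    """Rule-of-thumb capital cost for a plant of a given capacity."""
    return base_cost * (capacity / base_capacity) ** exponent

small = capital_cost(10)    # plant with 10 units of capacity
large = capital_cost(100)   # plant with 100 units of capacity

# Cost per unit of capacity drops as the plant grows:
print(f"small plant: {small / 10:.3f} per unit")    # ~0.398
print(f"large plant: {large / 100:.3f} per unit")   # ~0.158
```

The point of the researchers' argument is that automation and mass production can flatten or reverse this curve, not that the curve never held.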
But automation and communication technology have evolved to the point that a large number of small units may be better, cheaper, or more efficient than a small number of large units. Consider supercomputers: until the 1990s, the supercomputer industry focused on increasing computing speed and capacity by building bigger, more specialized machines with greater processing power. “But by the mid-1990s it had become cheaper to network the capacity of CPUs and memory from large numbers of personal computers and computer workstations rather than relying on a single microprocessor,” van Ryzin notes. This shift from large to massively modular computing led to an abrupt collapse of the traditional supercomputer industry in the 1990s. Are we now on the cusp of a similar radical shift in other industries?
The researchers, who also worked with doctoral students Eric Dahlgren and Caner Göçmen, conducted a plausibility analysis to show why thinking smaller might be viable or even preferred. They conclude that many industries are approaching tipping points that will soon make transitioning to small modular infrastructure practical and cost-effective.
Their analysis first looks at comparative capital costs to make a rough estimate of feasibility. For example, they found that the capital cost per unit of power produced is far lower for car engines than for a large-scale power plant generating the same total amount of power. They also examine whether automation technology is advanced enough to lower labor costs to the point where manufacturing or operating smaller units becomes cost-effective, and whether small-scale units offer enough flexibility on other fronts to offset their costs.
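A back-of-the-envelope comparison of the kind described above might look like the following. All dollar figures here are rough, order-of-magnitude illustrations chosen for this sketch, not numbers taken from the researchers' paper:

```python
# Illustrative $/kW comparison of a mass-produced small engine
# versus a large central power plant. The figures are rough
# order-of-magnitude guesses, not data from the paper.
CAR_ENGINE_COST = 3_000        # dollars for a mass-produced engine
CAR_ENGINE_POWER_KW = 100      # roughly 130 horsepower

PLANT_COST = 1_000_000_000     # dollars for a large central plant
PLANT_POWER_KW = 1_000_000     # one gigawatt

engine_per_kw = CAR_ENGINE_COST / CAR_ENGINE_POWER_KW
plant_per_kw = PLANT_COST / PLANT_POWER_KW

print(f"Engine: ${engine_per_kw:.0f}/kW vs. plant: ${plant_per_kw:.0f}/kW")
```

Even if the absolute numbers shift, the comparison shows why mass-produced small units can pass the first feasibility screen on capital cost alone.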
Small chlorine plants illustrate these last two factors. “Chlorine is widely used but hazardous to ship so there is motivation for producing it locally,” van Ryzin says. “Some companies have developed small modular chlorine plants that use relatively innocuous ingredients to make chlorine on site. They are automated and remotely monitored.” The small scale of the plant meets essential needs at competitive costs while reducing the risk of a large-scale accident in manufacture or transport. Can the same logic behind these small-scale chlorine plants be applied more broadly?
The flexibility of small-scale infrastructure is attractive because it gives firms the ability to deploy investments gradually over time, which reduces both cost and risk. “If a city’s electricity demand is growing, a utility firm doesn’t have to finance a gigawatt power plant that might take four or five years to come online,” van Ryzin says. “It can instead deploy smaller plants as needed.” The firm avoids paying up front for a large plant that might sit below full capacity for years, and adding small units as demand materializes requires neither large outlays nor a lengthy time horizon.
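The value of deferring investment can be shown with a simple discounted-cash-flow sketch. All figures below are hypothetical: one large plant paid for today is compared with the same total capacity spread over five yearly installments of small units.

```python
# Hypothetical present-value comparison: pay the full capital cost
# up front for one large plant, versus spreading the same total
# cost over five yearly installments of small units. Discounting
# makes the deferred spending cheaper in present-value terms.
DISCOUNT_RATE = 0.08
TOTAL_COST = 100.0  # same total capital cost in both cases

upfront_pv = TOTAL_COST  # one big plant, fully paid at year 0

# Five small units, one per year in years 0 through 4
installment = TOTAL_COST / 5
phased_pv = sum(installment / (1 + DISCOUNT_RATE) ** t for t in range(5))

print(f"Up-front PV: {upfront_pv:.1f}")
print(f"Phased PV:   {phased_pv:.1f}")   # ~86.2
```

This sketch understates the benefit: it ignores the added option value of being able to stop or redirect the phased program if demand fails to materialize, which is exactly the flexibility the researchers highlight.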
Smaller units also offer geographic flexibility: they can be used together at a single location or concentrated around key supply or demand sources. They can produce savings on operating costs too because firms don’t have to operate all the units if they aren’t needed. And they offer insurance against large system failures: it’s much less likely that a large number of small units would all fail at the same time. Those that do fail would be easy to replace quickly. Finally, the researchers thought they might find that smaller units were less durable and might need replacing more often, but in many sectors smaller units are just as durable as larger ones. Even so, the flexibility offered by small units may trump considerations of durability in some industries.
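The insurance argument above can be quantified with basic probability, under the idealizing assumption that unit failures are independent: if each unit is down with probability p, the chance that all N small units fail at once is p to the power N, which is vanishingly small even for modest N.

```python
# Idealized reliability sketch assuming independent failures.
# A single large plant failing with probability p loses all
# capacity at once; with N small units the expected fraction of
# capacity lost is still p, but the total-blackout risk collapses.
p = 0.05   # chance any given unit is down (hypothetical)
N = 100    # number of small units

prob_big_plant_down = p        # all capacity lost at once
prob_all_small_down = p ** N   # every small unit down together

print(prob_big_plant_down)   # 0.05
print(prob_all_small_down)   # ~8e-131, effectively never
```

Real failures are not fully independent (storms and fuel shortages hit many units at once), so this is an upper bound on the benefit, but the direction of the effect is robust.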
The implications of this shift are profound and accordingly, as van Ryzin and Lackner acknowledge, not without barriers. But few of these barriers are technological or financial. To really facilitate mass production, for example, industries will have to agree on standard interfaces and controls, just as the PC industry standardized on “Win-tel” (Windows and Intel) architecture. The most significant hurdle may be convincing engineers, designers, investors, policy makers, and firms to rethink the “scale, scale, scale” mantra they’ve been taught and have embraced for decades. Instead of asking if a project can “scale up,” stakeholders must learn to ask if a project scales down. “That also means using different frameworks for evaluating return on investment — traditional net present value measures might capture the cost of capital but they won’t capture the flexibility benefits that small units offer,” van Ryzin points out.
“Especially in the area of energy, people are really starting to ask how we can switch from traditional energy sources to new sources,” van Ryzin says. “We may have to reinvent the world’s energy infrastructure, and we may need to ‘think small’ to do that.”
Garrett van Ryzin is the Paul M. Montrone Professor of Private Enterprise, chair of the Decision, Risk, and Operations Division, and faculty director of the Master Class Program at Columbia Business School.
Read the Research
“Small Modular Infrastructure.” Working paper, Columbia Business School, 2012.