
Amory Lovins and His Nuclear Illusion – Part Five (Nuclear Plant Reliability)

We are now on part five in the continuing series that seriously looks at RMI’s latest nuclear bashing paper. RMI tries extremely hard on pages 21-26 in their paper to show that nuclear plants are unreliable. Sadly for RMI, a widely publicized set of data refutes their claim: capacity factors. A capacity factor is the amount of electricity a power plant actually produces in a period of time divided by the amount of electricity the plant is rated to produce during that same period of time. A high capacity factor implies high reliability.
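To make the definition concrete, here is a minimal back-of-the-envelope sketch with purely illustrative numbers (a hypothetical 1,000 MW plant, not data for any specific unit):

```python
# Capacity factor = actual generation / (rated capacity x hours in the period).
# Numbers below are illustrative assumptions, not EIA data.
rated_mw = 1000          # hypothetical 1,000 MW plant
actual_mwh = 7_900_000   # assumed generation over one year
hours_in_year = 8760

capacity_factor = actual_mwh / (rated_mw * hours_in_year)
print(f"Capacity factor: {capacity_factor:.1%}")  # ~90.2%
```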

From RMI, page 24 (pdf):
Though micropower’s unreliability is an unfounded myth, nuclear power’s unreliability is all too real.
In arguing that nuclear plants are unreliable, the RMI paper brings up a Union of Concerned Scientists’ report on long outages, refueling outages, heat waves, the shutdown of seven Japanese reactors due to an earthquake, and the 2003 Northeast Blackout. Other than the Japanese shutdowns, the four issues RMI brings up are all captured by the data in the graph below. Since 1971, U.S. nuclear plants have substantially improved their performance and reliability. RMI’s paper focuses on some unflattering situations that affected selected nuclear plants. As a fleet, though, U.S. nuclear plants have performed at a 90% capacity factor since 2000. RMI’s cherry-picking is showing again. They focus on a few negative events and ignore the outstanding performance of the rest of the fleet.
It is also interesting to note that nuclear plants have the highest capacity factors of any fuel type in the U.S. (source: Ventyx/Global Energy Decisions based on EIA data). Last year, nuclear plant capacity factors averaged almost 92 percent. If that is “unreliable,” as RMI claims, then what IS reliable?
From RMI’s paper, page 24:
Nuclear plants are capital-intensive and run best at constant power levels, so operators go to great pains to avoid technical failures. These nonetheless occur occasionally, due to physical causes that tend to increase with age due to corrosion, fatigue, and other wear and tear.
Actually, the data show the opposite is true. The average age of the operating U.S. nuclear plants is 28 years. The first graph above shows that U.S. nuclear plants have improved their performance as they have become older. Not only that, Nine Mile Point 1 and Oyster Creek (nearly 40 years old and the oldest operating reactors in the U.S.) both averaged a capacity factor greater than 90 percent over the past three years.

From RMI’s paper, page 24:
Yet size does matter. Even if all sizes of generators were equally reliable, a single one-million-kilowatt unit would not be as reliable as the sum of a thousand 1-MW units or a million 1-kW units. Rather, a portfolio of many smaller units is inherently more reliable than one large unit—both because it’s unlikely that many units will fail simultaneously...
Inherently? Actually, no. Let's do the math. Say one 1,000 MW power plant is 90 percent reliable. According to RMI's logic, two 500 MW plants at a 90 percent capacity factor are more reliable than the single 1,000 MW plant. The probability that both of these plants will be available to provide the full 1,000 MW, however, is not 0.9 (90 percent). It's 0.81: you simply multiply 0.9 by 0.9. This is the joint probability of independent events, which you find by multiplying their individual probabilities together. So if you have 10 units at 100 MW each, the probability that all ten will be available to provide the full 1,000 MW is 0.9 to the tenth power, or about 0.35. The probability of getting full output continues to diminish as you increase the number of plants, and the same conclusion holds if you move the capacity factor up or down. There is of course much, much more to managing the grid, but based on this simplistic statement from RMI, I am curious how they make the math work!
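A minimal sketch of that arithmetic (my own illustration, not a calculation from RMI; it assumes each unit is independently available 90 percent of the time):

```python
# Probability that N equally sized, independent units are ALL available
# at the same time, each with 90% availability (illustrative assumption).
availability = 0.9

for n_units in (1, 2, 10, 100, 1000):
    p_all_available = availability ** n_units
    print(f"{n_units:>5} units: P(full output) = {p_all_available:.6f}")

# Output:
#     1 units: P(full output) = 0.900000
#     2 units: P(full output) = 0.810000
#    10 units: P(full output) = 0.348678
#   100 units: P(full output) = 0.000027
#  1000 units: P(full output) = 0.000000
```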

RMI's statement above comes from Mr. Lovins' book Small is Profitable. I don't know whether the book offers more support for the claim, because the $60 book is temporarily out of stock on Amazon and the link to buy it on its own website is broken. But based on my simple calculation, one plant is more reliable than 10, 100 or 1,000 “equally reliable” plants providing the same amount of capacity.

That’s it for this post. I only need to show capacity factor data that is objective and traceable to a widely accepted source and calculate some simple probabilities to show that RMI’s claims are spurious. For those new to this debate, here are links to my previous posts for this series: Amory Lovins and His Nuclear Illusion – Intro, Amory Lovins and His Nuclear Illusion – Part One (The Art of Deception), Amory Lovins and His Nuclear Illusion – Part Two (Big Plants vs. Small Plants), Amory Lovins and His Nuclear Illusion – Part Three (Energy Efficiency and “Negawatts”), and Amory Lovins and His Nuclear Illusion – Part Four (Costs of New Nuclear Plants). One more post from me left to go...

Comments

Anonymous said…
The whole concept of size vs. reliability is nonsense. If you compare many small units and one large unit and assume the same reliability, it is obvious that with small units you have less chance of generating 100% capacity and also less chance of generating 0% capacity. Integrated over operating time, both scenarios will generate exactly the same total amount of power.

This is of importance only in a grid so small that a single large unit's failure will result in negative margin and failure to meet demand. This is of interest to small countries with isolated grids but has no relevance at all to the U.S.
Charles Barton said…
I am going to disagree with you about the capacity factor of multiple reactors. When the capacity factor of the American reactor fleet is discussed, it is never discussed in terms of joint probability, but average capacity factor. Joint probability is not about what percentage of rated power is being generated, but the likelihood that at least one reactor will not be generating. Thus in the case of two 500 MW reactors, each with a .90 capacity factor, the joint probability that both reactors are running at once is .81, so there is a .19 chance that at least one of them is down. However, the average electrical output will still be .90 of the nameplate capacity, because when one reactor is down, the other will almost always still be producing at capacity. The advantage of having two reactors is that when one reactor is down, capacity is cut 50%, not 100%. The disadvantage is that it is twice as likely that one reactor will be down.
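A quick Monte Carlo sketch of the distinction being drawn here (my own illustration, assuming two independent 500 MW units that are each available 90 percent of the time):

```python
import random

# Two independent 500 MW units, each available 90% of the time
# (illustrative assumption). Compare the average combined output
# with the probability of getting the full 1,000 MW.
random.seed(0)
trials = 100_000
total_output = 0
full_output_count = 0

for _ in range(trials):
    output = 500 * (random.random() < 0.9) + 500 * (random.random() < 0.9)
    total_output += output
    full_output_count += (output == 1000)

print(f"Average output: {total_output / trials:.0f} MW")      # ~900 MW (90%)
print(f"P(full 1,000 MW): {full_output_count / trials:.2f}")  # ~0.81
```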
Ray said…
Um... question. Why would the capacity factor go up so consistently with age?
Anonymous said…
Charles

Yes, the argument on joint probability is a bit lousy but it's the argument RMI is making. So their problem. I think David Bradish is just being playful there. A little bit of probabilistic judo :)

To make a real comparison, all sides would need to add a redundancy factor (which adds to cost).
Anonymous said…
To be honest, I think this response to the RMI thing leaves a great deal to be desired.

And in general, I am absolutely sick and tired of people on both sides of the nuclear debate not being able to look at ANYTHING objectively.

Capacity factor affects economics. Generally, if we're on the subject of 'reliability', we are interested in the reserve needed for a plant and how well it integrates into the grid. Capacity factor is not directly relevant in itself.

See the industry statistics in http://www.wano.org.uk/PerformanceIndicators/PI_Trifold/WANO15yrsProgress.pdf, specifically the unplanned capacity loss factor and unplanned automatic scrams per 7,000 hours critical.

I've had these arguments over and over again. You will get TORN UP just saying 'capacity factor' and leaving it at that. By itself it is not the relevant metric, and even the unplanned capacity loss factor is not the whole story. If many small, randomly intermittent units had the same unplanned capacity loss factor as large units, they would be much better than the large units. Why is this not a GREAT thing for renewables? Because wind power is NOT randomly intermittent, and nearly all of the output it fails to produce is unplanned.

The bigger the unit, the worse scrams and unplanned losses are, but that's not the WHOLE STORY. We have some 400 nuclear units in the world, and even if they ran the entire time, at 0.6 scrams per 7,000 hours critical per the data, that would be about 300 scrams per year. That is a reality utility operators have to live with: lines have to have enough capacity to handle a large regional flow of power when a nuclear unit goes offline.
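That figure roughly checks out; here is a back-of-the-envelope sketch assuming roughly 400 units critical for all 8,760 hours of the year (an idealization for illustration only):

```python
# Rough check of the ~300 scrams/year figure.
units = 400                     # assumed number of units worldwide
hours_per_year = 8760           # assumes every unit is critical all year
scrams_per_7000_hours = 0.6     # rate quoted above

total_critical_hours = units * hours_per_year
scrams_per_year = total_critical_hours / 7000 * scrams_per_7000_hours
print(f"{scrams_per_year:.0f} scrams per year")  # ~300
```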

Wind power, however, has the same problem, only on a daily basis. In order to compensate for sweeping regional intermittency (weather patterns), one option is to oversize the grid to a level beyond what nuclear units require, hence the talk of HVDC and international power supplies from the renewables people, which is completely incompatible with the argument that wind is 'distributed generation'.

The issue has much more to it than what this post points out, and with the greatest sincerity, I really kind of wish you would redo this.
David Bradish said…
I think David Bradish is just being playful there. ;)

thenaphibian, I appreciate your comments but I'm not going to turn a blog post into a complicated piece on reliability that includes WANO indicators and other metrics. It will bore readers. I said "a high capacity factor implies high reliability." Most readers know what a capacity factor is. So for the purpose of this post, equating capacity factors to reliability is perfectly acceptable to rebut RMI.
Anonymous said…
Sure, that's fine, but as you pointed out before, RMI does look at this and will probably be addressing these points. If for no other reason than to get to it before they do, it would be worth addressing the applicability of using capacity factors right now.

Capacity factor represents the fraction of a unit's rated output that it actually produces over all hours, which means it also counts time when the plant was planned to be offline. Sometimes that is the plant's fault, sometimes it is not. France has used a large number of its units for load following, though that is a disappearing trend as the economics become more favorable.

However, capacity factor can still certainly be said to set a 'lower limit' on the reliability of the plant, because even neglecting all other factors, the plant is obviously still running over 90% of the time. That's pretty reliable. To be precise, look at availability and unplanned losses in addition to capacity factor.

The PRIS database has availability factor data for each unit over time, which show consistently good performance, especially recently, and line up closely with what you see in capacity factors.
http://www.iaea.org/programmes/a2/

Nuclear availability and reliability compare favorably to coal and natural gas by a host of metrics, not just capacity factor. Renewables are... laughable on this matter.
Anonymous said…
ray said:
Um... question. Why would the capacity factor go up so consistently with age?

Experience, equipment upgrades, and operational improvements.

Experience allows operators to fix and avoid problems they have seen in the past. In addition, there is a lot of knowledge sharing among nuclear power plant operators, so problems seen at one plant become experience for all plants of the same type.

A lot of equipment is replaced over the years as a plant operates, and almost always the new equipment is more reliable than the old as the equipment manufacturers also benefit from experience and improved design tools such as computer based design.

Better operations also helps. For example, refueling time used to be measured in months. Now it is measured in weeks. Part of this is due to things like robots that were not available when the facilities started up.
Anonymous said…
"Better operations also helps. For example, refueling time used to be measured in months. Now it is measured in weeks. Part of this is due to things like robots that were not available when the facilities started up.

Most of the reduction in outage time stems from the shift in thinking from the navy "refit" mentality (the ship is in port for a long time and you fix everything) to an operating-facility mentality (outages are for refueling the reactor, and the only additional work during the outage is work that absolutely cannot be done with the unit online). The rest of the chores that used to be done during the long refueling outages are now done online.
Anonymous said…
"Um... question. Why would the capacity factor go up so consistently with age?"

Several factors have been mentioned. One that has not is NRC's so-called "maintenance rule," implemented in the early 1990s, which allows a great deal of maintenance that previously required an outage to be performed online.
Anonymous said…
The degree of variation about the average (expected) power output is a better definition of reliability than the probability of producing the maximum possible output. If the combined capacity is fixed at 1,000 MW, and the capacity factor of each plant is fixed at 90%, one would approach a smooth, guaranteed output level of 900 MW as the number of plants got very large. Thus, it is correct to say that, all else being equal, a larger number of plants increases reliability. That said, this is not a significant issue, since even 1,000-1,600 MW is a very small fraction of national or regional power generation levels in the US, and the grid can easily handle the loss of a plant.
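A small sketch of that effect (my own illustration, assuming independent units that are each available 90 percent of the time): the expected output stays at 900 MW while the standard deviation of the combined output shrinks roughly as one over the square root of the number of units.

```python
import math

# Combined capacity fixed at 1,000 MW split across N independent units,
# each available 90% of the time (illustrative assumption).
availability = 0.9
capacity_mw = 1000

for n_units in (1, 2, 10, 100, 1000):
    unit_mw = capacity_mw / n_units
    mean_mw = capacity_mw * availability   # always 900 MW
    std_mw = unit_mw * math.sqrt(n_units * availability * (1 - availability))
    print(f"{n_units:>5} units: mean = {mean_mw:.0f} MW, std dev = {std_mw:.0f} MW")
```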

Also, it is not entirely correct to equate capacity factor with reliability, because it does not distinguish between voluntary and involuntary downtime. For example, the lower capacity factor shown for gas is mostly voluntary downtime, and is thus not an indication of unreliability. Capacity factor also does not distinguish between planned downtime (such as nuclear refueling outages) and unplanned downtime. Planned downtime is not a problem, as it can be scheduled for low-demand periods.

The real measure of reliability would be the hours of involuntary, unplanned downtime per year. I believe the involuntary downtime percentages are very small (1% or 2%) for all the traditional sources, including nuclear. In other words, it's not a significant factor, and RMI does not have a real point here. For renewables, the involuntary and/or unplanned downtime fractions are much higher. For solar, at least (as opposed to wind), these downtimes are more predictable and generally coincide with low demand.

Jim Hopf
Matthew B said…
It's quite apparent to me that the utilities are going for the larger units, not smaller.

The AP600 is already licensed and doesn't have a single taker. The most popular is the AP1000, about the same size as the current Westinghouse 4 loop units.
David Bradish said…
The link to buy Amory Lovins' book Small is Profitable now works.
