Solar-generated power is now broadly acknowledged as not only the cleanest way to generate electricity but also the cheapest – a fact established by a number of competitive power purchase auctions in different parts of the world.
There is, however, a big catch. The sun does not shine continuously, 24/7, or as predictably as a nuclear, natural gas or coal-fired plant runs – although those plants also occasionally fail, and when they do the result can be massively disruptive, because the sudden and significant loss of generation must be instantly compensated for.
The rapid growth of solar generation over the past decade, mostly from utility-scale but also distributed solar, has resulted in frequent episodes where there is an excess supply of capacity during mid-day sunny hours followed by a precipitous drop as the sun sets at the end of the day – the famous California Duck phenomenon, which is appearing in different versions in other parts of the world.
To counter this cyclical pattern of solar feast and famine, grid operators have mostly relied on natural gas peaking plants, or peakers, to balance variable generation and load. Wind, of course, is also variable and must also be balanced with other flexible forms of generation to keep the grid reliable.
As described in a related article in this issue, the need for flexibility has grown in importance due to the increasing variability of renewables, primarily solar and wind – both of which tend to follow particular patterns yet are not fundamentally dispatchable.
Flexibility, of course, can come from the supply or the demand side. As the cost of energy storage systems (ESS) drops and the technologies improve, however, they are expected to play an increasing role in balancing supply and demand.
One obvious approach that is gaining traction is to pair solar and wind with storage from project inception – not as an afterthought, as is currently the case.
Instead of investing in a massive wind farm or solar plant in isolation and relying on the grid operator to balance load and generation, why not include storage at the same site, with the purpose of providing a more-or-less steady, reliable supply of generation to the grid?
Grid operators, no doubt, would be relieved and willing to pay a premium, and bottlenecks on the transmission network would be reduced.
For these reasons, the co-mingling and co-location of variable renewable generation and storage is likely to take off. And when and if that happens, the need for peaking gas-fired plants is likely to diminish.
Over time, according to one line of thought, gas peakers will become a rarity, only selectively and sparingly used in particular locations or systems with challenging physical limitations or localised operational constraints.
There is encouraging evidence supporting this line of thinking, as described in an article titled "Big Batteries Are Taking a Bite Out of the Power Market" by Russell Gold, which appeared in the 12 Feb issue of The Wall Street Journal.
In the article, Jim Robo, NextEra’s CEO, is quoted telling investors that utility-scale batteries can provide power “for a lower cost than the operating cost of traditional inefficient generation resources.”
As further proof, Fluence Energy LLC, a joint venture of AES Corp. and Siemens AG, is building the largest lithium-ion battery yet in Long Beach in Southern California, reportedly three times the size of the ESS built last year by Tesla Inc. in South Australia. Referring to the storage system, John Zahurancik, CEO of Fluence, is quoted in the WSJ article:
“It really is a substitution for building a new peaking-power plant,” adding, “Instead of living next to a smoke stack, you will live near what looks like a big-box store and is filled with racks and rows of batteries.”
In other words, batteries, quiet, non-smoking, and presumably safe, would be much easier to site in urban centers than peaking plants – solving the not-in-my-backyard (NIMBY) problem. Any vacant parking lot in the city center will do. And having storage in the midst of a load center is a big plus.
What is likely to make ESS cost competitive is that peakers are infrequently used and only for a limited number of hours – sometimes as few as 100-300 hours per year – which makes them hard to justify economically.
Nor are they particularly clean or efficient when they do operate. In the past few years they have lost revenue as the historical pattern of mid-day peak demand hours has vanished with the rise of free solar generation in a number of key markets.
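The capacity-factor arithmetic behind this is simple: a peaker's fixed costs must be recovered over very few megawatt-hours. A minimal sketch – the plant size, fixed cost and variable cost below are hypothetical assumptions for illustration; only the 100-300 hours-per-year range comes from the text:

```python
# Illustrative sketch: why low utilization makes peakers expensive per MWh.
# All dollar figures and the 100 MW plant size are hypothetical assumptions.

def levelized_cost_per_mwh(annual_fixed_cost, fuel_and_om_per_mwh,
                           hours_run, capacity_mw):
    """Spread annual fixed costs over the energy actually generated,
    then add variable (fuel plus O&M) cost per MWh."""
    energy_mwh = hours_run * capacity_mw
    return annual_fixed_cost / energy_mwh + fuel_and_om_per_mwh

# Assume a 100 MW peaker with $7m/year in fixed costs and $45/MWh variable cost.
for hours in (100, 300, 1000):
    cost = levelized_cost_per_mwh(7_000_000, 45.0, hours, 100)
    print(f"{hours:>5} h/yr -> ${cost:,.0f}/MWh")  # cost falls sharply with use
```

Under these assumed numbers, running only 100 hours a year yields $745/MWh, versus $115/MWh at 1,000 hours – which is why a plant used a few hundred hours per year is so hard to justify.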
Their primary role in places like California has shifted to the late afternoon hours when the sun goes down and the peak demand occurs – the 3-hour ramping opportunity in the “California Duck.”
Citing EIA data, the WSJ article says, “a new gas-fired peaking plant could generate electricity for about $87/MWh.” By contrast, a subsidiary of Xcel Energy Inc. recently ran a competitive solicitation for solar-plus-storage projects and received multiple bids with a median price of $36/MWh, according to the WSJ article.
Commenting on the record-low auction price, Ben Fowke, CEO of Xcel Energy, is quoted in the article saying,
“I could see in 10 to 15 years where you have 30% of what is traditionally a peaker market served by storage.”
Batteries, of course, are already used in power grids, but their application to date has been mostly limited to providing regulation services such as stabilizing voltage and frequency – something batteries are exceedingly good at – rather than filling in the gaps in variable renewable generation for, say, 1-4 hours.
The WSJ article reports that the PJM Interconnection, the biggest US market operator, already uses batteries to provide about a quarter of its regulation services.
Moving forward, of course, ESS are expected to get much bigger and far cheaper than they currently are. The WSJ article quotes David Hart, a professor at George Mason University, saying, “Peaker replacement is the biggest market they (grid operators) have in sight.”
How soon are we likely to see the changeover? Not until the cost of ESS drops substantially and their capabilities improve significantly.
Estimates of when that may happen vary, but the crossover point is probably not too far off, certainly within 5-10 years, depending on response time, duration, capacity, the number of charge/discharge cycles, overall round-trip efficiency (that is, how much energy can be extracted as a percentage of what was put into the ESS in the first place), parasitic losses and so on.
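The round-trip efficiency mentioned above is a simple ratio of energy out to energy in. A minimal sketch, where the figure of 85 MWh recovered from a 100 MWh charge is a hypothetical assumption for illustration, not a measured value:

```python
# Round-trip efficiency: energy extracted as a fraction of energy put in.
# The 100 MWh in / 85 MWh out figures below are hypothetical assumptions.

def round_trip_efficiency(energy_in_mwh, energy_out_mwh):
    """Fraction of charged energy that is recoverable on discharge."""
    return energy_out_mwh / energy_in_mwh

# Charge the ESS with 100 MWh; losses leave 85 MWh deliverable to the grid.
eff = round_trip_efficiency(100.0, 85.0)
print(f"Round-trip efficiency: {eff:.0%}")  # -> Round-trip efficiency: 85%
```

Parasitic losses (cooling, power electronics and the like) would further reduce the net figure in practice.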
Some ESS technologies, such as pumped hydro, are far more advanced, come with large storage capacity and are proven, while others, such as compressed air energy storage (CAES) and flywheels, are further from commercial-scale deployment.
Because electricity is a heavily regulated business, regulators and policy makers play critical roles in how ESS technologies evolve and in how soon, and in what form(s), they may be deployed (related article on page 21).
Not surprisingly, several states are pushing forward, including California with its mandate for 1.3 GW of storage by 2020, while others such as New York and Massachusetts are considering similar schemes – all driven by the need to meet their renewable portfolio standard (RPS) targets.
Sunny Arizona, which is also pushing toward higher renewable penetration levels, is considering a 3 GW storage mandate by 2030.
In case you are still not convinced: in Jan 2018, California regulators ordered Pacific Gas & Electric Co (PG&E) to consider ESS rather than gas-fired peaking units.
The decision was driven by the conclusion that the former would be cheaper than the latter. Even the regulators, not always the first to know anything, have figured out this one.
It is not particularly good news for gas turbine makers such as GE and Siemens. They can, however, use the time to shift strategy to storage before it is too late.
Perry Sioshansi is president of Menlo Energy Economics, a consultancy based in San Francisco, CA and editor/publisher of EEnergy Informer, a monthly newsletter with international circulation. He can be reached at [email protected]
Source: EEnergy Informer. Reproduced with permission.