Editor’s note: Across the world it is hard to find a grid operator that doesn’t admit that the future of “baseload” is either lousy or non-existent. This is the result of the existing and anticipated rush of variable renewable sources, and of the increased value of “flexibility” and “dispatchability” over the decades-old concept of “baseload”.
Which is not to say that the lights will go out, but the market dynamics will change. The article below is from Paul McArdle at Watt Clarity, who has taken time out from his detailed Generator Report Card, a review of the last 20 years in the NEM, to look at what is already happening in Australia. And just a heads-up: it is technical, but worth reading if you’re interested.
—–
As part of our research into the Generator Report Card, we have uncovered material that sums up the scale of the transition that has already overtaken the NEM (National Electricity Market), and also gives some clue of how radically different the supply mix will look in the decades ahead.
(A) The rise of cycling
To help us delve into, and hopefully answer, a number of overlapping questions about technical and commercial aspects of power station performance, one of the metrics we’ve derived from original AEMO data is the number of starts for a unit (i.e. number of times the unit has increased output from 0MW to something measurable).
The number of starts is also the number of full cycles a unit has completed – with cycles (for thermal plant in particular) driving certain aspects of damage to high-cost components (e.g. fatigue), hence affecting remnant life.
An increased number of starts might provide a clue to some future change in the forced outage rate for a plant, for instance (though obviously there are other factors involved – such as the maintenance regime, and so on).
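For the technically minded, here is a minimal sketch (in Python/pandas) of how a start count of this kind could be tallied from interval-level output data. The table layout, column names and the 1 MW “measurable” threshold are my assumptions for illustration only – not AEMO’s actual schema, and not necessarily the method used for the Report Card.

```python
import pandas as pd

# Hypothetical interval-level output data: one row per (DUID, dispatch interval).
# Column names and the threshold are assumptions, not AEMO's exact schema.
scada = pd.DataFrame({
    "duid": ["UNIT1"] * 8,
    "settlement_date": pd.date_range("2018-01-01", periods=8, freq="5min"),
    "output_mw": [0.0, 0.0, 12.5, 180.0, 0.0, 0.0, 15.0, 200.0],
})

MEASURABLE_MW = 1.0  # assumed cut-off for "something measurable"

def count_starts(df: pd.DataFrame, threshold: float = MEASURABLE_MW) -> pd.Series:
    """Count transitions from ~0 MW to above-threshold output, per DUID."""
    df = df.sort_values(["duid", "settlement_date"])
    running = df["output_mw"] > threshold                               # True while generating
    started = running & ~running.groupby(df["duid"]).shift(fill_value=False)
    return started.groupby(df["duid"]).sum()

print(count_starts(scada))  # UNIT1 -> 2 starts in this toy series
```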
There’s much more for us to do with this discrete data, which we have at the level of every single unit across the NEM. However, I thought the following chart was worth sharing, as it sums up one of the headline changes in the NEM.
On the right axis, we trend the number of generation units (i.e. DUIDs) operational in each year and, only slightly smaller, the number of those units that cycled at least once. This should highlight, for those who are unaware, that most units do cycle at least once in a year.
There are relatively rare occasions where units run continuously for longer than a year (like Stanwell unit 1, which operated continuously for 1,076 days and set a world record through to 2015), but that’s the exception rather than the rule.
On the left axis, we show the cumulative number of cycles across all units in that year.
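For readers who like to see the mechanics, the sketch below shows how those three annual series (units operational, units that cycled at least once, and total cycles) might be assembled from per-unit start counts. The input layout is an assumption for illustration, not the Report Card’s actual data model, and “operational” here is approximated as “appearing in the data for that year”.

```python
import pandas as pd

def annual_cycle_summary(starts: pd.DataFrame) -> pd.DataFrame:
    """starts: one row per (year, duid) with a 'starts' count (assumed layout)."""
    flagged = starts.assign(cycled=starts["starts"] > 0)
    grouped = flagged.groupby("year")
    return pd.DataFrame({
        "units_operational": grouped["duid"].nunique(),  # right axis, series 1
        "units_that_cycled": grouped["cycled"].sum(),    # right axis, series 2
        "total_unit_cycles": grouped["starts"].sum(),    # left axis
    })

# Toy example: a new solar DUID appearing in 2018 adds roughly one cycle per day
demo = pd.DataFrame({
    "year":   [2017, 2017, 2018, 2018, 2018],
    "duid":   ["A", "B", "A", "B", "SOLAR1"],
    "starts": [3, 0, 5, 2, 360],
})
print(annual_cycle_summary(demo))
```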
There’s a step up in CAL 2018 due in part to the influx of large-scale solar plant commissioned through the year. Obviously each solar farm cycles every day (it’s actually more than once per day – more on that in the Report Card). However, that’s only part of the story – even before the influx of solar, the trend had been upwards, growing more quickly than the growth in units.
To me, the one thing that jumps out from this chart is that we’re entering into a world that will have an exponentially greater number of unit cycles each year than was the case when the NEM started 20 years ago….
(B) Implications of this change
That’s going to have all sorts of implications for how the market and grid operate, and for what needs to happen physically to keep supply and demand in balance. Here are a couple off the top of my head:
(B1) Pricing patterns are already changing
As noted at the end of Q2 2016, and then updated again at the end of Q2 2017, there has already been a significant change in the pattern of pricing seen across the NEM. This has happened, in part (though not wholly), because of the changes illustrated in the trend above – with implications flowing through in a number of respects, including those noted below.
We need to recognise that this shift is already happening, and not continue to think with yesterday’s paradigms (as discussed here earlier in the week).
(B2) What happens to “baseload” energy users?
The term “baseload” has been used by some to refer to large power stations, which are high capex and relatively low opex (per unit of production). In the NEM, these have been predominantly coal-fired – plants that have already begun phasing out of operation, a trend which will continue in the coming years (with much conjecture and debate about the actual rate of closure).
My understanding is that the term “baseload” was originally used on the other side of the supply/demand mix – referring to “loads” that share similar basic properties, in that:
- they are high capex and relatively low opex per unit of production,
- hence find it most economically efficient to run 24x7x365 with as little downtime as possible.
These are the major energy users which represent a significant chunk of the total energy consumed from the NEM.
There are questions being raised by a number of these large energy users (a number of whom are our clients) about the looming exit of baseload on the supply side, and the implications of that for them. I understand their concerns, and we are trying to help them where we can.
Already we have seen the shape of prices begin to change, and (below) I note how orchestration is becoming more complex. There are valid concerns for these energy users, though working through to the answers is complex.
(B3) Increasing need for more Demand Response
Across the demand side in more general terms, it’s obvious to me that the increasingly dynamic nature of the supply mix will also bring with it challenges (and opportunities) for increasing flexibility and responsiveness on the demand side – otherwise known as Demand Response in its various forms.
I recently posted some thoughts here about the latest round of AEMC deliberations on Demand Response, noting that:
- my sense is that the rule changes proposed won’t get us where we need to go; but
- that there do appear to be alternative approaches that could deliver more upside, with less downside.
(B4) Increasing need for balancing on the Supply Side
It should go without saying that there will be a need for increasing flexibility of other assets on the supply side to balance the intermittency of wind and solar.
That’s one reason (numerically) for what could be an exponential rise in the “# Unit Cycles” measure in the chart above moving forwards – i.e. we know Large Solar plant will cycle once per day, notionally (it’s actually more than that).
One of the implications is that each cycle of a Solar Farm will also mean cycling of other dispatchable assets (almost certainly many more than one other “matched but inverse” plant).
Many of these cycles won’t be full cycles (i.e. ON-OFF-ON), but they will still contribute to some aspects of plant degradation. Hence the need to track this sort of metric in order to understand how quickly we’re using up the remaining life of the current plant mix, and what that means (in terms of durability) for the plant that will be built to replace it (for instance, implications for Barkers Inlet in SA).
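Purely as an illustration of the point about partial cycles, here is one crude way the duty could be folded into a single number – weighting each downswing in output by its depth relative to capacity. This is a hypothetical proxy of my own construction, not the Report Card’s methodology; a real fatigue assessment (e.g. rainflow counting on temperature or stress, rather than MW) is considerably more involved.

```python
import numpy as np

def equivalent_cycles(output_mw, capacity_mw: float) -> float:
    """Crude, depth-weighted proxy for cycling duty: each downswing in output
    counts as a fraction of a full cycle equal to its depth / capacity."""
    output = np.asarray(output_mw, dtype=float)
    ramps = np.diff(output)
    downswing_mw = -ramps[ramps < 0].sum()   # total MW ramped down over the period
    return downswing_mw / capacity_mw        # expressed in full-cycle equivalents

# Toy example: a 500 MW unit that backs off to half load, recovers, then shuts down
profile = [500, 500, 250, 250, 500, 500, 0, 0, 500]
print(equivalent_cycles(profile, capacity_mw=500))  # 1.5 equivalent full cycles
```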
(B5) Orchestration becomes more complex
It’s a bit (a lot!) of a simplification, but one of the inferences we could try to draw from the chart above is that we’re moving from a “set and forget” world in AEMO’s operations centres and the network control rooms of our TNSPs (where days of extreme activity were perhaps more the exception than the norm), to an environment where there are many more moving (and independent) variables.
This flags that there could be increased challenges ahead – which is why we are taking so much interest in metrics like the extent to which each individual unit has been Off-Target over all Dispatch Intervals in the past. I posted some early thoughts on 7th January, including highlighting a number of different reasons why a generator might be significantly Off-Target.
Our analysis has continued since that time and (over a few different time ranges) we have tabulated a list of the “worst performers” – i.e. the DUIDs that have most frequently been more than 100MW away from their dispatch targets (sometimes higher than target, more often lower than target).
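To make that metric concrete, here is a minimal sketch of how such a tally might be built, assuming a hypothetical table of per-Dispatch-Interval targets and actual output by DUID. The column names are my own, and this is not necessarily how the Report Card computes its “worst performers” list.

```python
import pandas as pd

OFF_TARGET_MW = 100.0  # the threshold referred to above

def worst_performers(di: pd.DataFrame, threshold: float = OFF_TARGET_MW) -> pd.DataFrame:
    """di: one row per (duid, dispatch interval) with assumed columns
    'target_mw' (dispatch target) and 'actual_mw' (measured output)."""
    deviation = di["actual_mw"] - di["target_mw"]
    flagged = di.assign(
        over=deviation > threshold,     # running above target
        under=deviation < -threshold,   # running below target (the more common case)
    )
    summary = flagged.groupby("duid")[["over", "under"]].sum()
    summary["total_off_target"] = summary["over"] + summary["under"]
    return summary.sort_values("total_off_target", ascending=False)
```

A percentage-based variant of the same tally (deviation relative to registered capacity, rather than a fixed MW threshold) would be the natural way to explore question (d) below.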
We’re double-checking the results before inclusion in the Report Card, but have already flagged some intriguing questions that we are exploring, such as:
(a) Why do some particular wind farms appear as poor performers in this respect – and how does that correlate with Causer Pays Factors?
(b) Why are coal units in one particular region (across multiple portfolios) worse performers than coal units in the other regions?
(c) What technical characteristics are there at hydro that give rise to the results we’re seeing?
(d) If a unit is small and its off-target deviation is large in percentage terms (though perhaps under 100MW in absolute terms), what’s the significance of that from a system predictability point of view?
(e) Which fuel types are under-represented in the current “worst performers” list, and what are the reasons for (and possible implications of) this?
(B6) Implications for hedging
But the implications go further than that – into (for instance) what will be the nature of hedging required for participants in the wholesale market (and down into the retail market with a growing number of energy users, if the transition proceeds down the “Left Turn” route here).
After all, for 16 years we have had a Base Load Futures contract traded on the ASX, and it is still the most commonly traded contract in the suite of energy hedges (historical data for which is available in both NEMreview and ez2view). Is this going to continue into the future, or are different (i.e. cycling-related) options going to become more dominant (like those Dave Guiver talked about here)?
Paul McArdle is director of Watt Clarity, which provides the popular NEM Watch widget that appears on RenewEconomy.