The Reliability Panel recently published their final report looking into the form of the reliability standard. What’s this gobbledygook? This week, we’re looking at reliability - how do we measure it? Are we measuring it correctly?
What does success look like?
There are plenty of metrics for assessing how well a power system is working. You might look at how costly it is, how renewable it is, how many people can access it, etc.
For most, the obvious measure is what happens when they flick the light switch. The risk that the “lights will go out” has been topical ever since South Australia experienced a state-wide blackout. We’ve written previously about the disproportionate media attention given to the risk of blackouts.
The electricity system is designed, planned and operated to be very reliable. But, what does reliable actually mean? How do we measure reliability/unreliability? And will that change?
What is reliability?
Unfortunately, it’s actually kinda difficult to explain. Good thing we’re slowly publishing an explainer series!
In short, reliability refers to how well supply and demand match.1 In a reliable power system, whoever wants to use electricity, can. There are a few reasons why the power supply might not be available, but three main causes:
System reliability - is there enough supply of electricity to make sure anyone who wants electricity can get it by turning on a light switch?
System security - is the power system resilient enough to withstand a sudden shock, such as the trip of a large generator or the collapse of some transmission towers?
Network reliability - are the powerlines connecting your house to the parts of the grid where there’s generation still connected? Or did a tree fall on them?
The NEM is designed and planned to be a reliable power system (thanks for that). To have a reliable power system, there must be sufficient generation and network capacity to supply customers with the energy they demand.
However, one of the difficulties with talking about reliability is that we split it up between system reliability, system security and network reliability. This distinction might seem like semantics (and in some respects it is), but the underlying causes are quite different - hence why they’re treated differently.
In this article, we’re focusing on system reliability. Power outages due to a lack of supply (system reliability events) actually constitute a very small portion (on average, just 0.3%) of total outages. The most significant cause of power outages, which would not surprise any folks living outside the cities, is faults on the distribution networks (e.g., trees falling across power lines).
There are many more supply interruptions from distribution network outages because, while it’s relatively easy to design a power system which almost always has enough supply, it’s not economic to build the same level of redundancy into the poles and wires carrying electricity to houses. However, despite being responsible for only a small proportion of outages, planning for a high level of power system reliability drives a significant proportion of the costs of supplying electricity.
Planning for system reliability
To have a reliable power system, the electricity market is designed to incentivise investment in electricity generation to meet electricity demand. The design recognises there’s a limit to how much we want to build - it’s unrealistic and uneconomic to build a power system that supplies all demand for energy, all of the time. Therefore, there is a trade-off between the reliability of the system and the cost of maintaining a very high level of reliability.
An analogy2 for this trade-off is the reliability of a car. To make a car more reliable, you can invest in regular maintenance and build redundancy into some of the key components (e.g. a spare tyre). However, you’ll start to experience diminishing returns, and it will take ever-increasing costs to keep improving the reliability of the car once it’s already very reliable (e.g. you wouldn’t drive around with a spare engine).
The design of the National Electricity Market makes a similar trade-off between two costs:
Up-front costs of reliability - The higher the level of reliability, the more investment in capacity is needed and/or more stringent operating conditions are required (such as making sure you have more redundancy at all times), all which impose costs.
The costs of unreliability - if the power system is unreliable, there will be supply interruptions for consumers, which also has a cost. This cost reflects how customers experiencing the outage aren’t able to use electricity, which might mean you can’t watch TV, cool your home, charge your car etc.3
The key policy setting for making this trade-off is called the reliability standard. The reliability standard is a benchmark used to determine how much unreliability the system should tolerate, given it’s too expensive to make it perfectly reliable. Using a metric called “unserved energy” (acronymised to USE), a maximum allowable level of expected outages is set.
Back to the car analogy, it’s setting the point at which you decide it’s too costly to keep upgrading the car, and it’d be better to just bear the risk that the car might break down and deal with the consequences.
The current reliability standard is 0.002%. This means the reliability standard requires there be sufficient generation and transmission interconnection in a region so that at least 99.998% of forecast total energy demand in a financial year is met. Or, there is a very very very small chance that the power might go out because there wasn’t enough generation available.
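To put that 0.002% in context, here’s a quick back-of-the-envelope sketch. The annual demand figure below is an assumption for illustration only, not an actual NEM forecast:

```python
# Rough illustration of the 0.002% unserved-energy (USE) standard.
# The annual demand figure is a hypothetical round number, not a real forecast.

annual_demand_gwh = 50_000   # assumed regional demand: 50,000 GWh per year
use_standard = 0.00002       # the reliability standard, 0.002% as a fraction

max_unserved_gwh = annual_demand_gwh * use_standard
print(f"Allowed expected unserved energy: {max_unserved_gwh:.0f} GWh/year")
# With these assumed numbers, only about 1 GWh of expected shortfall is
# tolerated out of 50,000 GWh of demand across the whole year.
```

In other words, the standard caps the *expected* shortfall at a tiny sliver of total annual energy, which is why meeting it drives such significant investment.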
Using the reliability standard, complex modelling is undertaken to try to make sure that the electricity market creates strong enough incentives for investors to build generators when they’re needed. Conversely, the market is also designed to not encourage overinvestment in generation - ideally striking the right balance between power system reliability and costs.
Now to the crux of this long post! As noted above, a key input to this very important trade-off is the costs to consumers of an outage. But how do we measure that? In practice, this is difficult to do and quite blunt.
We’re all in this together
The reliability standard is a collective measure of reliability. Under the standard, “unserved energy” is the same regardless of which end user of electricity experienced the outage. That is, the cost impact on the end user who lost power is treated the same no matter who they were or what they were doing.
In reality, the actual value of access to electricity is highly variable amongst consumers and across time. On a particular evening, someone hosting a birthday party would place a much higher value on the supply of energy than a pool pump at a holiday home. Yet, for the purposes of planning our energy system, they are treated equally.
If we take it back to the car analogy, a cash-poor uni student’s car is much more likely to break down than a car driven by a chauffeur. The uni student’s preferences are clearly different to the chauffeur’s - limited money means they’d much rather spend their money on food, housing etc. than maintaining their car to the standard of the chauffeur. On the other hand, the chauffeur invests more in the reliability of their car because their livelihood depends on its availability.
The NEM is designed and planned to meet the collective reliability standard. This means we design the system based on a trade-off between building more generation and the value an average consumer places on a reliable supply of electricity.
Stretching the car analogy to its limit,4 this is like (instead of setting a minimum road-worthiness requirement) mandating a consistent level of maintenance for every car in the country to make sure 99.998% of them are able to be on the road.5 The problem with this approach is its homogeneity - it treats every driver as if they have the same preferences. In reality, if you could measure the reliability of the car fleet across Australia, you would find a distribution of how reliable cars are based (in part) on the individual preferences of the owners.
Back to electricity - the preferences of individual electricity consumers for how reliable they want the energy system to be would also be a distribution. Some would prefer a more reliable system, others would be happier for lower bills and a higher likelihood of outages.
In a liberalised energy market, do we need to lump all consumers in together? Is it possible to let consumers choose their preferred level of reliability?
Choosing your own adventure
Planning the electricity system based on this collective reliability standard was, and continues to be, a sensible approach. It’s hard enough to measure reliability, let alone determine the individual preferences of consumers.
However, as the electricity market evolves, there are options for consumers to change their preferred level of reliability.
If you’re happy to use less power (or no power) at peak times, there are retailers that charge you the wholesale price of electricity. A customer of these retailers has the option of reducing their consumption of electricity (such as turning off appliances, avoiding cooking, laundry etc.) during times of high wholesale prices. This should lead to a lower average electricity price in exchange for a self-imposed lower level of reliability.6
On the flip side, if you really don’t want the power to go out at your house, you can install generation and storage on site (if you’re not a renter and you have the money, of course). This is expensive, and for the average person might not make commercial sense, but if keeping the lights on is something you value highly, it would be rational.
This all reflects how technology changes the concept of the value of reliable supply from the grid. Historically, determining the value an individual consumer places on supply from the grid was a simpler task, and yet, still quite difficult. Typically, it’s estimated by asking consumers how much they would need to be compensated to have their electricity supply disconnected under certain conditions.
As things like smart home technology, batteries and automated control become ubiquitous, this approach won’t work any more. The opportunity to store energy in a battery or avoid consuming during peak conditions by pre-cooling or heating your house reduces the dependence on the reliability of the grid. For example, disconnecting a consumer from the grid might not affect their comfort whatsoever. Instead, the impact might be the algorithm governing the household battery not being able to realise the full economic potential of the battery.
Consider something like an electric vehicle. There are times when I would be happy not to charge my car. But there are also times when I would be badly impacted if I couldn’t. And these times are incredibly contextual - I couldn’t tell you now what those times are.
So where does this leave us?
The reliability standard is a key parameter for planning the electricity system. A key input into that planning is the value electricity customers place on having access to the grid at all times. Currently, we base this value on the experience of the average customer.
In practice, electricity customers place highly varied value on access to the grid.
Technology and retail market developments are making it easier for electricity customers to actually experience their preferred level of reliability.
What to do?
If you’ve agreed with what I’ve written, it seems as though the current system for measuring reliability is on shaky ground. This raises the question - should we keep planning the system around the value for the average customer? It seems like this will get harder and harder as people buy EVs, install batteries etc.
This is a good problem to have. The more people that are happy not to use power at certain times, the more efficient the system can be as a whole. It is important to encourage this.
The question of rethinking the reliability standard is a different one. Is it urgent? I don’t think so. But those arcane wizards over at the Panel do take time to consider and change things, so perhaps it’s worth some consideration. Over to you wizards.
This concept is often referred to differently overseas e.g. “security of supply.”
This seems to be some sort of unofficial competition in electricity - finding the perfect analogy. There’s some holy grail out there and we’re all trying to find it…
This cost is referred to as the value of customer reliability (VCR). VCR is measured by asking consumers what they would pay to maintain access to electricity under certain conditions.
Electricity has so much jargon, it’s so hard to make it relatable!
I get this is oversimplified but that’s the point of an analogy…
For someone keen to really get into the weeds, this throws up another question. If you choose not to demand energy because of the price, does that make the system less reliable? I don’t think so - I think measuring unserved energy assumes you wanted to use power either regardless of, or knowing, the price. If the price was too high and you decided not to use power, this shouldn’t be unserved energy. But others might disagree - and maybe this question becomes more pertinent as more consumers respond to electricity prices?
1. The problem for most consumers is that the reliability standard has nothing to do with their actual experience. If we doubled the reliability standard to 0.001% or weakened it by a factor of 10 to 0.02%, the lost time per customer would hardly vary: generation shortages would fall from 0.3% of outages to 0.15% (an average of 8 seconds less) or increase from 0.3% to 3% (2.5 minutes more lost time per year). Who would notice either of these changes?
2. We do have very coarse methods of improving reliability for critical customers:
a) Large load centres are usually served by multiple feeders so if one generator or transmission line goes down, power can still be maintained
b) When load shedding is required, areas including hospitals or critical infrastructure are usually last on the list.
c) Emergency generators or local batteries.
3. Now that generation and storage are being installed all along the supply chain from solar farms to houses, who is responsible for maintaining supply, the generator or the storage operator?
4. a) When Loy Yang A tripped after the power pylons collapsed and 90,000 customers were shed,
b) When wind farms were turned off because bushfire threatened their grid connections
Were these generator reliability events? I would argue that they were not, but others would argue they were.
In view of all the above, the focus on the reliability standard is a distraction from the real problems in the system which are mainly transmission and distribution weaknesses.
Thanks Declan. Good post and I agree with Peter's comments that system reliability is not as impactful to customers as network reliability is.
Network reliability is currently running around 99.95% (for my distributor) - 4 hours per annum. This compares to the reliability benchmark at 99.998% (I recognise that hours and energy are not the same measure, but network reliability is not measured that way.)
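A quick sketch of the arithmetic behind that comparison (keeping in mind, as the comment itself notes, that an hours-based availability figure and an energy-based standard are not the same measure):

```python
# Hours-based network availability vs the energy-based reliability standard.
# Different measures, so this is only an order-of-magnitude comparison.

hours_per_year = 8760

network_availability = 0.9995   # ~99.95% for the commenter's distributor
network_outage_hours = (1 - network_availability) * hours_per_year

standard_availability = 0.99998  # 99.998% from the reliability standard
standard_equiv_hours = (1 - standard_availability) * hours_per_year

print(f"Network outages: ~{network_outage_hours:.1f} hours/year")
print(f"Standard, read naively as hours: ~{standard_equiv_hours:.2f} hours/year")
```

The gap of more than an order of magnitude between the two figures is the commenter’s point: customers’ lived experience of outages is dominated by the network side, not by generation shortfalls.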
For network reliability we use the Value of Customer Reliability (VCR) to determine whether capital works are justified for improved reliability. The current VCR averages around $25/kWh and is updated by the AER on a regular basis through customer surveys.
The VCR is an average figure, but this discrepancy in reliability outcomes (network vs system) suggests that the reliability benchmark is set too high and that system reliability is far higher than customers are willing to pay for. (Although lower than what politicians may be willing to accept.)