Analysis: mandatory hourly matching’s high costs would likely kill so much clean energy procurement that it would increase total long-run emissions.

As the GHGP undertakes revisions to its Scope 2 guidance to evolve beyond the current status quo of annual matching, hourly matching with tighter market boundaries (aka 24/7 CFE) is a prominent contender.

Studies suggest that if a company's hourly matching percentage is high enough and the clean energy is fully deliverable — both big assumptions — hourly matching could avoid more emissions than current annual matching. But, and it's a big but, hourly matching is significantly more expensive.

That added cost could have a large negative influence on the voluntary corporate clean energy procurement market. The core economic principle that underpins this concern is “demand elasticity”: that when something becomes more expensive, companies will do less of it. And in this case, the “it” is voluntary clean energy procurement.

If the GHGP mandates hourly matching, it might increase the beneficial impact of each company that continues to buy renewables following GHGP guidance, but it would also reduce the number of companies that do so. So, we investigated the net result of those two opposing forces. In this analysis, we take a closer look at the numbers, using both third-party, peer-reviewed studies (such as He, et al. and Riepin and Brown) and WattTime data.

Our analysis finds that, based on best available data, it is very likely that the GHGP mandating hourly matching would increase emissions compared to the status quo, not reduce them.

Modeling four procurement scenarios

To study this question, we developed a method of simulating procured renewable energy portfolios using cost [1,2] and load [3] assumptions provided by the National Renewable Energy Laboratory (NREL). The simulation considers only project costs (the total estimated levelized cost of energy plus transmission) and does not include any revenue or costs from grid electricity markets.

We simulate portfolios for each grid region in the US for the year 2030, and then estimate the avoided emissions from each strategy using the Long Run Marginal Emissions Rate (LRMER) provided by Cambium. The strategies we considered fall into four categories: non-local annual matching (the current default), annual matching with a local procurement constraint, emissions-focused annual matching, and hourly matching.

Hourly matching is ~600% more expensive than the status quo

Compared to the current guideline of non-local annual matching, annual matching with a local procurement constraint was only ~60% more expensive on average (range: 20% to 120%). Emissions-focused annual matching was at cost parity with non-local annual matching (range: -40% to +20%). By sharp contrast, hourly matching was on average ~600% more expensive (range: 200% to 1,200% across third-party studies and WattTime analysis).

Each of these studies — He, et al., Riepin and Brown, and WattTime — looked at a different set of locations and times, so cost variations are expected. However, each of these studies found a very significant cost premium for achieving 100% hourly matching, as well as large differences in the cost to achieve each kg of avoided carbon emissions. Below we show our estimates alongside others in the literature.

Understanding how cost might affect participation 

But how might these higher costs for hourly matching affect corporate participation and total emissions impact?

To answer that question, we need three things. 

First, we need to know the level of demand at the current status quo cost. How many companies currently have net-zero emissions targets under current Scope 2 rules? How much C&I electricity load do they represent? How much clean energy does that imply? 

Many studies include a scenario in which 10% of commercial and industrial load participates in net-zero claims. Our best estimate is that this is reasonably close to the actual status quo under the current system of non-local annual matching: in 2024, total contracted clean energy capacity by US corporations was 74.6 GW [4], which most closely matches the size of the non-local annual portfolio.

But how would companies respond to a change in cost for implementing their net-zero emissions and/or 100% clean energy strategy? This relationship between participation and price can be estimated the same way models estimate how much renewable energy the grid will build: using supply and demand curves.

So second, we need a supply curve. This curve represents how much it would cost for any given number of companies (measured in their associated megawatts) to achieve net zero under the GHGP, depending on what the rules are. We can calculate that based on the existing literature and the cost simulations above.

Lastly, we need a demand curve: a way to estimate what levels of participation to expect at different levels of cost. The shape of a demand curve is usually measured by its price elasticity of demand. And the price elasticity for corporate net-zero claims is not known. But, it can be instructive to ask what would happen if it is anywhere close to typical values that have been measured in similar markets, to get a sense of the scale. We found several examples in the literature: 

So while the actual price elasticity of demand for net zero claims under the GHGP is not known, the best estimates we have show a range including 0.96, 0.62, and 0.5.
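To get a feel for the scale, here is a minimal sketch of how a price premium and an elasticity translate into a participation change, assuming an idealized constant-elasticity demand curve (the function and its structure are our illustration, not taken from any of the cited studies):

```python
# Illustrative sketch: participation response to a cost increase, assuming a
# constant-elasticity demand curve Q ~ P^(-e). The elasticities (0.5, 0.62,
# 0.96) are the literature values cited above; the 600% premium (a 7x price
# ratio) is the average hourly-matching premium from the cost analysis above.

def participation_ratio(price_ratio: float, elasticity: float) -> float:
    """Fraction of baseline participation remaining after a price change."""
    return price_ratio ** (-elasticity)

price_ratio = 7.0  # a 600% premium means the new price is 7x the old one
for e in (0.5, 0.62, 0.96):
    remaining = participation_ratio(price_ratio, e)
    print(f"elasticity={e}: {remaining:.0%} of baseline participation remains")
```

Under these assumptions, the ~600% hourly matching premium implies participation falling to roughly 15–38% of baseline across the cited elasticity range.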

Hourly matching’s high cost would push some corporates out of the voluntary clean energy procurement market, increasing emissions by an estimated 42.6 million tonnes annually

The big-picture takeaway is alarmingly clear. Across a range of potential cost premiums for achieving hourly matching, and across a range of demand elasticities, the GHGP mandating hourly matching would effectively kill voluntary corporate clean energy procurement. The median estimate is that it would increase grid emissions by 42.6 MT CO2e per year, compared to the existing GHGP standard of non-local annual matching.

By contrast, emissions-focused annual matching avoids more emissions than non-local annual matching at all values of demand elasticity, because while it has a slightly higher price than non-local annual matching, it also has a higher avoided emissions rate that compensates for the potential decrease in participation. 
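The arithmetic behind this tradeoff can be sketched with the same constant-elasticity idealization. The 10% premium and 30% higher avoided-emissions rate below are hypothetical round numbers for illustration, not results from our simulations:

```python
# Sketch of the participation-vs-impact tradeoff: a strategy can cost slightly
# more yet avoid more emissions overall, if its higher avoided-emissions rate
# outweighs the participation it loses. Multipliers below are hypothetical;
# the elasticity is from the literature range cited above.

def net_impact_ratio(price_ratio: float, avoided_rate_ratio: float,
                     elasticity: float) -> float:
    """Net avoided emissions relative to the baseline strategy, assuming
    constant-elasticity demand for participation."""
    participation = price_ratio ** (-elasticity)
    return participation * avoided_rate_ratio

# Hypothetical: 10% price premium, 30% higher avoided-emissions rate.
ratio = net_impact_ratio(1.1, 1.3, 0.62)
print(f"net avoided emissions vs. baseline: {ratio:.2f}x")
```

With these placeholder numbers, the modest participation loss (a few percent) is more than offset by the higher per-MWh impact, so net avoided emissions go up.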

Weighing the risk: could high costs undermine net-zero progress?

Of course, we can’t predict exactly what would happen if costs skyrocket. Supply and demand curves represent an idealized version of economics with many simplifying assumptions. Perhaps the demand elasticity of companies to make net zero claims under the GHGP is far lower than clues from previous studies suggest. Or perhaps companies might abandon their net zero claims, but still try to achieve fairly low emissions. Maybe.

But this is a big risk to take. Across several studies, the price premium for achieving 100% hourly matching has been shown to be at least 200% above the current standards. For that not to significantly reduce participation would require a price elasticity far lower than almost anything measured in comparable markets.

Further, this risk is not hypothetical. Consistent with our analysis, E3’s 2024 study cautioned that “increases in [energy attribute certificate] EAC prices may reduce the voluntary demand for clean energy generation.” Their analysis estimated that a 4x increase in EAC prices could increase emissions by as much as 102 million tonnes per year. More recently, a survey of clean energy buyers by Green Strategies found that “nearly 80% of respondents lack confidence that they would be able to procure time-matched clean electricity within smaller market boundaries. Respondent insights indicated concern over higher costs and whether suppliers will be able to provide resources that meet time and location criteria.”

And last week, another survey, by the Clean Energy Buyers Association, found that 75% of its members oppose mandatory hourly matching, stating that it is “very difficult to implement”.

The rising costs of renewable energy and the changing political climate have created an environment where keeping net-zero commitments is a much more challenging goal to justify than it used to be. Raising the costs still further could make it very challenging to justify continuation of this goal to executives outside the sustainability team.

Again, this study is not conclusive. But the preponderance of evidence suggests that the massive price premium for 100% hourly matching creates a very strong risk that the GHGP mandating hourly matching would cause enough price-sensitive companies to cease participating that it would, on net, increase emissions rather than decrease them. Meanwhile, emissions-focused procurement and other carbon matching strategies would reduce emissions without increasing costs.

Future research into the effects of mandatory requirements in voluntary programs should consider effects on program participation and how that impacts total emissions. But in the meantime, the GHGP should strongly consider that the evidence that does exist suggests it is currently trending toward a policy change that will increase emissions, not decrease them.

image source: Pexels | Tom Fisk

Case study: carbon accounting approaches and an analysis of Meta’s 2023 data center electricity consumption and clean energy procurement

Summary

Since 2020, Meta has matched 100% of its electricity use with more than 15 gigawatts of long-term clean energy purchase commitments, making it one of the world’s largest corporate buyers of clean energy. As a result, Meta has reduced its electricity-associated emissions reported under the current industry standard, the Greenhouse Gas Protocol’s (GHGP) market-based method, to nearly zero. But how well do these standard reporting methodologies capture Meta’s physical emissions in the real world?

The GHGP has played a key role in driving over 200 gigawatts of corporate clean energy purchases. But today it is undergoing a major revision — its first in over a decade. Since it was last updated, many power grid operators and third-party providers started releasing far more granular and complete emissions data than were available at the time the current system was devised.

These new data show that the carbon intensity of electricity varies substantially by time and exact location. The emissions impact of using or generating electricity depends not just on how much is consumed, but also on when and where — and what technologies (coal, natural gas, hydropower, etc.) are on the grid at that moment. These variations in emissions impact have become even more pronounced in recent years due to the widespread deployment of clean energy. In certain times and places electricity has become very clean — for example, in West Texas when the wind is blowing — while others have changed little.

If we’re serious about reducing pollution from electricity grids and power sector decarbonization, then we need to measure the emissions impact of electricity consumption and clean energy generation more accurately, enabling companies to make informed decisions about where and when clean energy investments can have the greatest impact. The GHGP revision process currently underway provides a critical opportunity to ensure this foundational global standard better reflects real-world variations in electricity’s carbon intensity across time and place.

A key element of past GHGP updates has been examining case studies. At this pivotal moment in the GHGP’s evolution, Meta engaged WattTime to analyze its 2023 data center operations and clean energy procurement using three different methodologies currently under consideration by the GHGP. The goal was to use Meta’s real-world data as a test case for the potential implications of different approaches for all companies.

The three methodologies examined in the case study were: 1) Annual Matching (current GHGP methodology), 2) Hourly Matching (24/7 CFE methodology), and 3) Carbon Matching (emissions matching methodology). This analysis strongly suggests a need for the GHGP (and other carbon accounting frameworks) to adopt methodologies such as Carbon Matching that more accurately reflect real-world emissions impact and empower companies to make more targeted, better informed, and higher-impact clean energy investments. Such methodologies align well with the three main criteria of the GHGP Scope 2 revisions: scientific rigor, driving ambition in climate action, and feasibility.

Download the case study PDF:
How carbon accounting approaches do (or don’t) reveal real-world impacts: An analysis of three methodologies to report emissions from Meta’s 2023 data center electricity consumption and clean energy procurement.

How to use the GHG Protocol’s consequential electricity emissions reporting option

Everyone knows that there’s only one way to stop climate change: reduce actual system-wide GHG emissions. This is known as causing consequential emissions reductions. But as we laid out in our joint white paper with Electricity Maps, the GHG Protocol Corporate Standard currently mandates that companies report their attributional emissions, which are not the same thing.

At WattTime, our priority is to help companies reduce real-world consequential emissions. Whether companies then choose to report those reductions is up to them. But if you would like to do so, you may be interested to learn that the GHG Protocol today also has a separate, much less well-known mechanism to optionally report consequential emissions reductions. 

The GHG Protocol Scope 2 Guidance points out that attributional methods “may not always capture the actual emissions reduction accurately.” It adds that this is a problem because “Ultimately, system-wide emission decreases are necessary over time to stay within safe climate levels. Achieving this requires clarity on what kinds of decisions individual consumers can make to reduce both their own reported emissions as well as contribute to emission reductions in the grid.”

That’s why section 6.9 of the GHGP Scope 2 Guidance states that companies interested in making decisions on the basis of actual consequential impact “can report the estimated grid emissions avoided by low-carbon energy generation and use” by using a different method, the GHG Protocol Project Protocol which is supplemented by the Guidelines for Grid-Connected Electricity Projects.

And it turns out, the Guidelines for Grid-Connected Electricity Projects is an extremely useful tool for identifying and reporting on the consequences of any activity (“project”) that causes emissions or emissions reductions. Why, then, do so few practitioners know about it? 

Partly because until relatively recently, the necessary data didn’t exist in most places. But that has recently changed significantly. 

Rising access to marginal emissions data

A few years ago, the UNFCCC began producing free, global marginal emissions data of the type you need at the country and annual level, available from the UNFCCC website.

As of this month, WattTime and other mission-driven organizations have gone even further and released free, global marginal emissions data at the hourly and balancing authority level. Those are available free at GridEmissionsData.io (for operating margin) and https://www.gem.wiki/MBERs (for build margin). We’d like to credit REsurety, Climate TRACE, Global Energy Monitor, Transition Zero, Global Renewables Watch, Pixel Scientia Labs, Planet Labs, and Georgetown University for making this possible.

Having free, globally available, hourly marginal emissions data solves another issue with the Guidelines: they’re written as a long, complex document, particularly because they include many lists of optional choices for what to do when you don’t have good data. And now that free high-quality data exist, that extra guidance is much less relevant than it used to be. 

So, as you’ll read below, WattTime has done the work for you of going through the Guidelines with painstaking care and working out the simplest, most accurate, and most impactful ways to comply in a world where free, high-quality global data do exist.

Key considerations in following the Guidelines

It turns out, at its core, what the document is saying is actually very simple. The key formula in the Guidelines is that the consequential emissions of any project that generates, consumes, procures, or shifts electricity is:
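In general form (our notation, not the Guidelines’ exact symbols), it can be sketched as:

```latex
\Delta E \;=\; \sum_{r} \sum_{t} \Delta G_{r,t} \times \mathrm{MEF}_{r,t}
```

where ΔG is the net change in electricity generated, procured, or avoided (in MWh) in region r during period t, and MEF is the corresponding marginal emissions factor; a positive result indicates net avoided emissions.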

So, here’s what you will need to follow the Guidelines:

In many ways, following the Guidelines is very similar to following the Scope 2 Market-Based Method. For any given assessment, one combines the generation, procurement, and/or consumption by region and time period; multiplies them by the relevant emissions factors; and then adds up the times and regions to get the total emissions. The biggest difference is that the emissions factors are marginal, not average.

But there are other differences as well, such as the sign convention. The Guidelines measure (net) emissions reductions, not a (net) emissions footprint. Thus, in this framework positive numbers are a good thing. But negative numbers are very much allowed — they just indicate projects that on net induce more emissions than they reduce or avoid.
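Here is a minimal sketch of that bookkeeping in code, with made-up region names and emissions factors (the dictionary-of-tuples layout is our illustration, not anything prescribed by the Guidelines):

```python
# Minimal sketch of Guidelines-style bookkeeping: combine net MWh by
# (region, hour), multiply by the matching marginal emissions factor, and
# sum. Positive totals mean net avoided emissions (the Guidelines' sign
# convention). All figures below are placeholders.

# Net electricity changes in MWh, keyed by (region, hour).
# Positive = clean generation added / consumption avoided; negative = new load.
net_mwh = {
    ("REGION_A", 0): 120.0,   # e.g., new solar output
    ("REGION_A", 1): -40.0,   # e.g., new load
    ("REGION_B", 0): 80.0,
}

# Marginal emissions factors in tonnes CO2 per MWh, keyed the same way.
mef = {
    ("REGION_A", 0): 0.55,
    ("REGION_A", 1): 0.60,
    ("REGION_B", 0): 0.70,
}

def consequential_emissions(net_mwh: dict, mef: dict) -> float:
    """Total net avoided emissions (tonnes CO2); negative means net induced."""
    return sum(mwh * mef[key] for key, mwh in net_mwh.items())

print(f"{consequential_emissions(net_mwh, mef):.1f} tonnes CO2 avoided")
```

The only structural difference from a Scope 2 market-based calculation is which emissions factor goes in the multiplication: marginal, not average.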

Another difference is that there are several options, with no systematic decision criteria on how to choose. For example, companies are able to choose how to calculate a build margin baseline; how to select a build margin weight; whether or not to update emissions factors over time; and so on. Each of these cases opens up considerable opportunities for gaming. Further, in every case, WattTime found that sufficient free global data now exist to make using the highest-accuracy, highest-impact option quick and easy. And although this is not explicitly stated, we’ve noticed each list of options appears to form a de facto data hierarchy, in ascending order from lowest data requirements to highest accuracy and impact. In order to maximize accuracy and impact, and to eliminate potential for gaming, WattTime strongly recommends that, for all the lists of options in the Guidelines, companies select the final option in the list.

If you are interested in reporting on your consequential emissions impact under the optional section in the GHG Protocol, you can start using this guidance and new datasets today!

The emissions risks of AI data center buildout

Commitments to massively expand infrastructure for artificial intelligence (AI) have accelerated significantly within the past year. The January 21st announcement of Project Stargate — a four-year, $500 billion push to scale AI infrastructure in the US by OpenAI, Microsoft, NVIDIA, SoftBank, and others — represents an unprecedented scale of infrastructure investment in a single technology sector. 

It’s likely that power-hungry AI infrastructure will continue to grow and will need to be served by electricity generation in some form, existing or new, renewable or fossil-fueled, on-site or across town. If managing the climate impacts of this growth is a priority, then emissions impacts should be considered when evaluating the options for how to meet the growing AI demand. Already, AI-driven electricity demand has increased the emissions of large companies like Google and is threatening their climate goals. 

Forecasts for data centers’ ballooning electricity demand

Lawrence Berkeley National Laboratory (LBNL) recently projected that data centers could consume between 6.7% and 12% of US electricity by 2028, a 2-3x increase from 2023. The corresponding load growth from data centers alone is in the range of 145-400 TWh, which may require some 33-91 GW of new generation capacity to be built by 2028. That's a massive amount of new electricity generation. And that's just for the US, a leading, but far from the only, player in the AI landscape.
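As a rough check on those capacity figures: converting annual energy into nameplate capacity requires assuming an average fleet capacity factor. At an assumed ~50% (our illustrative assumption, not LBNL's published methodology), the arithmetic reproduces the 33-91 GW range:

```python
# Back-of-envelope conversion from projected load growth (TWh/yr) to new
# generation capacity (GW). The ~50% average capacity factor is our own
# assumption for a mixed fleet, chosen to illustrate the arithmetic.

HOURS_PER_YEAR = 8760

def capacity_gw(annual_twh: float, capacity_factor: float = 0.5) -> float:
    """Nameplate GW needed to supply a given annual energy at a capacity factor."""
    avg_gw = annual_twh * 1000 / HOURS_PER_YEAR   # TWh/yr -> average GW
    return avg_gw / capacity_factor

for twh in (145, 400):
    print(f"{twh} TWh/yr -> ~{capacity_gw(twh):.0f} GW of new capacity")
```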

The LBNL report is one to take seriously. It was created by researchers, some of whom have been loudly skeptical of AI, to fulfill a 2020 request from Congress. These numbers are in line with or relatively low compared to other recent projections. Meanwhile, Chinese AI startup DeepSeek’s announcement last week that it managed to produce a powerful model with a fraction of the compute compared to leading AI companies is a reminder that efficiency gains are likely. These numbers are constantly in flux.

Estimating the emissions implications of data center load growth

The implications of how new electricity is supplied are massive. If all of LBNL’s projected US data center load growth through 2028 were served exclusively by natural gas generation (currently the most prevalent source of generation in the US), it would result in about 180 million tonnes of additional CO2 emissions annually. If those emissions belonged to a country, it would rank as the world’s 45th largest emitter.
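The ~180 million tonne figure can be reproduced with simple arithmetic, assuming roughly 0.45 tonnes CO2 per MWh for combined-cycle gas generation (our assumption for illustration):

```python
# Rough check of the ~180 Mt CO2/yr figure: serve the upper-bound load growth
# entirely with natural gas. The 0.45 t CO2/MWh rate is our assumption for
# combined-cycle gas plants, used only to illustrate the arithmetic.

GAS_T_PER_MWH = 0.45          # assumed combined-cycle emissions rate
load_growth_twh = 400         # upper-bound data center load growth by 2028

emissions_mt = load_growth_twh * 1e6 * GAS_T_PER_MWH / 1e6  # TWh->MWh, t->Mt
print(f"~{emissions_mt:.0f} million tonnes CO2 per year")
```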

The emissions impact of this massive buildout depends entirely on choices made today about where to site these facilities and how to power them. A particular computing load, depending on time and location, could cause coal to be burned in Wyoming, or gas to be burned in California, or could cause no emissions at all if it absorbs surplus wind power in Kansas or Texas.

The speed demanded by AI development timelines creates pressure to choose quick solutions over optimal ones. While co-located gas-fired power might seem like an expedient and economically wise choice, particularly given current political signals or to avoid long interconnection queues, it also comes with future fuel cost risk and creates long-term emissions lock-in that will be increasingly difficult to unwind as climate pressures mount.

Ultimately, the emissions impacts are dependent on which power plants will serve the new electrical demand of these facilities. When a new grid-connected data center is switched on, one or more existing power plants will ramp up to meet the demand. In some cases, new power plants will be built to make sure enough generation capacity is available to serve the new load. 

The emissions caused when existing power plants respond to changes in load (or when new plants are built) are measured by marginal emissions rates. We can use grids’ marginal emissions rates to compare the climate outcomes of different data center scenarios: both the emissions induced by where data centers get built, and the emissions consequences of any new power plants built to serve that load.

Siting new data centers to cause fewer emissions

Let's compare and contrast two significant data center hubs: Northern Virginia's Data Center Alley (in the PJM grid) and Texas's emerging AI corridor (in ERCOT), where the first Stargate data center is being constructed in Abilene. The induced emissions impact of a grid-connected 100 MW data center operating at 95% capacity differs substantially between locations.

A 100 MW data center in Northern Virginia would result in about 463,000 tonnes of CO2 emissions annually, while the same facility in Texas would produce about 386,000 tonnes (17% lower).

These, of course, aren’t the only places where new data centers could get built, and in fact, neither location represents the optimal case from an induced emissions perspective.

For example, the same 100 MW facility built in Kansas (in SPP) would produce about 358,000 tonnes of CO2 annually (23% lower than in Virginia). Further, building it in Northern California (CAISO) would produce about 309,000 tonnes (an even greater 33% reduction vs. Virginia). What Kansas and California have in common is an oversupply of clean and renewable energy for many hours of the year — wind in one and solar in the other.

These calculations assume constant operation near maximum capacity throughout the year, typical for large data centers with critical workloads. While actual emissions would vary based on specific operating patterns and grid conditions, these numbers illustrate the massive emissions implications of siting decisions for new AI infrastructure.
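The comparisons above boil down to one multiplication: annual load times a grid's marginal emissions rate. The per-grid rates in this sketch are back-derived from the tonnage figures quoted above (for example, 463,000 t ÷ 832,200 MWh for PJM), so treat them as approximations rather than published values:

```python
# Sketch of the siting comparison: annual induced emissions of a data center
# as (load MW x capacity factor x 8760 h) x marginal emissions rate. The
# per-grid rates below are back-derived from the tonnage figures in the text,
# so they are approximations, not published values.

HOURS_PER_YEAR = 8760

def annual_emissions_tonnes(load_mw: float, capacity_factor: float,
                            marginal_t_per_mwh: float) -> float:
    """Annual induced emissions (tonnes CO2) for a constant data center load."""
    annual_mwh = load_mw * capacity_factor * HOURS_PER_YEAR
    return annual_mwh * marginal_t_per_mwh

marginal_rates = {           # tonnes CO2 per MWh, back-derived approximations
    "PJM (N. Virginia)": 0.556,
    "ERCOT (Texas)": 0.464,
    "SPP (Kansas)": 0.430,
    "CAISO (N. California)": 0.371,
}

baseline = annual_emissions_tonnes(100, 0.95, marginal_rates["PJM (N. Virginia)"])
for grid, rate in marginal_rates.items():
    tonnes = annual_emissions_tonnes(100, 0.95, rate)
    print(f"{grid}: {tonnes:,.0f} t/yr ({1 - tonnes / baseline:.0%} below PJM)")
```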

Siting data centers in grids with lower marginal emissions rates can cut the potential induced emissions by up to half. That’s massive.

Data center induced emissions by grid region

Building new clean power where it can avoid more emissions

As new data centers increasingly look to bring their own clean power (or procure it), this also opens the question of where that new clean generation should get built to not just meet data center load growth but also avoid the most fossil emissions. (In practice, which power plants get built where has many influences, including the capacity needs of a specific balancing area, interconnection queues, transmission constraints, and other factors. But for now, let’s assume total freedom to choose your location.)

With this in mind, siting new data centers on grids with lower marginal emissions rates is only half the story. The electricity generation supply side of the equation is the other.

There are two dominant ways new power plants could get built for data centers: a) co-located within the same grid balancing area as the data center itself, or b) sited on grids with higher marginal emissions rates, where new wind or solar could avoid more fossil emissions.

Continuing our earlier example of would-be new data centers in either Northern Virginia’s Data Center Alley or Texas’s emerging AI corridor, let’s look at a few scenarios for emissions implications, depending on where new wind or solar capacity gets built in association with the new data center load. For example:

We see a clear pattern. When renewables sized to 100% of data center load are procured from within the same grid as the data center, those renewables have a more-modest avoided emissions effect relative to the data center load’s induced emissions. On the other hand, siting data centers on grids with lower marginal emissions rates — and then investing in new renewable capacity on grids with higher marginal emissions rates (where wind and solar displace more fossil fuel generation) — can generate substantial net reductions in total emissions. This approach to building renewable energy in the most impactful places regardless of where data centers are built is already being used by Amazon, Meta, Apple, and Salesforce.

Renewable energy avoided emissions by grid region

The role of compute load shifting to further reduce emissions

While siting decisions have the largest impact on emissions, there's also potential to reduce emissions of data center use through smart load management. Data centers, particularly those running AI training workloads, require extremely reliable, constant power. Once a training run starts, interruptions can waste days or weeks of compute time. This makes them less flexible than other types of new electricity demand like EV charging, where timing can be shifted to match clean energy availability. 

But many data center workloads, like batch processing and cooling, are timing flexible, so shifting that energy use to times of the day when marginal emissions are lower, like when renewables are being wasted, can achieve large emissions reductions.
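A minimal sketch of what such load shifting looks like in practice: rank the hours of the day by marginal emissions rate and run flexible work in the cleanest ones. The hourly rates below are made-up placeholders; in practice they would come from a marginal emissions data signal:

```python
# Sketch of emissions-aware load shifting: allocate a flexible batch workload
# to the hours of the day with the lowest marginal emissions rates. The hourly
# rates below are made-up placeholders; real values would come from a
# marginal-emissions data provider.

def schedule_flexible_load(hourly_mer: list, flexible_hours: int) -> list:
    """Return the indices of the lowest-marginal-emissions hours to run in."""
    ranked = sorted(range(len(hourly_mer)), key=lambda h: hourly_mer[h])
    return sorted(ranked[:flexible_hours])

# Placeholder day: a midday solar surplus pushes marginal emissions down.
hourly_mer = [0.60] * 10 + [0.20, 0.15, 0.10, 0.15, 0.25] + [0.55] * 9
run_hours = schedule_flexible_load(hourly_mer, flexible_hours=4)
print(f"run flexible workload in hours {run_hours}")  # midday, when rates dip
```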

While load shifting alone won't solve data center emissions, it represents another tool for reducing emissions impact, particularly for facilities that handle flexible workloads beyond AI model training. These techniques are already being used by Microsoft, UBS, and other members of the Green Software Foundation.

Data centers will require massive amounts of energy, even if we don’t know precisely how much. And the combination of high reliability requirements and constant load patterns means careful planning is crucial — rushed infrastructure decisions could lock in unnecessarily high emissions for decades. 

Conclusion 

At a moment when electrification is accelerating across the economy, from vehicles to buildings, the surge in AI infrastructure presents both a challenge and an opportunity. By making smart decisions now about where to build data centers, how to power them, and how to operate them, we can ensure a revolution in compute drives rather than hinders the clean energy transition.

The unprecedented scale of AI infrastructure investment — from Stargate's $500 billion commitment to the broader industry — represents the largest concentrated buildout of computing power in history. Every decision about where and how to build this infrastructure matters more than ever.

image source: iStock | Gerville

UNFCCC marginal emissions data show that building renewables in the Global South has the greatest benefit

For companies and other organizations investing in new renewable energy projects, two main strategies guide their procurement:

  1. 24/7 Carbon-Free Energy (24/7 CFE): focuses on hour-by-hour megawatt-hour (MWh) matching of renewable generation’s timing and a corporation’s electricity demand load profile, with the clean energy procured from the same grid region where the electricity load is located
  2. Emissionality: an emissions-first approach that targets the dirtiest grids globally, procuring clean energy from wherever the new renewable capacity will have the greatest avoided emissions benefit by displacing generation from the most-polluting fossil-fueled power plants

These two strategies have important implications for where clean energy investment will flow and where new renewable capacity will get built, and consequently, on how much (or how little) climate benefit those projects will ultimately have.

In this analysis, we use marginal emissions data from the United Nations Framework Convention on Climate Change (UNFCCC) to gain insights into these questions, especially: Does location matter for the avoided emissions benefit of a new renewable energy project? And if so, where should new renewable energy projects get built to have the greatest overall climate benefit?

Tapping the UNFCCC’s marginal emissions data

To better understand the beneficial impact of new renewable projects across the globe, the UNFCCC — the UN body that oversees the Paris Agreement — developed a methodology for estimating the long-term impact on grid emissions in each country around the world. UNFCCC’s combined margin emissions factors take into account both operating margin and build margin, giving a sense for renewable energy’s climate benefit in the nearer and longer terms.

Using data from the IEA’s Global Energy and Climate Model (previously known as the World Energy Model) — which underpins IEA’s annual World Energy Outlook — UNFCCC experts calculated the marginal emissions rates resulting from predicted new generation across both traditional firm energy sources (e.g., fossil fuels, nuclear, geothermal) as well as clean energy technologies (e.g., solar PV, wind, tidal). The IEA model incorporates information on existing energy sources, economics, and policies in 26 large countries and regions, with additional regression modeling for other countries, making it comprehensive in breadth and scale.

Mapping renewable energy’s avoided emissions potential

Looking at a global heat map of marginal emissions rates for new renewable energy sources, the greatest avoided emissions potential based on UNFCCC data is primarily located in the Global South, in countries spanning Asia, Africa, and Eastern Europe (red shades on the map). These countries’ grids tend to rely on heavier-polluting sources of generation, such as coal-fired power plants.

Conversely, the lowest avoided emissions potential is primarily in the Global North, in countries spanning the EU and North America, as well as select countries elsewhere around the world where hydropower (and sometimes, nuclear) provides a dominant share of electricity generation (blue shades on the map).

Power grid combined marginal emissions factor by country map of the world

Multiplying the avoided emissions benefit of renewable energy investment

For any organization deciding where to invest in new renewable capacity, using marginal emissions estimates like these from UNFCCC can lead to much larger reductions in overall global emissions.

In Annex I countries (which largely overlaps with the Global North), the average avoided emissions rate (weighted by total electricity generation) is 345 g CO2/kWh. Meanwhile, countries in the top 50% of most-polluting power grids have an avoided emissions potential of 702 g CO2/kWh, and countries among the top 10% of most-polluting electricity generation have an avoided emissions rate of 979 g CO2/kWh. These heavier-polluting power grids are predominantly throughout the Global South.

In other words, investing in renewable energy projects across the Global South can yield 2x to nearly 3x greater climate benefit than renewables projects in the Annex I countries of the Global North. Organizations deciding where to site renewable energy — whether through bilateral national agreements now being drafted under the revised Article 6 framework of the Paris Agreement, or as voluntary corporate actors using the GHG Protocol — may want to consider UNFCCC’s data when choosing where to invest in renewable energy projects.
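The 2x and 3x multipliers follow directly from the generation-weighted rates quoted above; a quick check of the arithmetic:

```python
# Generation-weighted avoided emissions rates from the UNFCCC data (g CO2/kWh)
annex_i = 345      # Annex I countries (largely Global North) average
top_50_pct = 702   # countries in the top 50% of most-polluting grids
top_10_pct = 979   # countries in the top 10% of most-polluting grids

# Climate-benefit multiplier of siting the same renewable project
# on a heavier-polluting grid instead of an average Annex I grid
print(round(top_50_pct / annex_i, 2))  # → 2.03
print(round(top_10_pct / annex_i, 2))  # → 2.84
```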

Avoided emissions rate of renewable energy projects by global countries category column chart

How procurement approach influences renewable energy’s potential

24/7 CFE proponents argue that it encourages the buildout of renewables that would generate during “off” hours, helping power grids move closer to 100% clean energy around the clock. This may be partly true. But it also amounts to massive investment aimed at “squeezing the last drop” of emissions from grids that have already significantly decarbonized. This misses opportunities for far larger global decarbonization: building renewables in places where they’d have greater avoided emissions benefits and where coal-fired generation still dominates the grid mix.

Moreover, some of 24/7 CFE’s biggest proponents are major tech companies whose operations and data centers are overwhelmingly located in the EU, US, and other Global North locations. These are regions that have already seen large investment in new wind and solar capacity, especially. Meanwhile, Global South locations — the same places where UNFCCC marginal emissions data show there are the greatest avoided emissions opportunities — have seen chronic underinvestment in clean energy technologies, according to IEA data.

From a global climate action perspective, it’s far less impactful to inch California or Texas (where wind and solar have already made huge gains) closer to 100% carbon-free energy than it is to invest in new renewables in a place such as India, where coal still contributes more than 70% of the nation’s electricity generation and clean energy investment in 2024 was just one-fifth of what it was in the US.

Beyond this compelling climate argument, there is a crucially important humanitarian component. Investing in renewable energy in Global South countries also brings economic and health benefits, expanding energy access and reducing air pollution in the places that have the worst air quality. Globally, 1 in 8 deaths is now attributed to air pollution, predominantly in countries with the most-polluting electricity generation, since the same power plants spew both carbon dioxide and PM2.5.

Conclusion

The UNFCCC model is not the only model for estimating marginal emissions rates. One key difference from WattTime’s MOER model is that the UNFCCC model does not account for imports. This can significantly affect rates when low-emission countries border high-emission ones, as is the case with Sweden and Finland. In terms of long-run build margin, the UNFCCC also lacks many of the more-detailed features of other models such as Cambium, GenX, and PyPSA. In particular, it does not consider variance in emissions rates within a country, which can be substantial in large countries such as the US or China. But it is one of the only existing models that covers the entire globe, which is a critical consideration when evaluating emissions reductions.

More research is needed to evaluate these different modeling approaches and to develop more detailed models across the globe, so that renewable energy investments can be targeted at the location where they have the most impact. For now, one thing is clear: data is increasingly pointing to the Global South as a critical focus for the world’s future renewable investment.

image source: iStock | rvimages

The methodology behind our latest global data expansion

This week we’re excited to announce a major, global geographic expansion of our flagship Marginal Operating Emissions Rate (MOER) carbon data signal — growing from 40 to 210 total countries and territories. You can read the full press release here. With this expansion to ~170 new geographies, WattTime now offers actionable marginal emissions data — available with 5-minute granularity and a combination of historical, real-time, and rolling 3-day forecast perspectives — for 99% of the world's electricity consumption.

MOER coverage map

Accurate, location-specific, timely, and granular MOER signals are central to a trio of solutions — carbon-aware load shifting, emissionality-based renewables siting and procurement, and supply chain decarbonization — that can save more than 9 gigatons of emissions every year. That’s equal to nearly 20% of total global carbon emissions. These solutions also accelerate the reduction of harmful air pollution that disproportionately affects the developing countries that often are last to gain access to technological solutions that improve their lives.

Adopting those three solutions at scale (and unlocking those 9+ gigatons of emissions reductions) depends on a truly global MOER signal. That’s why this week’s announcement represents an enormous step-change in what’s possible.

In this article, we’ll take a closer look at the foundational methodology driving our MOER models thus far, as well as the new methodologies that allowed our team to expand to substantially global data coverage.

The science behind WattTime’s current MOERs

When electricity demand rises or falls, or new wind or solar capacity gets built, which generators respond varies: certain generators ramp up or down, or turn on or off, in response to changes in load or new renewables added to the grid mix. These power plants at the edge of the dispatch stack are known as marginal generators, and their associated emissions are what our MOER signal describes. Sometimes polluting, fossil-fueled peaker plants set the marginal emissions rate; at other times, surplus renewables being curtailed are on the margin.

Scientists agree that such a marginal signal is the best way to guide (and measure the impact of) interventions such as cleaner EV charging to reduce emissions or the avoided emissions of building a new wind farm on a coal-heavy grid.

Our current-best MOER — with foundations in academic literature and iterated on by WattTime for over ten years — is based on causal, empirical modeling. In the grid regions of North America and Europe, we use robust, detailed, generator-level data inputs to refine and train our algorithms. For example, we incorporate generation and emissions data from individual power plants via the US EPA’s Continuous Emissions Monitoring System (CEMS). We get demand, interchange, and generation by fuel type data from the US Energy Information Administration (EIA) and European Network of Transmission System Operators for Electricity (ENTSO-E). We also integrate myriad other data sources into our overall modeling. For regions where these data are available, we feed it all into a binned regression model. The result is today considered WattTime's highest-quality methodology for the MOER signal type.
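The binned-regression idea can be sketched in a few lines of illustrative Python. This is a toy version, not WattTime’s production model; the function name, bin count, and zero-intercept slope estimator are all simplifying assumptions:

```python
import numpy as np

def binned_moer(demand, emissions, n_bins=10):
    """Toy binned-regression MOER estimate.

    demand:    hourly system demand (MWh)
    emissions: hourly system CO2 emissions (kg)
    Returns (bin_edges, moer_per_bin), where each MOER is the slope of
    delta-emissions vs. delta-demand within a demand bin (kg CO2/MWh).
    """
    d_dem = np.diff(demand)
    d_em = np.diff(emissions)
    # Assign each hour-to-hour transition to a bin by its starting demand level
    edges = np.quantile(demand[:-1], np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(demand[:-1], edges) - 1, 0, n_bins - 1)
    moer = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() > 1 and np.var(d_dem[mask]) > 0:
            # Zero-intercept least squares: slope = sum(x*y) / sum(x^2)
            moer[b] = np.sum(d_dem[mask] * d_em[mask]) / np.sum(d_dem[mask] ** 2)
    return edges, moer
```

On synthetic data where every marginal MWh emits a fixed 500 kg of CO2, the estimator recovers roughly 500 kg/MWh in every bin; on real grids, the per-bin slopes differ as the marginal generator changes with load.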

In some regions we have access to partial real-time and forecasted information about the status of the grid (such as demand or energy prices), but lack the ground-truth time series data, such as emissions or generation by fuel type, required to train a binned regression model. There, we use a proxy regression model: machine learning identifies a data-rich grid with similar characteristics to serve as a proxy for the grid with partial data. Power plants in the region of interest are characterized by Climate TRACE and assembled into a supply curve that resembles economic dispatch in the proxy region, which is then used to estimate the MOER from real-time demand. MOER data from this “Proxy Regression” model is available in countries such as Brazil, India, Chile, and Turkey. (Longtime WattTime partner Microsoft helped fund development of this method.)

But after deploying these higher-quality MOER signals for the grids of all countries with the necessary data, we still had not covered about 170 countries of the world. Yet many of these are the same countries where emissions are not yet falling, and it’s not good enough to just ignore them. Our latest MOER expansion fixes that in big ways, thanks to a novel modeling approach from our team.

Inside the methodology powering our global MOER expansion

With the goal of making MOER data truly global, we’ve employed a synthetic demand model for countries where we lack historical and real-time grid information. It extends and improves upon the 2021 work of Mattsson et al.

In lieu of actual real-time demand data (like those used in our proxy regression models), the Mattsson framework uses atmospheric estimations derived from ERA5 climate and weather data, along with demographic information, to model synthetic power demand with a mean absolute percentage error of only 8% when averaged by month and hour.
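The 8% figure above is a mean absolute percentage error (MAPE), a standard forecast-accuracy metric. A quick illustration of the metric itself, using toy numbers rather than Mattsson’s data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Toy example: three hours of actual demand (MW) vs. a synthetic estimate
actual = [100.0, 120.0, 150.0]
predicted = [92.0, 126.0, 162.0]
print(round(mape(actual, predicted), 1))  # → 7.0
```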

To model the MOER, we expanded upon Mattsson’s academic model to account for more geographic and economic diversity, enabling us to produce estimates of real-time and forecasted demand. To productionize this, we built a sophisticated weather data modeling pipeline that ingests gigabytes of global weather data four times a day to produce timely synthetic demand estimates. Currently, we use these synthetic demand estimates as inputs to our existing proxy regression models to produce MOERs via a model we call the “Synthetic Demand Proxy Regression” model, or “Synthetic Proxy” for short.

While the MOER signals resulting from this method are inherently less accurate than those we can derive using the binned regression model, they’re nevertheless useful for both carbon-aware load shifting and renewables siting — which unlocks significant emissions-reduction potential. Of course, data transparency will only increase in regions across the globe. As it does, we will vigilantly upgrade our MOER signals to higher-quality binned regression methodologies.

Putting global MOER data into action 

The MOER signals generated via our synthetic proxy regression models unlock much more strategic emissions-reduction decisions and automations, in regions that have never before enjoyed access to such actionable data. Historically, granular emissions data have been far more available for countries in the Global North. This release is a major step toward closing that gap, opening up opportunities to drive emissions reductions and new solution possibilities across the Global South.

All WattTime data can be accessed through our API. Basic access is available for free to all users. Partners on our Pro data plan get full access to historical and forecast data, including premium support. Our partners with a global data license will automatically gain access to these new grid regions. Contact our team to learn more.

Is battery energy storage (finally) living up to its promise of enabling a net-zero grid?

From the World Economic Forum to utility industry magazines to the US Department of Energy, in recent years there’s been a growing refrain: how batteries can enable a net-zero electricity grid. Implicit in that statement is the idea that batteries can (and should) help lower grid emissions, increase the integration of zero-emissions renewable energy sources, and support overall power sector decarbonization. Yet battery energy storage is sometimes finding itself in the hot seat for exactly the opposite reason.

Earlier this year, a University of Michigan study focused on the PJM market (the large regional transmission organization covering all or part of 13 U.S. states plus Washington, D.C.) found that batteries sometimes increased grid emissions. While the U-M study was based on older data (from 2012 to 2014), its takeaways echo concerns we’ve heard before. 

In the early 2010s, California’s Self-Generation Incentive Program (SGIP) — a major driver of the state’s behind-the-meter battery energy storage market — shifted its focus to specifically prioritize greenhouse gas reductions for the Golden State’s power grid. But then circa 2018 and 2019, analysis found that batteries were often increasing, rather than decreasing, grid emissions.

Batteries are only as clean as the electricity used to charge them

For the better part of a decade, batteries have been described as a Swiss Army knife of the power grid, capable of performing myriad functions — from customer-centric services such as backup power, peak shaving, solar self-consumption, and time-of-use energy arbitrage to grid-centric services such as frequency and voltage regulation, demand response, and mitigating renewables curtailment.

Ultimately, doing all of that involves software algorithms that dictate when a battery energy storage system charges and discharges. Those algorithms typically co-optimize around various price signals. But it’s the marginal emissions of the power grid at the times a battery is charging vs. discharging that determines whether the battery causes a net decrease (or increase) in grid emissions.
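That charge-versus-discharge accounting can be expressed directly. A minimal sketch of the tally (function name, round-trip efficiency, and MOER values are hypothetical illustrations, not any program’s actual methodology):

```python
def battery_net_emissions(intervals):
    """Net change in grid emissions caused by a battery (kg CO2).

    intervals: list of (energy_mwh, moer_kg_per_mwh) tuples, where
    energy_mwh > 0 means charging (adds grid load) and energy_mwh < 0
    means discharging (displaces marginal generation).
    A positive result means the battery increased grid emissions.
    """
    return sum(energy * moer for energy, moer in intervals)

# Charge 10 MWh at midday on curtailed solar (MOER ~ 0), then discharge
# 8.5 MWh (assuming ~85% round-trip efficiency) into a gas-heavy evening
# peak (~450 kg CO2/MWh on the margin):
print(battery_net_emissions([(10, 0.0), (-8.5, 450.0)]))   # → -3825.0 (avoided)

# Same battery charged when coal is on the margin (~900 kg CO2/MWh):
print(battery_net_emissions([(10, 900.0), (-8.5, 450.0)])) # → 5175.0 (added)
```

The sign of the result depends entirely on the marginal emissions at charge versus discharge times, which is why a price-only dispatch algorithm can easily land on the wrong side of zero.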

Unless energy storage systems consider emissions in their control approach, there’s no guarantee that they’ll help decarbonize power grids. Energy journalist David Roberts summed it up well: “It’s a mistake to deploy batteries … as though they will inevitably reduce emissions. They’re a grid tech, not a decarbonization tech.” Like transmission lines, batteries can equally carry dirty or clean power, agnostic to the electricity’s generation source and its associated carbon emissions, absent the right signals.

California’s battery emissions success story

To address the emissions increase caused by energy storage participating in SGIP, the rules of the program were revised with the goal of enabling the state’s participating behind-the-meter commercial and residential batteries to live up to their emissions-reducing promise. Almost immediately after the rule change, we started to see positive outcomes. A detailed impact evaluation published earlier this year by CPUC with analysis by Verdant gives a longer-term view of SGIP’s turnaround story.

Between 2018 and 2022 (the period covered by Verdant’s analysis), battery systems in California’s SGIP fully reversed course, flipping from causing a net increase in grid emissions to causing a significant net decrease, a resounding decarbonization success.

Now, energy storage has cemented its central role supporting California’s goal of achieving 100% carbon-free electricity by 2045. The state boasts more than 10 GW of installed battery capacity, and earlier this year, batteries briefly became the single largest contributor to the state’s grid during the evening peak. Grid-scale batteries charged on excess daytime solar are starting to displace natural gas power plants. And during this year’s solar eclipse, batteries charged on excess renewable energy carried California’s power sector through the temporary slump in solar PV generation.

Net GHG emissions of battery energy storage in CA's SGIP

A cautionary tale for other states

California may be the country’s most prominent example, but it’s hardly the only US state pairing emissions-reduction or net-zero targets with energy storage goals. To name just four: Connecticut, Massachusetts, New Jersey, and New York — all members of the Regional Greenhouse Gas Initiative (RGGI) — each have robust energy storage targets tied to 100% clean energy and GHG reduction goals. So does Michigan.

For energy storage to help these and other states achieve their clean energy goals, it will be crucial to learn from California’s SGIP growing pains and to use a true marginal emissions signal, rather than a proxy metric, to inform batteries’ duty cycles. Just look at what has transpired in Texas and the ERCOT market.

The Lone Star State has been called “the hottest grid battery market in the country.” But analysis from Tierra Climate published in June 2024 in collaboration with REsurety, Grid Status, Modo Energy, and WattTime found that 92% of batteries in ERCOT increased grid emissions in 2023. This is largely because those batteries are not co-optimizing their operation in coordination with a carbon signal like SGIP’s GHG signal. That same report found that co-optimization with a carbon signal (or a carbon price) would move these battery energy storage assets from carbon increasing to carbon decreasing.
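The full Tierra Climate methodology is more involved, but the core of co-optimizing with a carbon price can be illustrated in a few lines. Function name, carbon price, and MOER values here are hypothetical, chosen only to show how the adjusted signal flips a charging decision:

```python
def effective_price(energy_price, moer_kg_per_mwh, carbon_price_per_tonne=50.0):
    """Energy price ($/MWh) plus the carbon cost of the marginal
    generation serving (or displaced by) the battery."""
    return energy_price + carbon_price_per_tonne * moer_kg_per_mwh / 1000.0

# Two candidate charging hours on a hypothetical ERCOT-like day:
midday = effective_price(20.0, 50.0)   # cheap AND clean: surplus solar on the margin
night = effective_price(18.0, 900.0)   # slightly cheaper, but coal on the margin

print(midday, night)  # → 22.5 63.0
```

On energy price alone, the battery would charge at night and raise emissions; once the carbon cost is folded into the signal, the midday hour wins decisively.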

The US energy storage market is growing fast, with record-setting capacity additions in Q1 2024 and a staggering 75 GW of cumulative new capacity forecasted to come online during the period 2024–2028. If battery energy storage is to continue living up to its promise of enabling a net-zero grid, it’s more important than ever that state policies and battery control algorithms include a marginal emissions signal as part of their intelligence under the hood.

Inside Texas’s power sector paradox

The United States’ clean energy leader is also its number-one source of electricity emissions. Welcome to the Lone Star State: America’s China in terms of historical fossil fuel dominance as well as record-setting wind and solar.

In March, Texas published its first-ever greenhouse gas (GHG) inventory, joining more than 20 other U.S. states in cataloging annual statewide emissions. The inventory, which covers the state’s 2021 GHG emissions, confirmed the Lone Star State as the top-emitting U.S. state overall. Per both U.S. EPA and Climate TRACE data from 2022, Texas’s economy-wide GHG emissions were more than double those of America’s second-place emitter, California.

According to its inaugural self-assessment, in 2021 Texas released more than 873 million tonnes (Mt) of GHG emissions. To put that in context, if Texas were its own country, it would rank 11th on a global scale — just ahead of Mexico and behind Saudi Arabia, according to Climate TRACE 2022 data.

There’s more to this story than meets the eye. Pop culture portrayals of the Lone Star State have long made ample use of oil barons and rigs dipping into dusty prairie, and for good reason… at least historically. While that fossil-fuel-happy reputation still applies — Texas remains America’s top producer of crude oil and natural gas — it’s also become America’s clean energy leader.

So let’s take a closer look at what WattTime knows best (electricity) and unpack Texas’s power sector energy and emissions data to better understand where it has been — and where it might be going.

Everything’s bigger in Texas — including appetite for renewables

The self-proclaimed “Energy Capital of the World,” Houston is fast becoming a clean hydrogen hub and currently ranks #1 in the local government category of the EPA’s Green Power Partnership, a program ranking organizations by their voluntary clean energy procurement. In fact, five of the top six slots nationwide belong to Texan entities, among them Dallas, DFW Airport, Austin, and Harris County. Austin, Dallas, Houston, and San Antonio are all officially working toward net-zero-by-2050 goals.

Since the early 2000s, Texas has famously led America’s wind energy pack, comfortably sitting in the top spot for installed wind capacity, according to U.S. DOE WINDExchange data. With more than 41 GW, the state is responsible for more than a quarter of all U.S. wind energy capacity, tripling silver-medal Iowa’s contribution of 13 GW. 

Solar PV is growing, too. Late last year, Texas overtook longtime solar leader California to capture the top spot among U.S. states for installed utility-scale solar capacity. And in early 2024 solar generation passed coal-fired electricity generation for the first time in Texas history. 
Although natural gas still leads the generation stack for ERCOT — the independent system operator (ISO) that balances supply and demand for 90% of Texas’s electricity — as of May 2024, wind and solar together are closing in on gas’s lead, with 38.4% of the state’s electric generating capacity, compared with natural gas at 44.3%.

Texas power sector emissions 2021

Texas’s fossil-burning power plants in focus

So with clean energy scaling rapidly, where exactly are all Texas’s electricity generation emissions coming from?

For starters, Texas has some 180 combustion power plants, per Climate TRACE and WattTime data. Most of those are gas-fired power plants. But data from the Texas Comptroller shows that 15 coal-fired plants are currently operational; a third of them are slated for retirement by 2030.
Among those 180 fossil-burning plants identified in Climate TRACE data, the biggest power sector culprit is the dual gas- and coal-fired WA Parish Generating Station. With 3.9 GW of capacity, it is the #5 power plant emitter in the U.S., notorious among environmental advocates and Texas media outlets, which have been raising alarms about the pollution and harm the plant causes.

power plant satellite image

However, while coal-fired power plants might be Texas’s dirtiest on a per-MWh basis, the sheer size (and policy support) of the state’s gas-fired fleet matters, too. In recent years, the Lone Star State has been digging in its spurs — or, at least, its heels — to prop up the gas fleet.

For instance, in early 2021 Winter Storm Uri infamously caused widespread blackouts across Texas and $195 billion in damages. The Texas PUC set an astronomical system-wide price cap of $9,000 per MWh in a bid to bring more generation online (market-clearing prices were closer to $1,200 per MWh). In the wake of that catastrophe, Texas legislators introduced bills to bolster gas-fired generating resources and keep more fossil-burning power plants online, even though multiple Uri post-mortems found that gas infrastructure was the biggest failure during the winter storm; nearly twice as much gas-fired capacity went offline as wind capacity.

More recently, during April 2024’s total solar eclipse, gas proved Texas’s electricity generation fuel of choice, when ERCOT ramped up gas to make up for solar’s temporary dip.

Austin also illustrated this “can’t quit fossil” dynamic when it approved a plan in 2020 to shut down its greatest source of carbon emissions, the coal-fired Fayette Power Project plant. The decision would have been a boon to Austinites’ goal of producing wholly emissions-free electricity by 2035; however, the closure never came to pass and the Fayette plant remains operational today.

An hour south in San Antonio, CPS Energy — America’s largest municipally-owned electric and gas utility — has already acknowledged it won’t meet the Climate Action and Adaptation Plan the city adopted in late 2019, now that its customers owe $200 million-plus in late bills for natural gas purchased at elevated rates during winter 2021. And in late 2023, ERCOT asked CPS Energy to bring a coal plant it had recently shuttered back into operation in an effort to secure more reserve power ahead of winter.

America’s China?

The energy landscape of America’s second-largest state economy (after California) in many ways parallels that of the world’s second-largest economy: one similarly marked by massive ongoing power sector emissions, clean energy leadership, heavy industrial growth, a surging population, and uncertainty about how the future will unfold.

As in China, Texas’s key contributors to new emissions are the power and industrial sectors. Despite China’s role as the world’s leading deployer of renewable energy in electricity generation (it’s the only entity in the world that tops Texas in installed wind capacity), the production of fossil fuels continues to grow strongly in China, the world’s #1 emitter of GHGs. Still, coal’s share of China’s electricity generation mix has steadily declined, as it has in Texas, and the long-term plan is to phase it out. But over the near term, coal will retain its pivotal role in China’s generation mix, which could translate to bumps in its coal-fired emissions.

Texas is a space to watch for that same phenomenon, especially this summer, as the window spanning May through August historically marks Texas’s high point for power generation and demand. And given the heat wave that slammed Texas over Memorial Day weekend, this summer looks to be a heat demand doozy, requiring fast-response power resources. 

Don’t mess with Texas’s clean energy leadership

On a more hopeful note, in addition to its greenhouse gas inventory, the Texas Commission on Environmental Quality used EPA grant funds to create an emissions reduction plan for Texas. According to its estimates, implementation of suggested measures — divvied into buckets tailored to each of the state’s highest-emitting sectors: industry, transportation, and electric power — could reduce GHG emissions in the Lone Star State by 174 Mt from 2025 through 2030 and 592 Mt from 2025 through 2050.

The plan spells out precise priority measures — voluntary, yet incentivized ones, created with extensive input from a variety of Texan stakeholders. And in 2027, the TCEQ will publish a status report detailing implementation progress, priority analyses, next steps, and future budget and staffing needs to continue deployment of the measures. So it seems Texas is taking its emissions reduction plan seriously.

Texas is a shining example of a power grid in the midst of a massive transition, one in which wind, solar, and battery energy storage are poised to together become the dominant wedge of the power generation pie, supplanting natural gas’s piece. It offers a valuable model for how grids across the country can tap wind, scale up solar, utilize existing energy infrastructure to generate clean hydrogen, and ultimately decarbonize the power sector. Especially as coal-fired generation retires in the years ahead, Texas’s clean energy leadership is one not to be messed with.

A tale of two grids: how CA and TX generation responded differently to the April 2024 solar eclipse

On April 8, 2024 the contiguous United States experienced its second total solar eclipse of the 21st century. The first happened in 2017; the next won’t happen for another two decades. No shortage of digital ink was spent covering the run-up to — and post-mortem analysis of — the eclipse, and especially how it impacted solar PV generation across the country.

Coverage ranged from the measured (“Darkness from April's eclipse will briefly impact solar power in its path. Experts say there's no need to worry,” noted USA Today) to the dramatic (“The solar eclipse is a critical test for the US power grid,” declared Vox) to outright fear-mongering (the New York Times and many others debunked myths that the eclipse would cause the grid to fail).

In practice, grid operators as well as government agencies such as US EIA and NREL were well-prepared for this year’s Great North American Eclipse, as it’s become known. But exactly how the nation’s grid operators handled the predicted drop in solar power generation differed significantly, which is what we’re examining more closely in this blog post.

In California, batteries that charged on excess renewable energy backfilled solar’s slump

Across the Western Interconnection (WECC) — which includes all or part of 14 U.S. states — solar obscuration ranged from 20% in the Pacific Northwest (farthest from the path of totality) to 80% in the southeast corner of New Mexico. Across all of WECC, NREL estimated that the maximum reduction in solar PV generation would reach 45%, although that varied significantly by proximity to the eclipse path.

In California, the impact ranged from ~30% for utility-scale solar farms in the central part of the state to 50+% for solar in southern California. Statewide on April 8, CAISO reported that solar generation peaked that morning at close to 14.5 GW, plateaued around 12.4 GW through most of mid-morning, then fell a further ~27%, bottoming out at ~9.1 GW around 11:15 am. By 12:15 pm — with the eclipse over — solar generation had rebounded to 14+ GW.

That much of the story has already been well-reported, but at least two other interesting things happened in tandem.

First, through the hours of the eclipse, solar curtailment on CAISO’s grid all but disappeared. In the hour before the eclipse, California discarded more than 2.5 GWh of solar energy while simultaneously charging energy storage.

Second, battery energy storage — which normally charges during daytime periods of solar excess generation in preparation for California’s evening peak — flipped from charging at nearly 2.6 GW into discharging at 2.7 GW in less than an hour. In doing so, storage almost entirely backfilled the midday solar slump from the eclipse. Meanwhile, natural gas — which usually sleeps during the day awaiting the evening ramp — barely registered a change in generation. After the eclipse, energy storage resumed charging in preparation for the evening peak.

In Texas, natural gas illuminated the darkness

The dark path of this year’s eclipse passed straight through the heart of ERCOT solar country, where NREL forecasted up to a 93% drop in peak solar PV output. ERCOT data confirm that reality matched expectations: solar generation plummeted from ~13.8 GW at 12:15 pm local time to just 0.8 GW by 1:30 pm, a 94% reduction. By 2:45 pm, solar was back up to 13.7 GW. Solar’s generation profile that day looked like a narrow-waisted hourglass tipped on its side, going from 27.6% of ERCOT generation to 1.7% and back up to 27% in the span of two and a half hours.

But unlike in CAISO — where batteries were the chief responding resource — in ERCOT natural gas stepped in to meet demand, ramping up from ~19 GW to 27+ GW, then quickly tapering back to ~18 GW. Energy storage made a smaller, incremental contribution of ~1.4 GW during the peak of the eclipse, but gas-fired generators dominated the response.

Across the Eastern Interconnection, the story was much the same as in Texas. In PJM — where totality passed through Ohio and then western Pennsylvania — natural gas backfilled solar’s temporary dip. That motif repeated in NYISO, and then ISO New England. In New York and New England, behind-the-meter solar — rather than utility-scale solar — was the protagonist. In each case, though, the grid response followed suit, with natural gas stepping in.

Conclusion

The response to the eclipse can be seen as a microcosm of how grids are managing the transition to renewables and their predictable variability.

Places like California are using energy storage (usually charged on excess renewable energy) to fill the gaps in the fluctuations of wind and solar energy (not to mention sudden disruptions in fossil-fueled thermal power plants). In grids like Texas and the Northeast, where there is not yet considerable excess renewable energy or sufficient energy storage, fossil natural gas plants are used to make up the difference.

Maintaining grid reliability while also minimizing electricity-related emissions requires a detailed understanding of how power plants, energy storage, and load flexibility can all participate in a choreographed dance to support the grid’s real-time needs for supply/demand balance.

Hero image of the 2024 solar eclipse passing over the Washington Monument in Washington, DC, by NASA/Bill Ingalls. Used with permission via CC BY-NC-ND 2.0 DEED.

Inside the post-pandemic power sector’s emissions ups and downs

Electricity generation annual emissions for G20 countries graph

This story is already familiar to most, and for many, already feels like a distant memory: in March 2020, much of the world went into lockdown as COVID-19 raged. Everyday life paused and economic activity slowed. In tandem, air pollution and carbon emissions both dropped noticeably.

But then, as life resumed and the global economy returned closer to normal in 2021 and 2022, emissions predictably rebounded. This was true across more or less every sector of the economy, including the power sector. The United States — the world’s #2 source of carbon emissions, both overall and for electricity generation in particular — is a good example of this general trend. So is the United Kingdom.

Here at WattTime, we dug deeper into G20 countries’ pre-, during-, and post-pandemic electricity emissions — all cataloged in the detailed Climate TRACE data — and found some interesting alternate trends that deviated from the “standard” pandemic emissions trajectories seen in the U.S. and other countries.

They largely fell into three buckets: 1) countries whose power sector emissions climbed straight through the pandemic and have continued rising, 2) countries whose emissions fell but didn’t rebound, and which have continued falling, and 3) countries whose electricity emissions underwent sharp booms and busts. Why these trends happened in any given country is especially interesting.

Countries where electricity emissions climbed straight throughout the pandemic — and beyond

electricity emissions increase for China and India during pandemic

Of the G20’s 19 individual member countries (the G20 currently also includes the European Union and African Union), most saw their power sector emissions slump during the 2020 pandemic, and about half hit all-time lows that year. But for a select few, emissions from electricity generation didn’t blink: they rose during the pandemic and have continued climbing higher since.

China’s power sector emissions march upward: China is the world’s #1 source of greenhouse gas pollution, and the power sector is the country’s single largest source of carbon emissions, according to Climate TRACE data. Those emissions rose in 2020 vs. 2019, then again in 2021 and yet again in 2022 to a new all-time high. Despite rapidly expanding clean energy generation (China installed about as much new solar in 2022 as the rest of the world combined), ongoing expansion of the country’s coal-fired generation and a drought that impacted its sizable hydro fleet have resulted in power sector emissions still creeping upward.

India’s emissions ascent continues: Although India’s rising power sector emissions briefly stalled during the pandemic, they’ve since reached an all-time high in 2022. In fact, India is one of only three countries — alongside China and the United States — whose annual emissions from electricity generation exceed 1 billion tonnes, and India’s electricity emissions, at #3 globally, equal those of the countries ranked 4th, 5th, and 6th combined. Coal-fired generation comprises more than 70% of the nation’s power mix. Ironically, summer heat waves intensified by climate change prompted the country’s leaders to mandate that coal-fired generation operate at full capacity to meet surging electricity demand, further contributing to the climate-induced problem. Early this year, India announced plans to further expand its coal-fired capacity.

Countries where power sector emissions have stayed on the down slope

Australia, Japan, and South Africa emissions declined during and after the pandemic

Emissions in the Land Down Under keep declining: In sunny Australia, power sector emissions have been on a five-year run of annual declines since at least 2017. They fell 3.8% during the 2020 pandemic year vs. 2019, then 5.1% in 2021 and a further 4.1% in 2022, totaling an 18.7% drop from 2017 levels. Large declines in the country’s coal-fired generation — and, in parallel, a meteoric rise of new solar capacity, plus some new wind — have driven down overall electricity emissions. These trends are expected to continue, with AEMO forecasting that coal could all but disappear from the nation’s generation mix within a decade.
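Note that annual percentage declines like Australia’s compound multiplicatively rather than simply adding up. A minimal sketch using only the three years quoted above (the article’s 18.7% total also reflects 2018–2019 declines not quoted here, so the snippet’s result is the cumulative drop from 2019 levels only):

```python
# Annual emissions declines compound: a 3.8% drop followed by a 5.1%
# drop followed by a 4.1% drop does not equal a 13.0% total decline.
# Figures are Australia's quoted drops for 2020, 2021, and 2022.

def compound_decline(annual_pct_drops):
    """Cumulative percentage decline implied by a series of annual drops."""
    remaining = 1.0
    for pct in annual_pct_drops:
        remaining *= 1 - pct / 100
    return (1 - remaining) * 100

print(f"{compound_decline([3.8, 5.1, 4.1]):.1f}%")  # -> 12.4% (2019 -> 2022)
```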

Falling emissions in the Land of the Rising Sun: As many will recall, Japan largely relied on nuclear power until the 2011 earthquake and subsequent Fukushima accident. In response, the country shuttered its nuclear reactors and pivoted to fossil-fueled generation, including hefty LNG imports, raising the nation’s power sector emissions in the short term. But those emissions have been declining since at least 2015, reaching lows in 2021 not seen since before the Fukushima incident. In 2022, Japan’s power sector emissions bumped up slightly, driven by increased coal-fired generation in reaction to higher natural gas prices. However, growing renewable generation and offshore wind ambition are keeping the country on an overall downward emissions trajectory.

Coal-dependent South Africa turns the corner: Thanks to coal’s 85% dominance of South Africa’s electricity generation mix, the nation has the highest power sector carbon intensity of any country in the G20. There are signs that the situation may now be changing, as evidenced by sharp declines in the country’s electricity emissions in 2022. In recent years new solar installs have been booming, reports BNEF, while state-owned utility Eskom grapples with an ongoing energy crisis and charts a pathway that would decommission much of the nation’s coal-fired fleet as part of a just energy transition plan.

Countries on an electricity emissions roller coaster

Brazil and Mexico emissions have been variable

Drought hurts hydro in Brazil: Hydro comprises nearly two-thirds of Brazil’s electricity generation. It’s one big, wet reason why the country ranks 6th overall globally for GHG emissions, yet sits outside the top 30 for electricity generation emissions in particular. Consequently, Brazil has one of the cleanest power sectors of any major economy. But across the years 2020–2022, a curious thing happened with the nation’s power sector emissions. They predictably slumped during the 2020 pandemic, then skyrocketed 68.8% higher in 2021, before falling massively to all-time lows in 2022. Why? As it turns out, in 2021 drought hit the country hard, suppressing hydro generation and prompting elevated LNG imports to compensate. By 2022, the rains returned while wind and solar expanded.

Mexican manufacturing and the growth of natural gas generation: After years of declining power sector emissions — through the pandemic and into 2021 — Mexico’s electricity emissions rebounded massively in 2022, to near an all-time high. At least three concurrent factors contributed: 1) a rise in Mexico’s manufacturing sector (partly in response to nearshoring trends), 2) drought that reduced the country’s hydro generation to a 20-year low, and 3) a significant bump in natural gas-fired electricity generation. Meanwhile, the nation’s lawmakers eliminated the country’s Climate Change Fund, putting the future of clean energy development into question.

Conclusion

Looking back across these examples, it becomes clear that specific causes in each country’s power sector are driving the macro trends for annual electricity emissions: 1) Where wind and solar are scaling and capturing a growing portion of a nation’s generation mix, fossil-fueled electricity emissions are falling. 2) Where the buildout of coal-fired generating capacity continues, electricity emissions are still rising. 3) Countries with a notable slice of hydro power in their electricity mix are backfilling drought-reduced hydro generation with natural gas, causing electricity emissions to yo-yo.

Later this year, WattTime and Climate TRACE will update our data with 2023 numbers, too. It will be interesting to see how these and other countries continue to track.