Making Electricity Ubiquitous, But Not Necessarily Cheap

I attended the 37th International Association for Energy Economics International Conference, held 2014 June 15-18 in New York City.  During the Wednesday “Dual Plenary Session: Utility Business Model,” Michel Derdevet, Secretaire General of Electricite Reseau Distribution France, the French distribution utility, raised the issue of getting electricity to the billion people in the world who don’t have access to it.

During the audience discussion period, I raised the concept of a different business model, unregulated micro-grids owned by non-utilities.  I mentioned that a friend thought this concept could be applied to the nearly ubiquitous cell phone tower.  Cell phone towers require a power supply.  My friend thought that the provider of a cell phone tower should be granted a 10-year license to provide power on an unregulated basis.

My thought was that the owner of the cell phone tower should be allowed to provide electricity and to compete against anyone else who wanted to provide electricity.  Competition can drive down prices better than regulation can.  Regulation aimed at making electricity ubiquitous would just stifle innovation.

Over the years, newspapers and newsmagazines have had pictures of the electric grid in some countries that look like a spider’s web on LSD.  Local entrepreneurs buy their own diesel generators and provide “backup” electricity to their neighbors over wires strung in parallel with or across the wires of the local utility.  The electricity is “backup” electricity in that it is used only when the local utility doesn’t have enough central station power to provide electricity to the entire national grid.  The utility then blacks out some neighborhoods.

The neighbors buy only a small amount of “backup” electricity from the entrepreneur because the “backup” electricity is so expensive, being produced by diesel generators, which are less efficient and use a premium fuel.  The “backup” electricity is used for lights and a fan at night, perhaps for a refrigerator, not for those devices that might otherwise be electricity guzzlers.[1]  When the utility once again has enough power, competition drives the price down, putting the high cost entrepreneur out of business.

These micro-grids, whether run by the owner of the cell phone tower or by a neighborhood entrepreneur, can make electricity ubiquitous, even if the electricity is not cheap.  After all, Michel Derdevet said to me after his panel was done that some people were pushing for ubiquitous power supplies so they could make money selling electricity, not just for an eleemosynary purpose.  Thus, the power might not be cheap.

During the plenary session, Jigar Shah, Founder, SunEdison, LLC, claimed that California, with the highest electricity rates in the U.S., does not have the highest energy bills, because California residential consumers use less electricity.[2]  This is consistent with my comment about the lower usage of “backup” electricity relative to central station power.  However, elasticity may not be the only explanation for the lower consumption in California.  There are also the issues of climate and rate design.

Standard rate design practices also result in higher prices for customers with smaller consumption levels. The standard residential rate design has a monthly customer charge (say $10/month) and a commodity charge (say $0.09/KWH).  These rate levels nominally reflect the way that a utility will incur costs, a fixed cost per customer and then a cost that varies with the amount of energy the customer consumes.  A customer using 100 KWH per month would have a monthly bill of $19 and an average cost of $0.19/KWH.  A customer using 1000 KWH per month would have a monthly bill of $100 and an average cost of $0.10/KWH.  Thus, an area of the country with lower electricity consumption can be expected to have higher average cost and lower overall bills.
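The arithmetic of this two-part rate can be written out in a few lines. This is a minimal sketch using the illustrative rate levels from the text; the function names are my own.

```python
# A minimal sketch of the standard two-part residential rate described above,
# using the illustrative levels from the text: a $10/month customer charge
# and a $0.09/KWH commodity charge.
CUSTOMER_CHARGE = 10.00  # $/month, fixed cost per customer
ENERGY_RATE = 0.09       # $/KWH, cost that varies with consumption

def monthly_bill(kwh):
    """Total monthly bill for a given consumption level."""
    return CUSTOMER_CHARGE + ENERGY_RATE * kwh

def average_cost(kwh):
    """Average cost per KWH, which falls as consumption rises."""
    return monthly_bill(kwh) / kwh

for kwh in (100, 1000):
    print(kwh, round(monthly_bill(kwh), 2), round(average_cost(kwh), 2))
# 100 KWH -> $19.0 bill, $0.19/KWH average; 1000 KWH -> $100.0 bill, $0.1/KWH
```

The fixed charge spread over fewer kilowatt-hours is what drives the higher average cost for the smaller customer.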

The micro-grid could be operated by the above-mentioned owner of the cell phone tower or by an entrepreneur.  I like to think of innovators who decide to install their own electric systems and then share with their neighbors, which is a form of the first type of owner.  The generation is put into place for a purpose other than selling electricity, but if the sales are lucrative enough, the owner may decide to upsize his generating capacity until the owner looks like a utility.

A friend built such a system during the 1980s while in the Peace Corps in Africa.  He had an engineering degree from MIT.  So he took the concepts learned dabbling in the MIT labs in Cambridge, Massachusetts and applied them in the field in Africa.  Later my friend worked for a Westinghouse subsidiary building power supplies for remote microwave towers (the same concept as a cell phone tower) and remote schools.  His company used mostly renewable energy, such as solar and wind, because diesel was so expensive in the remote areas he electrified.  Diesel was used to top off the associated batteries when there had been insufficient renewable energy supplies for extended periods of time.  It was during this period of his life that I met this fellow MIT alumnus.

Yes, we can make electricity ubiquitous.  But it will take competition to make it cheap, or at least not quite so expensive.

[1] As an aside, the consumption reduction during periods when “backup” electricity is being used demonstrates the concept of the elasticity of consumption.  When prices go up, consumption goes down.

[2] Looking at Table 6 of the EIA’s utility data for 2012, the average price to residential consumers in California was $0.153/KWH, or 8th most expensive.  The average consumption for residential consumers in California was 6,876 KWH/year, or the 3rd lowest after Hawaii and Alaska.  The average bill for residential consumers in California was $1,053/year, or the 10th lowest in the U.S.

Disruptions, Energy Markets and “Joseph and the Amazing Technicolor DreamCoat”

On 2014 April 22 as this year’s president of the National Capital Area Chapter (NCAC) of the U.S. Association for Energy Economics (USAEE), I will preside over NCAC’s 18th Energy Policy Conference, which this year has the title “Disruptive Technologies Shock All Energy Sectors.”[1]  These disruptive technologies will require additional infrastructures, such as pipelines, wires, refineries, and generators.  And, since we operate a free market economy in the United States, we will need dynamic markets to handle the effects of these disruptive technologies as we see a change in the way energy flows in North America.

Wind pockets in the Great Plains and West Texas need high capacity lines to transport the energy across space to areas where the need for electricity is greater.  We need ways to pay for those transmission lines.  In response to the intermittency associated with wind, we will need fast response generators and ways to pay those generators to operate only a small fraction of the year.

Some fast response generators will be storage devices.  A high price for storage devices providing electricity for a small fraction of the year will be meaningless unless there are low prices during the portion of the year that the storage devices are being recharged.  This will move electricity across time, using cheap electricity during periods of fat to provide electricity during later periods of lean.

Oil production areas in North Dakota and Montana need pipelines and rail cars to move oil across space to market.  For years, the availability of low cost oil pipelines has reduced the price differential across the U.S.  The lack of sufficient pipeline capacity has depressed the wellhead price of oil in the Bakken fields, reflecting the higher cost of rail transportation to refineries.  New oil pipelines will reduce this price differential.

The natural gas system has many storage fields.  I mentioned earlier electricity’s growing need for storage.  And petroleum and its refined products also need storage.  During January 2014, there was not enough propane in storage in the Midwest and prices soared.  The shortage could have been handled by more refined products pipeline capacity, but additional storage would also have been an option, perhaps a cheaper option.

Though the conference is about technological disruptions, the shortage of propane in January can be thought of as a weather disruption.  Some people say that we are experiencing climate change.  My first experience with a claim of climate change was in 1990, when Edith Corliss, a physicist with the National Institute of Standards and Technology, a bureau of the U.S. Department of Commerce, told me that the weather at that time was more variable than weather had been since the time of Christ.  Our summers were alternately either (A) hotter and dryer or (B) cooler and wetter.  Or to put it mathematically, we were seeing a greater statistical variance and standard deviation in the measured temperature and the measured rainfall.  The El Niños were getting more intense, as were the La Niñas.  We were not having more of one and fewer of the other, just seeing more intensity in each.

I am reminded of the stage musical “Joseph And The Amazing Technicolor Dreamcoat.”  The musical turns on a dream of the pharaoh’s that Joseph interpreted as a climate disruption.  There were to be seven years of fat followed by seven years of famine.  Joseph then created a physical system and a market to handle this insider knowledge.  He stored grain during the years of fat and used the grain sparingly through the end of the years of famine.  In commercial parlance, he bought low and sold high.  In legal parlance, he traded on insider information and made a killing.

We need new infrastructure to handle the growing disruptions created by technological changes.  But we also need dynamic markets and new market mechanisms in our free market economy.  At least that is my Technicolor dream.

[1] See the conference notice at

Electric Demand Charges: A Lesson from the Telephone Industry

The only ad with prices that I remember from 50 years ago was AT&T’s offering of a three-minute coast-to-coast telephone call for $1.00.  With the inflation that we have seen over the last 50 years, one would expect that a coast-to-coast call would now be at least $10.00 for three minutes.  Instead, most telephone bills show a monthly service fee and no itemization for individual calls.  Automation has allowed the telephone companies to do away with most telephone operators, whose labor was a significant portion of the variable cost of making long distance telephone calls.  The principal cost is now the investment in the wires, which doesn’t change with the number of calls that are carried.  So, most carriers now charge a monthly fee and little or no charge per call.  Perhaps it is time for the electric industry to go that way.


The restructuring of the electric industry has generally separated the distribution wires function from the generation[1] and transmission[2] functions for most customers of investor owned electric utilities.  This restructuring puts such electricity customers into the same position as their counterpart customers of municipally and cooperatively owned utilities.  Municipally and cooperatively owned utilities have generally been distribution only utilities, buying generation and transmission services from others, instead of being vertically integrated like most investor owned electric utilities.


The restructuring of the electric industry has resulted in most customers being served by a distribution company which has very little variable cost, much like the telephone companies.  A significant distinction is that telephone lines handle one call at a time.  The telephone line is either in use or is not in use.  In contrast, electric utilities provide a continuously variable service.  The customer may be taking 10 watts (a small light bulb) or 10 kilowatts (running the A/C, water heater, and stove at the same time), or any amount in between.  The telephone company has the wires to serve the customer’s demand, whenever that call occurs[3].  The electric distribution company similarly has the wires to serve the customer’s demand, whenever that demand occurs.  While the telephone company has customers on a binary basis (they are either a customer or are not a customer), the electric distribution company serves its customers on a continuous basis (they might be very small customers who never use more than 10 watts or very large customers who might use up to 100 MW).


The binary basis of telephony customers allows the telephone companies to charge their customers a specific amount on a monthly basis.  The continuous nature of the size of electric services suggests that electric distribution companies charge their customers a price based on the size of the electric service used by the customer.  For commercial and industrial customers, electric utilities have long included in their tariffs a demand charge that depends on the maximum power that the customer used during the billing period[4].  Typically such demand charges will be based on the average consumption for some 15 minute period.
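A sketch of how such a billing demand might be determined from interval data follows. The readings and the function name are hypothetical illustrations, not any utility's actual tariff mechanics.

```python
# Hypothetical sketch: billing demand taken as the highest 15-minute average
# load in the billing period.  The readings below are invented for illustration.
def billing_demand_kw(interval_kwh):
    """Maximum 15-minute average demand in KW (each interval is 0.25 hours)."""
    return max(kwh / 0.25 for kwh in interval_kwh)

# Energy (KWH) recorded in each 15-minute interval by an interval meter.
readings = [0.5, 0.6, 2.5, 1.0, 0.4]
print(billing_demand_kw(readings))  # 10.0 KW, set by the 2.5 KWH interval
```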


Cost has been a significant factor limiting the use of demand charges to commercial and industrial customers.  Demand meters are more costly to manufacture, in that they do more than just accumulate the amount of energy that goes through the meter.  Demand meters are more expensive to read, in that the meter reader has to note two quantities and has to manually reset the demand register.  These two cost factors are lesser issues in regard to determining residential demand now that the industry has moved significantly to Automated Meter Reading (AMR) and to Advanced Metering Infrastructure (AMI[5]), both of which automatically collect consumption data, including for 15 minute intervals.


Historically, residential demand charges were thought to produce an insignificant shift of revenue among residential customers.  The reasoning was that, though residential customers differ in size, they have a similar load pattern.  A customer using 1,000 KWH a month would have ten times the demand of a customer using 100 KWH a month.  Implementing a demand charge that collected an amount equal to 20% of the energy revenue collected from the larger customer would also collect an amount equal to 20% of the energy revenue collected from the smaller customer.  There would be no revenue shift among these residential customers, at least for consumption.  However, the utility would have had to install more expensive meters, which would have increased the monthly customer charge of both customers without providing a significant benefit to the utility or to the customers.
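That proportionality argument can be checked with a quick sketch. The rate levels are illustrative: the demand rate is chosen to collect 20% of the larger customer's energy revenue, and the same fraction then falls out for any customer with the same load pattern.

```python
# Sketch checking the no-revenue-shift reasoning for similar load patterns.
ENERGY_RATE = 0.09  # $/KWH, illustrative
DEMAND_RATE = 1.80  # $/KW-month (20% of the $90 energy revenue / 10 KW)

def demand_to_energy_ratio(kwh, kw):
    """Demand-charge revenue as a fraction of energy-charge revenue."""
    return (DEMAND_RATE * kw) / (ENERGY_RATE * kwh)

# Customers with the same load pattern, differing only in scale:
print(round(demand_to_energy_ratio(1000, 10.0), 3))  # 0.2
print(round(demand_to_energy_ratio(100, 1.0), 3))    # 0.2 -- no revenue shift
```

Only when load patterns differ, as with on-site generation, do the fractions diverge and the demand charge start to matter.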


The move to AMR and AMI has reduced the cost of determining the demand for residential customers.  Now the cost of determining metered demand is not an issue in differentiating between customers with different consumption patterns.  Customers who should be paying a demand charge equal to 30% of their energy payments can be distinguished from customers who should be paying a demand charge that is only 10% of their energy payments.  Further, on site generation has changed the paradigm that residential customers have similar load patterns, so that now the industry knows that there are the 30% customers versus the 10% customers and can bill them appropriately.  Indeed, for houses with sufficient on-site generation, the revenue from the demand charge could be several times the revenue from the energy charge, especially when the energy charge vanishes for a net zero home.
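The net-zero case mentioned above can be sketched with a few hypothetical hourly figures of my own, purely for illustration.

```python
# Hypothetical net-zero home: midday solar exports exactly offset the imports
# in other hours, so net energy (and the energy charge) vanishes, yet the
# wires must still carry a positive peak demand.  All figures are invented.
net_kwh_by_hour = [-3.0] * 6 + [2.0] * 9  # 6 exporting hours, 9 importing hours

net_energy = sum(net_kwh_by_hour)  # 0.0 KWH -> no net-metered energy revenue
peak_kw = max(net_kwh_by_hour)     # 2.0 KW that the distribution wires serve

print(net_energy, peak_kw)  # 0.0 2.0
```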

The growth in AMR and AMI along with the growth in residential on-site generation makes this an appropriate time for restructuring residential tariffs to include a demand charge to collect the cost of the distribution utility owning the power lines.  The energy charge should continue to collect the cost of generation and transmission, though the energy charge should be time differentiated to reflect the real time value of generation and transmission, as well as the associated energy losses.

[1] The creation of Independent System Operators (ISOs) is alleged to have brought competition to the generation sector of the electric industry.  However, many distributed generators, such as roof top solar, do not experience the real time market prices set by their local ISO.  This distorts the market for distributed generation.

[2] The creation of ISOs is also alleged to have brought competition to the transmission market.  But ISOs compensate many transmission lines on a cost of service basis, through a monthly fee, though they charge geographically differentiated prices based on line losses and line congestion and generally don’t compensate for loop flow or parallel path flows, such as those PJM imposes on TVA and on the Southern Company, both of which have lines in parallel to PJM.

[3] Telephone customers occasionally receive a busy signal, indicating that the called party is using his/her phone.  More rarely, customers will receive an all-circuits-busy signal, indicating that intermediate wires are in full use, not that the called party is using his/her phone.

[4] Demand charges come in a variety of forms including contract demand, thermal demand, and ratcheted demands, a distinction beyond the scope of this discussion.

[5] AMI is generally distinguished from AMR in that AMI generally includes the ability to communicate both ways, from the meter to the utility and from the utility to the meter/customer location.  The ability to communicate from the utility to the meter allows the utility to control devices that the customer has opted to put under the utility’s control such as electric water heaters, air conditioning compressors, and swimming pool pumps and heaters.

Utility 2.0 or Just Utility 1.X

On Tuesday, 2013 October 29, I attended a discussion of the report “Utility 2.0: Building the Next-Gen Electric Grid through Innovation.”  I left feeling that the innovations discussed are just more of the same, just as I have often described the smartgrid as SCADA[1] on steroids.  The innovations are not creating Utility 2.0 as much as making slow changes to the existing utility structure, just varying the X in Utility 1.X.

Electric utilities began automating the electric system as soon as Edison started his first microgrid, the Pearl Street Station.  At one time, an operator would read a frequency meter to determine the balance between supply and demand.  In the earliest days, Edison had a panel of light bulbs that would be switched on and off to maintain that balance, which was a strange form of load management.  The operator would also be able to vary the generation by changing the water flow to a hydro-turbine, the steam from the boiler, and the fuel into the boiler.  Edison invented control mechanisms that were cheaper than the labor costs of the operator, control mechanisms that his companies also sold to other utilities.  These control mechanisms can be considered to be some of the first SCADA systems.  As the control mechanisms and telephony got cheaper and labor became more expensive, more labor saving devices could be installed.  The policy of having an operator at every substation was replaced by remote devices, lowering the cost of utility service.  The smartgrid concept is just more of the same, as computers become cheaper and faster, remote metering less expensive, and remote control easier to accomplish.

The true quantum change in utility operations occurred in federal law.  PUHCA[2] effectively prohibited private individuals from selling electricity to a utility, by defining the seller to be a utility, subject to utility type regulation and to prohibitions on non-utility operations.  Because of PUHCA, Dow Chemical operated its chemical plants as the ultimate microgrid, running asynchronously and unconnected to local utilities.  Dupont installed disconnect switches that would separate its microgrid chemical plant from the local utility if power began to flow out of the plant.  International Paper and Power became International Paper.  Exxon intentionally underinvested in its steam plants, limiting its ability to produce low cost electricity.  PURPA[3] provided exemptions from PUHCA for cogeneration plants such as those mentioned here and for qualifying small producers using renewable resources.  The latter exemption was almost in anticipation of the growth of roof top solar photovoltaics (PV).  These facilities needed utility markets into which to sell their surplus, which generally resulted in individually negotiated contracts.  The creation of the ISO[4] concept could be considered to be an outgrowth of the desire by these large independent power producers (IPPs) for a broader, more competitive market, instead of the monopsony into which they had been selling.  ISOs now have a footprint covering about 2/3 of the lower 48 states.

ISOs generally deal only with larger blocks of power, some requiring participants to aggregate at least 25 MW of generation or load.  ISO control generally does not reach down into the distribution system.  The continued growth of labor costs and the continued decline of automation costs have allowed the SCADA concept to be economic on the distribution grid, including down to the customer level.  This expansion of SCADA to the distribution system will soon require changes in the way the distribution system is priced, both for purposes of equity and for Edison’s purpose of controlling the system.

  • The growth in rooftop PV is dramatically reducing the energy that utilities transport across their distribution systems.  This energy reduction generally reduces utility revenue and utility income.  Under conventional utility rate making, the result is an increase in the unit price charged by the utility for that service.  Some pundits point out that the owners of the rooftop PV panels are generally richer than the rest of the population served by the utility.  These solar customers are cutting the energy they consume, though not necessarily their requirements on the utility to provide some service through the same wires.  The rate making dynamics thus result in other, poorer customers seemingly subsidizing the richer customers who have made the choice for rooftop solar.  This seems inequitable to some.
  • The growth in rooftop PV has outstripped the loads on some distribution feeders, with reports that the generation capacity has sometimes reached three times the load on the feeder.  These loading levels cause operating problems in the form of high voltages and excessive line losses.  During periods of high voltage and excessive line loss, prices can provide an incentive for consumers to modify their behavior.  The genie seems to be out of the bottle in regard to allowing the utility to exert direct physical control over PV solar, but real time prices could provide some economic control in place of the traditional utility command and control approach.

I have discussed the need for real time pricing of the use of the distribution grid in “Net Metering:  Identifying The Hidden Costs;  Then Paying For Them,” Energy Central, 2013 September 20.[5]  I have described a method in “Dynamic ‘Distribution’ Grid Pricing.”[6]

Changes in state regulations have also impacted this balance between labor costs and automation costs.  Some states now have performance incentives based on the number of outages and the typical restoration times.  The cost associated with the time of sending a line crew to close a circuit breaker now competes with the incentives to get that closure faster, through the use of automation.

In conclusion, the increase in utility automation is not so much innovation as it is a continuation of the historic utility practice of the economic substitution of lower cost technology for the ever increasing cost of labor.  The 1978 change in federal law led to the growth of ISOs and bulk power markets, but did not reach down to the distribution level, perhaps because of the lack of non-utility industrial support.  The growth in rooftop PV will provide the incentives for expanding the real time markets down the distribution grid to retail consumers.  Though computers indeed have gone from 1.0 (vacuum tubes), to 2.0 (transistors), to 3.0 (integrated circuits), I don’t see the current changes proposed for utilities to be much more than following the competition between labor costs and automation costs.  We are still Utility 1.X, not Utility 2.0.

[1] Supervisory Control And Data Acquisition.

[2] Public Utility Holding Company Act of 1935

[3] Public Utility Regulatory Policies Act of 1978

[4] Independent System Operator

[6] A draft of this paper is available for free download on my web page,

FERC, Barclays, and Formulary Arbitrage

On 2013 July 16, FERC ordered Barclays bank to pay a half billion dollars for market manipulation.[1]  The next day Barclays responded by suing FERC in federal district court, forcing FERC to prove the allegations in a venue that Barclays feels would be a level playing field.  On July 22, a New York reporter called a friend of mine who is normally well versed in utility legal matters, having been a regulator and a utility executive, seeking to understand what Barclays did to get FERC upset, asking for a simplified explanation.  My friend suggested another industry expert and also called me.  I got back to my friend on July 23, heard the request, and wrote a message to him explaining what I understood Barclays to have done on a generic basis, without having read much more than articles in The Washington Post.  It is that message of yesterday that I am copying here.

Thanks for the call on Monday in response to a NYC reporter who was asking you for background or information about the Barclays spat with FERC.  I have not followed the details of the spat, but the way I find it easiest to describe is as a thinly traded formulary arbitrage.

An arbitrage is buying and selling related securities with the hope of making a profit on the difference.  For instance, one might buy oil for $90/bbl and sell at $100/bbl and make $10/bbl on the paired transactions.  If the transactions are for oil delivered in different locations, one might also have a transportation cost of $1/bbl, reducing the profit to $9/bbl.  There may be some insurance and other handling costs, but if they are minor, one makes a profit so long as the differential is greater than the cost of transportation.  The transactions can be for the same location but different times.  One might buy a May futures contract for $90/bbl and sell a June futures contract for $100/bbl and make $10/bbl on the paired transactions.  But one has to store the oil, which might again cost $1/bbl for going into and out of storage and for one month in storage.

My understanding is that a futures contract settles at the end of the preceding month at the average spot price of oil over the last few days of that month.  So, a May futures contract would be settled at the end of April based on the spot price on April 30, or some sort of average.  I call this a formulary settlement.

If one has bought a lot of May futures contracts, one would like to see them settle at a very high price.  So, one might buy lots of spot oil on April 30 to push up the price at which the May futures contracts would settle.  This would be a formulary arbitrage.
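A toy numerical sketch of such a formulary arbitrage follows. All prices, volumes, and the settlement formula here are hypothetical, chosen only to show the mechanics; this is not a description of what Barclays actually did.

```python
# Toy sketch of a formulary arbitrage.  Assume (hypothetically) that the
# futures contract settles at the volume-weighted average spot price on the
# last trading day.  All prices and volumes below are invented.
def vwap(trades):
    """Volume-weighted average price of (price, volume) pairs."""
    return sum(p * v for p, v in trades) / sum(v for _, v in trades)

thin_spot_market = [(90.0, 1000)]                 # $/bbl, bbl traded
pushed_market = thin_spot_market + [(95.0, 500)]  # buy 500 bbl above market

long_futures_bbl = 10_000  # position that benefits from a higher settlement
settlement_gain = (vwap(pushed_market) - vwap(thin_spot_market)) * long_futures_bbl
push_cost = (95.0 - 90.0) * 500  # overpayment on the spot purchases

print(round(settlement_gain, 2), push_cost)  # the gain can dwarf the push cost
```

In a thin market, a modest overpayment on spot purchases can move the settlement enough to pay off handsomely on a large futures position; in a thick market, the required purchases become prohibitively large.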

But considering that the spot market for oil is very large, it is difficult to budge the price by buying a lot of spot oil on April 30.  One might need to buy enough oil to supply ExxonMobil.  That would be a thickly traded formulary arbitrage.

Some commodity exchanges offer electricity forwards markets which settle based on the actual spot price of electricity on some of the ISOs.  The ISO spot prices might be thinly traded.  A paired transaction on the electricity forwards market and the ISO market may be a thinly traded formulary arbitrage.  At least that is what I would have told the reporter had you directed him to me.

Hope this helps for the next time.

[1] The Federal Energy Regulatory Commission (FERC) today ordered Barclays Bank PLC and four of its traders to pay $453 million in civil penalties for manipulating electric energy prices in California and other western markets between November 2006 and December 2008. FERC also ordered Barclays to disgorge $34.9 million, plus interest, in unjust profits to the Low-Income Home Energy Assistance Programs of Arizona, California, Oregon, and Washington. FERC News Release

Storage/Pricing — Chicken/Egg

On Tuesday, 2012 November 27, I attended the Heritage Foundation’s discussion of Jonathan Lesser’s 2012 October paper “Let Wind Compete: End the Production Tax Credit.” The only philosophical statement on which there seemed to be agreement was that improved storage systems could improve the market for wind.

But who would own the storage systems necessary to make wind even more viable? Unless the ownership is in common with the wind systems, how would these storage systems be compensated?

  • And, can we expect entrepreneurs to build these storage systems and then expect FERC to set an appropriate price? Beacon Power produced a flywheel storage system but couldn’t get FERC approval of a tariff before it ran out of operating cash and is now bankrupt.
  • Or should FERC put into place a pricing mechanism that could compensate storage systems when they arrived on the scene? I look at this as the Field of Dreams mantra of “If you build it (a competitive market appropriate for storage systems), they (storage systems) will come.”

Truly, a chicken and egg issue.

Wind has been accused of having two failings. Wind often provides a lot of power at night, when electricity is not highly needed.  Wind provides less power on the hot mid-summer afternoon, when electricity is needed the most. This is an intra-day issue for storage to handle. Wind power also follows the wind speed. A wind gust can push power production up to great heights. A wind lull can suddenly drop power production. Storage could be useful for handling this intra-hour issue.

Not all storage can handle both the intra-day and the intra-hour issues well. For example, the storage part of the Heritage Foundation discussion mentioned only pumped storage hydro as a representative storage technology to help wind. Pumped storage hydro has been used for decades to transfer power from the nighttime and weekends to the midweek daytime periods. That is, pumped storage is known as a way to handle the intra-day issue. I like pumped storage. My first job after getting a Masters from MIT’s Sloan School was with American Electric Power which owned a pumped storage plant. This perhaps accounts for some of my bias of liking pumped storage hydro.  (Actually I like to have a variety of generation options available, not just pumped storage.) Pumped storage hydro is excellent for intra-day transfers of power.

I have never seen anyone use pumped storage hydro for intra-hour transfers of power, or even propose it for such purposes. The absence of a historical use of pumped storage to provide intra-hour storage doesn’t mean that pumped storage could not be used for that purpose. After all, many people tout pumped storage for its ability to respond in seconds to changes in the need for electricity.

Pumped storage is often touted as being about 75% efficient. For every 100 MWH used for pumping, 75 MWH can be subsequently generated. We can model the effect of shorter duty cycles by beginning with the assumption that 0.5 hours in the pumping mode is ineffective. Under this modeling assumption, for 13 hours of pumping, there is the equivalent storage of 12.5 hours. With the 75% efficiency assumption, the system can generate for 9.375 hours, for a revised efficiency of 72% (9.375/13). Reducing the pumping time to 5 hours reduces the generating time to 3.375 hours and the revised efficiency to 67%. Reducing the pumping time to 1 hour reduces the generating time to 0.375 hours and the efficiency to 37%. This is not a very good efficiency ratio, but we normally don’t think of running pumped storage on an intra-hour basis. I am not saying that pumped storage cannot run with just one hour of pumping, just that trying to do so will be costly, indeed very costly.
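The duty-cycle arithmetic in this paragraph can be written as a short calculation, using the same assumptions: 75% round-trip efficiency and a half hour of ineffective pumping.

```python
# The duty-cycle model from the text: 75% round-trip efficiency, with the
# first half hour of pumping assumed to store nothing.
ROUND_TRIP = 0.75
DEAD_TIME = 0.5  # hours of pumping assumed ineffective

def generating_hours(pumping_hours):
    """Hours of generation supported by a given pumping period."""
    return (pumping_hours - DEAD_TIME) * ROUND_TRIP

def revised_efficiency(pumping_hours):
    """Generation hours per pumping hour, falling as the cycle shortens."""
    return generating_hours(pumping_hours) / pumping_hours

for hours in (13, 5, 1):
    print(hours, generating_hours(hours), round(revised_efficiency(hours), 3))
# 13 h -> 9.375 h (~72%); 5 h -> 3.375 h (~67%); 1 h -> 0.375 h (~37%)
```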

The intra-hour situation has been handled by batteries, flywheels, magnetic storage devices, and theft of service. Theft of service is a harsh term. When an electric utility faces the intra-hour problem associated with rapid changes between wind gusts and wind lulls, the physics of the electric system results in inadvertent interchange, electricity moving into and out of the utility.  With the inadvertent interchange going both ways, which utility is providing a service to the other utility?

If the wind gust occurs first, the power is stored on a neighboring utility system. If the lull occurs first, the utility is borrowing electricity and then gives it back. There is no systematic payment mechanism associated with this storage or borrowing of electricity. It is a free service as I described over two decades ago in “Tie Riding Freeloaders–The True Impediment to Transmission Access,” Public Utilities Fortnightly, 1989 December 21.

Most of the currently operating pumped storage systems were put into place by vertically integrated utilities. AEP often looked at its coal-fired generating system as providing cheap, efficient capacity, allowing AEP to make large sales to its neighboring utilities. But the pumped storage system also helped AEP with its minimum load issues. The large AEP generating units were very efficient. The investments made to achieve these efficiencies hampered the ability of the generators to cycle down at night, during minimum load conditions. Pumped storage systems helped AEP with that situation. Now many pumped storage systems operate in advanced markets run by ISOs/RTOs, where their value can be assessed based on their interaction with those markets.

The thought process of testing how a pumped storage system would operate on an intra-hour basis also provides some information about profitability. With 13 hours of pumping and 9.375 hours of generation, the off-peak price must be less than 72% of the on-peak price to achieve breakeven revenues, that is, for revenue from sales to equal or exceed the payments for pumping energy. The off-peak price has to be even lower for the pumped storage system to have book income, that is, the ability to cover its investment and other operating costs. The shorter the operating period, the smaller the breakeven off-peak price relative to the on-peak price. A competitive market for storage systems needs very low "off-peak" prices relative to its "on-peak" prices.  In this context, off-peak and on-peak prices could be better described as storage prices versus discharge prices.
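The breakeven claim can be checked with a short sketch. It assumes, as the text implies, that the plant pumps and generates at the same MW rating; the $40/MWH on-peak price is purely illustrative.

```python
def max_offpeak_price(on_peak_price, pumping_hours,
                      round_trip=0.75, dead_time=0.5):
    """Highest off-peak price at which pumping-energy costs are just
    covered by generation revenue (equal pumping/generating MW assumed)."""
    generating_hours = (pumping_hours - dead_time) * round_trip
    # breakeven condition: p_off * pumping_hours == p_on * generating_hours
    return on_peak_price * generating_hours / pumping_hours

for hours in (13, 5, 1):
    print(hours, round(max_offpeak_price(40.0, hours), 2))
```

At a $40/MWH on-peak price, the breakeven off-peak price shrinks as the duty cycle shortens, mirroring the shrinking efficiency ratio.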

The advanced markets have hourly pricing periods that are consistent with the dispatch periods of pumped storage.  But for rapid response storage, hourly energy prices do not provide any incentives for the storage system to operate on an intra-hourly basis.  Indeed, if storage systems are to operate on an intra-minute period, then prices need to be differentiated on an intra-minute basis, not just on an intra-hour basis.  Area Control Error (ACE) is an intra-minute utility metric that can be used to set an intra-minute price for storage systems that are expected to be operated on an intra-minute basis.  India has developed a very simplified pricing vector that uses ACE to set the price for Unscheduled Interchange on an intra-dispatch period basis.

In India, the regional system operators set hourly schedules for the utilities and for non-utility owned generators.  Though the schedules are hourly, the utilities and non-utility owned generators are nominally required to achieve an energy balance every 15 minutes.  Each 15-minute energy imbalance is cashed out using a pricing vector that indexes the price for all imbalances against system frequency.  In India, system frequency is the equivalent of ACE.
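A minimal sketch of a frequency-indexed imbalance price might look like the following. The frequency breakpoints and price bounds are hypothetical placeholders, not the actual CERC vector values.

```python
def ui_price(frequency_hz, f_high=50.05, f_low=49.70,
             price_min=0.0, price_max=10.0):
    """Unscheduled Interchange price indexed to grid frequency.
    Low frequency (shortage) -> high price; high frequency (surplus)
    -> low price.  All numbers here are illustrative, not CERC's."""
    if frequency_hz >= f_high:
        return price_min
    if frequency_hz <= f_low:
        return price_max
    # linear interpolation between the two breakpoints
    fraction = (f_high - frequency_hz) / (f_high - f_low)
    return price_min + fraction * (price_max - price_min)
```

Because every imbalance in a 15-minute block is cashed out against the same frequency-indexed price, surpluses and shortages automatically face opposite incentives.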

There are ongoing discussions in India about modifying the pricing vector to reflect the hourly settlement price, to expand the pricing vector for more extreme values of ACE, to geographically differentiate the price, etc.  Though there are discussions about revamping the pricing vector, the pricing vector concept has greatly improved the competitive system against which the utilities and non-utility owned generators have been operating.  The pricing vector concept could be used to price intra-dispatch period storage, providing the competitive market from which storage systems could draw power and into which they could discharge power.

Utilities, including ISOs/RTOs, use ACE to determine dispatch signals for their generators.  ACE is calculated every three or four seconds using the frequency error on the network and the interchange being delivered inadvertently to other utilities on the network.  Generally, the convention is that a positive ACE means that the utility has a surplus, while a negative ACE means that the utility has a shortage.

  • A surplus means that the utility is giving away energy, not getting any money for the surplus energy.  Under the situation of a positive ACE, the utility will want its generators to reduce their generating levels and would want storage systems to store energy.  As demonstrated by the earlier thought experiment, the market price for unscheduled energy into the storage system would have to be low for the storage system to absorb the energy economically.  When the utility is giving the energy away and not getting any money for the giveaway then any price, even a low price, for the energy going into storage can be appropriate.
  • A shortage means that the utility is taking energy from its neighboring utilities, without paying for the shortage.  This is one form of the theft of service I mentioned facetiously above.  Under the situation of a negative ACE, the utility will want its generators to increase their generating levels and would want storage systems to produce energy.  As demonstrated by the earlier thought experiment, the market price for unscheduled energy coming out of the storage system would have to be very high for the storage system to produce the energy economically.  When the utility is stealing energy, then any price, even a very high price, for the energy coming out of storage can be appropriate.
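For readers unfamiliar with the metric, the conventional (NERC-style) ACE calculation combines the interchange deviation with a frequency-bias term. The bias magnitude below is an illustrative placeholder, as each balancing area has its own.

```python
def area_control_error(ni_actual_mw, ni_sched_mw, freq_actual_hz,
                       freq_sched_hz=60.0, bias_mw_per_tenth_hz=-50.0):
    """ACE in MW: positive means the utility has a surplus, negative a
    shortage.  The frequency bias B is negative by convention; its
    magnitude here is an illustrative placeholder."""
    interchange_term = ni_actual_mw - ni_sched_mw
    frequency_term = 10.0 * bias_mw_per_tenth_hz * (freq_actual_hz - freq_sched_hz)
    return interchange_term - frequency_term
```

A utility exporting 5 MW more than scheduled at nominal frequency shows a positive (surplus) ACE; a utility on schedule while frequency sags shows a negative (shortage) ACE.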

For an explanation of the Indian mechanism for pricing Unscheduled Interchange, I recommend "ABT – Availability Based Tariff,"[1] a compilation of postings on InPowerG, the Indian equivalent of IEEE's PowerGlobe, and "ABC of ABT: A Primer On Availability Tariff,"[2] written by Bhanu Bhushan, the developer of the Indian pricing vector concept.  For a discussion of advanced pricing vectors that could be used for pricing storage, see the papers on my web site,[3] especially those filed recently with FERC.

The advanced markets pay generators that respond to the dispatch programs in a rectangular manner. For instance, consider a 5-minute dispatch period.  The price does not differentiate between generators that are ramping, those that are constant, and those that move up and down to counteract ACE excursions.  An intra-dispatch period price for generation excursions would reward those generators (and loads) that help with ACE excursions and charge those generators (and loads) that cause the ACE excursions.  A pricing plan that achieves such a concept would be worthwhile even before fast acting storage systems come on line.




Economic Failures Contribute to Indian Grid Blackouts

On 2012 July 30 and July 31, India experienced massive blackouts on its electricity grid. The first blackout was early in the morning of July 30 and affected only the Northern Region. The second failure was at midday on July 31 and affected the Northern Region, the Eastern Region, and the Northeastern Region. The Western Region, though interconnected with the Northern, Eastern, and Northeastern Regions in the NEW Area, survived, as did the Southern Region, which is not interconnected synchronously with the NEW Area.

Various comments have been made about the events that led up to the blackouts. This blog entry will only discuss some failures of the economic systems that contributed to the blackouts.


Prices of Inadvertent Interchange Didn't Reflect Security Issues, That Is, No Locational Marginal Prices

Beginning in 2002, India implemented its Availability Based Tariff (ABT), which included the creation of a market for imbalances, Unscheduled Interchange (UI).  ABT pricing of UI has a geographically uniform price.  I said early on that areas at risk of a blackout due to a power deficit after a transmission failure should have higher prices than other regions, those with a power surplus.

Other pundits have suggested that utilities in the Northern Region ignored operators' requests to reduce load because the energy price was low enough that there were no economic consequences of taking too much electricity. The economic system failed the Indian electric network by not providing sufficient monetary pain for ignoring operators' requests.

A rigorous locational pricing plan could have produced that monetary incentive. I have not heard that all of the NR utilities were drawing more power than the amount that had been scheduled. A feature of ABT pricing of UI is an incentive for some utilities to draw less power than the amount that has been scheduled, not just for utilities to reduce their draw to the scheduled amount. Thus, a locational pricing plan would have led some NR utilities to underdraw and help stabilize the system.


High Inadvertent Prices Could Not Incent Backup Generators to Assist the Grid

Customer-owned backup generators can do double duty. The standard use of a backup generator is providing electricity to the owner when a rotating blackout affects the owner. When rotating blackouts are not affecting the owner, standby generators can provide electricity to the grid when the value of electricity on the grid is high enough to pay for the fuel cost of the backup generator.

This second use of backup generation requires

  • the backup generator being able to operate synchronously with the grid; and
  • metering to identify the amount of energy provided to the grid to displace the high-value UI power that the utility would otherwise be purchasing on a real-time basis.


Power Deliveries to Farmers Aren’t Structured so Farmers Can Help Save the System

Indian farmers do not participate in the market for electricity, generally receiving several hours of free service. Giving Indian farmers a fixed subsidy would allow them to participate in the UI market for the number of hours they need for electricity. This would give the farmers incentives to help the system when there is a larger shortage of electricity. From the farmer’s perspective, the farmer would be able to use electricity for more hours since the average price could be cheaper than that upon which the subsidy was predicated.

Heads I Win, Tails You Lose: What to Do When Wind Doesn’t Perform as Promised

Wind generation is unpredictable.  Many like to use the term intermittent.  Some say that the term intermittent is inaccurate.  I prefer to talk about unscheduled flows.  The wind operator makes a commitment to produce power at a specified rate.  Sometimes the production exceeds that specified rate.  Sometimes the production is less than the specified rate.  Seldom is the production exactly equal to the specified rate.  It reminds me of Goldilocks and the three bears,  “Too hot, too cold, but seldom just right.”

Most utilities approach unscheduled flows of electricity by punishing the provider for any imbalance.  If production exceeds the specified amount, then the price for the surplus is less than the standard price.  If production is less than the specified amount, then there is a high price charged for the shortage.  "Heads the utility wins.  Tails the generator loses."

Utilities are used to the concept of “Too hot, too cold, but seldom just right” in the way they control their operations using the metric of Area Control Error (ACE).  Until about a decade ago, the operating paradigm was that ACE should pass through zero at least within 10 minutes of the last time it passed through zero.  ACE never was quite equal to zero, sometimes it was “too hot”, sometimes it was “too cold”, but never was it “just right.”  ACE just passed through being just right.

These “seldom just right” concepts can be combined into a financial model.

  • When ACE is positive and there is “too much electricity,” we can set a very low price for unscheduled amounts of wind.
    • If the wind is producing too much, then the wind operator will be disappointed with the price. 
    • But if the wind is operating below the specified rate, the charge for the shortfall will be the same very low price.
  • Conversely, when ACE is negative and there "isn't enough electricity," we can set a very high price for unscheduled amounts of wind.
    • If the wind is producing too much, then the wind operator will enjoy the high price for its surplus generation.
    • If the wind is producing less than specified, then the wind generator will face a penalty rate for the shortfall.

Since ACE is nominally a continuous variable, the price can vary continuously around some set point, such as the utility’s announced hourly price for electricity.
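A linear sketch of such a continuously varying price, centered on the hourly price, could look like this. The slope, floor, and cap are hypothetical parameters; the discussion above does not prescribe any particular shape.

```python
def wolf_price(ace_mw, hourly_price=40.0, dollars_per_mw=0.5,
               floor=0.0, cap=2000.0):
    """Price for unscheduled flows indexed to ACE.  Positive ACE
    (surplus) pushes the price below the hourly set point; negative
    ACE (shortage) pushes it above.  All parameters are illustrative."""
    price = hourly_price - dollars_per_mw * ace_mw
    return max(floor, min(cap, price))
```

With a $40/MWH set point, a 20 MW surplus lowers the unscheduled-flow price to $30, while a 20 MW shortage raises it to $50, matching the "heads/tails" logic of the bullets above.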

I call this pricing concept WOLF, for Wide Open Load Following.  You may want to read an old paper of mine or recent comments:

  • “Tie Riding Freeloaders–The True Impediment to Transmission Access,” Public Utilities Fortnightly, 1989 December 21,
  • “A Pricing Mechanism To Facilitate Entry Into The FCAS Market” Investigation Of Hydro Tasmania’s Pricing Policies In The Provision Of Raise Contingency Frequency Control Ancillary Services To Meet The Tasmanian Local Requirement, Office of the Tasmanian Economic Regulator, 2010 July 9
  • “Ratemaking To Facilitate Contra-Cyclical Operations” FERC Docket RM10-17-0000 Demand Response Compensation In Organized Wholesale Energy Markets, 2010 December 27.

Grid Security in India

On 2011 February 26, S.K. Soonee, CEO of India's Power System Operation Corporation Limited, posted on LinkedIn's Power System Operators Group a link to a paper written by the staff of India's Central Electricity Regulatory Commission, "Grid Security – Need For Tightening of Frequency Band & Other Measure."  Through LinkedIn I provided the following comments.

I am dismayed that the simple elegance of the UI pricing vector, as shown in the two diagrams for 2002-2004 and 2007-2009, will be replaced by the convoluted vectors on 2011 May 3. It seems that there is a potential for mischief in having a multitude of simultaneous prices, with an undue accumulation of money by the transmission grid as some UI out of the grid is priced at very high prices at the same instant that other UI into the grid is being priced at a lower price. This is an unwarranted arbitrage for the transmission system.

The HVDC links between S and NEW could provide a warranted arbitrage situation, where the grid with the higher frequency delivers to the grid with the lower frequency. The different frequencies would result in different prices, with the price differences providing some financial support for the HVDC links.

I was surprised that there was no mention made in regard to Figures 1 and 2 as to when UI pricing started, and how that UI onset resulted in a narrowing of the spread between daily high frequency and daily low frequency. I believe these figures could be well supplemented by a presentation of histograms of the monthly frequency excursions, and how those histograms have changed over time. A numeric approach would include monthly average frequency and monthly standard deviation from 50 Hertz, a statistic for which you have a special name that I forget.

Parts of the Agenda Note discuss the serious impact of very short periods of frequency excursions. These short periods of concern are much shorter than the 15 minute periods used for determining UI. The various parts of the Agenda Note could be harmonized by reducing the size of the settlement period for UI from 15 minutes to 5 minutes or 1 minute.

There is a discussion of limits on the amount of UI power that a participant can transact. I question the need for such limits. As a participant increases the UI power being transacted, the price will move in an unfavorable direction, providing an additional financial incentive for the participant to reduce UI power transactions. For example, a SEB that is short of power and is buying UI faces higher prices as the UI transaction amount increases. These higher prices provide a multiplicative incentive for the SEB to reduce its shortage and its purchase of UI.

Many systems plan for the biggest credible single contingency, which the report treats as the single largest unit. The report shows that entire plants have gone out at the same time, suggesting that the biggest credible single contingency is a plant, not a generating unit.

As an aside, in the listing of the generating capacity by size of generating unit, my experience in the US suggests that the list understates the number of generators. There would be many times the identified number of plants if the list included captive generators such as backup generators, which may be as small as a few kW. Again, based on my experience in the US, the total capacity of those unidentified generators would rival the total capacity of the identified generators.

I wonder why the under frequency relays in the East are set lower than the relays in the other regions.

I don’t understand the terminology that “Nepal has several asynchronous ties with the Indian grid (AC radial links).” My interpretation is that Nepal has a disjoint system with each section tied synchronously to different locations of India, making the sections synchronous to each other through their links to India.

“Too Much of a Good Thing” Revisited

In “Renewable Electric Power—Too Much of a Good Thing: Looking At ERCOT,” Dialogue, United States Association for Energy Economics, 2009 August, I looked at the impact that wind was having on the dispatch prices in ERCOT, the Independent System Operator for much of Texas.  Prices were negative during about 23% of the month of April 2009 in West Texas, the region dominated by wind generation and during about 1% of the month in the rest of ERCOT, a region dominated by fossil generation.

This week my Dialogue article was brought back to mind by two messages I received, one on the IEEE list server PowerGlobe, the second a ClimateWires article sent to me by a friend.  Both dealt with the issue of "grid operation during very high levels of wind energy," the subject line of the IEEE PowerGlobe message.  The ClimateWires article deals with the Bonneville Power Administration's reaction to such situations.

My reaction to both messages is that we need a true spot price for electricity.  I once heard that a spot commodity price was for the commodity delivered on the spot out of inventory, before more of the commodity can be produced.  We don’t have an inventory of electricity, but we do have an inventory of production plant.  So, combining the concepts, the spot price of electricity would be applicable to deliveries made before we can change the operating levels of our production plants.  That may mean a different price for each second.  Certainly a different price for each minute.

But a spot price should apply to a different quantity than the dispatch prices developed by independent system operators (ISOs) like ERCOT.  The dispatch prices should apply to the quantity specified by bidders in the ISOs.  Any variation from that quantity, up or down, should be priced in the spot market.  Further, the spot price should be allowed to vary greatly from the dispatch price.  Otherwise the weighted average price of the total delivery might seem only insignificantly different from the dispatch price, as shown in the following table.

Description    MWH     Price    Extension
Dispatch       100    $40.00    $4,000.00
Spot            -5    $30.00     ($150.00)
Metered         95    $40.53    $3,850.00


The basic assumption is that the generator committed to providing 100 MWH at a price of $40.00/MWH, and that the ISO accepted that price.  As it turned out, there actually was a surplus, such that the spot price was reduced to $30.00/MWH.  For some reason, which is irrelevant to this analysis, the generator delivered only 95 MWH through the meter during this period.  Thus, the generator effectively bought 5 MWH in the spot market to achieve its dispatch obligation of 100 MWH.  The effect was that the 95 MWH actually delivered had a unit price of $40.53/MWH.  Some would say that the generator got lucky in this situation.  An arrogant generator might say that he was smart and dispatched down his generator.  The point I am trying to make with the table is that the average price experienced by the generator is only 1.3% different from the $40.00/MWH dispatch price.
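The table's arithmetic generalizes to a one-line function; the 100 MWH dispatch position at $40.00/MWH is the example's assumption.

```python
def average_price(spot_mwh, spot_price,
                  dispatch_mwh=100.0, dispatch_price=40.0):
    """Average realized price per metered MWH when spot_mwh is sold
    (positive) or bought back (negative) at spot_price around a fixed
    dispatch commitment."""
    revenue = dispatch_mwh * dispatch_price + spot_mwh * spot_price
    metered_mwh = dispatch_mwh + spot_mwh
    return revenue / metered_mwh

print(round(average_price(-5, 30.0), 2))  # the $40.53 case above
```

Evaluating the function over a grid of spot volumes and spot prices reproduces the sensitivity table that follows.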

Effect on Average Price of Spot Volumes and Spot Prices

Given 100 MWH Dispatched at $40/MWH


Spot MWH     -$50      $30      $40     $200    $2,000
     -10   $50.00   $41.11   $40.00   $22.22  -$177.78
      -5   $44.74   $40.53   $40.00   $31.58   -$63.16
       0   $40.00   $40.00   $40.00   $40.00    $40.00
       5   $35.71   $39.52   $40.00   $47.62   $133.33
      10   $31.82   $39.09   $40.00   $54.55   $218.18


The table above shows the effect of making a variety of spot transactions at a variety of prices, including negative prices and prices many times the dispatch price.  I note that the average price stays at the $40.00/MWH dispatch price when the spot price stays at $40.00/MWH or when the spot delivery stays at 0 MWH.  The average price from the first table appears in this table at the price of $30.00/MWH and a spot delivery of -5 MWH.

Generators prefer to be in the top left portion of the table or the bottom right, first where they are short when prices are low and second where they are long when prices are high.  Consumers prefer to be in the top right portion of the table or the bottom left, first where they consume less than the amount entered into the auction and the auction price is high and second where they consume more than the amount entered into the auction and the auction price is low.