The Goldilocks Dilemma

An old posting about why intermittency is not a big deal came to my attention today.  I re-read some of what had been said, which was timely, since I had just sent out a paper on the topic yesterday.

I believe that the value of electric “energy” is often overstated.  The author of the old posting, Chris Varrone, inadvertently acknowledged this when he wrote:

However, the energy in wind is worth 100% of the energy in nuclear (or anything else) in the spot market; wind energy in the day-ahead market may be worth a little less, but this can be “firmed” using energy trading desks or by using other assets in the operator’s fleet.

If the day-to-day differential can be handled by firming with other assets, then the value of the electricity is not just its energy.  It is not worth debating what to call this other value, but a substantial part of the value in the spot market is something other than energy.

As to The Goldilocks Dilemma, the paper I sent out yesterday, I began by asking

Is the price paid to dispatchable generation too high, too low, or just right for intermittent generation?

I then answer

Though intermittent generators often argue that they should receive the same price as dispatchable generation, and some utilities argue that they should pay less to intermittent generators, sometimes intermittent generators should face a higher price than dispatchable generators, such as when intermittent generation is part of the market during instances of extreme shortage.

The entire paper is available on my web site, the companion to this blog site.  Look for the hot link to the library near the bottom of the first page.  A hot link for the article is near the bottom of the library index, in the section called Drafts.

Electric Demand Charges: A Lesson from the Telephone Industry

The only ad with prices that I remember from 50 years ago was AT&T’s offering of a three-minute coast-to-coast telephone call for $1.00.  With the inflation that we have seen over the last 50 years, one would expect that a coast-to-coast call would now cost at least $10.00 for three minutes.  Instead, most telephone bills show a monthly service fee and no itemization for individual calls.  Automation has allowed the telephone companies to do away with most telephone operators, whose labor was a significant portion of the variable cost of making long distance telephone calls.  The principal cost is now the investment in the wires, which doesn’t change with the number of calls that are carried.  So, most carriers now charge a monthly fee and little or no charge per call.  Perhaps it is time for the electric industry to go the same way?

 

The restructuring of the electric industry has generally separated the distribution wires function from the generation[1] and transmission[2] functions for most customers of investor owned electric utilities.  This restructuring puts such electricity customers into the same position as their counterpart customers of municipally and cooperatively owned utilities.  Municipally and cooperatively owned utilities have generally been distribution-only utilities, buying generation and transmission services from others, instead of being vertically integrated like most investor owned electric utilities.

 

The restructuring of the electric industry has resulted in most customers being served by a distribution company which, much like the telephone companies, has very little variable cost.   A significant distinction is that telephone lines handle one call at a time: the telephone line is either in use or not in use.  In contrast, electric utilities provide a continuously variable service.  The customer may be taking 10 watts (a small light bulb) or 10 kilowatts (running the A/C, water heater, and stove at the same time), or any amount in between.  The telephone company has the wires to serve the customer’s demand, whenever that call occurs[3].  The electric distribution company similarly has the wires to serve the customer’s demand, whenever that demand occurs.  But while the telephone company serves its customers on a binary basis (they are either a customer or not), the electric distribution company serves its customers on a continuous basis (they might be very small customers who never use more than 10 watts or very large customers who use up to 100 MW).

 

The binary basis of telephony allows the telephone companies to charge their customers a specific amount on a monthly basis.  The continuous nature of the size of electric services suggests that electric distribution companies should charge their customers a price based on the size of the electric service used by the customer.  For commercial and industrial customers, electric utilities have long included in their tariffs a demand charge that depends on the maximum power that the customer used during the billing period[4].  Typically such demand charges are based on the average consumption over some 15-minute period.
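As a sketch of how such a demand charge is determined (the 15-minute interval length follows the convention above; the readings are invented for illustration), billing demand is the highest average power over any metering interval:

```python
def billing_demand_kw(interval_kwh, interval_hours=0.25):
    """Billing demand: the highest average power (kW) over any
    metering interval in the billing period.

    interval_kwh -- energy (kWh) recorded in each 15-minute interval.
    Average power in an interval = energy / interval length.
    """
    return max(kwh / interval_hours for kwh in interval_kwh)

# A customer whose busiest 15-minute interval used 2.5 kWh
# has a billing demand of 10 kW.
readings_kwh = [0.5, 1.0, 2.5, 1.5, 0.75]
print(billing_demand_kw(readings_kwh))  # 10.0
```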

 

Cost has been a significant factor limiting the use of demand charges to commercial and industrial customers.  Demand meters are more costly to manufacture, in that they do more than just accumulate the amount of energy that goes through the meter.  Demand meters are also more expensive to read, in that the meter reader has to note two quantities and has to manually reset the demand register.  These two cost factors are lesser issues in determining residential demand now that the industry has moved significantly to Automated Meter Reading (AMR) and to Advanced Metering Infrastructure (AMI[5]), both of which automatically collect consumption data, including for 15-minute intervals.

 

Historically, residential demand charges were thought to produce an insignificant shift of revenue among residential customers.  The reasoning was that, though residential customers differ in size, they have similar load patterns.  A customer using 1,000 kWh a month would have ten times the demand of a customer using 100 kWh a month.  Implementing a demand charge that collected an amount equal to 20% of the energy revenue from the larger customer would also collect an amount equal to 20% of the energy revenue from the smaller customer.  There would be no revenue shift among these residential customers, at least for consumption.  However, the utility would have had to install more expensive meters, which would have increased the monthly customer charge of both customers without providing a significant benefit to the utility or to the customers.
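The no-revenue-shift arithmetic can be checked with assumed rates (the $/kWh and $/kW figures below are illustrative, not taken from any tariff):

```python
energy_rate = 0.125  # $/kWh, assumed for illustration
demand_rate = 5.00   # $/kW-month, assumed for illustration

def monthly_bill(kwh, kw):
    """Energy charge plus demand charge for one month."""
    return energy_rate * kwh + demand_rate * kw

# Similar load patterns: demand scales with energy
# (here, 5 kW per 1,000 kWh).
large = monthly_bill(1000, 5.0)  # $125 energy + $25 demand = $150
small = monthly_bill(100, 0.5)   # $12.50 energy + $2.50 demand = $15

# The demand charge equals 20% of energy revenue for both customers,
# so introducing it shifts no revenue between them.
print(large, small)  # 150.0 15.0
```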

 

The move to AMR and AMI has reduced the cost of determining the demand of residential customers.  Now the cost of metering demand is not an issue in differentiating between customers with different consumption patterns.  Customers who should be paying a demand charge equal to 30% of their energy payments can be distinguished from customers who should be paying a demand charge that is only 10% of their energy payments.  Further, on-site generation has broken the paradigm that residential customers have similar load patterns, so now the industry knows that there are 30% customers and 10% customers and can bill them appropriately.  Indeed, for houses with sufficient on-site generation, the revenue from the demand charge could be several times the revenue from the energy charge, especially when the energy charge vanishes for a net zero home.
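The net zero case can be made concrete with assumed numbers (the rates and the home's peak draw are invented for illustration):

```python
energy_rate = 0.125  # $/kWh, assumed
demand_rate = 5.00   # $/kW-month, assumed

# A net zero home: rooftop output over the month equals consumption,
# so net metered energy is zero, yet the home still draws up to 8 kW
# from the grid on cloudy evenings.
net_energy_kwh = 0
peak_demand_kw = 8.0

energy_revenue = energy_rate * net_energy_kwh  # $0 under net metering
demand_revenue = demand_rate * peak_demand_kw  # $40 for the wires
print(energy_revenue, demand_revenue)  # 0.0 40.0
```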

The growth in AMR and AMI along with the growth in residential on-site generation makes this an appropriate time for restructuring residential tariffs to include a demand charge to collect the cost of the distribution utility owning the power lines.  The energy charge should continue to collect the cost of generation and transmission, though the energy charge should be time differentiated to reflect the real time value of generation and transmission, as well as the associated energy losses.



[1] The creation of Independent System Operators (ISOs) is alleged to have brought competition to the generation sector of the electric industry.  However, many distributed generators, such as roof top solar, do not experience the real time market prices set by their local ISO.  This distorts the market for distributed generation.

[2] The creation of ISOs is also alleged to have brought competition to the transmission market.  But ISOs compensate many transmission lines on a cost of service basis, through a monthly fee; though they charge geographically differentiated prices based on line losses and line congestion, they generally don’t compensate for loop flow or parallel path flows, such as those PJM imposes on TVA and on the Southern Company, both of which have lines in parallel to PJM.

[3] Telephone customers occasionally receive a busy signal, indicating that the called party is using his/her phone.  More rarely, customers will receive a circuits-busy signal, indicating that intermediate wires are in full use, not that the called party is using his/her phone.

[4] Demand charges come in a variety of forms including contract demand, thermal demand, and ratcheted demands, a distinction beyond the scope of this discussion.

[5] AMI is generally distinguished from AMR in that AMI generally includes the ability to communicate both ways, from the meter to the utility and from the utility to the meter/customer location.  The ability to communicate from the utility to the meter allows the utility to control devices that the customer has opted to put under the utility’s control such as electric water heaters, air conditioning compressors, and swimming pool pumps and heaters.

Net Metering–Morphing Customers Who Self Generate

The U.S. Public Utility Regulatory Policies Act of 1978 started a flood of non-utility generation, initially a few very large cogeneration plants and more recently a large number of small rooftop solar generators.[1]  The rapid growth in the number of small rooftop solar generators requires the electric industry to develop a pricing plan that is fair to traditional customers as well as to hybrid customers, those still connected to the grid but with some self generation.

Electric utilities support their pricing decisions with class cost of service studies (CCOSS).  A CCOSS allocates the utility’s revenue requirement[2] to groups of customers, called classes.  Classes of customers are claimed to be homogeneous, such as by being of a similar size, but more often by having similar load patterns.

Some costs in a CCOSS are allocated based on the number of customers, perhaps weighted by the cost of meters and services.  Fuel costs are allocated based on energy through the meter, often weighted by the losses incurred to reach the meter.  A large portion of the costs are allocated based on demand, the amount of energy used by the class during the times of maximum stress on the utility, or at times of maximum stress upon portions of the utility, such as the generation, transmission, or distribution systems.  Utilities are concerned about recovering these demand related costs as customers morph from being full requirements customers to being hybrid customers.
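A stylized demand allocation looks like the following (the classes, cost figure, and peak demands are invented for illustration):

```python
# Allocate demand-related costs among classes in proportion to each
# class's demand (MW) at the time of maximum stress on the system.
demand_related_cost = 1_000_000.0  # $, assumed
class_peak_mw = {"residential": 400.0, "commercial": 350.0, "industrial": 250.0}

total_mw = sum(class_peak_mw.values())
allocated = {cls: demand_related_cost * mw / total_mw
             for cls, mw in class_peak_mw.items()}
print(allocated)  # residential bears 40% of the cost: $400,000
```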

Electric utilities have long alleged that the homogeneity of residential load patterns allowed the utility to use energy meters, often called watt-hour meters, to determine how much each residential customer should pay each month.  The logic is that the allocation process brought costs into the rate class based on the customer’s demand.  Further, homogeneity means that the amount of energy through the meter is proportional to the customer’s demand.  The utility could collect roughly the right amount of money from each residential customer by charging residential customers based on their energy consumption[3] instead of charging residential customers based on the demand.

Charging customers based on energy allowed utilities to reduce substantially the cost of owning and reading meters without significantly distorting the revenue to cost ratio of each customer.  At least, it did until rooftop solar substantially reduced the amount of energy that goes through the meter without necessarily reducing the customer demand.  Thus, with rooftop solar, the revenue collected from the customer goes down greatly while the costs brought in by the customer’s demand go down only slightly.

The growth in rooftop solar coincides with the growth of Advanced Metering Infrastructure (AMI).  AMI often includes automatic meter reading and interval metering.  Automatic meter reading generally means replacing the person walking door to door with equipment.  The carrying cost of the equipment is often less than the cost of the human meter reader, allowing AMI to pay for itself.  Interval metering means collecting the amount of energy delivered during small time intervals, generally one hour (24×7), though sometimes on an intra-hour basis.  These interval readings are the demands used in a CCOSS.

The intra-hour meter readings made possible by AMI would allow electric utilities to charge all residential customers based on their maximum demands, the determinant used in a CCOSS to allocate costs to customer classes.  No longer would the utility have to rely on homogeneity assumptions for residential customers.  The demand charge permitted by AMI would reduce the disparity between the lower revenue to cost ratio of residential customers with rooftop solar and the revenue to cost ratio of standard residential customers.



[1] See

[2] The amount of money the utility needs to collect each year to continue functioning.

[3] with a very inexpensive watt-hour meter

Utility 2.0 or Just Utility 1.X

On Tuesday, 2013 October 29, I attended a discussion of the report “Utility 2.0: Building the Next-Gen Electric Grid through Innovation.”  I left feeling that the innovations discussed are just more of the same, just as I have often described the smartgrid as SCADA[1] on steroids.  The innovations are not creating Utility 2.0 as much as making slow changes to the existing utility structure, just varying the X in Utility 1.X.

Electric utilities began automating the electric system as soon as Edison started his first microgrid, the Pearl Street Station.  At one time, an operator would read a frequency meter to determine the balance between supply and demand.  In the earliest days, Edison had a panel of light bulbs that would be switched on and off to maintain that balance, a strange form of load management.  The operator would also be able to vary the generation by changing the water flow to a hydro-turbine, the steam from the boiler, and the fuel into the boiler.  Edison invented control mechanisms that were cheaper than the labor costs of the operator, control mechanisms that his companies also sold to other utilities.  These control mechanisms can be considered some of the first SCADA systems.  As the control mechanisms and telephony got cheaper and labor became more expensive, more labor saving devices could be installed.  The policy of having an operator at every substation was replaced by remote devices, lowering the cost of utility service.  The smartgrid concept is just more of the same, as computers become cheaper and faster, remote metering less expensive, and remote control easier to accomplish.

The true quantum change in utility operations occurred in federal law.  PUHCA[2] effectively prohibited private individuals from selling electricity to a utility, by defining the seller to be a utility, subject to utility type regulation and to prohibitions on non-utility operations.  Because of PUHCA, Dow Chemical operated its chemical plants as the ultimate microgrid, running asynchronously and unconnected to local utilities.  Dupont installed disconnect switches that would separate its microgrid chemical plant from the local utility if power began to flow out of the plant.  International Power and Paper became International Paper.  Exxon intentionally underinvested in its steam plants, limiting its ability to produce low cost electricity.  PURPA[3] provided exemptions from PUHCA for cogeneration plants such as those mentioned here and for qualifying small producers using renewable resources.  The latter exemption was almost in anticipation of the growth of rooftop solar photovoltaics (PV).  These facilities needed utility markets into which to sell their surplus, which generally resulted in individually negotiated contracts.  The creation of the ISO[4] concept can be considered an outgrowth of the desire of these large independent power producers (IPPs) for a broader, more competitive market, instead of the monopsony into which they had been selling.  ISOs now have a footprint covering about 2/3 of the lower 48 states.

ISOs generally deal only with larger blocks of power, some requiring participants to aggregate at least 25 MW of generation or load.  ISO control generally does not reach down into the distribution system.  The continued growth of labor costs and the continued decline of automation costs have allowed the SCADA concept to become economic on the distribution grid, including down to the customer level.  This expansion of SCADA to the distribution system will soon require changes in the way the distribution system is priced, both for purposes of equity and for Edison’s purpose of controlling the system.

  • The growth in rooftop PV is dramatically reducing the energy that utilities transport across their distribution systems.  This energy reduction generally reduces utility revenue and utility income.  Under conventional utility rate making, the result is an increase in the unit price charged by the utility for that service.  Some pundits point out that the owners of the rooftop PV panels are generally richer than the rest of the population served by the utility.  These solar customers are cutting the energy they consume, though not necessarily their requirements on the utility to provide service through the same wires.  The rate making dynamics thus result in other, poorer customers seemingly subsidizing the richer customers who have chosen rooftop solar.  This seems inequitable to some.
  • The growth in rooftop PV has outstripped the loads on some distribution feeders, with reports that the generation capacity has sometimes reached three times the load on the feeder.  These loading levels cause operating problems in the form of high voltages and excessive line losses.  During periods of high voltage and excessive line loss, prices can provide an incentive for consumers to modify their behavior.  The genie seems to be out of the bottle in regard to allowing the utility to exert direct physical control over solar PV, but real time prices could provide some economic control in place of the traditional utility command and control approach.

I have discussed the need for real time pricing of the use of the distribution grid in “Net Metering:  Identifying The Hidden Costs;  Then Paying For Them,” Energy Central, 2013 September 20.[5] I have described a method in “Dynamic ‘Distribution’ Grid Pricing.”[6]

Changes in state regulations have also impacted this balance between labor costs and automation costs.  Some states now have performance incentives based on the number of outages and the typical restoration times.  The cost associated with the time of sending a line crew to close a circuit breaker now competes with the incentives to get that closure faster, through the use of automation.

In conclusion, the increase in utility automation is not so much innovation as a continuation of the historic utility practice of substituting lower cost technology for the ever increasing cost of labor.  The 1978 change in federal law led to the growth of ISOs and bulk power markets, but did not reach down to the distribution level, perhaps because of the lack of non-utility industrial support.  The growth in rooftop PV will provide the incentives for expanding the real time markets down the distribution grid to retail consumers.  Though computers indeed have gone from 1.0 (vacuum tubes), to 2.0 (transistors), to 3.0 (integrated circuits), I don’t see the current changes proposed for utilities as much more than a continuation of the competition between labor costs and automation costs.  We are still Utility 1.X, not Utility 2.0.



[1] Supervisory Control And Data Acquisition.

[2] Public Utility Holding Company Act of 1935

[3] Public Utility Regulatory Policies Act of 1978

[4] Independent System Operator

[6] A draft of this paper is available for free download on my web page, www.LivelyUtility.com

A Romp Through Restructuring

Today I presided over the monthly lunch of the National Capital Area Chapter (NCAC) of the U.S. Association for Energy Economics, with Craig Glazer, Vice President-Federal Government Policy, PJM Interconnection.  Besides announcing future events and talking about the successful NCAC field trip of October 4-5[1], I got to ask questions and comment as the luncheon moderator and President of NCAC.  I include some of those questions and comments below, along with several that were beyond what I felt like imposing on the luncheon attendees.

I liked that Craig mentioned that code words were often used in the industry, though not the ones I sometimes point out.  But when one questioner commented about the growth in distributed generation (DG), I pointed out that I look at DG as a code word for non-utility generation.  Nominally DG should be any generation on the distribution grid, but is generally used to restrict the ownership options.

Craig identified “Rates significantly above the national average” as one of the issues that drove the restructuring movement.  Unlike the children of Lake Wobegon, who are all above average, retail rates can’t be above the national average everywhere.  Thus, there are some parts of the country where restructuring was not an issue and the utilities have not been restructured.

Craig used the term “Half Slave/Half Free” to describe the case of Virginia, where the State Corporation Commission still regulates retail rates but the generation and transmission systems participate in the competitive PJM market.  I noted that one result of restructuring was that the market value of electricity in my home location of Eastern Kentucky went from very low prices to moderately low prices, at least according to one of Craig’s slides.  But Craig had already made me feel better about this by telling of his trips to Kentucky to persuade the regulators to let their utilities join PJM.  He told them that one result of the Kentucky electric companies joining PJM would be higher utilization of Kentucky’s cheap power plants.

Having joined PJM, these power plants could sell their very low cost generation (the pre-restructuring picture) at moderately low prices (the post-restructuring picture), with the differential being used to reduce the prices paid by Kentucky residents.  As I pointed out, this is an example of Craig’s term “Half Slave/Half Free,” an arrangement he himself pushed in Kentucky.  I also pointed out that a substantial portion of the country has not restructured, which was my initial thought when he mentioned the term.  So we went back to the issue that not all parts of the country would benefit from restructuring.

Craig stated that restructuring changed the risk allocation formula.  He made the point that there was no Enron rate case.  In other situations where utility investments were cratering, there were rate cases, but not with Enron in the restructured world.  Further, there was effectively not even a hiccup in the PJM bulk power market on the day that Enron collapsed, even though Enron had been a major player in the PJM bulk power market.

Craig says that capacity prices are too low.  I see capacity as a multi-year issue, requiring a multi-year solution.  Pre-restructuring, the utilities handled the variations in the need for capacity, and the value of capacity, through long term rates.  They built what they thought was needed and didn’t worry that the bulk power market went up and down; the utilities kept on trucking as vertically integrated entities.  Indeed, one of the problems that caused the California debacle of 2000/2001 was that the entire market was forced to pay the spot price of electricity.  The Texas market seems to be greatly hedged, in that when the bulk power market price went up by a factor of 10, on average, for the entire month of August 2011, the retail price hardly budged.

Craig made an excellent point in regard to the question of who decides what in the electric industry, providing a list of governmental entities.  I noticed that he did not mention the U.S. Department of Energy (of course, he was a substitute speaker, replacing Melanie Kenderdine, assistant to the Secretary of the U.S. Department of Energy, who thought she would not be allowed to speak because of the shutdown of the federal government that had ended about 24 hours before the lunch).  He also listed state legislatures but not Congress.  But then the other decision makers are the owners of the facilities.

A continuing issue that I have with regulation is tangential to Craig’s “Half Slave/Half Free” term.  His PJM operates in parallel with several other entities.  I have frequently pointed to the Lake Erie donut[2], which is the path around Lake Erie that allows electricity to flow from Chicago to New York City along two major paths, north or south of Lake Erie.  I have said that when there is unscheduled loop flow, e.g., more going north of Lake Erie than has been scheduled, there should be payment for that unscheduled flow.[3]  The same issue applies to PJM versus TVA, which have lines in parallel.  Sometimes one system is paid for the contract path but some of the electricity actually flows on the other system.  And just south of TVA is the Southern Company, providing a fourth east/west path for loop flows.  I say that a mechanism to pay for loop flows may be one of the ways to get around the transmission cost allocation and siting issues mentioned by Craig.

I note that I did not raise all of these issues during the lunch question and answer period; I spoke enough as it was.  Craig is certainly welcome to comment on this blog, as are others.



[1] See “NCAC-USAEE Overnight Field Trip of 2013 October 4-5,” 2013 Oct 07, http://www.livelyutility.com/blog/?p=233

[2] See my “Wide Open Load Following,” Presentation on Loop Flow to NERC Control Area Criteria Task Force, Albuquerque, New Mexico, 2000 February 14/15, on my web site, under publications under other publications.

[3] See my blog entry “Socializing The Grid: The Reincarnation of Vampire Wheeling,” 2011 Mar 17,  http://www.livelyutility.com/blog/?p=83

The Electric Transmission Grid and Economics

Tuesday, 2013 October 8, I went to the MIT Club of Washington Seminar Series dinner with Anjan Bose of Washington State University talking about Intelligent Control of the Grid.  Anjan began by giving two reasons for the transmission grid but then seemed to ignore that predicate in explaining what the government has been doing in regard to the grid.

The first slide identified two reasons for the electric transmission system.  The first was to move electricity from low cost areas (such as hydro-electric dams) to higher cost areas.  This is an obvious reference to economics.  The second was to improve reliability.  Anjan did not get into the discussion of how that is an economics issue, but it is.  Reliability is greatly improved by increasing the number of shafts connected to the grid.  We can produce the same amount of electricity with five 100 MW generators or one 500 MW generator.  The five units provide greater reliability but also higher costs.  The higher costs reflect forgone economies of scale, including a higher installed cost per MW, less efficient conversion of the fuel into electricity, and the need for five sets of round the clock staffs.  A transmission system allows dozens of 500 MW units to be connected at geographically dispersed locations, achieving the reliability of many shafts and the lower cost of larger generators.
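The many-shafts argument can be made concrete with a toy outage model (the 5% forced-outage rate is an assumption, and unit outages are treated as independent, which a real reliability study would not simply assume):

```python
FORCED_OUTAGE_RATE = 0.05  # assumed per-unit probability of being down

def prob_all_units_down(units, p=FORCED_OUTAGE_RATE):
    """Probability that every unit is down at once, assuming
    independent unit outages."""
    return p ** units

# One 500 MW unit: the full 500 MW is lost 5% of the time.
one_big = prob_all_units_down(1)
# Five 100 MW units: the full 500 MW is lost only 0.05**5 of the
# time, roughly 3 in 10 million.
five_small = prob_all_units_down(5)
print(one_big, five_small)
```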

But the presentation had little to do with the economics of the power grid, or with investigations into those economics.  I did notice that much of the discussion during the question and answer period touched on the cost of operating the grid, so people were indeed interested in money.

Anjan said that the financial people used different models than did the engineers who operate the system.  I have long said that we need to price the flows of electricity in accord with the physics of the system, by pricing the unscheduled flows.  The engineers and operators may plan to operate the system in a prescribed way, but the flows of electricity follow the laws of physics, not necessarily the way some people have planned.

Anjan said that deregulation[1] has caused a dramatic decline in new transmission lines, especially between regions such as into and out of Florida.  My feeling is that new transmission lines would be added more willingly if the owners of the new transmission lines would be paid for the flows that occur on the transmission lines.  For instance, twenty years ago a new high voltage transmission line in New Mexico began to carry much of the energy that had been flowing over the lower voltage transmission lines of another group of utilities.  The group of utilities called the service being provided “vampire wheeling” and refused to make any payment to the owner of the new transmission line.  The new line provided value in the reduced electrical line losses and perhaps allowed a greater movement of low cost power in New Mexico, but that value was not allowed to be monetized and charged.

I note that a pricing mechanism for the unscheduled flows of electricity would have provided a different mechanism for handling the 2011 blackout in Southern California, which began with a switching operation in Arizona.  Engineers swarmed to the area to find data to assess the root causes but were initially blocked by San Diego Gas & Electric’s attorneys, who feared that any data could be used by FERC to levy fines pursuant to the 2005 electricity act.  I remember a discussion at the IEEE Energy Policy Committee on that proposed aspect of the bill.  The IEEE EPC voted to suggest creating mandatory reliability standards.  I was the sole dissenting vote, arguing that the better way was to set prices for the unscheduled flows of electricity.  Under that approach, SDG&E and the Arizona utilities would have been punished by the market instead of risking a FERC imposed fine.



[1] I prefer to use the more accurate term restructuring, since the entire industry is still regulated, even though generation is often subject to “light handed regulation” by FERC, which approves concepts instead of specific prices.

System of Governance

On 2013 February 27 & 28, I attended the National Research Council’s workshop on “Terrorism and the Electric Power Delivery System.” Though “terrorism” was in the title of the report issued November 2012, the issues were as applicable to natural disasters as to terrorist attacks. In regard to problems on the electric delivery system I was reminded of the Yogi Berra quip, “It’s déjà vu all over again.” Except, I kept thinking, “It’s déjà vu all over again, and again, and again… .”

During the final session of the workshop, Granger Morgan of Carnegie Mellon University, the NRC panel chair, said that microgrids could only work if local utilities were disenfranchised. I had just moved up from the audience to the panel table to pass a note to Richard Schuler of Cornell University and took advantage of sitting at a microphone to challenge the need to disenfranchise local utilities in order to have effective microgrids. My thesis is that the benefits of microgrids can be achieved by real time pricing of electricity imbalances within the footprint of the microgrid, where that real time market for imbalances is operated by the local wires company. I wrote about the concept four years ago in “The WOLF in Pricing: How the Concept of Plug, Play, and Pay Would Work for Microgrids”, IEEE Power & Energy Magazine, January/February 2009[i] and in “Microgrids And Financial Affairs – Creating A Value-Based Real-Time Price For Electricity,” Cogeneration and On-Site Power Production, September, 2007[ii]. The benefits of self generation such as a combined heat and power plant can be retained by the participants within the footprint through bilateral hedging, with the actual transactions being with the franchised utility. I note that Granger Morgan’s Carnegie Mellon University is in Pennsylvania, a retail access state, and is in the footprint of PJM, an ISO that operates such a real time market. “Déjà vu.”

I wanted to pass a note to Richard Schuler because he had commented that Australian industrial consumers had noticed that bulk power prices varied inversely with frequency, mentioning a study that he had seen from the mid 1990s. I wanted to get a reference to that study because in the 1980s I had proposed to automate the concept of pricing unscheduled flows of electricity, setting the price the same way, with the price varying inversely with frequency. The concept of prices varying inversely with frequency is illustrated in the first figure, which somewhat replicates the graph Richard Schuler drew for me to illustrate his memory of the findings in Australia. My first published paper on the topic was "Tie Riding Freeloaders–The True Impediment to Transmission Access," Public Utilities Fortnightly, 1989 December 21. So, "Déjà vu all over again."

Inverse Relation between Prices and Frequency

After my comment on microgrids, Richard Schuler made an aside to me about the need to increase the capacity of the wires between pairs of participants on the microgrid because of the size of some distributed generation projects. Such upgrades are part of the responsibility of a franchised utility, but until such upgrades are made and paid for, there needs to be a way to extend the dynamic pricing to include the dynamic use of wires, as I wrote in "Dynamic Pricing: Using Smart Meters to Solve Electric Vehicles Related Distribution Overloads," Metering International, Issue 3, 2010. Now, truly, "Déjà vu all over again, and again."

Terry Boston of PJM Interconnection (and perhaps others) repeatedly commented on the need to control frequency and voltage. When FERC was investigating the concept of ancillary services in the mid 1990s, one pundit said there were 31 flavors of ancillary services. I wrote "Thirty-One Flavors or Two Flavors Packaged Thirty-One Ways: Unbundling Electricity Service," The National Regulatory Research Institute Quarterly Bulletin, Summer 1996. The two flavors I identified were active and reactive power, which respectively control frequency and voltage. "Déjà vu all over again, and again, and again."

I believe that microgrids would have the most value during the islanding of the electricity system, which might be the result of terrorism or a natural disaster. Sue Tierney of Analysis Group said that we need a system of governance.  I say that a real time pricing system would provide such a system of governance while the system is stressed, such as by a terrorist attack or by a natural disaster.

David Kaufman of DHS/Federal Emergency Management Agency asked what private actors need from the government; after all, 44 of the top 100 economies in the world are private companies, and during emergencies private actors often provide much of the relief. I believe that the government needs to allow, and perhaps operate, a system of real time prices while electric systems are operating in island mode. David Kaufman also told the story of visiting Haiti after the earthquake and being amazed by the entrepreneurship of kids. They took batteries from abandoned cars and provided a cell phone charging service. Batteries could be used on a microgrid during an emergency if appropriate real time prices were available for charging and discharging the battery.

Miles Keogh of the National Association of Regulatory Utility Commissioners said that better competitive markets are very important over short periods of time, after which other systems need to take over.  The real time pricing mechanism that I described in many of my papers could function well on an island electric system, at least until the island was reconnected to the grid and another pricing mechanism could take over.

Following Terry Boston’s admonition to control frequency and voltage and using the concept mentioned by Richard Schuler, I say that we can have a system of prices that vary inversely with frequency.  As I discuss in various papers including “Markets Instead of Penalties: Creating a Common Market for Wind and for Energy Storage Systems,” 8th CMU Electricity Conference: Data-Driven Management for Sustainable Electric Energy Systems, Carnegie Mellon University, Electrical & Computer Engineering and Engineering Public Policy Departments, Pittsburgh, Pennsylvania, 2012 March 12-14, my current thinking is that the shape of the inverse relation between prices and frequency should be a negative hyperbolic sine, such as presented in the next figure.  The hyperbolic sine is symmetrical about a price of zero and in this case a frequency of 60 Hertz.  The price needs to be offset from zero, such as with a price that varies inversely with time error.

Negative Hyperbolic Sine Relation between Prices and Frequency

The hyperbolic sine gets the price high enough to incent private actors who own backup generators to dump electricity into the island grid when frequency is perilously low. I note that backup generators are notoriously expensive to operate, especially when the replacement of fuel is problematic. If the price is changing every minute or every five minutes, the price will also drop when there are too many such backup generators or too many solar photovoltaic systems on the line. The hyperbolic sine will also push the price negative when system frequency gets to be too high. This swing in prices between high and low (or negative) would provide an incentive for batteries to discharge and charge, as I wrote last year in "Reply Comments Of Mark B. Lively, Utility Economic Engineers, On The Need To Create A Program To Price Imbalances," Rulemaking 10-12-007: Order Instituting Rulemaking Pursuant To Assembly Bill 2514 To Consider The Adoption Of Procurement Targets For Viable And Cost-Effective Energy Storage Systems, Public Utilities Commission Of The State Of California, 2012 February 13.
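The price curve described above can be sketched in a few lines of code. This is a minimal illustration, not the author's actual formula: the function name, the scale parameter, and the steepness parameter are my own assumptions, and the time-error offset discussed in the text is omitted.

```python
import math

def imbalance_price(freq_hz, scale_price=50.0, steepness=2.0, nominal_hz=60.0):
    # Price is a negative hyperbolic sine of the frequency deviation:
    # well below 60 Hz the price climbs steeply (rewarding injections),
    # well above 60 Hz it goes steeply negative (rewarding withdrawals).
    # scale_price and steepness are illustrative placeholders.
    return -scale_price * math.sinh(steepness * (freq_hz - nominal_hz))

print(imbalance_price(59.5))  # positive: shortage, backup generators paid to inject
print(imbalance_price(60.0))  # zero at nominal frequency (before any offset)
print(imbalance_price(60.5))  # negative: surplus, loads and batteries paid to absorb
```

Because sinh is convex away from zero, the price grows faster than linearly as frequency strays further from 60 Hz, which is what makes even expensive-to-run backup generators worth starting during a severe shortage.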

As I said, “It’s déjà vu all over again, and again, and again … .”


[i] Most of the articles, papers, and comments identified in this blog are available on my website, LivelyUtility.com.

[ii] http://www.cospp.com/articles/article_display.cfm?ARTICLE_ID=307889&p=122

Oil Storage

During the Arab oil embargo of 1973, some people speculated that the US had a strategic petroleum reserve in the form of gasoline sitting in the driveways of most suburban homes.  The speculation was that many people made a point to refill their gas tanks as soon as ¼ of the tank had been consumed.  At that rate, the average amount of gasoline in this mobile storage was 7/8 of the tank.  By some calculations that was the equivalent of a month’s usage of gasoline.  Whether the mobile storage was indeed the equivalent of a month’s usage of gasoline or was much less, the storage capability was quite large.
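The 7/8 figure is simple averaging: with steady usage and refills whenever the level falls to 3/4 of capacity, the tank level is uniformly spread between 3/4 and full, so the average is the midpoint. A sketch of the arithmetic (the function name is mine):

```python
def average_tank_fraction(refill_at):
    # With steady usage and a refill whenever the level hits `refill_at`
    # (a fraction of capacity), the level is uniformly distributed
    # between refill_at and 1.0, so the average level is the midpoint.
    return (1.0 + refill_at) / 2.0

print(average_tank_fraction(0.75))  # 0.875, i.e. 7/8 of a tank on average
```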


During the question and answer period after Adam Sieminski, Administrator of the US Energy Information Administration, talked to the National Capital Area Chapter of the US Association for Energy Economics on 2012 October 19 on the EIA Winter Fuels Outlook, I asked Adam about the possibility of oil supply interruptions in the Northeast, which is the area most heavily dependent on residential heating oil. I included in my question a reference to the gasoline shortage in California that had pushed gasoline prices there more than 50 cents a gallon above the national average.


Part of Adam's response included a discussion of the high elasticity of demand for gasoline, the time it took for tankers to move gasoline from other parts of the country or from overseas, and the fact that historically such price spikes lasted about five or six days, much less than the fourteen days necessary to ship gasoline the requisite distance. Later, I wondered about my musings above on the mobile inventory of gasoline.


Do people respond to gasoline price spikes by a partial depletion of their individual mobile inventory? Does the average gas tank level drop from 7/8 to ¾ to ½, or even lower, through partial fill-ups? After all, some newspaper articles included interviews of workers who changed their fueling practices to include partial fill-ups.


How could we estimate the size of the partial drawdown of this mobile strategic petroleum reserve? Or even the size of the mobile strategic petroleum reserve before the drawdown? Does EIA have sufficiently fine data to make these estimates? And how would a drawdown of this mobile reserve affect the elasticity estimates that Adam identified?
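One back-of-the-envelope framing of the question, with entirely hypothetical fleet numbers (the vehicle count and tank size below are placeholders, not EIA data):

```python
def mobile_reserve_drawdown(vehicles, tank_gallons, avg_before, avg_after):
    # Gallons released nationwide when the fleet's average tank level
    # drops from avg_before to avg_after (fractions of a full tank).
    return vehicles * tank_gallons * (avg_before - avg_after)

# Hypothetical: 250 million vehicles with 15-gallon tanks, average level
# falling from 7/8 of a tank to 1/2 of a tank through partial fill-ups
released = mobile_reserve_drawdown(250e6, 15, 7 / 8, 1 / 2)
print(f"{released / 1e9:.2f} billion gallons")  # 1.41 billion gallons
```

Estimating the real drawdown would require the fill-up level data the questions above ask about; the point of the sketch is only that even a modest change in average tank level moves a very large volume of gasoline.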

Economic Failures Contribute to Indian Grid Blackouts

On 2012 July 30 and July 31, India experienced massive blackouts on its electricity grid. The first blackout was early in the morning of July 30 and affected only the Northern Region. The second failure was at midday on July 31 and affected the Northern Region, the Eastern Region, and the Northeastern Region. The Western Region, though interconnected with the Northern, Eastern, and Northeastern Regions in the NEW Area, survived, as did the Southern Region, which is not interconnected synchronously with the NEW Area.

Various comments have been made about the events that led up to the blackouts. This blog entry will only discuss some failures of the economic systems that contributed to the blackouts.


Prices of Inadvertent Didn’t Reflect Security Issues, That Is, No Locational Marginal Prices

Beginning in 2002, India implemented its Availability Based Tariff (ABT), which included the creation of a market for imbalances, Unscheduled Interchange (UI). ABT pricing of UI uses a geographically uniform price. I said early on that areas that could be at risk of a blackout due to a power deficit after a transmission failure should have higher prices than other regions, those with a power surplus.

Other pundits have suggested that utilities in the Northern Region ignored operators' requests to reduce load because the energy price was so low that there were no economic consequences of taking too much electricity. The economic system failed the Indian electric network by not providing sufficient monetary pain for ignoring operators' requests.

A rigorous locational pricing plan could have produced that monetary incentive. I have not heard that all of the NR utilities were drawing more power than the amount that had been scheduled. A feature of ABT pricing of UI is an incentive for some utilities to draw less power than the amount that has been scheduled, not just for utilities to reduce their draw to the scheduled amount. Thus, a locational pricing plan would have led some NR utilities to underdraw and help stabilize the system.


High Inadvertent Prices Could Not Incent Backup Generators to Assist the Grid

Customer owned backup generators can do double duty. The standard use of a backup generator is providing electricity to the owner when there is a rotating blackout that affects the owner. When rotating blackouts are not affecting the owner, the backup generator can provide electricity to the grid when the value of electricity on the grid is high enough to pay for the fuel cost of the backup generator.

This second use of backup generation requires

  • the backup generator being able to operate synchronously with the grid; and
  • metering to identify the amount of energy provided to the grid to displace the high value UI power that the utility would otherwise be purchasing on a real time basis.


Power Deliveries to Farmers Aren’t Structured so Farmers Can Help Save the System

Indian farmers do not participate in the market for electricity, generally receiving several hours of free service. Giving Indian farmers a fixed subsidy instead would allow them to participate in the UI market for the number of hours for which they need electricity. This would give the farmers incentives to help the system when there is a large shortage of electricity. From the farmer's perspective, the farmer would be able to use electricity for more hours, since the average price could be cheaper than the price upon which the subsidy was predicated.
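The arithmetic behind that last point can be sketched with hypothetical numbers (the subsidy amount and prices below are illustrative only, not actual tariff figures):

```python
def affordable_hours(fixed_subsidy, avg_price_per_hour):
    # Hours of pumping a fixed cash subsidy buys at a given average price.
    return fixed_subsidy / avg_price_per_hour

# If the subsidy was predicated on a price of 10 per hour, it buys 100 hours;
# if the farmer shifts use to hours when the UI average price is only 8,
# the same subsidy buys 125 hours.
predicated = affordable_hours(1000, 10)
shifted = affordable_hours(1000, 8)
print(predicated, shifted)  # 100.0 125.0
```

The farmer gains hours by avoiding the high-priced shortage periods, which is exactly when the system most needs the load to back off.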

Grid Security in India

On 2011 February 26, S.K. Soonee, CEO at India's Power System Operation Corporation Limited, posted on LinkedIn's Power System Operator's Group a link to a paper written by the staff of India's Central Electricity Regulatory Commission. "Grid Security – Need For Tightening of Frequency Band & Other Measure" can be accessed at http://www.cercind.gov.in/2011/Whats-New/AGENDA_NOTE_FOR_15TH_CAC_MEETINGHI.pdf. Through LinkedIn I provided the following comments.

I am dismayed that the simple elegance of the UI pricing vector, as shown in the two diagrams for 2002-2004 and 2007-2009, will be replaced by the convoluted vectors on 2011 May 3. It seems that there is a potential for mischief in having a multitude of simultaneous prices, with an undue accumulation of money by the transmission grid as some UI out of the grid is priced at very high prices at the same instant that other UI into the grid is being priced at a lower price. This is an unwarranted arbitrage for the transmission system.

The HVDC links between S and NEW could provide a warranted arbitrage situation, where the grid with the higher frequency delivers to the grid with the lower frequency. The different frequencies would result in different prices, with the price differences providing some financial support for the HVDC links.

I was surprised that there was no mention made in regard to Figures 1 and 2 as to when UI pricing started, and how that UI onset resulted in a narrowing of the spread between daily high frequency and daily low frequency. I believe these figures could be well supplemented by a presentation of histograms of the monthly frequency excursions, and how those histograms have changed over time. A numeric approach would include monthly average frequency and monthly standard deviation from 50 Hertz, a statistic for which you have a special name that I forget.
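The numeric approach suggested above might look like the following sketch, which computes the monthly average frequency and the root-mean-square deviation from 50 Hertz for a list of frequency samples. The function name and the sample values are mine, and RMS deviation from nominal is my guess at the statistic with the special name.

```python
import math

def monthly_frequency_stats(samples_hz, nominal_hz=50.0):
    # Average frequency, and RMS deviation measured from the nominal
    # frequency (not from the sample mean), for one month of samples.
    n = len(samples_hz)
    mean = sum(samples_hz) / n
    rms_dev = math.sqrt(sum((f - nominal_hz) ** 2 for f in samples_hz) / n)
    return mean, rms_dev

# Illustrative samples only
mean, dev = monthly_frequency_stats([49.7, 49.9, 50.0, 50.2, 49.8])
print(round(mean, 2), round(dev, 3))
```

A narrowing of this RMS deviation month over month would show numerically what the histograms would show visually: whether UI pricing tightened the frequency band.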

Parts of the Agenda Note discuss the serious impact of very short periods of frequency excursions. These short periods of concern are much shorter than the 15 minute periods used for determining UI. The various parts of the Agenda Note could be harmonized by reducing the size of the settlement period for UI from 15 minutes to 5 minutes or 1 minute.

There is a discussion of limits on the amount of UI power that a participant can transact. I question the need for such limits. As a participant increases the UI power being transacted, the price will move in an unfavorable direction, providing an additional financial incentive for the participant to reduce UI power transactions. For example, a SEB that is short of power and is buying UI faces higher prices as the UI transaction amount increases. These higher prices provide a multiplicative incentive for the SEB to reduce its shortage and its purchase of UI.
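The self-limiting incentive described above can be illustrated with a hypothetical linear price response; the base price and slope below are made-up numbers, not ABT parameters.

```python
def ui_purchase_cost(shortage_mw, base_price=3.0, slope=0.02):
    # Hypothetical UI price that rises with the buyer's own shortage,
    # so total cost grows faster than linearly with the shortage and
    # the marginal cost of the last MW is always the highest.
    price = base_price + slope * shortage_mw
    return shortage_mw * price

c_100 = ui_purchase_cost(100)  # 100 * (3.0 + 2.0) = 500.0
c_200 = ui_purchase_cost(200)  # 200 * (3.0 + 4.0) = 1400.0
print(c_100, c_200)  # doubling the shortage nearly triples the cost
```

Because the cost curve is convex, the penalty for a growing shortage is built into the price itself, which is the argument against imposing separate quantity limits on UI transactions.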

Many systems plan for the biggest credible single contingency, which the report treats as the single largest unit. The report shows that entire plants have gone out at the same time, suggesting that the biggest credible single contingency is a plant, not a generating unit.

As an aside, in the listing of the generating capacity by size of generating unit, my experience in the US suggests that the list understates the number of generators. There would be many times the identified number of plants if the list included captive generators such as backup generators, which may be as small as a few kW. Again, based on my experience in the US, the total capacity of those unidentified generators would rival the total capacity of the identified generators.

I wonder why the under frequency relays in the East are set lower than the relays in the other regions.

I don't understand the terminology that "Nepal has several asynchronous ties with the Indian grid (AC radial links)." My interpretation is that Nepal has a disjoint system, with each section tied synchronously to a different location in India, making the sections synchronous to each other through their links to India.