The Goldilocks Dilemma

An old posting about why intermittency is not a big deal came to my attention today.  I re-read some of what had been said, especially since I had just sent out a paper on the topic yesterday.

I believe that the value of electric “energy” is often overstated.  The author of the old posting, Chris Varrone, inadvertently acknowledged this when he wrote

However, the energy in wind is worth 100% of the energy in nuclear (or anything else) in the spot market; wind energy in the day-ahead market may be worth a little less, but this can be “firmed” using energy trading desks or by using other assets in the operator’s fleet.

If the day to day differential can be handled by firming with other assets, then the value of the electricity is not just energy.  It is not worth debating what to call this other value, but a substantial part of the value in the spot market is something other than energy.

As to The Goldilocks Dilemma, the paper I sent out yesterday, I began by asking

Is the price paid to dispatchable generation too high, too low, or just right for intermittent generation?

I then answer

Though intermittent generators often argue that they should receive the same price as dispatchable generation and some utilities argue that they should pay less to intermittent generators, sometimes intermittent generators should face a higher price than dispatchable generators, such as when intermittent generation is part of the market during instances of extreme shortage.

The entire paper is available on my web site, the companion to this blog site.  Look for the hot link to the library near the bottom of the first page.  A hot link for the article is near the bottom of the library index, in the section called drafts.

Making Electricity Ubiquitous, Ubiquitous But Not Necessarily Cheap

I attended the 37th International Association for Energy Economics International Conference 2014 June 15-18 in New York City.  During the Wednesday “Dual Plenary Session: Utility Business Model” Michel Derdevet, Secretaire General, Electricite Reseau Distribution France, the French distribution utility, raised the issue of getting electricity to the billion people in the world who don’t have access to electricity.

During the audience discussion period, I raised the concept of a different business model, unregulated micro-grids owned by non-utilities.  I mentioned that a friend thought this concept could be applied to the nearly ubiquitous cell phone tower.  Cell phone towers require a power supply.  My friend thought that the provider of a cell phone tower should be granted a 10-year license to provide power on an unregulated basis.

My thought was that the owner of the cell phone tower should be allowed to provide electricity and compete against anyone else who wanted to provide electricity.  Competition can drive down prices better than regulation can.  Regulation aimed at making electricity ubiquitous would just stifle innovation.

Over the years, newspapers and newsmagazines have had pictures of the electric grid in some countries that look like a spider’s web on LSD.  Local entrepreneurs buy their own diesel generators and provide “backup” electricity to their neighbors over wires strung in parallel with or across the wires of the local utility.  The electricity is “backup” electricity in that it is used only when the local utility doesn’t have enough central station power to provide electricity to the entire national grid.  The utility then blacks out some neighborhoods.

The neighbors buy only a small amount of “backup” electricity from the entrepreneur because the “backup” electricity is so expensive, being produced by diesel generators, which are less efficient and use a premium fuel.  The “backup” electricity is used for lights and a fan at night, perhaps for a refrigerator, not for those devices that might otherwise be electricity guzzlers.[1]  When the utility once again has enough power, competition drives the price down, putting the high cost entrepreneur out of business.

These micro-grids, whether run by the owner of the cell phone tower or by a neighborhood entrepreneur, can make electricity ubiquitous, even if the electricity is not cheap.  After all, Michel Derdevet said to me after his panel was done that some people were pushing for ubiquitous power supplies so they could make money selling electricity, not just for an eleemosynary purpose.  Thus, the power might not be cheap.

During the plenary session, Jigar Shah, Founder, SunEdison, LLC, claimed that California, with the highest electricity rates in the U.S., does not have the highest energy bills, because California residential consumers use less electricity. [2]  This is consistent with my comment about the lower usage of “backup” electricity relative to central station power.  However, elasticity may not be the only explanation for the lower consumption in California.  There is also the issue of climate and rate design.

Standard rate design practices also result in higher prices for customers with smaller consumption levels. The standard residential rate design has a monthly customer charge (say $10/month) and a commodity charge (say $0.09/KWH).  These rate levels nominally reflect the way that a utility will incur costs, a fixed cost per customer and then a cost that varies with the amount of energy the customer consumes.  A customer using 100 KWH per month would have a monthly bill of $19 and an average cost of $0.19/KWH.  A customer using 1000 KWH per month would have a monthly bill of $100 and an average cost of $0.10/KWH.  Thus, an area of the country with lower electricity consumption can be expected to have higher average cost and lower overall bills.
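The two-part rate arithmetic above can be sketched in a few lines (a minimal sketch; the $10 customer charge and $0.09/KWH commodity charge are the illustrative rates from the paragraph, not any particular utility's tariff):

```python
# Illustrative two-part residential rate (assumed values from the text).
CUSTOMER_CHARGE = 10.00   # $/month, fixed cost per customer
COMMODITY_CHARGE = 0.09   # $/KWH, cost that varies with energy

def monthly_bill(kwh):
    """Total monthly bill for a given energy consumption."""
    return CUSTOMER_CHARGE + COMMODITY_CHARGE * kwh

def average_cost(kwh):
    """Average cost per KWH, with the fixed charge spread over usage."""
    return monthly_bill(kwh) / kwh

# The small customer pays nearly twice the average unit cost of the
# large customer, even though both face identical rates.
print(monthly_bill(100), average_cost(100))     # $19 bill, $0.19/KWH
print(monthly_bill(1000), average_cost(1000))   # $100 bill, $0.10/KWH
```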

The micro-grid could be operated by the above-mentioned owner of the cell phone tower or by an entrepreneur.  I like to think of innovators who decide to install their own electric systems and then share with their neighbors, which is a form of the first type of owner.  The generation is put into place for a purpose other than selling electricity, but if the sales are lucrative enough, the owner may decide to upsize his generating capacity until the owner looks like a utility.

A friend built such a system during the 1980s while in the Peace Corps in Africa.  He had an engineering degree from MIT.  So he took the concepts learned dabbling in the MIT labs in Cambridge, Massachusetts and applied the concepts in the field in Africa.  Later my friend worked for a Westinghouse subsidiary building power supplies for remote microwave towers (the same concept as a cell phone tower) and remote schools.  His company used mostly renewable energy, such as solar and wind, because diesel was so expensive in the remote areas he electrified.  Diesel was used to top off the associated batteries when there had been insufficient renewable energy supplies for extended periods of time.  It was during this period of his life that I met this fellow MIT alumnus.

Yes, we can make electricity ubiquitous.  But it will take competition to make it cheap, or at least not quite so expensive.

[1] As an aside, the consumption reduction during periods when “backup” electricity is being used demonstrates the concept of the elasticity of consumption.  When prices go up, consumption goes down.

[2] Looking at Table 6 of the EIA’s utility data for 2012, the average price to residential consumers in California was $0.153/KWH, or 8th most expensive.  The average consumption for residential consumers in California was 6,876 KWH/year, or the 3rd lowest after Hawaii and Alaska.  The average bill for residential consumers in California was $1,053/year, or the 10th lowest in the U.S.

Electric Demand Charges: A Lesson from the Telephone Industry

The only ad with prices that I remember from 50 years ago was AT&T’s offering of a three minute coast to coast telephone call for $1.00.  With the inflation that we have seen over the last 50 years, one would expect that a coast to coast call would now be at least $10.00 for three minutes.  Instead, most telephone bills show a monthly service fee and no itemization for individual calls.  Automation has allowed the telephone companies to do away with most telephone operators, who had accounted for a significant portion of the variable cost of making long distance telephone calls.  The principal cost is now the investment in the wires, which doesn’t change with the number of calls that are carried.  So, most carriers now charge a monthly fee and little or no charge per call.  Perhaps it is time for the electric industry to go that way?


The restructuring of the electric industry has generally separated the distribution wires function from the generation[1] and transmission[2] functions for most customers of investor owned electric utilities.  This restructuring puts such electricity customers into the same position as their counterpart customers of municipally and cooperatively owned utilities.  Municipally and cooperatively owned utilities have generally been distribution only utilities, buying generation and transmission services from others, instead of being vertically integrated like most investor owned electric utilities.


The restructuring of the electric industry has resulted in most customers being served by a distribution company which has very little variable cost, much like the telephone companies.  A significant distinction is that telephone lines handle one call at a time.  The telephone line is either in use or is not in use.  In contrast, electric utilities provide a continuously variable service.  The customer may be taking 10 watts (a small light bulb) or 10 kilowatts (running the A/C, water heater, and stove at the same time), or any amount in between.  The telephone company has the wires to serve the customer’s demand, whenever that call occurs[3].  The electric distribution company similarly has the wires to serve the customer’s demand, whenever that demand occurs.  While the telephone company has customers on a binary basis (they are either a customer or are not a customer), the electric distribution company serves its customers on a continuous basis (they might be very small customers who never use more than 10 watts or very large customers that might use up to 100 MW).


The binary basis of telephony customers allows the telephone companies to charge their customers a specific amount on a monthly basis.  The continuous nature of the size of electric services suggests that electric distribution companies charge their customers a price based on the size of the electric service used by the customer.  For commercial and industrial customers, electric utilities have long included in their tariffs a demand charge that depends on the maximum power that the customer used during the billing period[4].  Typically such demand charges will be based on the average consumption over some 15-minute period.
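As a sketch of how such a billing demand might be computed from interval data (assuming the meter records energy in KWH per 15-minute interval; multiplying by 4 converts each interval's energy to an average rate in KW):

```python
def billing_demand(interval_kwh):
    """Billing demand in KW: the highest average power over any single
    15-minute interval in the billing period.  Each interval's energy
    (KWH) times 4 intervals/hour gives its average rate (KW)."""
    return max(energy * 4 for energy in interval_kwh)

# Four 15-minute readings; the brief peak (A/C, water heater, and
# stove running together) sets the demand, not the average consumption.
reads = [0.5, 1.0, 2.5, 0.8]    # KWH in each 15-minute interval
print(billing_demand(reads))     # 10.0 KW
```

This is the simple maximum-demand form; the contract, thermal, and ratcheted variants mentioned in footnote [4] would modify the calculation.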


Cost has been a significant factor limiting the use of demand charges to commercial and industrial customers.  Demand meters are more costly to manufacture, in that they do more than just accumulate the amount of energy that goes through the meter.  Demand meters are also more expensive to read, in that the meter reader has to note two quantities and has to manually reset the demand register.  These two cost factors are lesser issues in regard to determining residential demand now that the industry has moved significantly to Automated Meter Reading (AMR) and to Advanced Metering Infrastructure (AMI[5]), both of which automatically collect consumption data, including for 15-minute intervals.


Historically, residential demand charges were thought to produce an insignificant shift of revenue among residential customers.  The reasoning was that, though residential customers differ in size, they have similar load patterns.  A customer using 1,000 KWH a month would have ten times the demand of a customer using 100 KWH a month.  Implementing a demand charge that collected an amount equal to 20% of the energy revenue collected from the larger customer would also collect an amount equal to 20% of the energy revenue collected from the smaller customer.  There would be no revenue shift among these residential customers, at least for consumption.  However, the utility would have had to install more expensive meters, which would have increased the monthly customer charge of both customers without providing a significant benefit to the utility or to the customers.
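The no-revenue-shift reasoning can be checked with a quick calculation (all numbers assumed for illustration: a $0.10/KWH energy rate, demand proportional to energy at 1 KW per 250 KWH, and a demand charge sized to collect 20% of the larger customer's energy revenue):

```python
ENERGY_RATE = 0.10    # $/KWH (assumed for illustration)

# Similar load patterns: demand scales with energy, here 1 KW per
# 250 KWH, so the 1,000 KWH customer is exactly 10x the 100 KWH one.
small_kwh, small_kw = 100, 0.4
large_kwh, large_kw = 1000, 4.0

# Size the demand charge to collect 20% of the larger customer's
# energy revenue.
demand_rate = 0.20 * ENERGY_RATE * large_kwh / large_kw   # $/KW

# Demand revenue as a share of energy revenue for each customer:
share_small = demand_rate * small_kw / (ENERGY_RATE * small_kwh)
share_large = demand_rate * large_kw / (ENERGY_RATE * large_kwh)
print(share_small, share_large)   # identical shares: no revenue shift
```

Because the shares are identical, the demand charge merely relabels 20% of each bill; the shift appears only when load patterns stop being similar.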


The move to AMR and AMI has reduced the cost of determining the demand for residential customers.  Now the cost of determining metered demand is not an issue in differentiating between customers with different consumption patterns.  Customers who should be paying a demand charge equal to 30% of their energy payments can be distinguished from customers who should be paying a demand charge that is only 10% of their energy payments.  Further, on site generation has changed the paradigm that residential customers have similar load patterns, so that now the industry knows that there are the 30% customers versus the 10% customers and can bill them appropriately.  Indeed, for houses with sufficient on-site generation, the revenue from the demand charge could be several times the revenue from the energy charge, especially when the energy charge vanishes for a net zero home.

The growth in AMR and AMI along with the growth in residential on-site generation makes this an appropriate time for restructuring residential tariffs to include a demand charge to collect the cost of the distribution utility owning the power lines.  The energy charge should continue to collect the cost of generation and transmission, though the energy charge should be time differentiated to reflect the real time value of generation and transmission, as well as the associated energy losses.

[1] The creation of Independent System Operators (ISOs) is alleged to have brought competition to the generation sector of the electric industry.  However, many distributed generators, such as roof top solar, do not experience the real time market prices set by their local ISO.  This distorts the market for distributed generation.

[2] The creation of ISOs is also alleged to have brought competition to the transmission market.  But ISOs compensate many transmission lines on a cost of service basis, through a monthly fee.  Though they charge geographically differentiated prices based on line losses and line congestion, they generally don’t compensate for loop flow or parallel path flows, such as those PJM imposes on TVA and on the Southern Company, both of which have lines in parallel to PJM.

[3] Telephone customers occasionally receive a busy signal, indicating that the called party is using his/her phone.  More rarely, customers will receive a circuits busy signal, indicating that intermediate wires are in full use, not that the called party is using his/her phone.

[4] Demand charges come in a variety of forms including contract demand, thermal demand, and ratcheted demands, a distinction beyond the scope of this discussion.

[5] AMI is generally distinguished from AMR in that AMI generally includes the ability to communicate both ways, from the meter to the utility and from the utility to the meter/customer location.  The ability to communicate from the utility to the meter allows the utility to control devices that the customer has opted to put under the utility’s control such as electric water heaters, air conditioning compressors, and swimming pool pumps and heaters.

Net Metering–Morphing Customers Who Self Generate

The U.S. Public Utility Regulatory Policies Act of 1978 started a flood of non-utility generation, initially a few very large cogeneration plants and more recently a large number of small roof top solar generators.[1]  The rapid growth in the number of small roof top solar generators requires the electric industry to develop a pricing plan that is fair to traditional customers as well as to the hybrid customers, those still connected to the grid but with some self generation.

Electric utilities support their pricing decisions with class cost of service studies (CCOSS).  The CCOSS allocates the utility’s revenue requirement[2] to groups of customers, called classes.  Classes of customers are claimed to be homogeneous, such as being of a similar size or, more often, having similar load patterns.

Some costs in a CCOSS are allocated based on the number of customers, perhaps weighted by the cost of meters and services.  Fuel costs are allocated based on energy through the meter, though often weighted by the losses incurred to reach the meter.  A large portion of the costs are allocated based on demand, the amount of energy used by the class during the times of maximum stress on the utility, or at times of maximum stress upon portions of the utility, such as the generation, transmission, or distribution systems.  Utilities are concerned about recovering these demand related costs as customers morph from being full requirements customers to being hybrid customers.

Electric utilities have long alleged that the homogeneity of residential load patterns allowed the utility to use energy meters, often called watt-hour meters, to determine how much each residential customer should pay each month.  The logic is that the allocation process brought costs into the rate class based on the customer’s demand.  Further, homogeneity means that the amount of energy through the meter is proportional to the customer’s demand.  The utility could collect roughly the right amount of money from each residential customer by charging residential customers based on their energy consumption[3] instead of charging residential customers based on the demand.

Charging customers based on energy allowed utilities to reduce substantially the cost of owning and reading meters without significantly distorting the revenue to cost ratio for each customer.  At least it did until roof top solar substantially reduced the amount of energy that goes through the meter without necessarily reducing the customer demand.  Thus, with roof top solar, the revenue collected from the customer goes down greatly while the costs brought in by the customer demand go down only slightly.
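The mismatch can be illustrated numerically (all values assumed for illustration: an energy-only rate that recovers the wires cost, and a solar customer whose metered energy falls 80% while peak demand falls only 10%):

```python
ENERGY_RATE = 0.12   # $/KWH, the rate that collects the wires costs
DEMAND_COST = 8.00   # $/KW, the demand-related cost actually incurred

# The same customer before and after installing roof top solar:
before_kwh, before_kw = 1000, 5.0
after_kwh, after_kw = 200, 4.5   # energy -80%, peak demand only -10%

revenue_drop = 1 - (ENERGY_RATE * after_kwh) / (ENERGY_RATE * before_kwh)
cost_drop = 1 - (DEMAND_COST * after_kw) / (DEMAND_COST * before_kw)

# Revenue falls far faster than the demand-related cost it recovers.
print(f"revenue down {revenue_drop:.0%}, demand cost down {cost_drop:.0%}")
```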

The growth in roof top solar coincides with the growth of Advanced Metering Infrastructure (AMI).  AMI often includes automatic meter reading and interval metering.  Automatic meter reading generally means replacing the person walking door to door with equipment.  The carrying cost of the equipment is often less than the cost of the human meter reader, allowing AMI to pay for itself.  Interval metering means collecting the amount of energy delivered during small time intervals, generally one hour (24×7), though sometimes on an intra-hour basis.  These interval readings are the demands in the CCOSS.

The intra-hour meter readings made possible by AMI would allow electric utilities to charge all residential customers based on their maximum demands, the determinant used in CCOSS to allocate costs to customer classes.  No longer would the utility have to rely on homogeneity assumptions in regard to residential customers.  The demand charge permitted by AMI would reduce the disparity between the lower revenue to cost ratio for residential customers with roof top solar relative to the revenue to cost ratio of standard residential customers.

[1] See

[2] the amount of money the utility needs to collect each year to continue functioning

[3] with a very inexpensive watt-hour meter

Utility 2.0 or Just Utility 1.X

On Tuesday, 2013 October 29, I attended a discussion of the report “Utility 2.0: Building the Next-Gen Electric Grid through Innovation.”  I left feeling that the innovations discussed are just more of the same, just as I have often described the smartgrid as SCADA[1] on steroids.  The innovations are not creating Utility 2.0 as much as making slow changes to the existing utility structure, just varying the X in Utility 1.X.

Electric utilities began automating the electric system as soon as Edison started his first microgrid, the Pearl Street Station.  At one time, an operator would read a frequency meter to determine the balance between supply and demand.  In the earliest days, Edison had a panel of light bulbs that would be switched on and off to maintain that balance, which was a strange form of load management.  The operator would also be able to vary the generation by changing the water flow to a hydro-turbine, the steam from the boiler, and the fuel into the boiler.  Edison invented control mechanisms that were cheaper than the labor costs of the operator, control mechanisms that his companies also sold to other utilities.  These control mechanisms can be considered to be some of the first SCADA systems.  As the control mechanisms and telephony got cheaper and labor became more expensive, more labor saving devices could be installed.  The policy of having an operator at every substation was replaced by remote devices, lowering the cost of utility service.  The smartgrid concept is just more of the same, as computers become cheaper and faster, remote metering less expensive, and remote control easier to accomplish.

The true quantum change in utility operations occurred in federal law.  PUHCA[2] effectively prohibited private individuals from selling electricity to a utility, by defining the seller to be a utility, subject to utility type regulation and to prohibitions on non-utility operations.  Because of PUHCA, Dow Chemical operated its chemical plants as the ultimate microgrid, running asynchronously and unconnected to local utilities.  Dupont installed disconnect switches that would separate its microgrid chemical plant from the local utility if power began to flow out of the plant.  International Power and Paper became International Paper.  Exxon intentionally underinvested in its steam plants, limiting its ability to produce low cost electricity.  PURPA[3] provided exemptions from PUHCA for cogeneration plants such as those mentioned here and for qualifying small producers using renewable resources.  The latter exemption was almost in anticipation of the growth of roof top solar photovoltaics (PV).  These facilities needed utility markets into which to sell their surplus, which generally resulted in individually negotiated contracts.  The creation of the ISO[4] concept could be considered to be an outgrowth of the desire by these large independent power producers (IPPs) for a broader, more competitive market, instead of the monopsony into which they had been selling.  ISOs now have a footprint covering about 2/3 of the lower 48 states, excluding Alaska and Hawaii.

ISOs generally deal only with larger blocks of power, some requiring participants to aggregate at least 25 MW of generation or load.  ISO control generally does not reach down into the distribution system.  The continued growth of labor costs and the continued decline of automation costs have allowed the SCADA concept to be economic on the distribution grid, including down to the customer level.  This expansion of SCADA to the distribution system will soon require changes in the way the distribution system is priced, both for purposes of equity and for Edison’s purpose of controlling the system.

  • The growth in rooftop PV is dramatically reducing the energy that utilities transport across their distribution system.  This energy reduction generally reduces utility revenue and utility income.  Under conventional utility rate making, the result is an increase in the unit price charged by the utility for that service.  Some pundits point out that the owners of the rooftop PV panels are generally richer than the rest of the population served by the utility.  These solar customers are cutting the energy they consume, though not necessarily their requirements on the utility to provide some service through the same wires.  The rate making dynamics thus result in other, poorer customers seemingly subsidizing the richer customers who have made the choice for rooftop solar.  This seems inequitable to some.
  • The growth in rooftop PV has outstripped the loads on some distribution feeders, with reports that the generation capacity has sometimes reached three times the load on the feeder.  These loading levels cause operating problems in the form of high voltages and excessive line losses.  During periods of high voltage and excessive line loss, prices can provide an incentive for consumers to modify their behavior.  The genie seems to be out of the bottle in regard to allowing the utility to exert direct physical control over PV solar, but real time prices could provide some economic control in place of the traditional utility command and control approach.

I have discussed the need for real time pricing of the use of the distribution grid in “Net Metering: Identifying The Hidden Costs; Then Paying For Them,” Energy Central, 2013 September 20.[5] I have described a method in “Dynamic ‘Distribution’ Grid Pricing.”[6]

Changes in state regulations have also impacted this balance between labor costs and automation costs.  Some states now have performance incentives based on the number of outages and the typical restoration times.  The cost associated with the time of sending a line crew to close a circuit breaker now competes with the incentives to get that closure faster, through the use of automation.

In conclusion, the increase in utility automation is not so much innovation as it is a continuation of the historic utility practice of the economic substitution of lower cost technology for the ever increasing cost of labor.  The 1978 change in federal law led to the growth of ISOs and bulk power markets, but did not reach down to the distribution level, perhaps because of the lack of non-utility industrial support.  The growth in rooftop PV will provide the incentives for expanding the real time markets down the distribution grid to retail consumers.  Though computers indeed have gone from 1.0 (vacuum tubes), to 2.0 (transistors), to 3.0 (integrated circuits), I don’t see the current changes proposed for utilities to be much more than following the competition between labor costs and automation costs.  We are still Utility 1.X, not Utility 2.0.

[1] Supervisory Control And Data Acquisition.

[2] Public Utility Holding Company Act of 1935

[3] Public Utility Regulatory Policies Act of 1978

[4] Independent System Operator

[6] A draft of this paper is available for free download on my web page,

A Romp Through Restructuring

Today I presided over the monthly lunch of the National Capital Area Chapter (NCAC) of the U.S. Association for Energy Economics, with Craig Glazer, Vice President-Federal Government Policy, PJM Interconnection.  Besides announcing future events and talking about the successful NCAC field trip of October 4-5[1], I got to ask questions and comment as the luncheon moderator and President of NCAC.  I include some of those questions and comments below, along with several that were beyond what I felt like imposing on the luncheon attendees.

I liked that Craig mentioned that code words were often used in the industry, though not the ones I sometimes point out.  But when one questioner commented about the growth in distributed generation (DG), I pointed out that I look at DG as a code word for non-utility generation.  Nominally DG should be any generation on the distribution grid, but is generally used to restrict the ownership options.

Craig identified “Rates significantly above the national average” as one of the issues that drove the restructuring movement.  Unlike the children of Lake Wobegon, who are all above average, retail rates can’t be above the national average everywhere.  Thus, there are some parts of the country where restructuring was not an issue and the utilities have not been restructured.

Craig used the term “Half Slave/Half Free” to address the case of Virginia, where the State Corporation Commission still regulates retail rates but the generation and transmission systems participate in the competitive PJM market.  I noted that the result of restructuring was that the market value of electricity in my home location of Eastern Kentucky went from very low prices to moderately low prices, at least according to one of Craig’s slides.  But Craig had already made me feel better about this by telling of his trips to Kentucky to persuade the regulators to let their utilities join PJM.  He told them that one result of the Kentucky electric companies joining PJM would be higher utilization of Kentucky’s cheap power plants.

These power plants, once in PJM, could sell the very low cost generation (the pre-restructuring picture) at moderately low prices (the post-restructuring picture), with the differential being used to reduce the prices for Kentucky residents.  As I pointed out, this is an example of Craig’s term “Half Slave/Half Free,” the concept he pushed.  I also pointed out that a substantial portion of the country has not restructured, which was my initial thought when he mentioned the term.  So we went back to the issue that not all parts of the country would benefit from restructuring.

Craig stated that restructuring changed the risk allocation formula.  He made the point that there was no Enron rate case.  In other situations where utility investments were cratering, there were rate cases, but not with Enron in the restructured world.  Further, there was effectively not even a hiccup in the PJM bulk power market on the day that Enron collapsed, even though Enron had been a major player in the PJM bulk power market.

Craig says that capacity prices are too low.  I see capacity as being a multi-year issue, requiring a multi-year solution.  Pre-restructuring, the utilities handled the variations in the need for capacity, and the value of capacity, through long term rates.  They built what they thought was needed and didn’t worry that the bulk power market went up and down; the utilities kept on trucking as vertically integrated entities.  Indeed, one of the problems that caused the California debacle of 2000/2001 was that the entire market was forced to pay the spot price of electricity.  The Texas market seems to be greatly hedged in that when the bulk power market price went up by a factor of 10, on average, for the entire month of August 2011, the retail price hardly budged.

Craig made an excellent point in regard to the question of who decides what in the electric industry, providing a list of governmental entities.  I notice that he did not mention the U.S. Department of Energy (of course he was a substitute speaker who replaced Melanie Kenderdine, assistant to the Secretary of the U.S. Department of Energy, because Melanie thought she would not be allowed to speak because of the shutdown of the federal government that ended about 24 hours before the lunch.)  He also listed state legislatures but not Congress.  But then the other decision makers are the owners of the facilities.

A continuing issue that I have with regulation is tangential to Craig’s “Half Slave/Half Free” term.  His PJM operates in parallel with several other entities.  I have frequently pointed to the Lake Erie donut[2], which is the path around Lake Erie that allows electricity to flow from Chicago to New York City along two major paths, north or south of Lake Erie.  I have said that when there is unscheduled loop flow, e.g., more going north of Lake Erie than has been scheduled, there should be payment for that unscheduled flow.[3]  The same issue applies to PJM versus TVA, which have lines in parallel.  Sometimes one system is paid for the contract path but some of the electricity actually flows on the other system.  And just south of TVA is the Southern Company, providing a fourth east/west path for loop flows.  I say that a mechanism to pay for loop flows may be one of the ways to get around the transmission cost allocation and siting issues mentioned by Craig.

I note that I did not raise all of these issues during the lunch Question and Answer period; I spoke enough as it was.  Craig is certainly welcome to comment on this blog, as are others.

[1] See “NCAC-USAEE Overnight Field Trip of 2013 October 4-5,” 2013 Oct 07,

[2] See my “Wide Open Load Following,” Presentation on Loop Flow to NERC Control Area Criteria Task Force, Albuquerque, New Mexico, 2000 February 14/15, on my web site, under publications under other publications.

[3] See my blog entry “Socializing The Grid: The Reincarnation of Vampire Wheeling,” 2011 Mar 17,

The Electric Transmission Grid and Economics

Tuesday, 2013 October 8, I went to the MIT Club of Washington Seminar Series dinner, with Anjan Bose of Washington State University talking about Intelligent Control of the Grid.  Anjan began by giving two reasons for the transmission grid but then seemed to ignore that predicate in explaining what the government has been doing in regard to the grid.

The first slide identified two reasons for the electric transmission system.  The first was to move electricity from low cost areas (such as hydro-electric dams) to higher cost areas.  This is an obvious reference to economics.  The second was to improve reliability.  Anjan did not get into the discussion of how that, too, is an economics issue, but it is.  Reliability is greatly improved by increasing the number of shafts connected to the grid.  We can produce the same amount of electricity with five 100 MW generators or one 500 MW generator.  The five units provide greater reliability but also higher costs.  The higher costs reflect lost economies of scale, including higher installed cost per MW, less efficient conversion of fuel into electricity, and the need for five sets of round the clock staff.  A transmission system allows dozens of 500 MW units to be connected at geographically dispersed locations, achieving the reliability of many shafts and the lower cost of larger generators.
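The reliability side of that trade-off is easy to quantify.  Assuming, purely for illustration, that each unit is independently available 95% of the time (a number I am inventing, not one from the talk):

```python
from math import comb

# Illustrative sketch: five 100 MW units versus one 500 MW unit,
# assuming each unit is independently available 95% of the time.
# The 95% availability figure is an assumption for illustration.

def prob_at_least(mw_needed, unit_mw, n_units, availability):
    """Probability that enough units are up to supply mw_needed MW."""
    k_needed = -(-mw_needed // unit_mw)  # ceiling division: units required
    return sum(
        comb(n_units, k) * availability**k * (1 - availability)**(n_units - k)
        for k in range(k_needed, n_units + 1)
    )

# Chance of having at least 400 MW available:
five_small = prob_at_least(400, 100, 5, 0.95)  # need 4 of the 5 units
one_big = prob_at_least(400, 500, 1, 0.95)     # need the single big unit
print(f"{five_small:.4f} vs {one_big:.4f}")    # the five small units win
```

Under these assumed numbers the five small units deliver 400 MW about 97.7% of the time versus 95% for the single unit, which is the "many shafts" reliability benefit the transmission grid lets us keep while still building the cheaper large units.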

But, the presentation had little to do with the economics of the power grid, and the investigations into those economics.  I noticed that much of the discussion during the question and answer period did talk about the cost of operating the grid, so people were indeed interested in money.

Anjan said that the financial people used different models than did the engineers who operate the system.  I have long said that we need to price the flows of electricity in accord with the physics of the system, by pricing the unscheduled flows.  The engineers and operators may plan to operate the system in a prescribed way, but the flows of electricity follow the laws of physics, not necessarily the way some people have planned.

Anjan said that deregulation[1] has caused a dramatic decline in new transmission lines, especially between regions such as into and out of Florida.  My feeling is that new transmission lines would be added more willingly if the owners of the new transmission lines would be paid for the flows that occur on the transmission lines.  For instance, twenty years ago a new high voltage transmission line in New Mexico began to carry much of the energy that had been flowing over the lower voltage transmission lines of another group of utilities.  The group of utilities called the service being provided “vampire wheeling” and refused to make any payment to the owner of the new transmission line.  The new line provided value in the reduced electrical line losses and perhaps allowed a greater movement of low cost power in New Mexico, but that value was not allowed to be monetized and charged.

I note that a pricing mechanism for the unscheduled flows of electricity would have provided a different mechanism to handle the 2011 blackout in Southern California, which began with a switching operation in Arizona.  Engineers swarmed to the area to find data to assess the root causes but were initially blocked by San Diego Gas & Electric’s attorneys, who feared that any data could be used by FERC to levy fines pursuant to the Energy Policy Act of 2005.  I remember a discussion at the IEEE Energy Policy Committee on that proposed aspect of the bill.  The IEEE EPC voted to suggest creating mandatory reliability standards.  I was the sole dissenting vote, arguing that the better way was to set prices for the unscheduled flows of electricity.  Thus, SDG&E and the Arizona utilities would have been punished by the market instead of risking a FERC imposed fine.

[1] I prefer to use the more accurate term restructuring, since the entire industry is still regulated, even though generation is often subject to “light handed regulation” by FERC, which approves concepts instead of specific prices.

Price Pressure on Input Capital Costs

We all know about the high cost of building nuclear power plants.  However, the operating costs are so low that the total cost of power out of a new nuclear power plant is just about competitive in the US electricity market.  According to the World Nuclear Association, as of 2013 October 1[1], there are 100 operable nuclear reactors in the United States and 3 under construction, equivalent to just 3% of the existing fleet.  In overseas markets, where the costs of competing fuels are much higher, the total cost balance seems to be swinging in favor of nuclear power.  Outside the United States, there are 332 operable nuclear reactors and 67 under construction, or 20% of the existing fleet.

In light of these and other statistics, a cynical friend has suggested that the high construction costs are tolerated only because of the low nuclear fuel costs.  He suggested that as other fuels become more competitive with nuclear fuel, we will see price pressure put on the manufacturers of nuclear plants and of their component parts.  For those working in the electric industry, this is almost heresy.  The electric industry and its suppliers have a cost of doing business, a cost that is then recovered in the prices charged to their customers.  A lower price would mean a loss to the manufacturer, a loss they cannot afford.  Thus there would seem to be little, if any, room for competition to force prices lower, especially for capital equipment such as a nuclear power plant.  At least that is the conventional wisdom.

However, the electric industry has always had some competition.  Even small isolated utilities with two or more generators have competition in that the generators have to compete against each other to produce electricity at the least total cost.  This is the ancient concept of joint optimization.  The internal competition carried over with the formation of power pools and now with independent system operators.
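Joint optimization, in its simplest form, is merit-order dispatch: run the cheapest generators first until the load is served.  A minimal sketch, with invented unit names, capacities, and costs:

```python
# A minimal sketch of joint optimization as merit-order dispatch:
# generators "compete" in that the cheapest $/MWh units run first.
# The unit names, capacities, and costs are invented for illustration.

def dispatch(load_mw, units):
    """Serve load_mw at least total cost; units are (name, MW, $/MWh)."""
    schedule = {}
    for name, cap_mw, cost in sorted(units, key=lambda u: u[2]):
        take = min(cap_mw, load_mw)   # run the cheap unit as hard as possible
        schedule[name] = take
        load_mw -= take
    if load_mw > 0:
        raise ValueError("insufficient capacity to serve load")
    return schedule

units = [("coal", 300, 30.0), ("gas", 200, 45.0)]
print(dispatch(400, units))  # {'coal': 300, 'gas': 100}
```

Even a two-generator isolated utility performs this internal competition every hour, which is the point: the competitive logic predates power pools and independent system operators.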

The competition was not just an internal optimization but was also external.  Utilities buy and sell electricity with their neighbors on a competitive basis.  Most investor owned utilities are interconnected with two or more other utilities, each always attempting to sell electricity to its neighbors, which requires the selling utility to offer a price lower than those of the competing utilities.  These prices would often be quoted for large blocks of power[2], and until recently didn’t have the finesse that has been attributed to power pools and independent system operators.  But the external transactions are still forms of competition.

So the concept of competition is not foreign to electric utilities; competition in the construction of nuclear power plants just hasn’t been in the forefront of the minds of utility executives, perhaps because of the small number of power plants that have been built.

Another friend, perhaps also a cynic, claims that drilling rig operators set their prices to extract much of the consumer surplus out of gas and oil fields.  He claims that the charge for drilling wells is greatly influenced by the expected profitability of the well.  Quoted prices are always low enough that the field owner can expect a return of his investment in about five years but high enough that the field owner can’t expect a return of his investment in less than three years.  My gas and oil friend’s claim is essentially the same as my cynical nuclear friend’s: construction costs go up and down based on the investment level needed for the facility to be profitable, whether it is a nuclear plant or a well expected to produce oil or natural gas.

This cynicism suggests that the United States should defer committing to new nuclear plants until the overseas rush has died down.  The nuclear industry has some limits on its ability to build new power plants.  The high price of fuel in overseas locations has made those locations more tolerant of high capital costs, more tolerant than the United States, explaining some of the disparity mentioned above between the 3% under construction in the United States versus the 20% overseas.  As the overseas nuclear building boom declines, maybe the cost of new nuclear power plants will decline, making them once again very competitive in the United States.


[2] “Electricity Is Too Chunky:  The Midwest power prices were neither too high nor too low.  They were too imprecise,” Public Utilities Fortnightly, 1998 September 1.

Energy Interchange Markets–Often Designed to Fail

I participated on the NAESB IIPTF[1] while it met during 2003-2005.  I argued then that there should be a cash payment for inadvertent interchange and that the cash payment should be differentiated over time and across geography.[2]  About the same time I participated in the InPowerG discussion of ABT pricing of UI[3], making similar arguments.[4]

My concern before the IIPTF was that parties could game the market.  First, a party could buy cheap electricity upstream of a constraint.  The party could then sell expensive electricity downstream of the constraint.  The party could then arrange a cheap but ineffectual parallel path around the constraint.  This issue was described in the title of my first published article[5] some 15 years earlier.

Most other markets would look at such a set of transactions as a form of efficiency inducing arbitrage.  The purchases would raise the price in the cheap market.  The sales would depress prices in the expensive market.  The transportation agreement would raise the price of transport, further narrowing the price differential between the high priced area and the low priced area.  But the rigid terms of most tariffs just produced a profit for those entities willing to operate in this shadowy market.  The name Enron evokes such shadowy images, especially when paired with the CaISO[6].

But CaISO was not the only advanced market that found itself subject to such arbitrage.  PJM suffered some of the same loop flow issues when Midwest generation contracted with AEP and VEP, effectively moving electricity south around the PJM internal constraints between low cost Pittsburgh and the high cost Washington/Baltimore area.  PJM provided a similar southern loop for marketers in New York, who bought cheap electricity at the Niagara frontier, moved it west and south and back east toward the New York City area.

In recent years, FERC has been advocating Memorandums of Understanding that create Energy Imbalance Mechanisms.  I believe these MOUs and EIMs will fail to improve the system, and could contribute to problems on the network, unless the associated cash outs use geographically differentiated prices.  For instance, the disastrous 2012 July 30 & 31 blackouts in India have been attributed to the lack of geographic differentiation in India’s energy imbalance mechanism[7].  Customers and generators downstream of the constraint faced the same price (once high, once low) as customers and generators upstream of the constraint (again, once high, once low).[8]

From the discussions I have heard about the MOUs and the EIMs, they seem to be designed to fail, not having learned from India’s experience with a similar pricing mechanism, ABT pricing of UI.  The MOUs and the EIMs need to price energy imbalances on a geographically differentiated basis, with a price that changes automatically with spot conditions.
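A toy calculation shows why a uniform cash-out price gives the wrong signal on both sides of a binding constraint.  All prices and marginal costs below are invented for illustration:

```python
# Sketch of why imbalance cash-outs need geographically differentiated
# prices. Invented numbers: upstream of a binding constraint there is
# surplus; downstream there is shortage.

def generator_incentive(price, marginal_cost):
    """Margin per MWh: positive means the generator wants to produce."""
    return price - marginal_cost

# Uniform $40/MWh price on both sides of the constraint:
up_uniform = generator_incentive(40.0, 30.0)    # +10: keeps feeding the surplus
down_uniform = generator_incentive(40.0, 55.0)  # -15: won't relieve the shortage

# Differentiated prices, low upstream and high downstream:
up_diff = generator_incentive(20.0, 30.0)       # -10: backs down, easing surplus
down_diff = generator_incentive(70.0, 55.0)     # +15: ramps up, easing shortage
print(up_uniform, down_uniform, up_diff, down_diff)
```

With the uniform price, both generators are paid to make the constraint worse; with differentiated prices, both are paid to relieve it, which is the lesson the Indian grid reports drew from the 2012 blackouts.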


[1] North American Energy Standards Board Inadvertent Interchange Payback Task Force

[2] At least one party criticized my approach because I generally used an exponential formula, which nominally prevented the price from going negative.  My research for “Renewable Electric Power—Too Much of a Good Thing: Looking At ERCOT,” Dialogue, United States Association for Energy Economics, 2009 August, convinced me that negative prices could sometimes be appropriate.  Accordingly, I have recently used a hyperbolic sine as a price adjustment factor.  The hyperbolic sine is the difference between two exponential formulas, one with a positive exponent, the other with a negative exponent.  See
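The difference between the two formulas can be sketched numerically.  The base price and scaling below are invented for illustration; "x" stands for some measure of system imbalance, positive in shortage:

```python
import math

# Sketch of footnote [2]: an exponential price-adjustment curve can
# never go negative, while a hyperbolic-sine curve can. BASE and SCALE
# are invented illustrative parameters.

BASE = 50.0   # $/MWh, assumed reference price
SCALE = 1.0   # assumed steepness of the response

def exp_price(x):
    return BASE * math.exp(SCALE * x)   # always positive, even in deep surplus

def sinh_price(x):
    # sinh(x) = (e**x - e**-x) / 2: the difference of two exponentials
    return BASE * math.sinh(SCALE * x)  # goes negative when x < 0

print(exp_price(-2))   # small but still positive
print(sinh_price(-2))  # negative: loads are paid to consume the surplus
```

The exponential curve flattens toward zero in surplus conditions, while the hyperbolic sine keeps pushing the price below zero, which matches the negative-price conclusion from the ERCOT research.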

[3] Availability Based Tariff and Unscheduled Interchange

[4] for a partial digest of those discussions.

[5] “Tie Riding Freeloaders–The True Impediment to Transmission Access,” Public Utilities Fortnightly, 1989 December 21.

[6] California Independent System Operator

[7] See my blog, which includes two separate comments added by Soonee, the CEO of the Indian grid operator.

[8] The price differential issues included reactive power generation and usage.  India’s ABT has a section that prices reactive energy, but not sufficiently to induce better responses from generators and loads.

FERC, Barclays, and Formulary Arbitrage

On 2013 July 16, FERC ordered Barclays bank to pay a half billion dollars for market manipulation.[1]  The next day Barclays responded by suing FERC in federal district court, forcing FERC to prove the allegations in a venue that Barclays feels would be a level playing field.  On July 22, a New York reporter, seeking a simplified explanation of what Barclays did to get FERC upset, called a friend of mine who is normally well versed in utility legal matters, having been both a regulator and a utility executive.  My friend suggested another industry expert and also called me.  I got back to my friend on July 23, heard the request, and wrote a message explaining, on a generic basis, what I understood Barclays to have done, without having read much more than articles in The Washington Post.  It is that message of yesterday that I am copying here.

Thanks for the call on Monday in response to a NYC reporter who was asking you for background or information about the Barclays spat with FERC.  I have not followed the details of the spat, but the way I find it easiest to describe is as a thinly traded formulary arbitrage.

An arbitrage is buying and selling related securities with the hope of making a profit on the difference.  For instance, one might buy oil for $90/bbl and sell at $100/bbl and make $10/bbl on the paired transactions.  If the transactions are for oil delivered in different locations, one might also have a transportation cost of $1/bbl, reducing the profit to $9/bbl.  There may be some insurance and other handling costs, but if they are minor, one makes a profit so long as the differential is greater than the cost of transportation.  The transactions can be for the same location but different times.  One might buy a May futures contract for $90/bbl and sell a June futures contract for $100/bbl and make $10/bbl on the paired transactions.  But one has to store the oil, which might again cost $1/bbl for going into and out of storage and for one month in storage.

My understanding is that futures contracts settle at the end of the preceding month at the average spot price of oil over the last few days of that month.  So, a May futures contract would be settled at the end of April based on the spot price on April 30, or some sort of average.  I call this a formulary settlement.

If one has bought a lot of May futures contracts, one would like to see them settle at a very high price.  So, one might buy lots of spot oil on April 30 to push up the price at which the May futures contracts would settle.  This would be a formulary arbitrage.

But considering that the spot market is very large, it is difficult to budge by buying a lot of spot oil on April 30.  One might need to buy enough oil to supply ExxonMobil.  That would be a thickly traded formulary arbitrage.

Some commodity exchanges offer electricity forwards markets which settle based on the actual spot price of electricity on some of the ISOs.  The ISO spot prices might be thinly traded.  A paired transaction on the electricity forwards market and the ISO market may be a thinly traded formulary arbitrage.  At least that is what I would have told the reporter had you directed him to me.
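The mechanics can be sketched with invented numbers: the settlement is a formula over a few thinly traded prints, so a large forwards position can profit by nudging those prints:

```python
# Sketch of a thinly traded formulary arbitrage. The settlement of the
# forward is a formula (here, a simple average) over the last few spot
# prints. All prices and position sizes are invented for illustration.

def settlement(last_spot_prices):
    """Formulary settlement: average of the final spot prints."""
    return sum(last_spot_prices) / len(last_spot_prices)

spots = [90.0, 91.0, 89.0]   # thinly traded closing prints, $/unit
contracts = 1000             # long forward position, in units

fair = settlement(spots)                      # settles at 90.0
pushed = settlement([p + 5 for p in spots])   # trader bids the prints up $5
gain = contracts * (pushed - fair)
print(gain)  # 5000.0 on the forwards, minus the cost of moving the spots
```

In a thick market like spot oil, moving every print $5 would cost far more than the forwards gain; in a thin ISO spot market, the push can be cheap relative to the position, which is the heart of the alleged manipulation.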

Hope this helps for the next time.

[1] The Federal Energy Regulatory Commission (FERC) today ordered Barclays Bank PLC and four of its traders to pay $453 million in civil penalties for manipulating electric energy prices in California and other western markets between November 2006 and December 2008. FERC also ordered Barclays to disgorge $34.9 million, plus interest, in unjust profits to the Low-Income Home Energy Assistance Programs of Arizona, California, Oregon, and Washington. FERC News Release