The Goldilocks Dilemma

An old posting about why intermittency is not a big deal came to my attention today.  I re-read some of what had been said, especially since I had just sent out a paper on the topic yesterday.

I believe that the value of electric “energy” is often overstated.  The author of the old posting, Chris Varrone, inadvertently acknowledged this when he wrote:

However, the energy in wind is worth 100% of the energy in nuclear (or anything else) in the spot market; wind energy in the day-ahead market may be worth a little less, but this can be “firmed” using energy trading desks or by using other assets in the operator’s fleet.

If the day-to-day differential can be handled by firming with other assets, then the value of the electricity is not just energy.  It is not worth debating what to call this other value, but a substantial part of the value in the spot market is something other than energy.

As to The Goldilocks Dilemma, the paper I sent out yesterday, I began by asking:

Is the price paid to dispatchable generation too high, too low, or just right for intermittent generation?

I then answered:

Though intermittent generators often argue that they should receive the same price as dispatchable generation, and some utilities argue that they should pay less to intermittent generators, sometimes intermittent generators should face a higher price than dispatchable generators, such as when intermittent generation is part of the market during instances of extreme shortage.

The entire paper is available on my web site, the companion to this blog site.  Look for the hot link to the library near the bottom of the first page.  A hot link for the article is near the bottom of the library index, in the section called Drafts.


Making Electricity Ubiquitous, Ubiquitous But Not Necessarily Cheap

I attended the 37th International Association for Energy Economics International Conference, 2014 June 15-18, in New York City.  During the Wednesday “Dual Plenary Session: Utility Business Model,” Michel Derdevet, Secretaire General of Electricite Reseau Distribution France, the French distribution utility, raised the issue of getting electricity to the billion people in the world who do not have access to it.

During the audience discussion period, I raised the concept of a different business model, unregulated micro-grids owned by non-utilities.  I mentioned that a friend thought this concept could be applied to the nearly ubiquitous cell phone tower.  Cell phone towers require a power supply.  My friend thought that the provider of a cell phone tower should be granted a 10-year license to provide power on an unregulated basis.

My thought was that the owner of the cell phone tower should be allowed to provide electricity and compete against anyone else who wanted to provide electricity.  Competition can drive down prices better than regulation can.  Regulating the effort to make electricity ubiquitous would just stifle innovation.

Over the years, newspapers and newsmagazines have carried pictures of the electric grid in some countries that look like a spider’s web on LSD.  Local entrepreneurs buy their own diesel generators and provide “backup” electricity to their neighbors over wires strung in parallel with, or across, the wires of the local utility.  The electricity is “backup” electricity in that it is used only when the local utility doesn’t have enough central station power to provide electricity to the entire national grid.  The utility then blacks out some neighborhoods.

The neighbors buy only a small amount of “backup” electricity from the entrepreneur because the “backup” electricity is so expensive, being produced by diesel generators, which are less efficient and burn a premium fuel.  The “backup” electricity is used for lights and a fan at night, perhaps for a refrigerator, not for those devices that might otherwise be electricity guzzlers.[1]  When the utility once again has enough power, competition drives the price down, putting the high cost entrepreneur out of business.

These micro-grids, whether run by the owner of the cell phone tower or by a neighborhood entrepreneur, can make electricity ubiquitous, even if the electricity is not cheap.  After all, Michel Derdevet said to me after his panel was done that some people were pushing for ubiquitous power supplies so they could make money selling electricity, not just for an eleemosynary purpose.  Thus, the power might not be cheap.

During the plenary session, Jigar Shah, Founder, SunEdison, LLC, claimed that California, with the highest electricity rates in the U.S., does not have the highest energy bills, because California residential consumers use less electricity.[2]  This is consistent with my comment about the lower usage of “backup” electricity relative to central station power.  However, elasticity may not be the only explanation for the lower consumption in California.  There are also the issues of climate and rate design.

Standard rate design practices also result in higher prices for customers with smaller consumption levels. The standard residential rate design has a monthly customer charge (say $10/month) and a commodity charge (say $0.09/KWH).  These rate levels nominally reflect the way that a utility will incur costs, a fixed cost per customer and then a cost that varies with the amount of energy the customer consumes.  A customer using 100 KWH per month would have a monthly bill of $19 and an average cost of $0.19/KWH.  A customer using 1000 KWH per month would have a monthly bill of $100 and an average cost of $0.10/KWH.  Thus, an area of the country with lower electricity consumption can be expected to have higher average cost and lower overall bills.
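The arithmetic is easy to check in a few lines of Python; the $10 customer charge and $0.09/KWH commodity charge are the same illustrative numbers used above, not any particular utility’s tariff.

```python
# Two-part tariff from the example above (illustrative numbers only).
CUSTOMER_CHARGE = 10.00  # $/month, fixed cost per customer
ENERGY_RATE = 0.09       # $/KWH, commodity charge

def monthly_bill(kwh):
    """Total monthly bill under the two-part tariff."""
    return CUSTOMER_CHARGE + ENERGY_RATE * kwh

for kwh in (100, 1000):
    bill = monthly_bill(kwh)
    print(f"{kwh:4d} KWH -> bill ${bill:6.2f}, average ${bill / kwh:.2f}/KWH")
# 100 KWH  -> bill $19.00,  average $0.19/KWH
# 1000 KWH -> bill $100.00, average $0.10/KWH
```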

The micro-grid could be operated by the above-mentioned owner of the cell phone tower or by an entrepreneur.  I like to think of innovators who decide to install their own electric systems and then share with their neighbors, a form of the first type of owner.  The generation is put into place for a purpose other than selling electricity, but if the sales are lucrative enough, the owner may decide to upsize his generating capacity until he looks like a utility.

A friend built such a system during the 1980s while in the Peace Corps in Africa.  He had an engineering degree from MIT, so he took the concepts learned dabbling in the MIT labs in Cambridge, Massachusetts and applied them in the field in Africa.  Later my friend worked for a Westinghouse subsidiary building power supplies for remote microwave towers (the same concept as a cell phone tower) and remote schools.  His company used mostly renewable energy, such as solar and wind, because diesel was so expensive in the remote areas he electrified.  Diesel was used to top off the associated batteries when there had been insufficient renewable energy for extended periods of time.  It was during this period of his life that I met this fellow MIT alumnus.

Yes, we can make electricity ubiquitous.  But it will take competition to make it cheap, or at least not quite so expensive.



[1] As an aside, the consumption reduction during periods when “backup” electricity is being used demonstrates the concept of the elasticity of consumption.  When prices go up, consumption goes down.

[2] Looking at Table 6 of the EIA’s utility data for 2012, the average price to residential consumers in California was $0.153/KWH, or 8th most expensive.  The average consumption for residential consumers in California was 6,876 KWH/year, or the 3rd lowest after Hawaii and Alaska.  The average bill for residential consumers in California was $1,053/year, or the 10th lowest in the U.S.


Disruptions, Energy Markets and “Joseph and the Amazing Technicolor DreamCoat”

On 2014 April 22, as this year’s president of the National Capital Area Chapter (NCAC) of the U.S. Association for Energy Economics (USAEE), I will preside over NCAC’s 18th Energy Policy Conference, which this year has the title “Disruptive Technologies Shock All Energy Sectors.”[1]  These disruptive technologies will require additional infrastructure, such as pipelines, wires, refineries, and generators.  And, since we operate a free market economy in the United States, we will need dynamic markets to handle the effects of these disruptive technologies as the way energy flows in North America changes.

Wind pockets in the Great Plains and West Texas need high capacity lines to transport the energy across space to areas where the need for electricity is greater.  We need ways to pay for those transmission lines.  In response to the intermittency associated with wind, we will need fast response generators and ways to pay those generators to operate only a small fraction of the year.

Some fast response generators will be storage devices.  A high price for storage devices providing electricity for a small fraction of the year will be meaningless unless there are low prices during the portion of the year that the storage devices are being recharged.  This will move electricity across time, using cheap electricity during periods of fat to provide electricity during later periods of lean.
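To see why the low recharging price matters as much as the high discharging price, consider the arbitrage arithmetic in the sketch below; the prices and the 75% round-trip efficiency are illustrative assumptions, not market data.

```python
# Storage arbitrage across time: buy during periods of fat, sell during
# periods of lean.  All numbers are assumptions for illustration.
CHARGE_PRICE = 20.0      # $/MWH paid while recharging (period of fat)
DISCHARGE_PRICE = 200.0  # $/MWH received while discharging (period of lean)
ROUND_TRIP_EFF = 0.75    # fraction of charging energy recovered on discharge

energy_out = 1.0                         # MWH delivered during the lean period
energy_in = energy_out / ROUND_TRIP_EFF  # MWH purchased during the fat period

margin = DISCHARGE_PRICE * energy_out - CHARGE_PRICE * energy_in
print(f"Gross margin per MWH delivered: ${margin:.2f}")  # $173.33
# That margin must also cover the carrying cost of a device that may
# discharge for only a small fraction of the year.
```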

Oil production areas in North Dakota and Montana need pipelines and rail cars to move oil across space to market.  For years, the availability of low cost oil pipelines reduced the price differential across the U.S.  More recently, the lack of sufficient pipeline capacity has depressed the wellhead price of oil in the Bakken fields, reflecting the higher cost of rail transportation to refineries.  New oil pipelines will reduce this price differential.

The natural gas system has many storage fields.  I mentioned earlier electricity’s growing need for storage.  And petroleum and its refined products also need storage.  During January 2014, there was not enough propane in storage in the Midwest and prices soared.  The shortage could have been handled by more refined products pipeline capacity, but additional storage would also have been an option, perhaps a cheaper option.

Though the conference is about technological disruptions, the shortage of propane in January can be thought of as a weather disruption.  Some people say that we are experiencing climate change.  My first experience with a claim of climate change was in 1990, when Edith Corliss, a physicist with the National Institute of Standards and Technology, a bureau of the U.S. Department of Commerce, told me that the weather at that time was more variable than weather had been since the time of Christ.  Our summers were alternately either (A) hotter and dryer or (B) cooler and wetter.  Or to put it mathematically, we were seeing a greater statistical variance and standard deviation in the measured temperature and the measured rainfall.  The El Niños were getting more intense, as were the La Niñas.  We were not having more of one and fewer of the other, just seeing more intensity in each.

I am reminded of the stage musical “Joseph and the Amazing Technicolor Dreamcoat,” in which Pharaoh has a dream that Joseph interprets as a climate disruption: there were to be seven years of fat followed by seven years of famine.  Joseph then created a physical system and a market to handle this insider knowledge.  He stored grain during the years of fat and used the grain sparingly through the end of the years of famine.  In commercial parlance, he bought low and sold high.  In legal parlance, he traded on insider information and made a killing.

We need new infrastructure to handle the growing disruptions created by technological changes.  But we also need dynamic markets and new market mechanisms in our free market economy.  At least that is my Technicolor dream.

[1] See the conference notice at WWW.NCAC-USAEE.org


Pricing Gasoline When the Pumps Are Running on Backup Electricity Supply

I attended the MIT Club of Washington Seminar Series dinner on Tuesday, 2014 February 11; this year the series is on the topic of “Modernizing the U.S. Electric Grid,” and I listened to Michael Chertoff talk on “The Vulnerability of the U.S. Grid.”

Chertoff’s MacGuffin was a story about a hurricane hitting Miami in about 2005.  Electrical workers couldn’t get to work because they had no gasoline for their cars.  The gas stations had gasoline but no electricity to pump it.  A back-up electricity generator would have required an investment of $50,000, which was not justified on the razor-thin margins on which most gas stations operate.

The gas station owner’s thought process was that the sales lost during the blackout would just be gasoline sold after the power came back on.  Investment in a back-up generator would not change the station’s revenue and would just hurt its profitability.  My first comment during Q&A was that the same issues were raised after Hurricane Sandy[1] in the New York City area in 2012, and perhaps in many other areas that experience widespread storm damage.

After the dinner I talked with Matthew, a friend from ExxonMobil who had learned about the Seminar Series from my advertising it to people who attend events of the National Capital Area Chapter of the U.S. Association for Energy Economics.  Because of that linkage, he makes a point of seeking me out at each Seminar Series dinner.  Our after-dinner discussion focused on how to make the $50,000 investment in a back-up generator profitable for the gas station owner.

Matthew said that many gas station permits include anti-gouging provisions, preventing the gas station owner from increasing the price during emergencies.  My thought was that an investment in back-up power supplies would justify a temporary price increase to pay for that investment.  After all, bulk electricity prices in Pennsylvania on the PJM grid during the cold snap associated with the 2014 January arctic vortex soared to $1,839.28/MWH ($1.84/KWH) from an average of only $33.06/MWH during 2012.  This was a temporary 55-fold (not 55%) change in the base price of electricity.[2]

I believe that prices are sticky.  Once set, prices tend to stay unchanged for significant periods of time.  The independent system operators (ISOs, such as PJM) get around some of this stickiness with elaborate models that set prices every hour: the basic mechanism sets a value every five minutes and then averages those five-minute values over an hour to get a price.  The basic mechanism includes (1) bids by suppliers as to the price they want if they are to provide specified amounts of electricity and (2) estimates of the demands that will occur each hour, or that are occurring in real time every five minutes.

Almost 25 years ago, long before the advent of ISOs, I published my first article[3] on using the measured real time imbalance between supply and demand to set the real time price for unscheduled flows of electricity.  Using the measured imbalance eliminated the need for bidding processes, processes that can lead to stickiness.  I proposed using the concurrent system frequency to set the price, calling the concept Wide Open Load Following (WOLF).

For electricity, a surplus of demand will drag down system frequency, which I say warrants a higher price, at least higher than the nominal price.  A surplus of supply will push up system frequency, which I say warrants a lower price, at least lower than the nominal price.  Over longer intervals, imbalances will change the accuracy of wall clocks that use system frequency to determine the correct time; thus, WOLF includes the accumulated time error in setting the nominal price for electricity imbalances.  The WOLF concept could similarly be used to set prices within each of the five minutes of an ISO dispatch period, or even on a sub-minute basis, modifying the ISO’s sticky five-minute nominal dispatch value.
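A rough illustration of the idea, not the WOLF formula itself; the linear price response and every parameter below are simplifying assumptions for exposition.

```python
# Sketch of frequency-based imbalance pricing in the spirit of WOLF.
# The real formula is in the published papers; this linear form is assumed.
NOMINAL_FREQ = 60.0   # Hz, North American nominal system frequency
NOMINAL_PRICE = 40.0  # $/MWH, assumed nominal imbalance price (WOLF would
                      # adjust this using accumulated time error)
SLOPE = 400.0         # $/MWH per Hz of deviation, assumed sensitivity

def wolf_style_price(measured_freq_hz):
    """Low frequency (demand surplus) -> price above nominal;
    high frequency (supply surplus) -> price below nominal."""
    return NOMINAL_PRICE + SLOPE * (NOMINAL_FREQ - measured_freq_hz)

print(wolf_style_price(59.95))  # 60.0 $/MWH: demand surplus, higher price
print(wolf_style_price(60.05))  # 20.0 $/MWH: supply surplus, lower price
```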

The State of California has variable pricing for its State Route 91 Express Lanes under the rubric of congestion management.

“On July 14, 2003, OCTA adopted a toll policy for the 91 Express Lanes based on the concept of congestion management pricing. The policy is designed to optimize 91 Express Lanes traffic flow at free-flow speeds. To accomplish this OCTA monitors hourly traffic volumes. Tolls are adjusted when traffic volumes consistently reach a trigger point where traffic flow can become unstable. These are known as “super peak” hours. Given the capacity constraints during these hours, pricing is used to manage demand. Once an hourly toll is adjusted, it is frozen for six months. This approach balances traffic engineering with good public policy. Other (non-super peak) toll prices are adjusted annually by inflation.

“Recent customer surveys indicate that 91 Express Lanes users lead busy lives with many hours dedicated to commuting to and from their jobs. About 85 percent of customers are married, with more than half raising children. Many customers choose the toll road only on days they need it most, joining general freeway lane commuters on other days. Customers emphasize they value a fast, safe, reliable commute and the toll policy strategy is designed to support this value.

“The toll policy goals are to:

  • Provide customers a safe, reliable, predictable commute.
  • Optimize throughput at free-flow speeds.
  • Increase average vehicle occupancy.
  • Balance capacity and demand, thereby serving both full-pay customers and carpoolers with three or more people who are offered discounted tolls.
  • Generate sufficient revenue to sustain the financial viability of the 91 Express Lanes.

“The effect of the toll policy has been an increase in customer usage with sufficient revenue to pay all expenses and also provide seed funding for general freeway improvements. Revenues generated by the toll lanes stay on the SR-91 corridor, a significant departure from past practices. Under the previous owner’s agreement with Caltrans, a “non-compete” provision restricted adding more capacity to the SR-91 corridor until 2030. When OCTA purchased the lanes, it opened the door for new improvements on SR-91 by eliminating the non-compete provision.”[4]

The free flowing capacity of the 91 Express Lanes is 3400 cars per hour.  When average hourly volume exceeds 3200 cars per hour (about 94.1% of the free flowing capacity), the price increases by $0.75 at the beginning of the next six months.  When average hourly volume exceeds 3300 cars per hour (about 97.1% of the free flowing capacity), the price increases by $1.00 at the beginning of the next six months.  When average hourly volume is less than 2720 cars per hour (80% of the free flowing capacity), the price decreases by $0.50 at the beginning of the next six months.  The flow analysis is done for each hour of the week (24×7), producing 168 distinct prices each way on the 91 Express Lanes.  But as of 2013 July 1, about 1/3 of the hours are charged the minimum price; that is, they are not considered super peak hours.  The flow analysis is also done separately for holidays, even ones nominally as minor as Mother’s Day.
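Restated in code, the trigger logic looks something like the sketch below; the car counts and dollar steps come from the description above, though OCTA’s actual implementation surely differs in detail.

```python
# Sketch of the 91 Express Lanes six-month toll adjustment triggers.
FREE_FLOW_CAPACITY = 3400  # cars per hour

def toll_adjustment(avg_hourly_volume):
    """Dollar change applied to one hour's toll at the next six-month reset."""
    if avg_hourly_volume > 3300:   # ~97.1% of free-flow capacity
        return +1.00
    if avg_hourly_volume > 3200:   # ~94.1% of free-flow capacity
        return +0.75
    if avg_hourly_volume < 2720:   # 80% of free-flow capacity
        return -0.50
    return 0.0                     # inside the 80%-94.1% band: no change

# Applied to each hour of the week, each way: 24 x 7 = 168 distinct prices.
```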

The 91 Express Lanes toll mechanism shows that some jurisdictions, including the notoriously protectionist State of California, allow incentive pricing for congestion management during critical periods, such as a widespread blackout.  The toll mechanism also provides for automatic adjustment of the price, and it uses explicit measurements of the balance between supply and demand, much like the WOLF mechanism for electricity imbalances.  The measurement is hourly utilization as a fraction of the capacity of the 91 Express Lanes, with the price changing when utilization falls outside the band of 80.0% to 94.1%.

Based on a review of the 91 Express Lanes toll mechanism, there is some hope that gas stations will be able to afford the major investment in backup electrical supplies.  For gas stations, the measure of the imbalance between supply and demand could be as simple as the length of the line of cars waiting for gas, or as complex as comparing the gasoline inventory to its desired level and estimating the time before the inventory is exhausted.
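A hypothetical surcharge rule along those lines might look like the following; every threshold and dollar figure is invented for illustration.

```python
# Hypothetical blackout-period gasoline surcharge driven by two imbalance
# measures, in the spirit of the 91 Express Lanes triggers.
def gasoline_surcharge(cars_in_line, inventory_gal, desired_inventory_gal):
    """Surcharge in $/gallon; all thresholds and amounts are invented."""
    surcharge = 0.0
    if cars_in_line > 10:                            # visible queue of cars
        surcharge += 0.50
    if inventory_gal < 0.5 * desired_inventory_gal:  # tank running low
        surcharge += 0.25
    return surcharge

print(gasoline_surcharge(cars_in_line=15, inventory_gal=2000,
                         desired_inventory_gal=8000))  # 0.75
```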



[1] Presentation of Adam Sieminski, Administrator of the U.S. Energy Information Administration at the 2012 October 19 lunch of the National Capital Area Chapter of the U.S. Association for Energy Economics (NCAC-USAEE.org).  Pursuant to its charter as an information agency, EIA created for Hurricane Sandy a real time display of gas stations with internet connectivity, a nominal measure of whether the gas station had electricity.

[2] PJM differentiates prices geographically.  Thus, one local price increased to $2,321.24/MWH and another fell to a negative $391.14/MWH because of transmission constraints.

[3] “Tie Riding Freeloaders–The True Impediment to Transmission Access,” Public Utilities Fortnightly, 1989 December 21.

[4] https://www.91expresslanes.com/policies.asp


Electric Demand Charges: A Lesson from the Telephone Industry

The only ad with prices that I remember from 50 years ago was AT&T’s offering of a three-minute coast-to-coast telephone call for $1.00.  With the inflation that we have seen over the last 50 years, one would expect that a coast-to-coast call would now be at least $10.00 for three minutes.  Instead, most telephone bills show a monthly service fee and no itemization of individual calls.  Automation has allowed the telephone companies to do away with most telephone operators, who were a significant portion of the variable cost of long distance telephone calls.  The principal cost is now the investment in the wires, which doesn’t change with the number of calls carried.  So, most carriers now charge a monthly fee and little or no charge per call.  Perhaps it is time for the electric industry to go the same way.


The restructuring of the electric industry has generally separated the distribution wires function from the generation[1] and transmission[2] functions for most customers of investor owned electric utilities.  This restructuring puts such customers into the same position as their counterparts served by municipally and cooperatively owned utilities.  Municipally and cooperatively owned utilities have generally been distribution-only utilities, buying generation and transmission services from others instead of being vertically integrated like most investor owned electric utilities.


The restructuring of the electric industry has resulted in most customers being served by a distribution company with very little variable cost, much like the telephone companies.  A significant distinction is that telephone lines handle one call at a time; the telephone line is either in use or not in use.  In contrast, electric utilities provide a continuously variable service.  The customer may be taking 10 watts (a small light bulb) or 10 kilowatts (running the A/C, water heater, and stove at the same time), or any amount in between.  The telephone company has the wires to serve the customer’s demand, whenever a call occurs[3].  The electric distribution company similarly has the wires to serve the customer’s demand, whenever that demand occurs.  But while the telephone company serves customers on a binary basis (they are either a customer or not), the electric distribution company serves its customers on a continuous basis (they might be very small customers who never use more than 10 watts, or a very large customer that might use up to 100 MW).


The binary nature of telephony allows the telephone companies to charge their customers a specific amount on a monthly basis.  The continuous nature of the size of electric services suggests that electric distribution companies charge their customers a price based on the size of the electric service the customer actually uses.  For commercial and industrial customers, electric utilities have long included in their tariffs a demand charge that depends on the maximum power that the customer used during the billing period[4].  Typically such demand charges are based on the average consumption over some 15-minute period.
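A sketch of the calculation, with an assumed $8/KW demand rate and sample interval readings:

```python
# Billing demand: the highest 15-minute average over the billing period.
DEMAND_RATE = 8.00  # $/KW of billing demand, an assumed rate

def billing_demand_kw(kwh_per_15min):
    """Convert the peak 15-minute interval from KWH to average KW."""
    return max(kwh_per_15min) * 4  # four 15-minute intervals per hour

intervals = [0.9, 1.1, 2.5, 1.3]   # KWH per interval (sample data)
kw = billing_demand_kw(intervals)  # 2.5 KWH in 15 minutes = 10.0 KW average
print(f"Billing demand {kw:.1f} KW -> demand charge ${kw * DEMAND_RATE:.2f}")
# Billing demand 10.0 KW -> demand charge $80.00
```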


Cost has been a significant factor limiting demand charges to commercial and industrial customers.  Demand meters are more costly to manufacture, in that they do more than just accumulate the amount of energy that goes through the meter.  Demand meters are also more expensive to read, in that the meter reader has to note two quantities and manually reset the demand register.  These two cost factors are lesser issues in determining residential demand now that the industry has moved significantly to Automated Meter Reading (AMR) and to Advanced Metering Infrastructure (AMI[5]), both of which automatically collect consumption data, including for 15-minute intervals.


Historically, residential demand charges were thought to produce an insignificant shift of revenue among residential customers.  The reasoning was that, though residential customers differ in size, they have similar load patterns.  A customer using 1,000 KWH a month would have ten times the demand of a customer using 100 KWH a month.  Implementing a demand charge that collected an amount equal to 20% of the energy revenue from the larger customer would also collect an amount equal to 20% of the energy revenue from the smaller customer.  There would be no revenue shift among these residential customers, at least for consumption.  However, the utility would have had to install more expensive meters, which would have increased the monthly customer charge of both customers without providing a significant benefit to the utility or to the customers.
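A small example makes the no-shift reasoning concrete; the rates below are assumptions chosen so the two designs collect the same total revenue when load patterns are identical.

```python
# Two customers with identical load shapes: demand proportional to energy.
customers = {"small": {"kwh": 100, "kw": 0.5},
             "large": {"kwh": 1000, "kw": 5.0}}

def energy_only_bill(c):
    return 0.10 * c["kwh"]                   # $0.10/KWH, energy-only design

def energy_plus_demand_bill(c):
    return 0.08 * c["kwh"] + 4.00 * c["kw"]  # 20% of revenue moved to demand

for name, c in customers.items():
    print(name, energy_only_bill(c), energy_plus_demand_bill(c))
# small 10.0 10.0
# large 100.0 100.0  -- identical bills either way, so no revenue shift
```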


The move to AMR and AMI has reduced the cost of determining the demand of residential customers.  Now metering cost is no longer an obstacle to differentiating between customers with different consumption patterns.  Customers who should be paying a demand charge equal to 30% of their energy payments can be distinguished from customers who should be paying a demand charge that is only 10% of their energy payments.  Further, on-site generation has broken the paradigm that residential customers have similar load patterns, so the industry now knows that there are 30% customers and 10% customers and can bill them appropriately.  Indeed, for houses with sufficient on-site generation, the revenue from the demand charge could be several times the revenue from the energy charge, especially when the energy charge vanishes for a net zero home.

The growth in AMR and AMI, along with the growth in residential on-site generation, makes this an appropriate time to restructure residential tariffs to include a demand charge that collects the cost of the distribution utility owning the power lines.  The energy charge should continue to collect the cost of generation and transmission, though it should be time-differentiated to reflect the real time value of generation and transmission, as well as the associated energy losses.



[1] The creation of Independent System Operators (ISOs) is alleged to have brought competition to the generation sector of the electric industry.  However, many distributed generators, such as roof top solar, do not experience the real time market prices set by their local ISO.  This distorts the market for distributed generation.

[2] The creation of ISOs is also alleged to have brought competition to the transmission market.  But ISOs compensate many transmission lines on a cost of service basis, through a monthly fee; though they charge geographically differentiated prices based on line losses and line congestion, they generally don’t compensate for loop flows or parallel path flows, such as those PJM imposes on TVA and on the Southern Company, both of which have lines in parallel with PJM.

[3] Telephone customers occasionally receive a busy signal, indicating that the called party is using his/her phone.  More rarely, customers will receive an all-circuits-busy signal, indicating that intermediate wires are in full use, not that the called party is using his/her phone.

[4] Demand charges come in a variety of forms including contract demand, thermal demand, and ratcheted demands, a distinction beyond the scope of this discussion.

[5] AMI is generally distinguished from AMR in that AMI generally includes the ability to communicate both ways, from the meter to the utility and from the utility to the meter/customer location.  The ability to communicate from the utility to the meter allows the utility to control devices that the customer has opted to put under the utility’s control such as electric water heaters, air conditioning compressors, and swimming pool pumps and heaters.


Net Metering–Morphing Customers Who Self Generate

The U.S. Public Utility Regulatory Policies Act of 1978 started a flood of non-utility generation, initially a few very large cogeneration plants and more recently a large number of small roof top solar generators.[1]  The rapid growth in the number of small roof top solar generators requires the electric industry to develop a pricing plan that is fair to traditional customers as well as to hybrid customers, those still connected to the grid but with some self generation.

Electric utilities support their pricing decisions with class cost of service studies (CCOSS).  A CCOSS allocates the utility’s revenue requirement[2] to groups of customers, called classes.  Classes of customers are claimed to be homogeneous, such as being of similar size, but more often as having similar load patterns.

Some costs in a CCOSS are allocated based on the number of customers, perhaps weighted by the cost of meters and services.  Fuel costs are allocated based on energy through the meter, often weighted by the losses incurred to reach the meter.  A large portion of the costs are allocated based on demand, the amount of energy used by the class during the times of maximum stress on the utility, or on portions of the utility, such as the generation, transmission, or distribution systems.  Utilities are concerned about recovering these demand related costs as customers morph from being full requirements customers to being hybrid customers.
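A toy CCOSS shows the mechanics; the three cost categories, the class allocators, and every number below are invented for illustration.

```python
# Each cost category is spread over the classes by a different allocator.
revenue_requirement = {"customer": 30.0, "energy": 50.0, "demand": 120.0}  # $M

allocators = {  # each allocator sums to 1.0 across the classes
    "residential": {"customers": 0.90, "energy": 0.40, "peak_demand": 0.45},
    "commercial":  {"customers": 0.09, "energy": 0.35, "peak_demand": 0.35},
    "industrial":  {"customers": 0.01, "energy": 0.25, "peak_demand": 0.20},
}

for cls, a in allocators.items():
    allocated = (revenue_requirement["customer"] * a["customers"]
                 + revenue_requirement["energy"] * a["energy"]
                 + revenue_requirement["demand"] * a["peak_demand"])
    print(f"{cls:12s} ${allocated:6.1f}M")
# residential $101.0M, commercial $62.2M, industrial $36.8M -- sums to $200M
```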

Electric utilities have long alleged that the homogeneity of residential load patterns allowed them to use energy meters, often called watt-hour meters, to determine how much each residential customer should pay each month.  The logic is that the allocation process brought costs into the rate class based on the customers’ demands, and homogeneity means that the amount of energy through the meter is proportional to each customer’s demand.  The utility could therefore collect roughly the right amount of money from each residential customer by charging residential customers based on their energy consumption[3] instead of charging them based on their demand.

Charging customers based on energy allowed utilities to reduce substantially the cost of owning and reading meters without significantly distorting the revenue to cost ratio of each customer.  At least that held until roof top solar substantially reduced the amount of energy that goes through the meter without necessarily reducing the customer’s demand.  Thus, with roof top solar, the revenue collected from the customer goes down greatly while the costs brought in by the customer’s demand go down only slightly.

The growth in roof top solar coincides with the growth of Advanced Metering Infrastructure (AMI).  AMI often includes automatic meter reading and interval metering.  Automatic meter reading generally means replacing the person walking door to door with equipment; the carrying cost of the equipment is often less than the cost of the human meter reader, allowing AMI to pay for itself.  Interval metering means collecting the amount of energy delivered during small time intervals, generally one hour (24×7), though sometimes on an intra-hour basis.  These interval readings are the demands used in the CCOSS.

The interval meter readings made possible by AMI would allow electric utilities to charge all residential customers based on their maximum demands, the determinant used in the CCOSS to allocate costs to customer classes.  No longer would the utility have to rely on homogeneity assumptions for residential customers.  The demand charge permitted by AMI would reduce the disparity between the lower revenue to cost ratio of residential customers with roof top solar and the revenue to cost ratio of standard residential customers.



[1] See

[2] The amount of money the utility needs to collect each year to continue functioning.

[3] With a very inexpensive watt-hour meter.


Net Metering–Reducing the Cross Subsidies

The U.S. Public Utility Regulatory Policies Act of 1978 started a flood of non-utility generation, initially a few very large cogeneration plants and more recently a large number of small roof top solar generators.  The large cogeneration plants sold power to utilities under individual contracts, such as those using the Ernst & Whinney Committed Unit Basis, which I designed in 1984 for the Texas Study Group for Cogeneration and which was adopted that year by name by the Texas Public Utilities Commission.  The concept was adopted in other jurisdictions, though generally without the explicit reference used by the Texas PUC.

The large number of roof top solar projects requires a generic approach to pricing the output of non-utility generation.  A simple expedient has been net metering.  When there is a single meter on a customer premises, the meter can only measure the net flow into the premises.  Any generation just reduces the amount of electricity that the customer takes from the utility.  At some times, the generation will exceed the customer’s consumption, resulting in an export of electricity to the utility.  Some jurisdictions extend the net metering concept to allow the exported energy to be offset against energy draws during other periods.
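A sketch of what a single net meter does and does not see, with invented hourly figures:

```python
# Hourly consumption and rooftop solar output for one premises (invented).
consumption = [1.0, 1.2, 0.8, 1.5]  # KWH drawn by the premises each hour
generation  = [0.0, 1.5, 1.4, 0.2]  # KWH produced by the rooftop panels

net_by_hour = [round(c - g, 2) for c, g in zip(consumption, generation)]
print(net_by_hour)                 # [1.0, -0.3, -0.6, 1.3]: negatives are exports

# A net meter's register accumulates only the net.  With full netting the
# bill is based on the sum, treating every KWH as equal in value no matter
# when it flowed.
print(round(sum(net_by_hour), 2))  # 1.4
```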

A problem with net metering is that the value of electricity changes rapidly, perhaps by a factor of 100 within the span of a few seconds.  Further, the value can swing between positive and negative, as utilities are increasingly stressed by surpluses, especially at night and during periods of rapidly changing meteorological conditions.  This confounding situation makes net metering less and less appropriate for the energy meter that has been the standard for measuring domestic consumption.  The advent of the smart grid and an automatic metering infrastructure may alleviate some of this issue, at least once the utility industry adopts real time pricing for retail consumption.

I mentioned the time span of a few seconds above.  Independent System Operators create a new economic dispatch order every 5 minutes, re-evaluating the relative merits of the available generation based on their relative fuel costs, as well as on transmission constraints that can change in less than a second, such as with a bolt of lightning.  The rapid changes that occur on the transmission system also occur on the distribution system, which should similarly impact the price charged to retail consumers.  Decreasing the time span for pricing mixed deliveries of electricity would reduce the subsidies that can occur with net metering.
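A sketch of the effect, with invented flows and prices (exports shown as negative KWH):

```python
# Four short pricing intervals with volatile prices (all numbers invented).
flows  = [0.5, -0.4, 0.6, -0.3]    # KWH: imports positive, exports negative
prices = [0.30, 0.05, 0.35, 0.02]  # $/KWH in each interval

interval_bill = sum(f * p for f, p in zip(flows, prices))
netted_bill = sum(flows) * (sum(prices) / len(prices))  # one average price

print(f"priced per interval: ${interval_bill:.3f}")  # $0.334
print(f"netted at average:   ${netted_bill:.3f}")    # $0.072
# Netting values the cheap-interval exports at the same price as the
# expensive-interval imports; the gap between the two bills is the subsidy.
```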


Utility 2.0 or Just Utility 1.X

On Tuesday, 2013 October 29, I attended a discussion of the report “Utility 2.0: Building the Next-Gen Electric Grid through Innovation.”  I left feeling that the innovations discussed are just more of the same, much as I have often described the smartgrid as SCADA[1] on steroids.  The innovations are not creating Utility 2.0 so much as making slow changes to the existing utility structure, just varying the X in Utility 1.X.

Electric utilities began automating the electric system as soon as Edison started his first microgrid, the Pearl Street Station.  At one time, an operator would read a frequency meter to determine the balance between supply and demand.  In the earliest days, Edison had a panel of light bulbs that would be switched on and off to maintain that balance, a strange form of load management.  The operator could also vary the generation by changing the water flow to a hydro-turbine, the steam from the boiler, or the fuel into the boiler.  Edison invented control mechanisms that were cheaper than the labor costs of the operator, control mechanisms that his companies also sold to other utilities.  These control mechanisms can be considered some of the first SCADA systems.  As the control mechanisms and telephony got cheaper and labor became more expensive, more labor saving devices could be installed.  The policy of having an operator at every substation was replaced by remote devices, lowering the cost of utility service.  The smartgrid concept is just more of the same, as computers become cheaper and faster, remote metering less expensive, and remote control easier to accomplish.

The true quantum change in utility operations occurred in federal law.  PUHCA[2] effectively prohibited private individuals from selling electricity to a utility, by defining the seller to be a utility, subject to utility type regulation and to prohibitions on non-utility operations.  Because of PUHCA, Dow Chemical operated its chemical plants as the ultimate microgrid, running asynchronously and unconnected to local utilities.  Dupont installed disconnect switches that would separate its microgrid chemical plant from the local utility if power began to flow out of the plant.  International Paper and Power became just International Paper.  Exxon intentionally underinvested in its steam plants, limiting its ability to produce low cost electricity.  PURPA[3] provided exemptions from PUHCA for cogeneration plants such as those mentioned here and for qualifying small producers using renewable resources.  The latter exemption was almost in anticipation of the growth of roof top solar photovoltaics (PV).  These facilities needed utility markets into which to sell their surplus, which generally resulted in individually negotiated contracts.  The creation of the ISO[4] concept can be considered an outgrowth of the desire by these large independent power producers (IPPs) for a broader, more competitive market, instead of the monopsony into which they had been selling.  ISOs now have a footprint covering about 2/3 of the contiguous U.S., excluding Alaska and Hawaii.

ISOs generally deal only with larger blocks of power, some requiring participants to aggregate at least 25 MW of generation or load.  ISO control generally does not reach down into the distribution system.  The continued growth of labor costs and the continued decline of automation costs have made the SCADA concept economic on the distribution grid, down to the customer level.  This expansion of SCADA to the distribution system will soon require changes in the way the distribution system is priced, both for purposes of equity and for Edison’s purpose of controlling the system.

  • The growth in rooftop PV is dramatically reducing the energy that utilities transport across their distribution systems.  This energy reduction generally reduces utility revenue and utility income.  Under conventional utility rate making, the result is an increase in the unit price charged by the utility for that service.  Some pundits point out that the owners of the rooftop PV panels are generally richer than the rest of the population served by the utility.  These solar customers are cutting the energy they consume, though not necessarily their requirements on the utility to provide service through the same wires.  The rate making dynamics thus result in other, poorer customers seemingly subsidizing the richer customers who have made the choice for rooftop solar.  This seems inequitable to some.
  • The growth in rooftop PV has outstripped the loads on some distribution feeders, with reports that the generation capacity has sometimes reached three times the load on the feeder.  These loading levels cause operating problems in the form of high voltages and excessive line losses.  During periods of high voltage and excessive line loss, prices can provide an incentive for consumers to modify their behavior.  The genie seems to be out of the bottle in regard to allowing the utility to exert direct physical control over PV solar, but real time prices could provide some economic control in place of the traditional utility command and control approach.

I have discussed the need for real time pricing of the use of the distribution grid in “Net Metering:  Identifying The Hidden Costs;  Then Paying For Them,” Energy Central, 2013 September 20.[5]  I have described a method in “Dynamic ‘Distribution’ Grid Pricing.”[6]

Changes in state regulations have also impacted this balance between labor costs and automation costs.  Some states now have performance incentives based on the number of outages and the typical restoration times.  The cost associated with the time of sending a line crew to close a circuit breaker now competes with the incentives to get that closure faster, through the use of automation.

In conclusion, the increase in utility automation is not so much innovation as a continuation of the historic utility practice of substituting lower cost technology for the ever increasing cost of labor.  The 1978 change in federal law led to the growth of ISOs and bulk power markets, but did not reach down to the distribution level, perhaps because of the lack of non-utility industrial support.  The growth in rooftop PV will provide the incentives for extending the real time markets down the distribution grid to retail consumers.  Though computers indeed have gone from 1.0 (vacuum tubes), to 2.0 (transistors), to 3.0 (integrated circuits), I don’t see the current changes proposed for utilities as much more than the continuing competition between labor costs and automation costs.  We are still Utility 1.X, not Utility 2.0.



[1] Supervisory Control And Data Acquisition.

[2] Public Utility Holding Company Act of 1935

[3] Public Utility Regulatory Policies Act of 1978

[4] Independent System Operator

[6] A draft of this paper is available for free download on my web page, www.LivelyUtility.com


A Romp Through Restructuring

Today I presided over the monthly lunch of the National Capital Area Chapter (NCAC) of the U.S. Association for Energy Economics, with Craig Glazer, Vice President-Federal Government Policy, PJM Interconnection.  Besides announcing future events and talking about the successful NCAC field trip of October 4-5,[1] I got to ask questions and comment as the luncheon moderator and President of NCAC.  I include some of those questions and comments below, along with several that were beyond what I felt like imposing on the luncheon attendees.

I liked that Craig mentioned that code words were often used in the industry, though not the ones I sometimes point out.  But when one questioner commented about the growth in distributed generation (DG), I pointed out that I look at DG as a code word for non-utility generation.  Nominally DG should be any generation on the distribution grid, but is generally used to restrict the ownership options.

Craig identified “Rates significantly above the national average” as one of the issues that drove the restructuring movement.  Unlike the children of Lake Wobegon, who are all above average, retail rates can’t be above the national average everywhere.  Thus, there are some parts of the country where restructuring was not an issue and the utilities have not been restructured.

Craig used the term “Half Slave/Half Free” to address the case of Virginia, where the State Corporation Commission still regulates retail rates but the generation and transmission systems participate in the competitive PJM market.  I noted that the result of restructuring was that the market value of electricity in my home location of Eastern Kentucky went from very low prices to moderately low prices, at least according to one of Craig’s slides.  But Craig had already made me feel better about this by telling of his trips to Kentucky to persuade the regulators to let their utilities join PJM.  He told them that one result of the Kentucky electric companies joining PJM would be higher utilization of Kentucky’s cheap power plants.

These power plants, having joined PJM, could sell their very low cost generation (the pre-restructuring picture) at moderately low prices (the post-restructuring picture), with the differential being used to reduce the prices paid by Kentucky residents.  As I pointed out, this is an example of Craig’s term “Half Slave/Half Free,” where he pushed the concept.  I also pointed out that a substantial portion of the country has not restructured, which was my initial thought when he mentioned the term.  So we went back to the issue that not all parts of the country would benefit from restructuring.

Craig stated that restructuring changed the risk allocation formula.  He made the point that there was no Enron rate case.  In other situations where utility investments were cratering, there were rate cases, but not with Enron in the restructured world.  Further, there was effectively not even a hiccup in the PJM bulk power market on the day that Enron collapsed, even though Enron had been a major player in the PJM bulk power market.

Craig says that capacity prices are too low.  I see capacity as a multi-year issue, requiring a multi-year solution.  Pre-restructuring, the utilities handled the variations in the need for capacity, and the value of capacity, through long term rates.  They built what they thought was needed and didn’t worry that the bulk power market went up and down; the utilities kept on trucking as vertically integrated entities.  Indeed, one of the problems that caused the California debacle of 2000/2001 was that the entire market was forced to pay the spot price of electricity.  The Texas market, in contrast, seems to be greatly hedged: when the bulk power market price went up by a factor of 10, on average, for the entire month of August 2011, the retail price hardly budged.

Craig made an excellent point in regard to the question of who decides what in the electric industry, providing a list of governmental entities.  I noticed that he did not mention the U.S. Department of Energy (of course, he was a substitute speaker, replacing Melanie Kenderdine, assistant to the Secretary of the U.S. Department of Energy, who had expected to be barred from speaking because of the shutdown of the federal government, which ended about 24 hours before the lunch).  He also listed state legislatures but not Congress.  And then the other decision makers are the owners of the facilities.

A continuing issue that I have with regulation is tangential to Craig’s “Half Slave/Half Free” term.  His PJM operates in parallel with several other entities.  I have frequently pointed to the Lake Erie donut[2], which is the path around Lake Erie that allows electricity to flow from Chicago to New York City along two major paths, north or south of Lake Erie.  I have said that when there is unscheduled loop flow, e.g., more going north of Lake Erie than has been scheduled, there should be payment for that unscheduled flow.[3]  The same issue applies to PJM versus TVA, which have lines in parallel.  Sometimes one system is paid for the contract path but some of the electricity actually flows on the other system.  And just south of TVA is the Southern Company, providing a fourth east/west path for loop flows.  I say that a mechanism to pay for loop flows may be one way to get around the transmission cost allocation and siting issues mentioned by Craig.

I note that I did not raise all of these issues during the lunch question and answer period; I spoke enough as it was.  Craig is certainly welcome to comment on this blog, as are others.



[1] See “NCAC-USAEE Overnight Field Trip of 2013 October 4-5,” 2013 Oct 07, http://www.livelyutility.com/blog/?p=233

[2] See my “Wide Open Load Following,” Presentation on Loop Flow to NERC Control Area Criteria Task Force, Albuquerque, New Mexico, 2000 February 14/15, on my web site, under publications under other publications.

[3] See my blog entry “Socializing The Grid: The Reincarnation of Vampire Wheeling,” 2011 Mar 17,  http://www.livelyutility.com/blog/?p=83


The Electric Transmission Grid and Economics

Tuesday, 2013 October 8, I went to the MIT Club of Washington Seminar Series dinner with Anjan Bose of Washington State University talking about intelligent control of the grid.  Anjan began by giving two reasons for the transmission grid, but then seemed to ignore that predicate in explaining what the government has been doing in regard to the grid.

The first slide identified two reasons for the electric transmission system.  The first was to move electricity from low cost areas (such as hydro-electric dams) to higher cost areas, an obvious reference to economics.  The second was to improve reliability.  Anjan did not get into the discussion of how that too is an economics issue, but it is.  Reliability is greatly improved by increasing the number of shafts connected to the grid.  We can produce the same amount of electricity with five 100 MW generators or one 500 MW generator.  The five units provide greater reliability but also higher costs, the higher costs coming from forgone economies of scale: higher installed cost per MW, less efficient conversion of the fuel into electricity, and the need for five sets of round-the-clock staff.  A transmission system allows dozens of 500 MW units to be connected at geographically dispersed locations, achieving the reliability of many shafts and the lower cost of larger generators.
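The reliability half of that trade-off is easy to quantify.  Assuming an illustrative 5% forced outage rate per unit and independent failures:

```python
# Probability of losing all 500 MW: one large unit versus five small units.
FORCED_OUTAGE_RATE = 0.05  # assumed probability any one unit is down

p_one_unit = FORCED_OUTAGE_RATE         # the single 500 MW unit is down
p_five_units = FORCED_OUTAGE_RATE ** 5  # all five 100 MW units down at once

print(f"one 500 MW unit:   {p_one_unit:.2%}")     # 5.00%
print(f"five 100 MW units: {p_five_units:.7%}")   # 0.0000313%
# The five shafts almost never fail together, but they give up economies
# of scale in installed cost, heat rate, and round-the-clock staffing.
```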

But the presentation had little to do with the economics of the power grid, or with investigations into those economics.  I noticed that much of the discussion during the question and answer period did address the cost of operating the grid, so people were indeed interested in money.

Anjan said that the financial people use different models than do the engineers who operate the system.  I have long said that we need to price the flows of electricity in accord with the physics of the system, by pricing the unscheduled flows.  The engineers and operators may plan to operate the system in a prescribed way, but the flows of electricity follow the laws of physics, not necessarily the way some people have planned.

Anjan said that deregulation[1] has caused a dramatic decline in new transmission lines, especially between regions, such as into and out of Florida.  My feeling is that new transmission lines would be added more willingly if the owners of the new lines were paid for the flows that actually occur on them.  For instance, twenty years ago a new high voltage transmission line in New Mexico began to carry much of the energy that had been flowing over the lower voltage transmission lines of another group of utilities.  The group of utilities called the service being provided “vampire wheeling” and refused to make any payment to the owner of the new transmission line.  The new line provided value in the reduced electrical line losses and perhaps allowed a greater movement of low cost power in New Mexico, but that value was not allowed to be monetized and charged.

I note that a pricing mechanism for the unscheduled flows of electricity would have provided a different mechanism to handle the 2011 blackout in Southern California, which began with a switching operation in Arizona.  Engineers swarmed to the area to find data to assess the root causes but were initially blocked by San Diego Gas & Electric’s attorneys, who feared that any data could be used by FERC to levy fines pursuant to the 2005 electricity act.  I remember a discussion at the IEEE Energy Policy Committee on that proposed aspect of the bill.  The IEEE EPC voted to suggest creating mandatory reliability standards.  I was the sole dissenting vote, arguing that the better way was to set prices for the unscheduled flows of electricity.  Thus, SDG&E and the Arizona utilities would have been punished by the market instead of risking a FERC imposed fine.



[1] I prefer to use the more accurate term restructuring, since the entire industry is still regulated, even though generation is often subject to “light handed regulation” by FERC, which approves concepts instead of specific prices.
