Thursday, October 22, 2015

Solar System: A First Look at the Data

My new solar panels, installed two months ago, have been working hard during the beautiful days of late summer and early fall.

Although the days have gotten shorter, the noon sun faces the panels most directly at this time of year—thanks to my steep roof. I can now report that under a clear sky and direct sunlight, the output of my system is typically about 950 watts. That’s the alternating current coming out of the microinverters, as reported by the monitoring system. For comparison, the nameplate rating on the panels themselves is 280 watts each, or 1120 watts total. I’m not sure how much of the difference between 1120 and 950 is due to atmospheric conditions, and how much is due to the losses in the DC-to-AC conversion.

To get an idea of the variability of the power output, you can look at the data on the Enphase Enlighten site. Here’s a plot of all the data from September on a single horizontal axis (click to enlarge):

This graph shows instantaneous power in watts. To calculate the total energy produced, you need to multiply the power by the time elapsed and then add that up for each time interval (the system records data in five-minute intervals). If the time is expressed in hours, then the energy will be in watt-hours; divide by 1000 to convert to kilowatt-hours (kWh), the power company’s billing unit.
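The power-to-energy conversion described above is easy to sketch in code. This is a minimal example using hypothetical wattage readings, not the actual Enphase data:

```python
# Sketch of the energy calculation: multiply each power reading (watts)
# by the sampling interval (hours), sum, and convert watt-hours to kWh.
def energy_kwh(power_watts, interval_minutes=5):
    """Total energy in kWh from power samples taken at fixed intervals."""
    interval_hours = interval_minutes / 60
    watt_hours = sum(p * interval_hours for p in power_watts)
    return watt_hours / 1000

# A steady 950 W for one hour (twelve 5-minute samples) is 0.95 kWh:
samples = [950] * 12
print(round(energy_kwh(samples), 2))
```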

On my system’s best day so far, September 18, its total energy output was 6.5 kWh. On its worst day, just two days earlier, the output was only 0.3 kWh. Fortunately, I live where the skies are not cloudy all day—at least not very often—so the system is averaging about 5 kWh per day.

I use some of that solar-generated electricity as it comes off the panels, but most of it gets pushed onto the grid for my neighbors to use. Then, at night and at other times when I need more power than the panels are producing, I pull what I need off the grid. The power company’s meter, on the back of my house, separately measures the power flowing in both directions, records both amounts of cumulative energy, and blinks between displaying the two amounts:

I took these photos on the morning of October 17, when the incoming energy (since the meter was installed on August 27) had reached 100 kWh (left) and the outgoing energy had reached 200 kWh (right).

By combining the solar monitor data with the net meter readings, I can construct a comprehensive picture of the energy flows through my house. Here’s the picture for the calendar month of September:

During this time period the solar system produced 151 kWh of energy, while the net meter reported that I pushed 114 kWh onto the grid. Therefore I must have used the other 37 kWh directly, as it was being produced. Meanwhile, the net meter reported that I pulled another 58 kWh off the grid, so my total household use was 95 kWh. (My usage is lowest in spring and fall, higher in the summer, and highest in the winter.)
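The bookkeeping in the paragraph above follows from the fact that the net meter never sees the solar energy I use directly. Here it is as a quick sanity check, using September's round numbers:

```python
# September energy balance: direct solar use is inferred, since the
# net meter only records the two grid flows.
produced = 151   # kWh reported by the solar monitor
exported = 114   # kWh pushed onto the grid (net meter)
imported = 58    # kWh pulled off the grid (net meter)

used_directly = produced - exported   # solar energy the meter never saw
total_use = used_directly + imported  # total household consumption

print(used_directly, total_use)  # 37 95
```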

Fortunately, the power company (under direction from the Utah Public Services Commission) lets me accumulate credits for energy pushed onto the grid, and apply them toward future months when I’ll use more energy than I produce. Here’s a copy of my first net-metering bill, covering the end of August and the beginning of September:

As you can see, they actually applied 32 kWh of my credits to the final reading off the old meter (which couldn’t distinguish incoming from outgoing energy, so it “charged” me for some of the energy I produced from August 19-27). Even so, I ended the billing month with 16 kWh of credits, and I have quite a bit more than that now.

I’m still getting billed the $6 “basic charge” that everyone pays for being connected to the grid, plus a $2 “minimum charge” for not using any (net) electricity. (So in effect, the basic charge is really $8 and they give you your first $2 worth of electricity for free. That’s not much electricity, but this practice still bugs me.) Add on the taxes and surcharges and my total bill comes to just over $9.

It’s only fair that I have to pay to be connected to the grid, because I really do depend on it. Here, for example, is a detailed plot of my solar production on the best day so far, with my “typical” electricity use superimposed:

The big spikes are from cooking: a pancake breakfast, toasting bread for the lunch I packed in the morning, and a pretty big meal in the evening. The little bumps that repeat about once an hour are from the refrigerator cycling on and off. There’s a bunch of miscellaneous activity in the evening, mostly from lights and my computer. Last but not least, there’s a baseline of about 40 watts that I'm using 24/7, for my modem, router, clock, smoke alarms, smart thermostat, solar monitor, and the electricity monitor that took this data.

(That electricity monitor is the Efergy Elite Classic and Engage hub system, which I installed soon after the solar panels. It’s a marvelous tool, and I really wish I had installed it earlier. But I also wish I had paid another $25 for the version that measures true power, because my microinverters have a nontrivial power factor that fools the Efergy Elite Classic, especially at night. Unfortunately, even Efergy’s “true power” meter apparently can’t measure the direction of energy flow, so it would give confusing data when my solar panels are active during the day. There are competing brands that lack this drawback but I haven’t tried them. In any case, I’ve had to manipulate my Efergy data quite a bit to produce the “typical” usage graph shown above.)

Because I use so much electricity when sunlight is scarce or absent, I can hardly claim that my home is 100% solar powered. I still depend very much on Rocky Mountain Power’s coal- and gas-fired power plants, which are steadily pumping carbon dioxide into the atmosphere and contributing to global warming. Consequently, I don’t consider my solar panels to be a license to waste electricity. Rather, they’ve inspired me to better understand and minimize my electricity use.

Here, then, is an estimated breakdown of my daily household electricity use, averaged over the seasons:

I obtained these estimates through a variety of measurements using my power company’s meter, my Efergy monitor, and a few handy Kill-a-watt meters. Even so, there’s a lot of guess-work involved in getting these annual averages, especially for seasonal contributions like heating and fans. I’ll have better data on heating after my first winter with the new smart thermostat.

My total household electricity use, as reported earlier, averages about 4 kWh per day. That’s quite a bit lower than the per-capita average here in the U.S., but not so different from most other industrialized countries. Notably absent from my household are such unnecessary luxuries as air conditioning, a second refrigerator or freezer, an electric clothes dryer, a television, or a hot tub.

Not everyone is in a position to invest in rooftop solar panels, but everyone can work to cut their unneeded electricity use—and save money in the process. As Mr. Money Mustache says, “Measure everything, then get angry at waste.”

Sunday, September 6, 2015

Solar System Installation

Until very recently I never considered myself a candidate for a rooftop solar photovoltaic system, because my electricity use is so low by U.S. standards. Surely, I figured, there are fixed costs that are the same for PV systems of any size, so a system that produces only four kilowatt-hours a day wouldn’t be economical. Better to just pay the power company a few extra dollars a month for wind-generated electricity. Besides, my greater home energy need is for heat—not electricity—so if anything, my steep south-facing roof should (I thought) be used for solar thermal panels that feed some kind of space-heating system.

But nobody in Utah seems to be in the business of retrofitting old houses with practical solar space-heating systems (and for a clumsy tinkerer like me, designing such a system from scratch, though tempting, would be incompatible with holding down a day job). Meanwhile, PV keeps getting cheaper, and Utah has a generous 1:1 net-metering policy, plus a 25% state tax credit on top of the 30% federal tax credit. The last straw was the Susie Hulet Community Solar program, which offers attractive pricing that scales down linearly (except for the city permit fee) to arbitrarily small installations. With the encouragement of my colleague John Armstrong and the good people at Utah Clean Energy, I signed up as soon as the program got up and running, at the end of May.

(At about the same time, I also got a bid from another reputable installer who apologized for not being able to offer me a decent price on such a small system, and suggested I look into the Susie Hulet program instead.)

Apparently I wasn’t the only one who signed up as the program began, because it took the contractor (Gardner Energy) several weeks to process all the applications, conduct site visits, and prepare contracts. On July 9 they gave me my installation date: August 19. Then I patiently waited while the summer sun beat down on my roof.

Finally the day arrived, and the Gardner truck pulled up to my curb with four solar panels strapped to the bed and a trailer full of tools in tow:

The crew of three wasted no time getting to work. Chad and Chase got up on the roof, tied themselves to the chimney, and began installing mounting brackets:

Meanwhile Patrick, the electrician and crew leader, ran the wires from the attic down to my electrical panel:

Back on the roof, the mounting rails came next:

By lunch time the mounting hardware was all in place, along with most of the electrical components:

Each of the four panels gets its own Enphase M250 microinverter:

After lunch, Patrick installed the second electrical box:

And before long it was time to hoist up the first of the four SolarWorld Sunmodule Plus 280-watt mono black panels:

The three remaining panels quickly followed:

With all four panels installed and connected, the crew’s work was done before 4 pm. Hooray for Chase, Patrick, and Chad!

The system came with this cool monitoring unit, which reads data from the inverters off the power line, displays the current power level, and beams it via wifi onto the internet:

But I had to get a new wifi router, because we couldn’t figure out how to get the Enphase monitor to talk to my Apple Airport Express. I’ll try to post some of the data later. Meanwhile, you can view it here.

The solar system connects to a new 240-volt breaker in my electrical panel:

The city inspector came to check the wiring just five days after the installation. Then, after three more days, Rocky Mountain Power installed my new net meter:

The meter’s LCD display blinks between showing the energy I’ve pulled off the grid and the energy I’ve pushed onto it. So far, after ten days, those numbers are 25 and 37 kilowatt-hours, respectively. But the Enphase monitor data says I’ve generated a total of 51 kWh during this time, so I must have used another 14 kWh as it came off the solar system, which the meter never saw.

Gardner predicts that this system will generate a total of 1657 kWh per year, and I’ve been using only about 1400 kWh/year, so in a sense I can now claim that “all” of my home’s electricity is solar. But only a fraction of the solar energy is being produced when I need it, so I’m still very much dependent on the grid, and on the coal- and gas-fired power plants that power that grid through the nights and cloudy days.

What about cost? The sticker price of my solar system came to $4251.41, including $260.61 for the Ogden City permit. But I expect to recover 55% of the cost through the federal and state tax credits, so my net up-front cost should be a little over $1900. Under the current rates and net-metering policy I should save about $10/month on my electricity bill (I’ll still pay the $8 minimum monthly fee), so the system would pay for itself in 16 years if rates and policies don’t change. Inevitably the rates and policies will change over that time, so my $1900 investment is rather risky.
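The payback arithmetic above is straightforward; this sketch just reproduces it from the post's figures (the $10/month savings assumes current rates and policies hold):

```python
# Rooftop solar payback, using the numbers quoted above.
sticker = 4251.41                     # total system cost, incl. city permit
net_cost = sticker * (1 - 0.55)       # after 30% federal + 25% state credits
monthly_savings = 10.0                # assumed constant rates and net metering
payback_years = net_cost / (monthly_savings * 12)

print(round(net_cost), round(payback_years, 1))  # ~1913 dollars, ~16 years
```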

If you’re thinking of installing your own solar system, be aware that the return-on-investment calculation depends on all sorts of details that will vary from one installation to another. In all cases, however, we’re talking about thousands of dollars. Before you even consider spending that kind of money, I would strongly urge you to invest the effort to find and eliminate wasteful electricity uses in your home. Mr. Money Mustache has a great article on how to do that. Get yourself a Kill-a-Watt meter at the very least!

Finally, from a broader perspective, let me point out that it’s not very efficient to pay young men to risk their lives up on roofs, installing solar panels a few at a time. At least in Utah where electricity is cheap, the rooftop solar business is viable only because of the tax incentives—and even then, it works only for homeowners with suitable, unshaded roofs and cash to invest (or at least good credit). If the goal is to reduce carbon emissions, it would be far more efficient for society to invest in utility-scale solar farms. Then the economy of scale, ease of installation, and optimized siting would make government subsidies superfluous. But here in Utah our elected officials don’t even believe global warming is real, while they’re happy to provide government subsidies to well-off rugged individualists. So for now, rooftop is the only solar game in town.

Sunday, August 30, 2015

Why the Cost of College Has Tripled

It’s back-to-school time, so again people are talking about the rising cost of college. I wrote about this issue two years ago, and produced a plot showing how college tuition has increased faster than virtually any other component of the U.S. Consumer Price Index. Here’s an updated version of that plot, showing the relative cost of various types of goods and services compared to the overall CPI, since 1978 (the first year for which college tuition has its own CPI category):

As I said before, it’s not hard to understand the basic economics shown in this plot. Manufactured goods have become cheaper over time, as manufacturing has been automated and outsourced. The cost of professional services has therefore risen in comparison. College is often the ticket into high-paying service professions, so the demand for college and the willingness to pay for it have risen even more.

But even if we understand why people are willing to pay ever-higher tuition, this fact doesn’t tell us where all that money is going. Has the actual cost of educating a student more than tripled since 1978, and if so, how is that possible?

The answer to this question depends on whether we’re talking about public or private colleges (and universities). We can separate the two sectors, and also look 15 years farther back in time, by going to the Education Department’s Digest of Education Statistics. Here’s the Digest’s tuition data in constant (2013-14) dollars:

Obviously the private colleges charge much higher tuition than the public ones. Notice also that tuition gradually decreased, in real dollars, from the mid-1970s through the early 1980s, probably because colleges lagged in keeping up with the double-digit inflation of that era.

If you look closely at this second graph, you’ll see that since the 1970s tuition has increased slightly faster, in percentage terms, at the public schools than at the private ones. And even at the public schools the increase has been only about 200%, slightly less than what’s shown on the CPI graph. I don’t know the reason for this slight discrepancy, but the fact remains that tuition has roughly tripled over the last 35 years. Again, where is all this money going?

Let me first answer the question for the public colleges, which currently enroll 72% of all students and 69% of full-time students. Based on the data I’ve found (described below), it appears that the cost of an education at these schools has increased since the late 1970s, but only by about 20% (after accounting for inflation). However, these schools receive a great deal of their revenue from state appropriations, and that revenue, on a per-student basis, has declined by about 25%. Amazingly, the combination of these two 20-25% effects has resulted in a tuition increase of roughly 200%.

To show how this is possible, let me present a grossly simplified “toy” model that uses rounded numbers and ignores a variety of complications as well as all the little bumps and dips in the actual data:

In today’s dollars, the actual annual cost of educating a full-time student was about $10,000 back around 1980 and has increased about 20%, to about $12,000 today. Meanwhile, state funding of higher education has declined, on a per-student basis, by about 25%, from $8000 to $6000. This means that the average tuition has had to triple, from about $2000 to $6000. Simple arithmetic has combined 20% and 25% to yield 200%.
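The toy model's arithmetic can be verified in a few lines. The key point is that tuition is the remainder after state funding, so modest changes in the two larger numbers are greatly amplified in the difference:

```python
# Toy model of public-college tuition (rounded figures, today's dollars).
cost_1980, cost_now = 10_000, 12_000    # per-student cost of education: +20%
state_1980, state_now = 8_000, 6_000    # per-student state funding: -25%

# Tuition covers whatever state funding does not:
tuition_1980 = cost_1980 - state_1980   # $2000
tuition_now = cost_now - state_now      # $6000
increase = (tuition_now - tuition_1980) / tuition_1980

print(f"{increase:.0%}")  # 200%
```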

To construct this toy model I relied on the tuition data shown above, along with data from The College Board’s annual Trends in College Pricing reports. Figure 18A of the latest Trends report shows that state and local appropriations currently cover about half the cost of education at public colleges (more at two-year schools but less at four-year schools), and that this share has been decreasing in recent years. Figure 16B shows the history of state appropriations in more detail back to 1983-84, and the corresponding figure in the 2010 Trends report goes back to 1979-80. Here I’ve plotted state funding relative to its value in 1979-80, comparing the total amount to the amount per student:

The decrease in per-student funding from 1979-80 to 2013-14 was almost exactly 25%, so that’s the number I used in my toy model. But the bumps in the data (caused mostly by economic ups and downs) have been large, so you can get very different overall changes by choosing slightly different starting and ending years.

It’s important to note, meanwhile, that total state funding of higher education has increased over time, even after allowing for inflation. As you can see, the increase since 1980 has been about 25%. The decrease in per-student funding has been caused by a combination of two further effects. First, the U.S. population has grown by about 40% since 1980, and the working-age population has grown by about the same amount, so state funding for higher education has not kept up with the growth in the population or the tax base. Second, college enrollments have grown faster than the overall population (and also faster than the college-age population). Here is a graph of full-time-equivalent enrollments as a percentage of the total population, since 1950:

Whereas attending college was once the privilege of a small elite fraction of Americans, it is now commonplace among the middle class. And while most of us celebrate this transformation, we need to realize that it doesn’t come for free. The increasing number of college students has caused the total cost of educating them to grow into a substantial chunk of the U.S. economy. Somehow society has to pay that cost.

In any case, the toy model shown above is based on actual (rounded) data for the current levels of tuition and state funding, the decline in state funding per student, and the observed growth in tuition. From those numbers it’s a simple matter to calculate that state funding provided about 80% of the total cost in 1980, and that the total per-student cost of education has increased by about 20% since then. (It would be nice, of course, to corroborate these results with independent data, but I don’t know where to find such data.)

And why has the per-student cost of education increased, even if only by 20%? Probably for many reasons, which I hope to explore more carefully in a later article. In brief, it appears that expenditures for faculty salaries have been almost unchanged (on a per-student basis, after allowing for inflation), although there has been a significant rise in the number of part-time faculty. Meanwhile, there has also been a steep rise in the number of professional staff, as well as a steep rise in the cost of medical insurance for all full-time employees. Other possible factors are non-staff expenses such as academic and nonacademic buildings, library books, journals, computers, software, and student financial aid. The important thing to remember is that even small increases in any of these expenses have had amplified effects on tuition (or on mandatory student fees, which are included in the tuition statistics), because state funding has not increased to absorb any of the increases.

Finally, what about the private colleges and universities? Given that they never had any state funding to begin with, you might expect their tuition to have increased by only about 20%, to absorb the same increased expenses as at the public schools. Yet they’ve actually raised tuition nearly as much as the public schools: about 150% (above inflation) since the late 1970s. Where is all that money going?

There’s good data to show that faculty salaries have been increasing faster than inflation at the private colleges, so that’s one difference. It also seems likely that the private schools have been spending increasingly more than the public ones on almost everything else: professional staff, buildings, computers, and so on. It would be interesting (but difficult) to explore whether these disparate expenditures have affected the relative quality of private vs. public education over the years.

A critical difference, meanwhile, is that the more expensive private colleges tend to provide large amounts of need-based financial aid to many of their students. In other words, the advertised “sticker price” applies only to those who can afford to pay it, and these wealthy families subsidize students who are more needy. Perhaps one could construct a toy model of the interplay between this practice and rising costs and tuition over time.

But let’s not lose sight of the big picture here. Private colleges enroll only 30% of all college students, and they couldn’t get away with raising tuition by 150% if the public colleges weren’t raising it by 200%. That increase is being driven by a variety of modest cost increases, amplified and greatly exacerbated by the decline in state funding per student.

Wednesday, July 15, 2015

Beyond Coal: U.S. Energy in Historical Perspective

I just read a fascinating article on the so-called “war on coal” that has shut down a significant fraction of U.S. coal-fired power plants over the last several years. What was almost unthinkable just a few years ago has become a reality, thanks to a confluence of technology (shale gas extraction, wind power, and efficiency), economics (the great recession), government regulations (thanks, Obama!), and environmental activism (the Sierra Club’s “Beyond Coal” campaign, funded by Michael Bloomberg).

The article is accompanied by a graph that shows all the sources of U.S. electricity over the last 30 years, highlighting the dramatic (roughly 20%) decline of coal since 2007—even while coal remains larger than any other electricity source.

I love graphs like that, but I wanted a longer-term perspective and I also wanted to visualize the data a little differently. So I pulled the data from the EIA web site and plotted it up as a stacked area chart, going back to 1950:

The recent decline of coal is all the more striking when juxtaposed with its remarkably steady rise over more than 50 years. Though if you look closely, you’ll see that the rise had already flattened out before 2007.

The advantage of the stacked area chart is that it also shows the total electricity generation at a glance—and the behavior of the total is also striking. After an almost uninterrupted rise from 1950 through 2007 (with just a couple of hiccups due to the oil price spikes of the 70s and early 80s), U.S. electricity generation (and consumption) stopped growing in 2008. Even though our economy has recovered in most respects since 2009, our electricity use hasn’t quite regained its pre-recession peak. I won’t try to predict whether it will do so in the coming years.

Meanwhile, there’s so much more to notice on that graph. Look at the rise and fall of petroleum as an electricity source. Marvel at the rapid rise of nuclear power and how steady it has remained in recent decades. And don’t overlook that expanding sliver of green at the top, which now comes mostly from wind energy (4.5% of total U.S. electricity in 2014).

To get a better view of wind energy and the other minor contributors, here I’ve plotted the same data on a logarithmic scale (with no stacking):

On this graph, a straight, upward-sloping line corresponds to exponential growth (a fixed percentage increase each year). It’s interesting to look at how each electricity source has experienced a period of approximately exponential growth at some time in the past, but these periods always end when that growth runs up against practical limits. The exponential growth of wind has recently slowed, but now solar-generated electricity is in a period of dramatic exponential growth. Let’s hope this period lasts a little longer!

I find it remarkable, though, that the log-scale graph of total U.S. electricity generation is almost entirely concave-down. The very rapid exponential growth of the early 1950s slowed somewhat in the 60s, then slowed a lot more after 1973, then slowed to a crawl after 2000, and has now more or less stopped.

Of course, electricity isn’t the same as energy. For a bigger-picture view we should also include fuels used for heating and transportation and industrial machinery. The energy sources used for all these things, including electricity generation, are called “primary” energy, and EIA actually has estimates of primary energy use, by source, going back to the founding of the American colonies. For the first 200 years the only important source (besides muscle power, which EIA doesn’t count) was wood. I’ve started the following graph in 1850, when coal makes its first appearance:

The units on this graph are quadrillions of British thermal units, or “quads” for short. One quad equals 293 billion kilowatt-hours, but the inherent inefficiency of heat engines means that a quad can generate only about 100 billion kWh of electricity. Roughly, therefore, the current annual total of about 4000 billion kWh on the electricity graphs requires about 40 quads of primary energy. The other 60 or so quads of primary energy go toward transportation, heating, and industry. (To see a careful breakdown of how each of these energy sources is used, look at the latest energy flow chart from Lawrence Livermore National Lab.)
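The unit conversion in the paragraph above works out as follows (the factor-of-three heat-engine efficiency is the rough average assumed in the text):

```python
# From quads of primary energy to billions of kWh of electricity.
# 1 quad = 1e15 BTU; 1 kWh = 3412 BTU; assume ~1/3 thermal efficiency.
BTU_PER_KWH = 3412
quad_kwh_thermal = 1e15 / BTU_PER_KWH        # ~293 billion kWh of heat
quad_kwh_electric = quad_kwh_thermal / 3     # ~100 billion kWh of electricity

# Quads needed for ~4000 billion kWh/year of U.S. electricity:
quads_for_us_electricity = 4000e9 / quad_kwh_electric
print(round(quads_for_us_electricity))  # roughly 40
```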

(A couple of technical notes on the primary energy data: First, the numbers from before 1949 are estimated from various sources and are provided by EIA at only 5-year intervals, so there could be important details that are missing. Second, for non-thermal electricity sources like hydro, wind, and photovoltaic solar cells, EIA defines the “primary” energy to be the amount of some other fuel that would produce (on average) the same amount of electricity. This fictitious accounting allows for fair comparisons between thermal and non-thermal electricity sources.)

Looking at the graph above, notice that coal provided more than half of all U.S. energy from about 1885 through 1940. During that era our cities were badly polluted with soot. My own house, built in 1935, was originally heated with coal; the coal room in the basement now stores assorted outdoor equipment and other hardware. Nowadays, coal burning occurs almost exclusively at electric power plants, mostly outside major cities.

Again it’s also useful to plot the same data on a logarithmic scale, with no stacking:

Here you can see the early growth of each major energy source in detail, notice how they were affected by the Great Depression and the 1970s, and mentally extrapolate to the right to envision a variety of possible energy futures. Petroleum remains our largest single energy source, a distinction it has held since 1950. Biomass is making a bit of a comeback, thanks mostly to ethanol added to motor fuels. Wind and solar are tiny in comparison to the fossil fuels, but their extremely rapid growth is encouraging. The recent flattening of total energy use is even more apparent than for electricity alone, extending back to the late 1990s when all forms of energy are included.

For an even bigger picture I should really plot energy use for the entire world, rather than just the United States. One of the best sources of worldwide energy data is the BP Statistical Review of World Energy. The data in the BP Review goes back only to 1989, but at least it gives the big picture since then.

According to the BP Review, Europe’s coal use was on the decline already in 1989, though it has been fairly stable in recent years. Far outweighing the declines in Europe and the U.S., however, has been the phenomenal increase of coal use in China, especially during the 2000s. China now uses approximately half of the world’s coal, and its per-capita use is now about the same as in the U.S. (although its per-capita use of petroleum and natural gas is much less than ours). Even China’s use of coal, however, was fairly stable for the last couple of years and now seems to be decreasing. And it should be pointed out that a significant fraction of energy use in the developing world goes toward manufacturing products for export to wealthier countries. The coal used to make your iPhone is not included in the graphs on this page.

Saturday, June 27, 2015

Air Travel

After carefully tallying up my home energy use and the associated carbon emissions, I realized that for context (and out of curiosity) I should do the same for my personal travel.

For daily commuting and most short errands I pedal a bicycle: no fossil fuels used there, and no more carbon emissions than if I were merely exercising for health and enjoyment.

For most longer trips (and some short ones) I drive, and I’ve kept track of the odometer readings and approximate fuel economy of all 2.5 of the cars I’ve ever owned. But I usually don’t drive alone, and I’ve never kept records of exactly how often I do, so it would be tricky to figure out my personal share of the associated gasoline and CO2. I’ll try to make an estimate anyway, but not today.

I’ve occasionally ridden on buses and trains, but not often enough for either to have made a significant contribution to my energy/carbon footprint.

That leaves air travel, which in many ways is the most interesting. It didn’t take me long to go through old credit card statements and other records, to reconstruct a list of every trip I’ve ever taken by plane. With just a bit of guess-work I count 71 trips over 35 years. Here’s a plot of my air travel history:

I never flew at all as a child; my first four flights were trips home from college (to St. Louis from Minnesota). Then in 1984 I flew to visit graduate schools on both coasts, and chose to attend one in California. That choice left me making regular flights back east to visit family and friends over the next seven years (including one year in my first full-time job). In 1991, after three flights for job interviews, I moved to Iowa—within driving distance of my immediate family but now a long flight from professional collaborators back in California. In 1993 there were more job interviews, plus my longest trip ever, to a conference in Hawaii. But I ended up in Utah, from which I’ve regularly flown to visit family and to attend professional conferences and workshops. Recently, since my dad’s final illness in 2011, my personal air travel has declined.

The mileages in the chart are somewhat uncertain because I don’t remember the locations of all the intermediate stops and transfers. But by my best estimate I’ve flown a little under 200,000 miles, in a little over 200 separate up-and-down flight legs. Over the last ten years I’ve averaged 3800 miles per year, and my lifetime average (since birth) is about the same. But as you can see from the chart, I was averaging twice that amount during grad school and for several years afterwards.

So is 3800 miles/year a lot or a little? The answer is both, depending on the standard of comparison:
  • The total number of passenger-miles for all U.S. domestic flights is about 600 billion per year. If we divide that by the U.S. population (320 million), we get an average of about 1900 miles/year per person. So my 3800 miles/year is about twice the national average. (If you include international flights, then the average American probably flies somewhat more than 1900 miles/year—but nowhere near twice as much.)
  • Worldwide, annual air traffic comes to about 4 trillion passenger-miles, or about 550 miles per person. So my 3800 miles/year is nearly seven times the world average.
  • Among my friends, on the other hand, 3800 miles/year seems to be on the low side. Most of my friends are well-educated, upper-middle-class professionals who, like me, travel for professional reasons and to visit families and friends scattered across the U.S. Unlike me, however, most of them also travel overseas occasionally. And many of them just seem to fly more often than I do. My guess is that most of my friends fly about twice as much as I do today, or about as much as I did 20-30 years ago. A few of them fly significantly more than that. One of my acquaintances has accumulated nearly two million frequent-flyer miles on a single airline.
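The per-capita comparisons above are easy to check. Here’s a sketch of the arithmetic, using the traffic and population figures quoted above (the world population of 7.3 billion is my assumption for 2015):

```python
# Per-capita air-travel arithmetic, using the figures quoted above.
us_passenger_miles = 600e9       # U.S. domestic passenger-miles per year
us_population = 320e6
world_passenger_miles = 4e12     # worldwide passenger-miles per year
world_population = 7.3e9         # assumed approximate 2015 world population

us_avg = us_passenger_miles / us_population          # ~1900 miles/year per person
world_avg = world_passenger_miles / world_population # ~550 miles/year per person

my_miles = 3800
print(round(us_avg), round(world_avg))               # 1875 548
print(round(my_miles / us_avg, 1), round(my_miles / world_avg, 1))  # 2.0 6.9
```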
And what about my CO2 emissions from flying? The most helpful resource I’ve found for calculating this is a 2008 report from the Union of Concerned Scientists titled Getting There Greener: The Guide to Your Lower-Carbon Vacation. This report compares the carbon emissions from flying, driving, and riding buses and trains, for trips of different lengths and for different numbers of travelers. Appendix B, in particular, lists average per-passenger emissions for two dozen types of commercial aircraft, broken down into per-flight and per-mile contributions. The numbers include an additional 20% to account for emissions associated with the production and distribution of jet fuel.

Based on these numbers, I calculate that for my typical mode of flying (coach class on a narrow-body jet with an average flight leg of 950 miles), the average emission rate is 0.415 pounds of CO2 per mile. Multiplying by 3800 miles/year, I find that my flying contributes 1600 pounds of CO2 to the atmosphere in an average year. (It was much higher 20-30 years ago, when I was flying twice as much and planes were less efficient—mostly because they tended to be less full.)
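The multiplication behind that 1600-pound figure is simple enough to write down explicitly:

```python
# Estimated annual CO2 from flying, per the UCS per-mile figure quoted above.
lbs_co2_per_mile = 0.415   # coach, narrow-body jet, ~950-mile legs (includes the 20% fuel-cycle markup)
miles_per_year = 3800      # my ten-year average

annual_lbs = lbs_co2_per_mile * miles_per_year
print(round(annual_lbs))   # 1577, i.e. about 1600 pounds per year
```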

As always with such estimates, this result is rather fuzzy because of all the approximations and assumptions that went into it. Even if all of my calculations are perfectly “accurate,” I haven’t included the emissions associated with manufacturing the aircraft, or operating the airports, or ground transportation. Also, aircraft have other climate impacts besides CO2 emissions, and I’ve applied no enhancement factor to account for these effects.

In any case, 1600 pounds per year is a significant contribution to my total carbon footprint—probably about 10% of the total—but not as large as the contributions from driving or food production or heating my home. For the average American, who flies only half as much as I do but uses much more gasoline and electricity, flying is actually a pretty small fraction of the total carbon footprint. And the same is true worldwide, because most people fly so much less than Americans do.

Although air travel may not currently seem like the biggest carbon concern, it will inevitably become a bigger issue in the future. Global passenger air transportation is currently growing at a rate of about 6% per year, five times as fast as the population growth rate. Further efficiency gains in air transportation will be small, and there’s currently no alternative to petroleum-based jet fuel.

The real issue with flying is that it’s so unequal. Rich people tend to fly a great deal, and increasing numbers of the middle class are becoming wealthy enough to fly 10,000 miles a year if they want to. If the world average ever gets close to that level, petroleum prices will soar and the impact on earth’s climate will be catastrophic.

Monday, May 25, 2015

Home Energy Use

Several of my friends have been receiving home energy use reports for the last few months, comparing their electricity and natural gas use to the average of their neighbors. I wasn’t selected to participate in this program/study, but I’m glad it has generated so many discussions about energy conservation. Meanwhile, folks are talking more and more about rooftop solar photovoltaic systems, which are now more or less paying for themselves even in Utah, where electricity is cheap.

As a numbers guy, I’ve always paid attention to my own utility bills, trying to understand (at least in broad outline) how much energy I was using and how I could use less. And I’ve saved my utility bills for many years, so I can document exactly what I’ve used.

Here’s a plot of my monthly electricity use for the last 16 and a half years, since I bought my house. The vertical scale is in kilowatt-hours per day, plotted for each billing month, so multiply by 30.4 to get the typical monthly use, or divide by 24 to get the average power in kilowatts:
There’s quite a bit of information in this graph:
  • The three highest spikes are from when I had renters or guests (one to three at a time) living in my basement.
  • Soon after the first of these renters moved in, in September 2001, I bought a new refrigerator for the kitchen and moved the old refrigerator into the basement for the renter to use. The old fridge used about 3 kWh/day and the new one uses only 1 kWh/day (as measured with a handy power meter), so when the renter moved out in early 2002 and I unplugged the old fridge, my household electricity use dropped by about 2 kWh/day from what it had been before. (The new fridge cost $650, but it saves me about $70 a year, so it paid for itself in nine years.)
  • There are some pretty reliable seasonal cycles. I use the most electricity in the winter, thanks to the furnace fan, a space heater, an electric blanket, and having more lights on. I also use somewhat more in July and August than in the spring and fall, because the refrigerator works harder then and I use fans to keep cool.
  • Finally, there’s been a gradual increase in my electricity use over the last 13 years. I need to make some measurements to figure out exactly why, but I suppose I’m using the fans and heaters more as I become old and soft, and my laptop computers have gotten greedier for power over time. Also, since the beginning of 2012 I’ve been spending about half of every work week at home, helping to edit the American Journal of Physics.
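The refrigerator payback mentioned above works out as follows. This is just a sketch; the electricity price is my assumption, chosen to be consistent with the quoted savings of about $70 a year:

```python
# Refrigerator-swap payback, using the figures quoted above.
savings_kwh_per_day = 3 - 1    # old fridge ~3 kWh/day, new fridge ~1 kWh/day
price_per_kwh = 0.10           # assumed rate, roughly consistent with ~$70/year in savings
fridge_cost = 650

annual_savings = savings_kwh_per_day * 365 * price_per_kwh  # ~$73/year
payback_years = fridge_cost / annual_savings
print(round(annual_savings), round(payback_years, 1))       # 73 8.9
```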
At present, my electricity use averages just under 4 kWh/day, or about 160 watts. For comparison, the average U.S. household uses about 30 kWh/day, or 12 kWh/day per person. I use less than average because my house has no air conditioning, and because my refrigerator and lights and computer are all pretty efficient. I do cook with electricity, but I hang my clothes (indoors) to dry. And I don’t indulge in power-hungry extravagances like a second refrigerator or freezer or hot tub.
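The unit conversions in that paragraph (and in the graph’s vertical scale) look like this:

```python
# Converting average daily electricity use to monthly energy and average power.
kwh_per_day = 3.9                      # "just under 4"
monthly_kwh = kwh_per_day * 30.4       # typical billing-month total, ~119 kWh
avg_watts = kwh_per_day / 24 * 1000    # average power, ~160 W
print(round(monthly_kwh), round(avg_watts))
```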

Still, my home electricity use is far from negligible. It’s pretty close to the household average (counting only electrified households) in China and Mexico; it’s nearly twice the total per-capita use (including all commercial and industrial uses) in India; and it’s a hundred times greater than the total per-capita use in some African countries.

If all my electricity came from coal, the resulting CO2 emissions would be about 3000 pounds per year. The actual carbon footprint is smaller, but by an amount that’s hard to pin down, because of the way electricity from natural gas and renewables is mixed into Utah’s grid. I actually pay Rocky Mountain Power an extra $3.90 per month to participate in their Blue Sky program, nominally buying 200 kWh of wind-generated electricity—enough to cover 170% of what I use. For about $1500, after federal and state tax incentives, I could install enough rooftop solar panels to cover my household use, and thereby reduce each of my monthly bills by about $10. Neither wind nor sunshine, however, is always available at the times when I’m using electricity, so neither provides complete freedom from the fossil fuels that dominate Utah’s electrical grid.
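For the curious, here’s how those two numbers come about. The coal emission factor is my assumption (a typical per-kWh figure for coal-fired generation, consistent with the ~3000-pound estimate above); the rest are figures quoted in the text:

```python
# Coal-only CO2 estimate and Blue Sky coverage, per the figures above.
kwh_per_day = 3.9
lbs_co2_per_kwh_coal = 2.1     # assumed emission factor for coal-fired electricity

annual_lbs = kwh_per_day * 365 * lbs_co2_per_kwh_coal  # ~3000 pounds/year
blue_sky_kwh_per_month = 200                           # nominal wind purchase
coverage = blue_sky_kwh_per_month / (kwh_per_day * 30.4)  # ~1.7, i.e. 170%
print(round(annual_lbs), round(coverage, 2))
```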

Meanwhile, there’s a carbon-emitting elephant in the room that I haven’t yet mentioned: natural gas, which my house uses for space heating and water heating. Here is a plot of my monthly gas use over the last 16 and a half years:
I’ve again plotted my average daily use for each billing month, in millions of BTUs (the gas company’s billing unit, also called decatherms). Along the right side I’ve multiplied by 300 to convert this unit to approximate kilowatt-hours (the more accurate conversion factor would be 293), to facilitate comparison to my electricity use. Notice the following:
  • Nearly all of my natural gas use is in the winter. Water heating in the summer is small by comparison.
  • My 2001-2002 renter produced a significant spike, as we kept the basement warmer than usual. My other renters/guests don’t show up on this graph because they weren’t around in the winter.
  • In December 2003 my old (from 1980 or so) furnace died, and the house was without heat for a week or two before I had a new one installed. The new one is a “condensing” furnace, rated at 92% efficiency because it sends less heat up the chimney. At the same time, I moved the thermostat from the front room to the back of the house, so I could close off the front room and avoid heating it for most of the winter. These changes reduced my gas use by more than 40%. The new furnace has just about paid for itself in the 11 years since it was installed, so it would have been a good investment even if the old one hadn’t died.
  • Any other changes or trends (such as the new storm windows that I got in 2011) are indiscernible due to the weather-caused variations.
  • Even with the new furnace, my average daily gas use is about 0.08 MBTU, or 23 kWh: six times as much energy as I use from electricity.
I use a lot of natural gas because my house, though small, is 80 years old and poorly insulated. But the factor of 6 is somewhat misleading, because when electricity is generated from fossil fuels (or nuclear fuel, for that matter), only about a third of the energy in the fuel is actually converted to electricity. (The rest is given off as waste heat at the power plants, and the second law of thermodynamics says there’s not much we can do about it.) So instead of a factor of 6, we could say that my natural gas use is only about twice the amount of fuel that I cause to be burned for electricity. Perhaps coincidentally, the amount that I pay for natural gas is also close to twice what I pay for electricity (about $20/month on average vs. $10), if you neglect the flat fees that are charged just for being hooked up to these utilities.
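The factor-of-6 and factor-of-2 comparisons in that paragraph can be sketched as:

```python
# Comparing natural gas use to the fuel burned to generate my electricity.
gas_kwh_per_day = 23        # 0.08 MBTU/day, converted at ~293 kWh per MBTU
elec_kwh_per_day = 3.9      # my average electricity use
plant_efficiency = 1 / 3    # fraction of fuel energy converted to electricity

fuel_for_elec = elec_kwh_per_day / plant_efficiency    # ~12 kWh of fuel per day
print(round(gas_kwh_per_day / elec_kwh_per_day))       # 6: gas vs. delivered electricity
print(round(gas_kwh_per_day / fuel_for_elec, 1))       # 2.0: gas vs. fuel burned for electricity
```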

Burning one MBTU of natural gas emits 117 pounds of CO2, so my annual CO2 emissions from burning natural gas come to 3370 pounds—unambiguously more than the emissions from my electricity use. Thus, even if I reduce my electricity-related carbon emissions to zero, I shouldn’t feel too proud of myself unless I also reduce gas use. Unfortunately, I may have no good cost-effective ways to do that. One option might be to turn the thermostat down and rely more heavily on electric heating pads and blankets and space heaters—and then invest in a rooftop solar system that’s big enough to offset the electricity used by these appliances.
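That 3370-pound figure follows directly from the quoted emission factor and my average daily gas use:

```python
# Annual CO2 from natural gas, per the 117 lb/MBTU figure quoted above.
annual_mbtu = 28.8          # ~0.08 MBTU/day, averaged over the year
lbs_co2_per_mbtu = 117

annual_lbs = annual_mbtu * lbs_co2_per_mbtu
print(round(annual_lbs))    # 3370 pounds per year
```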

Before wrapping up this article, I should mention that my overall carbon footprint includes quite a few other contributions besides home energy use. There are significant emissions from driving, from flying, from growing and transporting the food that I eat, and from making the stuff that I buy. Perhaps I’ll detail my estimates of these in a future article. For now I’ll just say that each of these four is very roughly comparable to the footprint of my electricity or natural gas use; no one of them seems to be so large that it makes my home energy use negligible in comparison.

In any case, the graphs above make it pretty clear that the new furnace and new refrigerator were the “low-hanging fruit” for reducing my utility bills and the associated carbon emissions. I hope others can learn from these examples, even as I ponder which fruit to reach for next.

Saturday, April 18, 2015

How Grad School Made Me Rich

First let me be clear: I did not go to graduate school in order to get rich. I went because I loved physics and wanted to learn more physics and wanted to have a career in physics.

Besides, how could anyone get rich by going to grad school? Even if, as in my case, you have teaching and research assistantships that pay your tuition plus a stipend, that stipend is far less than what a college graduate “should” be earning.

And yet, as a side effect, grad school made me rich. It did so partly by enabling me to get good-paying academic jobs ever since. But far more important was the way grad school taught me how to happily live on a grad student’s stipend.

I’ve come to appreciate this fact so much that I went back through my old check registers and credit card statements, to see in detail how I did it. My average annual stipend while in grad school, from 1984 to 1990, was about $12,000 (less for the first couple of years and more later on, after I got a research assistantship). Meanwhile, my annual expenses averaged only about $10,500, so I actually accumulated a five-figure bank balance over those six years. (The consumer price index has approximately doubled since then, so double these numbers to convert to today’s dollars.)

Here’s a breakdown of where that $10,500 went:

I kept my housing expenses down by living in on-campus apartments, shared with one to three other grad students. The apartments were furnished, and the rent included utilities.

I kept my other expenses down by eating home-cooked meals (my roommates and I usually took turns cooking dinners) and by not owning a car (a choice that put me in the minority among my classmates).

These savings freed up substantial sums to spend on extravagant luxuries: more books than I would ever make time to read; two Macintosh computers that allowed me to work from home much of the time; a $900 Miyata touring bike; all sorts of other “toys” including backpacks, tents, other outdoor equipment, two nice pairs of binoculars, a telescope, and a new camera; and roughly two trips a year back east to visit family and friends. (The “miscellaneous” category in the chart includes small amounts for clothes and household items, but consists mostly of cash expenditures that I didn’t keep track of, probably including some groceries, plus occasional restaurant meals, concerts, movies, and cash spent while traveling.)

The fact is, I could have lived on 30% less if I’d had to. Or I could have blown that discretionary 30%, and more, on rent or cars or eating out, and ended up feeling like I had no spending money at all.

I did enter grad school with several advantages. My student loans from college totaled only $4500, with payments and interest deferred until after I was out of school. My parents never spoiled me with big-ticket gifts or large sums of cash, but they did make sure I got started with enough clothes and kitchen utensils. My health was always very good, and health insurance (my only significant medical expense) was cheap back then.

The moral of the story, for today’s grad students and anyone else who’s interested, is simple: Minimize your major expenses (housing, meals, transportation), try to avoid other expensive habits (smoking, drugs, debts, children), and you can live extremely well on a graduate student’s stipend.

After I got my degree my take-home pay instantly doubled, and it has continued to rise steadily ever since. But my expenses remained flat, because I was already living an extravagant lifestyle and never had the least desire to spend more. My spending shifted away from books and outdoor toys, since my need for those things was pretty much saturated. I spent less on computers as their prices dropped. I bought a used car in 1991 and bought one and a half new cars (a massive extravagance) more recently. I eventually bought a house, and quickly paid off the mortgage, so my biggest housing expenditures are now for major maintenance and upgrades. I still ride around town on my Miyata touring bike, and I still prefer home-cooked meals to eating out.

My current living expenses, as near as I can figure them, look like this:

The total comes to a little over $20,000 per year, or slightly less than what I spent in grad school when you account for inflation. However, the chart doesn’t include health insurance premiums, which are paid by and through my employer. If I didn’t have employer-provided insurance I would probably buy a “bronze” Obamacare plan and end up paying roughly an additional $4000 a year for premiums and deductibles.

The “miscellaneous” category in this chart includes clothes, household items, books, subscriptions, toys, entertainment, and bike accessories. I’ve tried to average big-ticket expenditures, like car purchases and home improvements, over a suitable number of years. And I’ve mostly tried to separate my own expenses from those of my better half, which isn’t too hard since she has her own financial accounts and her own house.

And where does the rest of my income go? Three places: income tax, savings, and donations to a long list of good causes that I’m proud to support. I won’t detail the breakdown among these three categories, but with a little arithmetic you can safely infer that I could have afforded to retire years ago. I’ve become wealthy without ever trying, and, although I know everyone’s situation and priorities are different, I hope my example can help others do the same.

[For more advice on living a happy life on not much money (or “financial freedom through badassity,” as he puts it), I highly recommend Mr. Money Mustache.]