Sunday, June 14, 2020

Coronavirus in Utah: The First Three Months

Three months ago, when my university campus and so much more of Utah shut down due to the pandemic, I was hopeful.

For one thing, we were lucky. On March 12 the number of known COVID-19 cases in Utah stood at only four, out of 1500 nationally. The initial onslaught in the U.S. had mainly hit the coasts, giving Utah more time to prepare.

Also, Governor Herbert and other state officials were taking the virus seriously. They shut down the public schools, university campuses, and other large gatherings more quickly than I had thought possible. The state was putting out good public information on how to stop the virus’s spread, and efforts to ramp up testing were well underway.

Meanwhile, the Church of Jesus Christ of Latter-day Saints shut down its large gatherings with equally stunning abruptness. In doing so it not only prevented untold numbers of superspreader events on Sundays, but also sent a clear message to its two million Utah members that they had a responsibility to protect themselves and their neighbors.

Exponential growth

Then, while I scrambled to teach my classes online, I watched the numbers climb.

Of course they would climb. The experiences in China and elsewhere had taught us that the virus was highly contagious and could spread unnoticed, with an incubation time of about a week before symptoms appeared. Even then, many victims had mild enough symptoms that they mistook COVID-19 for a common cold or flu. And testing, in Utah in mid-March, was available only to those with the most severe symptoms.

But even knowing all this, and even being familiar with the mathematics of exponential growth, I found it morbidly breathtaking to watch the number of known cases in Utah grow to more than 1000 by the beginning of April—doubling eight times in only 20 days. (Some of this growth was due to actual spread of the virus over time, while some was due to the expansion of testing.)
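The doubling time implied by those numbers is easy to check. Taking the approximate figures quoted above (4 known cases on March 12, about 1000 twenty days later), a few lines of Python confirm the arithmetic:

```python
import math

# Approximate known-case counts quoted above
cases_mar12 = 4       # March 12
cases_apr1 = 1000     # about 20 days later
days = 20

doublings = math.log2(cases_apr1 / cases_mar12)   # about 8 doublings
doubling_time = days / doublings                  # about 2.5 days per doubling

print(round(doublings, 1), round(doubling_time, 1))
```

A doubling time of roughly two and a half days is what "morbidly breathtaking" exponential growth looks like in practice.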

And then, also predictably, the exponential growth stopped. It stopped because of the shutdowns enacted in mid-March, plus a testing capacity that by early April exceeded 2000 per day, plus the tireless contact tracing carried out by the heroes at Utah’s health departments. The rate of newly confirmed COVID-19 cases in Utah stabilized at about 150 per day. While the virus was raging out of control in many parts of the U.S., Utah had flattened the curve!

And there was even more cause for hope. On April 2 state officials and the Silicon Slopes folks announced a program of even more testing, to not just flatten but “crush the curve.” By mid-April Utah was testing about 4000 people per day, and I eagerly watched for when the daily case numbers would begin to drop.

The long plateau

But the drop never happened. April came to an end, along with my spring semester classes. As the weeks of May went by, the rate of new cases held steady. Tragically, Utah’s coronavirus death toll rose steadily as well, reaching 100 by Memorial Day.

Why weren’t we crushing the curve? Health department officials know the answer to this question in detail, because they’ve interviewed nearly every known victim and traced the sources of most infections. Those details are confidential, so the rest of us can only piece together a partial answer from statistical data and news reports. But the broad picture seems pretty clear.

You see, the virus arrived in Utah by infecting cruise passengers, ski vacationers, and other travelers. These people were mostly white and well-off, like Utah’s elected officials and health department administrators. So understandably, these officials targeted their response to white, well-off people like the early victims and themselves.

If you thought you might be infected, they told you to contact your primary health-care provider. They set up test sites in suburban neighborhoods, for drive-through access. They put out information mostly in English, through media channels that white and well-off people use.

But by early April, it was no longer us white, well-off people who were most at risk. Most of us were able to do our office jobs from home, avoiding nearly all human contact. Our homes also tend to be spacious enough that we can isolate ourselves from family members if necessary.

Meanwhile, the virus continued to spread in places where isolation was difficult or impossible: nursing homes, homeless shelters, meatpacking plants, and the more crowded home environments of lower-income Utahns. Many of the people at risk had no primary health-care provider to call. Many couldn’t get to a drive-through testing center. Many weren’t tuned in to the government’s information channels. Many were immigrants who understood little English.

Of course our public officials knew about these risks from the start, and they’ve made well-intended efforts to better target at-risk populations. Many of these efforts have been successful. When I look at that long plateau through April and May on the chart of new case numbers, I see it as a succession of dozens of overlapping local outbreaks among a wide variety of at-risk communities, with health officials rushing in to put out each fire as soon as they learn about it.

What I don’t see, unfortunately, is enough effort by health officials to prevent these local outbreaks among at-risk populations from happening in the first place. I’ve read almost nothing about testing at-risk workers in locations where there isn’t yet a known outbreak, or about inspecting workplaces and punishing employers who don’t maintain safe working conditions, or even about publicly disclosing the specific locations of known outbreaks. Perhaps there’s some of this going on in Utah (and again I’m not in a position to know most of the details), but it’s obviously not enough.

A new surge

We know it’s not enough because we haven’t crushed the curve. And now, since late May, the curve is again rising. The number of new cases reported each day has again doubled, to more than 300. On a per-capita basis we’re now reporting more new cases than all but five of the other states.

And we’ve just witnessed Utah’s biggest outbreak so far: 800 new cases reported in the Bear River district over the last 16 days, when the district had previously been averaging only three new reported cases per day. Nearly all of these 800 new cases seem to be tied to the JBS meatpacking plant in Hyrum, where most of the employees are immigrants from Latin America, Asia, and Africa.

An outbreak of that size does not develop in just 16 days: it must have been in progress for several weeks before the authorities became aware of it. And yet it seems they were completely unaware until approximately May 29, when they reported the first big jump in positive test results.

I know only one way to describe this kind of blindness on the part of public officials: institutional racism.

As if to underscore this description, right in the midst of this outbreak the all-white Cache County Council voted to petition the state to go to “green” status, removing most of the remaining measures to protect public health.

The sad irony is that if public officials had done more to address the threats to Utah’s disadvantaged populations back in April and May, they would have crushed the curve by now and we probably could take most of the state to “green” status. More importantly, we could have saved many lives, and we could be confident that reopening schools and universities in the fall won’t put undue numbers of students, teachers, and their families at risk. But with new cases being discovered at a rate of 300 per day, I fear that the contact tracers won’t be able to keep up, and the only way to prevent another period of exponential growth may be a return to “orange” or even “red” status.

Let me hasten to add that I’m not a big advocate of draconian population-wide restrictions as the main way to control the virus, so long as the case load remains low enough for testing and contact tracing to keep up. What we need (as far as I can determine as an amateur outsider who merely reads news reports) are more efforts focused on high-risk populations and high-risk workplaces. Utah’s white and well-off public officials need to work harder to understand these risks and develop more aggressive ways to prevent outbreaks. And Utah’s white and well-off voters need to understand that their personal situations during this pandemic are very different from those of the workers who are keeping food on their tables.

Update, 17 June 2020

On the same day that I posted this armchair analysis, the Salt Lake Tribune published an in-depth article with plenty of real reporting on how “Utah wasn’t prepared to reach out to its Hispanic residents when the virus struck.” This article also mentions a challenge that I hesitated to speculate about: “Another fear Hispanics harbor, particularly those who are undocumented, is that information they give while at a testing site could be sent to immigration officials.”

And today (three days later), the Trib has a summary of some very troubling comments from state epidemiologist Dr. Angela Dunn to a legislative committee: Utah is already at the point where contact tracing is falling behind, due to the high rate of new infections and the large number of contacts that must be traced for each infected person. “As long as that’s going on, it’s not realistic to focus restrictions only on specific ‘hotspots.’”

The Tribune is providing free public access to its fantastic reporting on the pandemic, but someone has to pay for all this work. Please, if you can, sign up for a subscription to the nonprofit Salt Lake Tribune.

Monday, May 6, 2019

Five Years of Driving

It’s been five years since I bought my Subaru. Time for an assessment.

The odometer now reads 12,575, so I’ve driven the car about 2500 miles per year. To most Americans that won’t sound like much, but it would be a long way to walk, and it’s about twice the mileage I put on my bicycle.

Today’s cars are made to be driven hundreds of thousands of miles, so I feel kinda ridiculous for investing $25k in a new one and then using it so much less than I could. It seemed like the best of several bad options at the time, and I still can’t really think of a better one.

Unsurprisingly, the car has been virtually trouble-free. I get its oil changed once a year whether it needs it or not. The battery ran low a couple of times this last winter, while the car sat in the driveway unused for weeks at a time. The only other service it’s needed was also due to lack of use: a warranty-covered replacement of the fuel line vent valve, which had gotten clogged with spider webs.

I do nearly all of my commuting, grocery shopping, and other short errands by bicycle, so generally I use the car around town only when I need to carry a passenger or some other large cargo. Most of the miles on the car are from recreational trips: up into the mountains to hike or to ski, plus a couple of trips each year to neighboring states. It’s never been farther from home than northwestern New Mexico.

I chose a Subaru Crosstrek for its high clearance, and I’ve taken it a few places where high clearance was necessary, but only a few. I’m conflicted over whether those few trips were worth the added expense and/or added carbon emissions, compared to (say) a low-clearance economy hatchback.

Fuel economy

So far I’ve filled the car’s gas tank 31 times, for a total of 412 gallons. At the last fill-up the mileage was 12,218, so the overall fuel economy comes to 29.6 miles per gallon. Here is a chart that shows the variability from one fill-up to the next:

As expected, the best fuel economy has been on summer road trips, while the worst has been in winter city driving. But this tank-by-tank data doesn’t provide the precision one might like, because the tank size is pretty generous and I typically drive about 400 miles before each refill. Except on long trips, those 400 miles always include quite a mix of driving conditions.
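For anyone who keeps a similar fill-up log, the per-tank bookkeeping is just miles between fills divided by the gallons purchased at the later fill (since those gallons replace what was burned since the previous fill). A minimal sketch, with hypothetical log entries; only the method matches mine:

```python
# Per-fill fuel economy from an odometer/gallons log.
# The log entries below are hypothetical, for illustration only.
fills = [
    (11_400, 13.2),   # (odometer reading at fill-up, gallons purchased)
    (11_810, 14.0),
    (12_218, 13.5),
]

# Gallons bought at a fill-up replace what was used since the previous one,
# so pair each fill's gallons with the miles driven since the fill before it.
for (prev_odo, _), (odo, gal) in zip(fills, fills[1:]):
    miles = odo - prev_odo
    print(f"{miles} mi / {gal} gal = {miles / gal:.1f} mpg")
```

The overall figure quoted above is just the grand totals treated the same way: 12,218 miles divided by 412 gallons.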

In principle I could get more detailed information from the dashboard fuel economy display. But care is required, because its calibration is off. As the next chart shows, the displayed fuel economy is higher than the calculated-at-the-pump fuel economy by an average of 2.6 mpg:

With this calibration inaccuracy in mind, I’ll report that on one occasion—a round trip from Ogden to Salt Lake City in September 2014—the dashboard reported a fuel economy as high as 40.8 mpg. I’ve seen higher numbers only for one-way partial trips that were mostly downhill.

My car’s official EPA-estimated fuel economy is 25 mpg in the city, 32 on the highway, and 28 overall. So I’ve been doing slightly better than the official estimate. That’s mostly because I do proportionally more highway driving than the EPA assumes, and very little of my highway driving is at the absurdly wasteful (and dangerous!) freeway speeds that Utah allows.

Carbon footprint

Burning a gallon of gasoline produces just under 20 pounds of carbon dioxide, so at 2500 miles per year and 30 miles per gallon, my Subaru has been emitting roughly 20×2500/30 = 1660 pounds of CO2 per year, or 0.75 metric tons. The EPA estimates that upstream emissions from producing and transporting the gasoline add on another 24 to 31 percent, so my car’s annual carbon footprint is probably about 2100 pounds or 0.95 tons of CO2. (This doesn’t include the substantial emissions from manufacturing the car in the first place.)
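The whole estimate fits in a few lines. Here it is as a sketch, using the EPA's tailpipe figure of about 19.6 pounds of CO2 per gallon (the "just under 20" above) and the midpoint of the 24-31 percent upstream range:

```python
# Annual driving CO2, following the estimate in the paragraph above.
LBS_CO2_PER_GALLON = 19.6     # EPA tailpipe CO2 per gallon of gasoline
LBS_PER_METRIC_TON = 2204.6
UPSTREAM_FACTOR = 1.27        # midpoint of the EPA's 24-31% upstream add-on

miles_per_year = 2500
mpg = 30

gallons = miles_per_year / mpg                      # ~83 gallons/year
tailpipe_lbs = gallons * LBS_CO2_PER_GALLON         # ~1630 lbs CO2
total_lbs = tailpipe_lbs * UPSTREAM_FACTOR          # ~2070 lbs CO2

print(round(total_lbs), round(total_lbs / LBS_PER_METRIC_TON, 2))
```

The slight differences from the round numbers in the text come only from rounding choices; either way, call it about a ton of CO2 per year.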

My personal driving-related carbon footprint isn’t the same as my car’s, because the car sometimes carries other passengers and I sometimes travel in other cars. I haven’t kept the records I’d need to determine which of these effects is larger, so let’s just assume they cancel each other out. Then it’s meaningful to compare my Subaru’s carbon footprint to my own carbon emissions via other means, and to national and international per-capita averages.

Even though my carbon footprint from driving is several times smaller than the U.S. average, I don’t feel like I’m sacrificing anything to keep it so small. I’ve always disliked driving, so I’ve always naturally chosen to live within biking distance of where I work, and to just say no to most of the driving opportunities that continually present themselves. It helps that I also dislike shopping. Rarely, on a cold, rainy night, I’ll give in to the temptation to jump in the car when I need some groceries. But as a modern American who sits on his ass indoors most of the time, I rarely want to sit on my ass, wrapped up in a tin can, even when I’m outdoors.

Of course the future of cars is electric, but it’s hard to guess when an electric car might be in my future. Electric cars are best for daily commuting—precisely the type of driving that I never do. Charging stations are still rare to nonexistent along Utah’s two-lane highways, not to mention remote trailheads. Subaru actually just came out with a plug-in hybrid version of the Crosstrek, but its range on battery power is only 17 miles (not even enough for a round trip to the upper Ogden Valley), and it costs an extra $10k. Finally, a full 70% of Utah’s electricity still comes from coal, so there’s little or no CO2 reduction from driving an electric car around here. All these things are bound to change, but that change may take a while.

Saturday, April 13, 2019


The other day I finished my taxes for 2018.

As a result of the Tax Cuts and Jobs Act of 2017, my federal income tax went up by roughly $500. (Yes, I computed what it would have been this year under the old rules and tax tables.)

I don’t mind the increase. Actually I think my taxes should be still higher. But I don’t like the way they did it, lowering the bracket rates and then reducing much of the incentive to make charitable contributions.

Instead they should do away with the distinction between wages and investment income, lowering the tax rate on wages and raising the tax rate on investments. Don’t ever believe politicians who say they value work while they continue to support taxing wages at a higher rate than dividends and capital gains. And don’t even get me started on inherited wealth.

Treating all income in the same way would also have simplified my tax calculations quite a bit, saving me a couple of hours of time. The new tax law simplified my filing process only slightly. The paid tax preparers and software vendors are still, I’m sure, very happy.

Incidentally, although I do think they should restore the old incentive to make charitable contributions, I’d also be fine with greatly restricting the definition of “charitable” to include only true charities—not churches or elite schools or thinly disguised political organizations.

Wednesday, November 29, 2017

Six Ways to Measure Your Electricity Use

Maybe you want to save money. Maybe you want to save the planet. Maybe you just want to understand what’s going on inside your home. Or maybe, like me, you’re motivated in all three of these ways. Whatever the reason, let’s talk about how you can measure your household electricity use.

In this article I’ll describe six practical electricity measurement methods, starting with the simplest and progressing toward those that require more effort. Beginners will want to get comfortable with each method before moving on to the next. More advanced readers should feel free to skip ahead to the methods they don’t already know.

Ready? Here we go...

1. Look at your bills.

You probably receive an electricity bill every month. Of course the bill shows how much money you owe, but it also shows how much electricity you’ve used. (If your bill gets sent to a landlord who doesn’t let you see it, then you’ll have to skip this method and go on to the next one.)

Even if all you really care about is money, it’s not enough to look only at the dollar amount on your bill because that amount might not be a good measure of how much electricity you’ve used. It probably includes a base rate that you pay even if you use no electricity, and it might include other utilities besides electricity. Worse, your utility company might have you on an “equal billing” plan that averages your bill over the course of a year, hiding the interesting seasonal changes.

So you want to look on your bill for a number that’s not in dollars but rather in kilowatt-hours, or kWh for short. That number is the actual amount of electrical energy you used during the month. For example, here’s my bill from February 2014, during which I used 146 kWh:

Don’t be shocked if your monthly usage is a lot more than mine! According to official government data, the average American household uses nearly 900 kWh per month.

Besides comparing your monthly electricity use to the average American household (or, if you prefer, to my own), you can learn a lot by comparing to your own usage in other months. Look at a whole year’s worth of bills if you can, to see the seasonal patterns. Many Americans use the most electricity in the summer, when they use their air conditioners; others use the most in the winter, for heating and lighting.

What’s a kilowatt-hour anyway?

A kilowatt-hour is a unit for measuring energy, just as a mile is a unit for measuring distance and a dollar is a unit for measuring money. As with those other units, you’ll develop an intuitive feel for kilowatt-hours as you encounter more examples. Here are a few common household uses that typically consume approximately one kWh each:
  • Running a central air conditioner for 20 minutes
  • Running an electric space heater for 40 minutes
  • Running a modern no-frills refrigerator for one day
  • Baking a batch of cookies in an electric oven
  • Drying 1/3 of a load of laundry in an electric dryer
  • Leaving an LED light bulb on for a few days
  • Fully charging a laptop computer battery 10 times
And what does each of these activities cost? Most Americans pay between 10 and 20 cents for a kWh of electrical energy.

At some point you may want to compare electrical energy to other forms of energy, such as chemical energy (in food or fuels), or thermal energy (heat). Because we can convert one type of energy into another, we really should use the same unit to measure all types—but we don’t! Our inconvenient tradition is to measure food energy in Calories (abbreviated Cal, which scientists call large calories or kilocalories) and, here in the U.S., to measure heat in British thermal units (Btu). You can convert between kWh, Cal, and Btu using Google or various other web sites. The approximate conversion factors are
1 kWh = 860 Cal = 3400 Btu.
So the typical American consumes enough food to provide two to three kWh of energy each day (1700 to 2600 Cal), and a typical household furnace can provide about 22 kWh of heat each hour (75,000 Btu). A gallon of gasoline, if you’re curious, provides about 31,000 Cal, or 120,000 Btu, or 36 kWh of energy.
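If you find yourself doing these conversions often, a tiny helper saves mental arithmetic. This sketch just encodes the approximate factors quoted above:

```python
# Rough conversions between kWh, food Calories, and Btu,
# using the approximate factors quoted above (1 kWh = 860 Cal = 3400 Btu).
CAL_PER_KWH = 860
BTU_PER_KWH = 3400

def cal_to_kwh(cal):
    return cal / CAL_PER_KWH

def btu_to_kwh(btu):
    return btu / BTU_PER_KWH

print(round(cal_to_kwh(2000), 1))    # a 2000-Cal daily diet: about 2.3 kWh
print(round(btu_to_kwh(75_000)))     # a 75,000-Btu/hour furnace: about 22 kWh/hour
```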

2. Read your meter.

The main problem with electricity bills is that you get only one per month! But the power company determines your billed usage by reading your meter, and you can read it yourself just as easily, as often as you like. (The exception would be if you live in a multi-unit building in which the electricity isn’t metered separately for each unit. In that case you’ll have to go on to method 3.)

Reading the old dial-style meters used to be a bit tricky, but nowadays nearly everyone has a digital meter with a simple numerical readout:

The number on the display, 24362 in this case, is the number of kWh of electricity used since some time far in the past—probably whenever the meter was first installed. (The number may blink off and back on every few seconds, in which case you may need to wait a moment to see it.)

So all you need to do is write down the number from the meter (and the time when you read it), then read it again an hour or a day or a week later, and subtract the two values to get the electrical energy usage during that time period. It’s a great exercise to read your meter once a day for a few weeks or months, and to keep a log of the readings, like this:

From this kind of data you can get a very good idea of what kinds of activity use the most electricity: When did you run your air conditioner? When did you do laundry? How much energy does your house use on days when nobody is home?
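If you keep the log on a computer, the subtraction is trivial to automate. A minimal sketch, with hypothetical readings:

```python
# Daily usage from a log of cumulative meter readings.
# The readings below are hypothetical, for illustration only.
log = [
    ("2017-11-20", 24_362),
    ("2017-11-21", 24_377),
    ("2017-11-22", 24_389),
]

# Usage for each day is that day's reading minus the previous day's.
for (_, prev_reading), (date, reading) in zip(log, log[1:]):
    print(f"{date}: {reading - prev_reading} kWh")
```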

3. Multiply power by time.

Some electrical devices always use energy at the same rate, whenever they’re turned on. The most familiar example is an ordinary (non-dimmable) light bulb. The rate of energy use is what scientists call power, and we measure it in units of watts. Old incandescent light bulbs commonly used 60 or 100 watts, but modern LED bulbs put out just as much light while using only 10 or 15 watts.

To determine the amount of energy used by a device, you multiply its rate of energy use (that is, the power, in watts) by the amount of time that it’s on:
Energy = Power × Time.
If we measure the power in watts and the time in hours, then we get the energy in units of watt-hours. A kilowatt-hour is 1000 watt-hours, so we divide by 1000 to get the energy in kWh. For example, the energy consumed by a 10-watt bulb left on for 24 hours would be
Energy = (10 watts)(24 hours) = 240 watt-hours = 0.24 kWh,
where I divided by 1000 in the last step. You can similarly estimate the energy use of a 40-watt ceiling fan running for six hours, or of a 1500-watt hairdryer that’s turned on for 10 minutes. Look for power consumption ratings printed on the backs of appliances, or in the owner’s manuals or on the manufacturers’ web sites. Or consult an online list of typical power consumption values. The only catch is that many appliances use less than their nominal power rating under most conditions, or they cycle on and off automatically so that it’s hard to measure exactly how long they’re actually on.

4. Get a plug-in appliance meter.

For a mere $20 or so, you can buy a Kill A Watt P4400 meter, which makes it easy to measure the energy use of any plug-in 120-volt appliance. Use it for a few days to track down unnecessary energy use, and it can easily repay your investment many times over. (There are a number of competing products on the market, but the Kill A Watt is the most common, and is very affordable, so that’s the one I’ll describe. I’ve never seen one in a store, but you can purchase it through many online retailers.)

To use the Kill A Watt meter you simply plug it into a wall outlet (through an extension cord if necessary), then plug your appliance into the meter. Initially it just displays the line voltage (120 or so), but if you press the rightmost button once, it will display the total energy used since you plugged it in, in kWh. Press the same button again and it displays the time since you plugged it in, so you don’t even need to write that down.

You’ll definitely want to use the meter to test your refrigerator(s), preferably for a day or longer. Other good candidates for testing include televisions, computers, washing machines, and electric blankets.

For some devices you may also want to try pressing the meter’s middle button. Then the display will show the instantaneous rate of energy use (power), in watts or kilowatts. This number will probably fluctuate, especially for something like a refrigerator that periodically cycles on and off. But if the power is reasonably steady and you already know how long the device will be in use, then a quick power reading can save you from having to wait for the energy measurement to build up. Just multiply the power by the time, as described above in method 3.

Don’t forget to test low-power devices that are on all the time, such as clocks and WiFi routers and televisions that never go completely off.

5. Time the little blinking squares.

The main drawback of a plug-in meter is that you can’t use it to measure hard-wired devices or 240-volt appliances. For these, and for those times when you’re caught without a plug-in meter within reach, you can go back out to the power company’s meter, equipped with a stopwatch (probably the one on your smartphone).

This time, instead of looking at the numbers on the display, you want to watch the little blinking squares at the bottom. They should go on and off following a six-step pattern:

(The pattern is meant to mimic the horizontal rotating disk in an old mechanical meter, as if half the disk’s edge is dark and the other half is light, with the front turning from left to right.) Each change in the pattern—a square going on or off—indicates one watt-hour of energy usage. Use your stopwatch to time how long it takes between one change and the next. Or, if the pattern is changing quickly, measure the time for the entire six-step cycle and divide by six. Either way, you can now calculate the power being used in your home as follows:
Power in watts = 3600 / (measured time in seconds).
Explanation: The energy used during your measured time interval was one watt-hour, or 3600 watt-seconds (since an hour is 3600 seconds). But energy = power × time, so to calculate the power, you divide the energy by the measured time.
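In code, the formula is a one-liner:

```python
# Power from the time between meter-display changes,
# where each change represents one watt-hour (3600 watt-seconds).
def power_watts(seconds_between_changes):
    return 3600 / seconds_between_changes

print(power_watts(10))     # a change every 10 seconds means 360 W
print(power_watts(1.5))    # a change every 1.5 seconds means 2400 W
```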

You’ve now measured the rate at which all the electrical devices in your home are using energy at a particular moment. The trick, then, is to make this measurement with everything except the device(s) you care about turned off. Try it once with all the major appliances turned off, and the refrigerator unplugged or turned off at the breaker panel, to get a power value for all the little stuff in the home that’s using a small amount of power 24 hours a day. Then turn on a major appliance like the furnace or air conditioner or electric dryer, and make another measurement.

Once you know the power of some device of interest, calculate its total energy use by multiplying by how long it’s on, as in method 3.

6. Install a fancy monitoring system.

The five simple methods described above are more than enough to give you the big picture of your home electricity use, including the information you need to save a lot of money (and help save the planet). But if you want to understand every detail of what’s going on in your home, and you’ve exhausted what you can reasonably learn from the first five methods, then the next step is to install a home energy monitoring system. These systems start at about $150, and the installation process is nontrivial.

Electricity monitoring systems are available in several varieties, from several vendors. I have the Efergy Engage Elite Hub System (recommended by Mr. Money Mustache), which is one of the most affordable and easy to use. But I wish I had spent a little more for Efergy’s True Power Meter, which would be more accurate.

The main components of these systems are a pair of clamp-around sensors that you install on the main feed wires coming into your breaker panel. To install them you need to turn off the electricity (otherwise you may die!), open up the panel, and then hope that there’s enough room to fit the clamps around the stiff wires. (I had a tough time with one of them, but finally managed.) If you have any doubts about your ability to do this installation safely, you should hire an electrician.

For a true power meter there would also be a wire to make an electrical connection inside the panel. Either way, the Efergy sensors connect to a transmitter just outside the panel, which beams the data wirelessly to one or two receivers. The data is simply an instantaneous power measurement for your whole house (or at least as much as is powered by this particular panel), equivalent to what you measured in method 5 above. But the monitoring system makes these measurements continually, day and night, with no need for you to use a stopwatch or a calculator.

One type of Efergy receiver contains a digital display for immediate readout, updating every ten seconds. This can sometimes be handy, but in my opinion it’s not worth the price or the installation effort by itself. The other type of receiver, though, is a “hub” that uploads the data over your internet router to Efergy’s web site, where you can look up (and even download) minute-by-minute power levels at any later time, from any location, through your web browser. It’s a data junkie’s dream. Here’s a sample of my own data as viewed on the Efergy web site, showing a steady base load, the refrigerator and furnace cycling on and off, and a big spike from cooking breakfast on my electric stovetop:

As I mentioned above, my basic Efergy sensor isn’t always accurate. Specifically, it’s accurate for “resistive loads” like the stove and other heating appliances, but it reads too high a value for anything with a motor in it, like a furnace blower or a washing machine. The reason has to do with the intricacies of alternating current, and the best solution would be to use a slightly more sophisticated system such as the Efergy True Power Meter or The Energy Detective (a competing product that costs a bit more). The power company’s meter also makes accurate measurements, as does a Kill A Watt meter, so I’ve simply used those to calibrate my interpretation of the Efergy data.
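That calibration amounts to computing a per-load correction factor against a trusted meter and applying it to later readings. A sketch of the idea, with hypothetical numbers (this is my own workaround, not anything Efergy documents):

```python
# A per-load correction factor for a basic (non-true-power) sensor,
# calibrated against a trusted meter such as a Kill A Watt.
# All numbers below are hypothetical, for illustration only.
efergy_watts = 520       # what the basic sensor showed for a motor load
reference_watts = 410    # what the trusted meter measured for the same load

factor = reference_watts / efergy_watts
print(round(factor, 2))            # multiply this load's readings by ~0.79

corrected = 530 * factor           # correcting a later reading of the same load
print(round(corrected))
```

The factor is only valid for that particular load, since the error depends on how inductive the load is; resistive loads like the stove need no correction at all.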

Saturday, April 15, 2017

Qubits or Wave Mechanics?

A few days ago Sean Carroll tweeted a poll:

As someone who’s been wrestling with this question for 30 years, I perked up at this tweet, and not only voted but even tweeted a couple of responses. It’s a fascinating question! 

The second answer is the traditional one, and there are many good arguments for it: a solid experimental basis in phenomena that are easy to demonstrate; vivid images of wavefunctions for building intuition from classical waves; and a huge array of practical applications to atomic physics, chemistry, and materials science. The downside is that the mathematics of partial differential equations and infinite-dimensional function spaces is pretty formidable. Mastering all this math takes up a lot of time and tends to obscure the logical structure of the subject. Especially if your main interest is in the new field of quantum information science, this is a long and indirect road to take.

Hence the alternative of starting with two-state systems, which are mathematically simpler, logically clearer, and directly applicable to quantum information science. The difficulty here is the high level of abstraction, with an almost complete lack of familiar-looking pictures and, inevitably, no direct connection to most of the traditional quantum phenomena or applications.
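To make the mathematical simplicity concrete, here’s a minimal sketch of the two-state formalism for spin 1/2, using nothing but trigonometry (the function names are my own, for illustration only):

```python
import math

def spin_state(theta):
    # A spin-1/2 state "up along angle theta" in the {|+z>, |-z>} basis:
    # |psi> = cos(theta/2)|+z> + sin(theta/2)|-z>  (real amplitudes suffice here)
    return (math.cos(theta / 2), math.sin(theta / 2))

def prob_up_z(state):
    # Born rule: probability of measuring spin-up along z.
    return state[0] ** 2

state = spin_state(math.pi / 2)    # spin pointing along +x
print(round(prob_up_z(state), 2))  # 0.5 -- a 50/50 outcome for an Sz measurement
```

Compare that to solving a partial differential equation before you can make your first prediction.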

A fundamental challenge with teaching quantum mechanics is that it’s like the proverbial Elephant of Indostan, with many dissimilar parts whose connections are difficult for novices to discern. From various angles, quantum mechanics can appear to be about Geiger counters and interference patterns, or differential equations and their boundary conditions, or matrices and their eigenvalues, or abstract symbol-pushing with kets and commutators, or summing over all possible histories, or unitary transformations on entangled qubits. Stepping back to get a view of the whole beast is challenging even for experts, and bewildering for “blind” beginners.

I think most physicists would agree that an undergraduate degree in physics should include some experience with both wave mechanics and two-state systems. Carroll’s Twitter poll, though, asks not what a degree program should include, but how we should introduce physics students to quantum mechanics. That’s a hard question, and one’s answer could easily depend on any number of further assumptions:
  • Who exactly are these “physics students”? Students taking an introductory course, which may be their last course in physics? Typical undergraduate physics majors? Undergraduate physics majors at Caltech? What’s their math background?
  • How long an introduction are we talking about here? A single lecture, or a few weeks, or an entire course?
  • Will this introduction be followed by further study of quantum mechanics? In other words, is the question merely about the order in which we cover topics, or is it also about the totality of what we should teach, and what we can justifiably omit, when we design a course or a curriculum?
  • Are we constrained to use existing resources, including textbooks, instructor expertise, and locally available lab equipment? Or are we dreaming about an ideal world in which any resources we might want are magically provided?
Due to all these ambiguities, we should interpret the poll results with caution. Carroll’s interpretation was that the winning second option “probably benefits from familiarity bias. I’ll call it a tie”—so I infer that his own preference is to start with two-state systems. I agree that some respondents were probably biased in favor of what’s familiar, but I also suspect that Carroll’s Twitter followers have more interest in fundamental theory, and less interest in atoms and molecules, than would a random sampling of physicists. I also wonder whether some respondents were biased in favor of what’s unfamiliar: it’s easy to suggest a radical curricular change if you’ve never actually tried it out and had to live with the unintended consequences. Carroll himself is currently teaching an advanced quantum course that emphasizes two-state systems, but as far as I can tell he has never taught a first course in quantum mechanics for undergraduates.

No professional quantum mechanics teacher should be completely unfamiliar with the two-state-systems-first approach, because it’s used, more or less, in Volume III of the Feynman Lectures on Physics, published in 1965 (thirty years before Schumacher and Wootters coined the term qubit!). I say “more or less” because Feynman actually starts with two-slit interference and other wave phenomena, and then he introduces a three-state system (spin 1) before settling into a lengthy treatment of spin 1/2 and other two-state systems.

There are also some well-known graduate-level texts that begin with two-state systems: Baym’s Lectures on Quantum Mechanics (1969) and Sakurai’s Modern Quantum Mechanics (1985).

At the upper-division undergraduate level, the earliest text I know of that takes the two-state-systems-first approach is Townsend, which first appeared in 1992. Several others have appeared more recently: Le Bellac (2006), Schumacher and Westmoreland (2010), Beck (2012), and McIntyre (2012). Instructors who want to take this approach in such a course can no longer complain about the lack of suitable textbooks.

But at the lower-division level, where most students first encounter quantum mechanics, the pickings are still slim. Nobody actually teaches out of the Feynman Lectures. You could try to use a few chapters out of one of the more advanced books (McIntyre would probably work best), or you could use Styer’s slim text The Strange World of Quantum Mechanics (2000, written for a course for non-science majors), or you could use the new (2017) edition of Moore’s introductory Six Ideas textbook (which inserts three short chapters on spin and “quantum weirdness” in between electron interference and wavefunctions), or you could try Susskind and Friedman’s Theoretical Minimum paperback (2014, an insightful tour of the formalism with little mention of applications—see Styer’s review here).

I suspect that the time is ripe for someone to write an otherwise-conventional sophomore-level “modern physics” textbook that introduces quantum mechanics via two-state systems and qubits before moving on to wave mechanics. I really wish Moore would expand his Units R and Q into a more complete “modern physics” text!

Personally, I’ve had a soft spot for spin ever since I took a quantum class from Tom Moore in 1982, at the end of my sophomore year (after a conventional “modern physics” class) at Carleton College. This half-term class was mostly based on Gillespie’s marvelous little book, which lays out the logic of quantum mechanics for a single spinless particle in one dimension. But Moore departed from the book to introduce us to two-state and three-state spin systems as well, even writing a simple computer simulation of successive spin measurements for us to use in a homework exercise. The following year I saw more spin-1/2 quantum mechanics in the philosophy of science course that I took from David Sipfle, using notes prepared by Mike Casper, probably inspired by the Feynman Lectures. So when I took Casper’s senior-level quantum course after another year, I was well prepared.

A few years later, while procrastinating on my thesis work during graduate school, I converted and expanded Moore’s computer simulation into a graphics-based Macintosh program. Moore and I published a paper about this program, and how to use it at various levels, in 1993. From there the concept made its way into Moore’s Six Ideas course, and also into the Oregon State Paradigms curriculum and McIntyre’s book. Last year I ported the program to a modern web app.

I recount this history mainly to establish my credentials as an experienced advocate for, and contributor to, the teaching of quantum mechanics via two-state (and three-state) spin systems. So you may be surprised to know that in Carroll’s poll I actually voted against this approach and in favor of starting with traditional wave mechanics. And in my own teaching I’ve never started with spin systems: I’ve always started with one-dimensional wave mechanics in both upper-division quantum mechanics and sophomore-level modern physics. In calculus-based introductory physics I teach a little about wave mechanics and don’t really cover two-state systems at all. My reasoning is simply that for these students, in these courses, the balance of the pros and cons listed above seems to weigh in favor of starting with wave mechanics.

Meanwhile, I think there are opportunities to improve on the way we teach wave mechanics. One serious drawback with most wave mechanics text materials is their relative neglect of systems of more than one particle. As a result, students tend to develop some misconceptions about multiparticle systems, and don’t hear about entangled states—an important and trendy topic—as early as they could. I’ve recently written a paper on how to address this deficiency, with some accompanying software to help students visualize entangled wavefunctions.

My bottom-line opinion, though, is that the best answer to Carroll’s question depends on both the students’ needs and the instructor’s inclinations. Back in 1989, Bob Romer published an editorial in the American Journal of Physics titled “Spin-1/2 quantum mechanics?—Not in my introductory course!” But he hastened to clarify: “not in my course, thank you, but maybe in yours”—enthusiastically encouraging instructors to innovate and to follow whatever teaching plan they believe in. I wholeheartedly agree.

Sunday, October 9, 2016

Could Clinton Win Utah?

There’s been plenty of speculation this election season that Utahns’ distaste for Donald Trump might drive them so far as to “turn the state blue” in November, giving Hillary Clinton a plurality of the vote. I never took this speculation seriously, figuring that however much they dislike Trump, most Utahns are deeply loyal to the Republican Party and would therefore rationalize their way to hating Clinton even more.

But the fallout from Trump’s latest scandal has changed the landscape incredibly fast: his bragging in vulgar terms about habitually committing sexual assault has pushed many Utahns over the edge. Governor Herbert and several other prominent Utah Republicans have withdrawn their endorsements, and others who were on the fence have finally taken a stand against Trump, joining Mitt Romney, who has been a never-Trumper all along. Senator Hatch and my own Rep. Bishop are still supporting Trump, but they’re undoubtedly feeling a bit lonely at the moment. Most remarkable of all, the Deseret News has just published an editorial calling on Trump to drop out of the race, while expressing the hope that Congress will keep a President Clinton in check.

Of course Utah won’t be the state that tips the balance of the Electoral College. But it’s still fun to consider whether Clinton could actually win Utah, so let’s take a look at the polling data. Here’s a screen capture from FiveThirtyEight, listing the nine Utah polls that weigh most heavily in that site’s Utah forecast:

The polls are listed in descending order by their FiveThirtyEight-assigned weights, based on the quality of the pollster, the sample size, and how recently the poll was conducted. The range of polling results is remarkably wide, but notice that the overall quality of the polling is poor: all of the polls are substandard in at least one of the three respects. Even the highest-weighted poll is by a pollster (Dan Jones) with only a C+ grade, and is now more than two weeks old. The highest-quality poll, conducted by SurveyUSA for the Salt Lake Tribune and the Hinckley Institute, is now four months old.

Nevertheless, FiveThirtyEight has combined all the Utah polls into a weighted average, then done some further processing to obtain a predicted most-likely outcome. Here’s a summary of the calculation:

The first four adjustments made to the polling average are small and, in my opinion, should be uncontroversial. One of these, the “trend line” adjustment, tries to update the older results based on trends in other states (and the nation as a whole) for which there is abundant recent polling. In principle, this adjustment should account for Clinton’s rise in the polls since the September 26 debate, up to but not including the events of the past two days.

But the adjusted polling average allocates only 81.9% of the vote to Clinton, Trump, and Johnson. The next step then assumes that nearly all of the remaining 18.1% will end up split evenly between Clinton and Trump, and here’s where I think the FiveThirtyEight model makes a Utah-specific error. The problem is Utah-based minor candidate Evan McMullin, who entered the race only two months ago yet seems to be polling almost as well as Johnson: 12% in the top-weighted Dan Jones poll, and 9% in the second-place PPP poll. It seems to me that if Johnson is allowed to retain his 12.6% share at this stage of the calculation, then McMullin should also retain his 10% or so.
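For concreteness, here’s a sketch of the two allocation schemes, using the percentages quoted in this post (this is my rough reading of the FiveThirtyEight step, not their actual code):

```python
# Adjusted polling average: Clinton + Trump + Johnson account for 81.9%.
allocated = 81.9
remainder = 100.0 - allocated      # 18.1% not yet allocated

# FiveThirtyEight's step (as I read it): split nearly all of the
# remainder evenly between Clinton and Trump.
per_major = remainder / 2          # about 9 points each

# My alternative: let McMullin keep a share about 2 points behind Johnson,
# leaving the rest as other/undecided.
johnson = 12.6
mcmullin = johnson - 2.0           # 10.6
leftover = remainder - mcmullin    # 7.5 still other/undecided

print(round(per_major, 2), round(mcmullin, 1), round(leftover, 1))
```

Under the second scheme, far fewer free points get funneled to the two major candidates.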

FiveThirtyEight’s final adjustment is to mix in a prediction based not on polls but on a demographic regression model, which uses past voting patterns (broken down by region, race, religion, and educational level) to try to compensate for inadequate polling in states like Utah. (This is done even for the site’s “polls only” model, which is the one I’m working from.) But this adjustment could also be problematic, because of Utah’s (and Mormons’) peculiar affinity for Romney in 2012 and distaste for Trump in 2016.

So let’s back up to the “adjusted polling average” but tentatively give McMullin a share that’s 2% behind Johnson:
  • Clinton 28.8%
  • Trump 40.5%
  • Johnson 12.6%
  • McMullin 10.6%
  • Other/undecided 7.5%
And now let’s ask how these numbers are likely to change over the next month, in light of the events of the last two days.

My guess is that a certain fraction of Trump’s 40.5% will follow Gov. Herbert’s lead and withdraw their support—some in direct reaction to the recent news and others because they now have “permission” from authorities they trust. Also, I doubt that Trump can now gain from any defections of Johnson, McMullin, or other/undecided voters. So unless there are further unexpected developments, it looks to me like Trump will end up with only 30% to 35% of the Utah vote.

Can Clinton’s share exceed this? If Trump gets only 30% then the answer is almost certainly yes: Clinton would then have to gain only a tiny fraction of the undecideds, Trump defectors, and perhaps defectors from minor candidates. If Trump can keep his vote share near 35% then it will be harder for Clinton, but still not out of the question. Let’s also remember that the percentages listed above are pretty uncertain, and you could make a case for discarding the weird outlying CVOTER International poll results; then Trump’s support would have already been below 40% even before the latest scandal.

Is there any chance that Johnson or McMullin could win? I think that would be a long shot, because they seem to be splitting the conservative anti-Trump vote so evenly. Only if one of them drops out, or otherwise implodes, would the other have a decent chance of surpassing Clinton.

The bottom line, in my opinion, is that Clinton is now a slight favorite to defeat Trump in Utah and carry the Beehive State. I say “slight” because of the large uncertainties in the past polling data, in the impact of the recent developments, and in what could still happen during the next 30 days. In any case, I can hardly wait to see what upcoming polls of Utah show, and to see how Utahns actually vote in such an extraordinary election.

Update, 16 Oct 2016: During the week since I wrote this article we’ve gotten three new Utah polls, and FiveThirtyEight has updated its Utah model to include Evan McMullin. Here’s their summary table of the polls that include McMullin, which are the only ones the model now uses:

The Y2 Analytics poll, first reported late on the night of the 11th, caused a flurry of excitement because it shows Clinton and Trump tied at only 26%. Equally remarkable is that McMullin is just behind at 22%, even though only 52% of respondents were aware of his candidacy. This result immediately made me question my earlier dismissal of McMullin’s chances. It also prompted articles covering the race in the New York Times, Washington Post, and FiveThirtyEight.

The subsequent polls from Monmouth and YouGov confirm that McMullin’s support is around 20%, but contradict the earlier indication that his gain has come entirely at the expense of Trump, whose support remains in the mid-30s. If these polls are a reasonably accurate predictor of the final results, then Trump will still win Utah by a safe margin.

After combining all six polls and making the minor adjustments described above, FiveThirtyEight now obtains the following “adjusted polling averages”:
  • Clinton 24.1%
  • Trump 33.8%
  • Johnson 10.7%
  • McMullin 19.4%
  • Other/undecided 12.0%
Although Trump’s support has fallen about as much as I predicted a week ago, he remains comfortably ahead of Clinton because her support has also fallen somewhat (or at least is lower in polls that include McMullin). Could she or McMullin still win? Yes, because the uncertainty in these numbers is fairly large and the situation in Utah still seems pretty volatile. On the other hand, many Utahns will receive mail-in ballots during the coming week, so the clock is starting to run out. For what it’s worth, the PredictIt betting market, as translated by ElectionBettingOdds, currently has the odds of winning Utah at Trump 71.5%, Clinton 20.0%, and Other (presumably McMullin) 8.5%.

Update, 8 Nov 2016: Polls of Utah have been coming thick and fast over the last three weeks, but the picture hasn’t changed much over this time. Here’s another screen capture from FiveThirtyEight showing nearly all of the polls that include McMullin:

The general picture here is pretty clear: Trump is ahead in almost every poll, though there’s disagreement over whether his lead is by single or double digits. McMullin is the frontrunner in just one poll, and Clinton in none. Johnson has collapsed. Here are FiveThirtyEight’s averages and adjustments, to obtain its final prediction for the Utah presidential election:

In the adjusted polling average, Trump comes out ahead of Clinton by nearly ten percentage points, while McMullin is behind Clinton by a point and a half. But then FiveThirtyEight assigns most of the remaining undecided voters to McMullin (presumably there’s a precedent for this), so McMullin ends up in second place in the final projection. The calculated win probabilities are Trump 82.9%, McMullin 13.5%, and Clinton 3.6%.

Meanwhile, Election Betting Odds has Trump at 87% likely to win, Clinton at 7%, and Other at 6%. Clinton’s higher odds here may reflect a recent report that she is ahead among early voters. It wouldn’t especially surprise me if Clinton beats her polls by a few points due to the early vote advantage, especially because many Utahns haven’t gotten used to Utah’s new mostly-by-mail voting system, and the number of physical polling locations has been greatly reduced since the last presidential election. Republicans who have hesitated this long because they’re unenthusiastic about all the candidates may have little motivation to find their polling locations and wait in the potentially long lines.

Still, it seems highly unlikely that either Clinton or McMullin will make up the roughly ten-point polling deficit to catch Trump, who will probably win Utah with less than 40% of the vote.

Just as Trump’s potential national victory says a lot about the state of American politics, so also his ability to win Utah tells us that our state isn’t as different as many would like to believe. Although many prominent Utah politicians have denounced Trump, Reps. Chaffetz and Stewart ultimately backtracked and said they would vote for him anyway. Governor Herbert and Mitt Romney have remained silent about whom they’re voting for. (A McMullin endorsement from either of them, which I was half expecting four weeks ago, might have put McMullin in the lead.) The bottom line is that even though most Utahns fully understand that Trump is a lying, bigoted asshole who’s absolutely unqualified for the job, their allegiance to the Republican Party drives them to dislike Clinton even more. Many Utahns will explain that at least Trump will (he says) appoint anti-abortion justices to the Supreme Court. Few of them, I suppose, have carefully thought through the risks that America and the world will face if Trump actually wins.

Update, 19 January 2017: Before the inauguration of President Trump I suppose I should finish this saga with the actual Utah election results:
  • Trump 45.5%
  • Clinton 27.5%
  • McMullin 21.5%
  • Johnson 3.5%
  • Others 2.0%
Comparing to the final FiveThirtyEight polling averages above, we see that not only did essentially all of the undecided voters apparently end up voting for Trump, but he also picked up a fair number of McMullin and Johnson defectors in the final days before the vote. This result fits in nicely with the conventional wisdom about what happened in the decisive swing states, with the further complication that a larger percentage of Utah voters was up for grabs. Of course, it’s also possible that there was a systematic polling error in Utah, such as an under-sampling of white voters without college degrees. In any case, I was obviously wrong to predict that Trump would end up with under 40% of the vote. As for Clinton, she did over-perform her polls as I more or less predicted, but only by about a point.

Despite my poor numerical predictions, I think the overall tone of my final election-day paragraph holds up pretty well. Of course the important question now is what will happen during Trump’s presidency. The nation is headed into uncharted territory, with a vast range of possible outcomes ranging from reasonably normal to absolutely catastrophic. I don’t see how anyone could possibly predict what will happen.

Monday, September 12, 2016

A Year of Solar Data

My solar panels were installed in August of last year, and two months later I reported on how they were performing. Now, after a full year of operation, it’s time for a more comprehensive report.

The bottom line is that the panels produced a little less electrical energy than the installer predicted, but still quite a bit more than I used over the course of the year. Here’s a diagram showing the overall energy flows:

Here and throughout this article I’ll present data from the year that began on 1 September 2015 and ended on 31 August 2016. During that time the panels produced 1558 kilowatt-hours (kWh) of electrical energy, and I used 349 kWh of that energy directly. The other 1209 kWh went onto the grid for my neighbors to use. But I also pulled 813 kWh of energy off the grid, at night and at other times when I needed more power than the panels were producing. My total home usage from both the panels and the grid was 1162 kWh. (I got the solar production amount from my Enphase solar monitoring system, and the amounts going to and from the grid by reading my electric meter. From these three numbers I calculated the other two.)
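The bookkeeping here is just conservation of energy: given the three measured numbers, the other two follow. A sketch of the arithmetic:

```python
# Measured: solar production (Enphase) and the two grid meter readings.
produced = 1558      # kWh generated by the panels
to_grid = 1209       # kWh pushed onto the grid
from_grid = 813      # kWh pulled off the grid

# Derived: direct solar use and total home consumption.
direct_use = produced - to_grid      # 349 kWh used straight from the panels
total_use = direct_use + from_grid   # 1162 kWh total home usage
net_export = to_grid - from_grid     # 396 kWh net surplus sent to the grid

print(direct_use, total_use, net_export)  # 349 1162 396
```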

Because I used less energy than my panels produced, I’ve paid no usage charges on my electric bills since the system was installed; I pay only the monthly minimum charges, which come to about $9.00 per month including taxes. Under Utah’s net-metering policy (which could change in the future), each kWh that I push onto the grid can offset the cost of a kWh that I pull off of the grid at some other time. But I don’t get to make a profit from the 396 kWh excess that I pushed onto the grid over the course of the year; that was effectively a donation to Rocky Mountain Power, worth about $40 at retail rates.

Monthly and daily details

So much for the yearly totals. But the picture varies quite a bit with the seasons, as shown in this graph of my panels’ monthly output:

The total energy generated in July (165 kWh) was twice as much as in January (81 kWh), with a pretty steady seasonal rise and fall in between. On the other hand, my installer estimated significantly higher production in winter and spring, plotted on the graph as green squares. (I get a similar over-estimate of the winter and spring production, relative to summer and fall, when I use the NREL PVWatts calculator, with weather data from the Ogden airport. So maybe my location is cloudier than the airport, and/or maybe last winter was cloudier than the 30-year average that the calculator uses.) The actual annual production of 1558 kWh was 91% of the estimated total of 1713 kWh. (An earlier, less formal estimate from the installer was 1657 kWh for the year, and not broken down by month; my annual production was 94% of that estimate.)

You might think the factor-of-2 seasonal variation in my solar energy production was a direct result of the varying length of the days and/or the varying solar angles. In fact, however, it was mostly due to varying amounts of cloud cover. You can see this in a plot of the daily energy generated:

The energy output on sunny days varied only a little with the seasons, and was actually lowest in the summer. But summer days in northern Utah are consistently sunny, whereas a full day of sunshine can be uncommon in mid-winter. Incidentally, my best day of all was February 23 (6.7 kWh), while my worst day was January 30 (0.0 kWh, because it snowed throughout the day).

Although the seasonal variations among sunny days are relatively small, they’re still interesting. The output drops off in mid-winter because the days are shorter, and also because the mountains block the early morning sunlight. On the other hand, the output drops off in the summer because of the steep angle of my roof. The panels face the noon sun almost directly throughout the fall and winter, but they face about 37 degrees too low for the mid-summer noon sun, reducing the amount of solar power they receive by about 20% (because the cosine of 37° is 0.8). The following plot shows all these effects:
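The 20% figure is just the cosine of the mismatch angle; a quick check (assuming direct sunlight and ignoring haze and temperature):

```python
import math

def tilt_factor(mismatch_deg):
    # Fraction of direct sunlight intercepted when the panel normal
    # misses the sun by the given angle.
    return math.cos(math.radians(mismatch_deg))

# Fall/winter noon: the panels face the sun almost directly.
print(round(tilt_factor(0), 2))    # 1.0
# Mid-summer noon: about 37 degrees too low.
print(round(tilt_factor(37), 2))   # 0.8, i.e. a ~20% reduction
```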

Notice that the vertical axis on this plot is power, or the rate of energy production. To get the total energy generated you need to multiply the power by the time elapsed, which is equivalent to calculating the area under the graph. As you can see, the June graph is lowest at mid-day but extends farther into the early morning and late afternoon, while the December graph is highest but narrowest. The total energy (area) is largest for the March graph. The asymmetry in the December graph, and in the lowest part of the March graph, is from the mountains blocking the rising sun. The smooth “shoulders” on either side come from the shadow of the pointy gable in the middle of my roof.
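That power-to-energy conversion is easy to do numerically if you have sampled power data, say from a solar monitoring log. A sketch using the trapezoid rule (the sample values below are made up for illustration, not my actual data):

```python
def energy_kwh(power_watts, dt_hours):
    # Area under the power-vs-time graph, one trapezoid per interval.
    total = 0.0
    for p1, p2 in zip(power_watts, power_watts[1:]):
        total += 0.5 * (p1 + p2) * dt_hours
    return total / 1000.0  # watt-hours -> kilowatt-hours

# Hypothetical power readings in watts, one every half hour:
samples = [0, 200, 600, 900, 1000, 900, 600, 200, 0]
print(round(energy_kwh(samples, 0.5), 2))  # 2.2 kWh for this stretch
```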

With all of these effects in mind, as well as the day-to-day variations in cloud cover, let me now show all of my solar data for the year in a single image. Here the day of the year is plotted from top to bottom, and the time of day from left to right. The power level in watts is represented by color, with brighter colors indicating higher power levels:

In the upper-left portion of this image you can more or less see the shape of the mountains, with a reflection at the winter solstice. The dark stripes are cloudy days, with the exception of a power outage during the wind storm of May 1 (that’s right—a standard grid-connected photovoltaic system produces no power when the grid goes out). Subtle astronomical effects cause some further asymmetries from which, with enough analysis, you could probably extract the shape of the analemma.

Details aside, the big picture is that the steepness of my roof is almost ideal for the winter months. It even ensures that snow slides off the panels as soon as the sun comes out. But the steep angle hurts my solar production more in the summer than it helps in the winter, mostly because so many winter days are cloudy anyway.

Effect of temperature

Looking back at the previous graph for the three sunny days in different seasons, you might have noticed that the noon power level drops from winter to summer by more than the 20% predicted by the solar geometry. The discrepancy on any particular day could be due to variable amounts of haze, but there’s another important effect: temperature.

To isolate the effect of temperature, I took the noon power level for every day of the year and divided it by the (approximate) cosine-theta geometrical factor to get what the power would have been if the panels were directly facing the sun. Then I plotted this adjusted power level vs. the ambient temperature (obtained from Ogden-area weather reports) to get the following graph:

The data points cluster along a line or curve with a negative slope, confirming that the panels produce less power at higher temperatures. Very roughly, it appears that the power output is about 15% less at 90°F than at 20°F. For comparison, the data sheet for the solar panels indicates that the power should drop by 0.43% for each temperature increase of 1 degree Celsius, or about 17% for an increase of 70°F. But this specification is in terms of the temperature of the panels, which I wouldn’t expect to vary by the same amount as the ambient temperature.
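The 17% figure comes from converting the datasheet coefficient to the 70°F temperature swing:

```python
# Datasheet: power drops 0.43% per degree Celsius of panel temperature.
coeff_per_C = 0.43

# Ambient swing in the plot: roughly 20 F to 90 F.
delta_F = 90.0 - 20.0
delta_C = delta_F * 5.0 / 9.0       # about 38.9 C

loss_percent = coeff_per_C * delta_C
print(round(loss_percent))          # about 17, vs. the ~15% I observed
```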

(In the preceding plot, the outlying data points below the cluster are from days when clouds reduced the solar intensity; most such points lie below the range shown in the graph. I’m pretty sure that the outliers above the cluster are from partly cloudy days when the panels were getting both direct sunlight and some reflected light from nearby clouds.)

Electricity usage

Now let’s look at the seasonal variation in my home electricity usage, compared to the solar panels’ output. Here’s a graph of the monthly data, with the solar data now plotted as blue squares and the usage plotted as columns, divided into direct-from-solar usage and from-the-grid usage:

Unfortunately, my electrical usage peaks in mid-winter, when the solar production is at a minimum! But even during the bulk of the year when the solar production exceeds my total use, well over half of the electricity I use comes off the grid, not off the panels.

The good news is that I’ve actually reduced my total electricity use by about 15% since the panels were installed. I did this through several small changes: running the furnace less when I was away from home; cooling my house in the summer with a super-efficient whole house fan instead of smaller fans sitting in windows; and unplugging an old computer and a portable “boom box” stereo that were drawing a few watts even when turned off. I’m still using more electricity than I did a decade ago, when I had no home internet service and no hard-wired smoke detectors. But if you look just at what I’m using off the grid, it’s slightly lower even than back in those simpler times. Here’s an updated plot of my average daily usage during every month since I bought my house 18 years ago (as explained more fully in this article from last year):

What would it take to live off the grid?

I’ve repeatedly emphasized the electrical energy that I continue to draw from the grid, because I want readers to understand that virtually all of the solar panels being installed these days are part of the electrical grid—not an alternative to it. Even though my panels generate more electrical energy than I use over the course of a year, they will not function without a grid connection, and of course they generate no power at all during much of the time when I need it.

But what would it take to live off the grid entirely? The most common approach is to combine an array of solar panels with a bank of batteries, which store energy for later use when the sun isn’t shining. For example, there’s been a lot of talk recently about the new Tesla Powerwall battery, which stores 6.4 kWh of energy—enough to power my home for about two days of average use. A Tesla Powerwall sells for $3000, which is somewhat more than the net cost (after tax credits) of my solar panels. If I were to make that further investment, could I cut the cord and live off the grid?

To answer this question, I combined my daily solar generation data with a data set of nightly readings of my electric meter. (The latter data set is imperfect due to inconsistent reading times, missed readings when I was away, and round-off errors, but day-to-day errors cancel out over longer time periods so it should give the right picture overall.) I then calculated what the charge level of my hypothetical Tesla Powerwall would be at around sunset on each day, and plotted the result:

For most of the year the battery would hold more than enough energy to get through the nights, but in this simulation there were 42 evenings in the late fall and winter when the level dropped to zero, and several more evenings when it dropped low enough that it would surely be empty by morning. Simply getting a Tesla Powerwall is not enough to enable me, or most other households with solar panels, to disconnect from the grid.

What if I added a second Tesla battery? Unfortunately, that would reduce the number of zero-charge nights by only eight, from 42 to 34. In fact, it would take thirteen Tesla batteries, in this simulation, to completely eliminate zero-charge nights, because there is a period of a few weeks during mid-winter when the average output of my solar panels is barely over half what I’m using.
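For anyone who wants to repeat this exercise with their own data, the simulation logic is simple: each day, add the solar production, subtract the usage, clip the battery charge to its capacity, and count the nights it hits zero. A sketch (the week of daily numbers below is hypothetical, not my data):

```python
def count_empty_nights(solar_kwh, use_kwh, capacity_kwh):
    # Simulate an off-grid battery: start full, update the charge daily,
    # clip to [0, capacity], and count the nights it runs out.
    charge = capacity_kwh
    empty = 0
    for s, u in zip(solar_kwh, use_kwh):
        charge = min(charge + s - u, capacity_kwh)
        if charge <= 0:
            charge = 0
            empty += 1
    return empty

# Hypothetical week: three sunny days, then a cloudy stretch, with a
# steady 3.2 kWh/day usage and a single 6.4 kWh Powerwall.
solar = [5.0, 5.5, 4.8, 0.5, 0.3, 0.4, 5.2]
usage = [3.2] * 7
print(count_empty_nights(solar, usage, 6.4))  # 1 empty night
```

Run over a full year of real daily data, this is essentially the calculation behind the 42-night figure above.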

The better solution, therefore, would be to add more solar panels. For example, if I were to double the size of my solar array and install two Tesla Powerwalls, then the simulation predicts that I would run out of electricity just one night during the year. Of course this scenario is still extremely wasteful, because I’d be using less than half the capacity of the panels and only a small fraction of the capacity of the batteries during most of the year. That’s why people who actually live off the grid tend to have backup generators that run on chemical fuels, and don’t rely on electricity for most of their heating or cooking.

Similar calculations would apply to our society as a whole. A massive investment in both solar panels and batteries could conceivably get us to the point where most of our electricity, for most of the year, is coming from the sun. But it will never be economical to get that “most” up to 100%, because so much over-building would be needed to get through periods of cloudy weather, and it will be much less expensive to use other energy sources at those times.