Wednesday, April 16, 2014

Fuel Economy vs. Power

The recent experience of buying a new car left me angry and bewildered over the meager choices for those of us who care about fuel economy. Here we are in 2014, more than 40 years after the OAPEC oil embargo, and in the U.S. you still can’t buy a liquid-fueled car with an EPA combined rating above 50 miles per gallon. Only a handful of cars exceed 40 mpg, and your selection is pretty limited until you get down to mpg ratings in the low 30s. The most efficient pickups and minivans get 23 and 24 mpg, respectively. (Throughout this article I’m using city/highway “combined” fuel economy values under the current, less generous, EPA rating system.)

To some extent the limitations on fuel economy are due to basic physics: air and rolling resistance, braking losses, and thermodynamic limits on engine efficiency. But the existence of the 50 mpg Prius and of high-efficiency cars sold outside the U.S., not to mention the 47 mpg Geo Metro from a generation ago, raises the question of why more cars aren’t comparably efficient. The short answer is that most American car buyers don’t care. Or rather, they care much more about other factors such as size, price, appearance, and power. The most interesting of these is power.

Each year the EPA publishes a report under the cumbersome title Light-Duty Automotive Technology, Carbon Dioxide Emissions, and Fuel Economy Trends. All 135 pages of the latest Trends report are informative, but I’ll highlight just Figure 2.3, which shows some trends in fleet-wide averages for new vehicles sold in the U.S.:


First look at the line for weight, which decreased sharply by 20% in the late 1970s (as more people bought small cars), then began creeping upward in the late 1980s (as SUVs became popular). By 2004 the average vehicle weight was back to its 1975 value, and it has stayed there since.

Simple physics predicts that fuel economy should increase by about the same percentage that weight decreases, all other things being equal. But all other things have not been equal. Since 1975 we’ve seen steady improvements in engine and drive train efficiency, as well as in aerodynamics. Today’s new vehicles are 80% more efficient than in 1975, with essentially no change in average weight.

My point, though, is that the efficiency gain could have been significantly more than 80%, even with the same weights and the same technologies. Look at the trend in horsepower, which has been rising much faster than weight for the last 30 years. A higher power/weight ratio translates into faster acceleration, but (other factors being equal) lower fuel economy. Let’s quantify these effects.

A thorough statistical analysis of the acceleration performance of U.S. vehicles was published a couple of years ago (in both report and poster formats) by MacKenzie and Heywood of MIT. Their data set of 1500 cars and light trucks came from tests done by Consumer Reports, and was representative of the U.S. auto fleet as a whole. The results are striking:


Whether you look at the median (black curve), the slowest vehicles (red curve), or the fastest vehicles (green curve), 0-60 mph acceleration times are barely over half what they were in the early 1980s. To quote from MacKenzie and Heywood, “Acceleration performance that was typical in the early 1990s would put a vehicle among the slowest on offer today. Even the slowest end of the market (95th percentile) today delivers performance that was reserved for the fastest vehicles (5th percentile) in the mid-1980s.” Some of this increased performance has come from improvements in drive trains and aerodynamics, but most of it is a direct result of higher power/weight ratios. MacKenzie and Heywood found that with other factors held fixed, each 1% increase in power reduces the 0-60 mph acceleration time by about 0.7% for lower-power vehicles and about 0.58% for higher-power vehicles. (I think these values are less than 1% because limited traction prevents a vehicle from using its full power at low speeds.)
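To see what those elasticities imply, here's a rough sketch (my own extrapolation, not a calculation from the MacKenzie and Heywood report) that treats the elasticity as a constant power-law exponent, so the 0-60 time scales as power to the −0.7 for lower-power vehicles:

```python
# Rough sketch (my extrapolation, not from the MacKenzie-Heywood report):
# treat the quoted elasticity as a constant power-law exponent, t ~ P**(-k).

def accel_time_ratio(power_ratio, elasticity=0.7):
    """Predicted 0-60 mph time multiplier for a given power multiplier.

    elasticity=0.7 is the lower-power-vehicle value quoted above;
    use 0.58 for higher-power vehicles.
    """
    return power_ratio ** (-elasticity)

# Example: a 30% power increase on a lower-power vehicle...
ratio = accel_time_ratio(1.30)
print(f"0-60 time shrinks to {ratio:.0%} of its old value")  # about 83%
```

So a 30% power boost buys you roughly a 17% reduction in 0-60 time, under this simplified constant-elasticity assumption.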

But why should engine power affect fuel economy? The answer lies not so much in basic physics as in the practicalities of engine operation. My amateur’s understanding is that running a gasoline engine at less than its full power means filling the cylinders at less than atmospheric pressure. You then get less force during the power stroke, with a proportional reduction in fuel consumption, while there’s no reduction in the friction between the piston and the cylinder wall—and that friction lessens the efficiency. In other words, a smaller engine running closer to full throttle is more efficient than a larger engine that’s being throttled back to produce the same power output.
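Here's a toy calculation to make that qualitative picture concrete. All the numbers are invented for illustration; the only assumptions are that friction loss per cycle scales roughly with engine size while fuel burned scales with the gross (indicated) work of the power stroke:

```python
# Toy model (invented numbers, not real engine data): fixed friction loss
# per cycle proportional to engine size; fuel burned proportional to the
# indicated (gross) work of the power stroke.

def efficiency(indicated_work, friction_loss):
    # fraction of the fuel-proportional indicated work delivered as useful work
    return (indicated_work - friction_loss) / indicated_work

# Small engine near full throttle: 100 units of indicated work, 15 lost to friction.
small = efficiency(100, 15)   # 0.85
# An engine twice as large, throttled back to the same 100 units of
# indicated work, but with twice the friction: 30 units lost.
large = efficiency(100, 30)   # 0.70
print(small, large)
```

Same power output, but the bigger throttled-back engine wastes twice as much of its fuel on friction, in this cartoon version of the story.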

So much for qualitative understanding. What about some numbers? 

I couldn’t easily find a quantitative analysis of the effect of engine power on fuel economy, so I did a quick empirical analysis of my own. Lacking the time to study every vehicle on the U.S. market, I started with a list of the 30 best-selling models in 2013. I then looked up each of these models in the 2014 EPA database, and picked out those that come with more than one engine option. I further pruned the list down to pairs of vehicles with different engines but the same (or nearly the same) transmission and drive type, and I eliminated duplicate pairs (e.g., same two engine options with different drive trains, or similar vehicles sold under different names). I also eliminated vehicles with turbocharged engines, which are generally more efficient but add a lot of noise to the data. Finally I was left with 14 vehicle pairs (5 cars, 3 SUVs, and 6 pickups) to compare, and I looked up the engine power for each on the manufacturers’ web sites. Here’s the final list:

Vehicle (transmission)              Engine 1    HP   MPG    Engine 2    HP   MPG    ΔHP   ΔMPG
Chevrolet Impala (auto 6)           2.5L 4cyl  195  24.5    3.6L 6cyl  305  21.4    56%   −13%
Honda Accord (manual 6)             2.4L 4cyl  185  27.7    3.5L 6cyl  278  21.6    50%   −22%
Hyundai Elantra (auto 6)            1.8L 4cyl  145  31.5    2.0L 4cyl  173  27.9    19%   −11%
Nissan Altima (auto CVT)            2.5L 4cyl  182  31.2    3.5L 6cyl  270  25.3    48%   −19%
Toyota Camry (auto 6)               2.5L 4cyl  178  28.7    3.5L 6cyl  268  24.8    51%   −14%
Chevrolet Equinox AWD (auto 6)      2.4L 4cyl  182  23.5    3.6L 6cyl  301  18.9    65%   −19%
Jeep Grand Cherokee 4WD (auto 8)    3.6L 6cyl  290  19.5    5.7L 8cyl  360  15.9    24%   −18%
Jeep Grand Cherokee 4WD (auto 8)    5.7L 8cyl  360  15.9    6.4L 8cyl  470  14.9    31%    −7%
Chevrolet Silverado 2WD (auto 6)    4.3L 6cyl  285  19.8    5.3L 8cyl  355  18.6    25%    −6%
Chevrolet Silverado 2WD (auto 6)    5.3L 8cyl  355  18.6    6.2L 8cyl  420  17.0    18%    −9%
Ford F150 4WD (auto 6)              3.7L 6cyl  302  17.5    5.0L 8cyl  360  15.9    19%    −9%
Ford F150 4WD (auto 6)              5.0L 8cyl  360  15.9    6.2L 8cyl  411  13.4    14%   −16%
Ram 1500 2WD (auto 8)               3.6L 6cyl  305  19.7    5.7L 8cyl  395  17.3    30%   −12%
Toyota Tacoma 4WD (manual 5/6)      2.7L 4cyl  159  19.2    4.0L 6cyl  236  17.0    48%   −11%

The last two columns show the differences in power and fuel economy, respectively, between the first and second engine options. Here’s a plot of these two columns, showing that there’s quite a bit of scatter in the data but the decreasing trend is clear:


On average, the percentage decrease in fuel economy is about 1/3 of the percentage increase in engine power. So, for example, a 30% increase in power typically results in a 10% decrease in fuel economy.
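For anyone who wants to check the arithmetic, here's one way to reproduce that roughly 1/3 figure from the table's raw HP and MPG values. (I'm using a least-squares slope through the origin; other reasonable averaging choices give similar values.)

```python
# Recompute the ~1/3 ratio from the table's raw HP and MPG values.
# Each tuple is (hp1, mpg1, hp2, mpg2) for one vehicle pair:
pairs = [
    (195, 24.5, 305, 21.4),  # Chevrolet Impala
    (185, 27.7, 278, 21.6),  # Honda Accord
    (145, 31.5, 173, 27.9),  # Hyundai Elantra
    (182, 31.2, 270, 25.3),  # Nissan Altima
    (178, 28.7, 268, 24.8),  # Toyota Camry
    (182, 23.5, 301, 18.9),  # Chevrolet Equinox AWD
    (290, 19.5, 360, 15.9),  # Jeep Grand Cherokee 4WD
    (360, 15.9, 470, 14.9),  # Jeep Grand Cherokee 4WD
    (285, 19.8, 355, 18.6),  # Chevrolet Silverado 2WD
    (355, 18.6, 420, 17.0),  # Chevrolet Silverado 2WD
    (302, 17.5, 360, 15.9),  # Ford F150 4WD
    (360, 15.9, 411, 13.4),  # Ford F150 4WD
    (305, 19.7, 395, 17.3),  # Ram 1500 2WD
    (159, 19.2, 236, 17.0),  # Toyota Tacoma 4WD
]

dhp  = [hp2 / hp1 - 1   for hp1, mpg1, hp2, mpg2 in pairs]   # fractional HP gain
dmpg = [mpg2 / mpg1 - 1 for hp1, mpg1, hp2, mpg2 in pairs]   # fractional MPG change

# Least-squares slope through the origin: dmpg ~ slope * dhp
slope = sum(x * y for x, y in zip(dhp, dmpg)) / sum(x * x for x in dhp)
print(f"slope = {slope:.2f}")  # about -0.33
```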

On one hand, these results help explain why consumers are so inclined to choose power over fuel economy: In percentage terms, you typically get about three times the added power for every bit of fuel economy you’re willing to sacrifice! On the other hand, MacKenzie and Heywood’s analysis shows that your 0-60 mph acceleration time drops by only about 2/3 as much as the power gain (in percentage terms), or about twice the percentage that the fuel economy drops. And of course, bigger engines are also more expensive. Given that Americans were happy to buy much less powerful vehicles only a generation ago, it’s hard to believe that most consumers are behaving rationally when they choose more powerful vehicles.

(In some places you’ll read that cars with slower acceleration are less safe—though I’ve never seen any actual evidence for this claim. Leaving aside the likelihood that powerful cars encourage stupid people to drive stupidly, I suppose the argument is that you need fast acceleration to safely merge onto a freeway where traffic is moving rapidly. Yet somehow we still share freeways with heavy trucks and buses and RVs and vehicles towing trailers and quite a few 25-year-old economy cars, all with accelerations much slower than that of any of today’s light-duty vehicles. In practice, slow acceleration just means you sometimes need to wait a little longer before it’s safe to merge. It’s really a question of incremental convenience, not safety.)

Hypothetically, if Americans were willing to go back to the acceleration performance of vehicles made in 1985, we could immediately increase the average fuel economy of new cars by more than 30%. Realistically, that’s not going to happen unless there’s another oil crisis or similar shock to the economy. The most we can probably hope for is that acceleration performance (and vehicle weight) will plateau, so future technological improvements will translate fully into better fuel economy.

Meanwhile, I wish the auto makers would offer just a few more extra-efficient vehicles of various designs, to give consumers more choice. By combining power levels that were typical of the early 1990s with the best current technologies for engines, transmissions, hybrid systems, and aerodynamics, it shouldn’t be hard to produce a 40 mpg small SUV, a 35 mpg minivan, a 30 mpg pickup, and a 60 mpg subcompact. They might not become the instant market leaders, but they would still get plenty of attention, sell to the niche market of sane consumers, and perhaps raise everyone’s expectations for the future.

Wednesday, March 5, 2014

Little Blue and Big Blue

I don’t especially like cars. They’re too big and too fast and too dangerous and too polluting and too isolating and too seductively comfortable and especially too ubiquitous. For everyday commuting and most errands I’ll stick to my trusty bicycle.

Still, I have to admit that cars are useful. I bought my first one in 1991 when I moved to a small town in Iowa, because I knew I would occasionally need a way to escape. And I still have that car: a 1989 Toyota Tercel hatchback, now known affectionately as Little Blue. I’m a bit embarrassed to admit that I’ve grown attached to it.

My parents helped me pick out Little Blue from the classified ads: automatic transmission, 17,460 miles, $6000. A nice practical car for a young single visiting assistant professor, and an easy car to drive and maneuver and park, for someone who didn’t have much experience behind the wheel.

Oh, the places I went in Little Blue. During my two years in Iowa there were monthly supply runs to Iowa City, occasional trips to St. Louis to see the folks, a canoe outing with five students who all had to squeeze in for the return drive after the other car broke down, a big camping vacation to southern Utah after school was out in 1992, and a spring break (1993) hiking trip to Arkansas (anywhere warmer than Iowa!) when, on the way home near Joplin, Missouri, the differential somehow ran dry and ground itself into smithereens.

After the move to Utah there were road trips all around the West, mostly to hike and camp in the mountains. I made some of these trips alone, but more often brought a friend or two. Little Blue still reminds me of companions from long ago, including the greatly missed friend who put that big dent near the left front wheel.

Little Blue has accumulated several bumper stickers over the years: Radio Free Utah, Save Our Canyons, Kill Your Television, Down the Hatch (24 years is too long!), Transit First, Obama ’08, FOrward, and =.  But the rear bumper faces the noon sun from Little Blue’s parking space, so the stickers that haven’t completely disintegrated are well on their way.

Since we bought a Prius at the end of 2004, Little Blue hasn’t gotten much use. I no longer feel very safe in such a small car without airbags, and of course the Prius gets much better fuel economy. (Its nickname is the Patriot Car, since you don’t have to attack Iraq to get enough gasoline to run it.) So Little Blue’s odometer has only gradually crept beyond 100,000, even as the passage of time has taken a toll on more and more of its parts. Still, we do occasionally need a second car around town, and the Patriot Car is pretty lousy on snow and on Utah’s unpaved back roads.

So I’ve just taken the plunge and bought a replacement for Little Blue: a 2014 Subaru XV Crosstrek, known for the time being as Big Blue. It dwarfs Little Blue, even though by today’s standards it’s not especially big. But it’s about the most modest (and most efficient) vehicle you can buy that has high clearance, which I want for those trips into the backcountry. It also has all-wheel drive, so we’ll use it in town when the roads are icy.


I have extremely mixed feelings about buying a new car. It was a stupid decision financially, especially since I don’t plan to drive it more than 5000 miles a year. It would have been far cheaper to fix up Little Blue, or to buy a used Subaru or perhaps a Ford Escape hybrid or some other small SUV. But the Crosstrek, which came out only a year ago, is closer to what I really want than any of those older models, in terms of capability and fuel economy. And I’m not enthusiastic about spending the time to shop for and maintain a used car. At this point in my life, my money is worth less than my time.

It’s fun and informative to compare some of the specifications of Little Blue, the Patriot Car, and Big Blue. Here for each is the curb weight, engine power (including the hybrid drive for the Prius), and city/highway fuel economy under the current EPA rating system:
1989 Tercel: 2085 lbs, 78 hp, 24/29 mpg
2005 Prius: 2921 lbs, 110 hp, 48/45 mpg
2014 Crosstrek: 3175 lbs, 148 hp, 25/33 mpg
Although the power/weight ratio is about the same for the two Toyotas, the Prius accelerates much faster—presumably because of its better transmission (CVT vs. three-speed automatic). In practice, both Little Blue and the Patriot Car have consistently beaten their EPA mpg ratings on the highway, but fallen short of them in the city. Probably the same will hold true for Big Blue, but we’ll see.
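The power/weight comparison is easy to check from the numbers above:

```python
# Power-to-weight for the three cars (hp per 1000 lb of curb weight):
cars = {
    "1989 Tercel":    (78,  2085),
    "2005 Prius":     (110, 2921),
    "2014 Crosstrek": (148, 3175),
}
for name, (hp, lbs) in cars.items():
    print(f"{name}: {1000 * hp / lbs:.1f} hp per 1000 lb")
# Tercel and Prius both come out near 37; the Crosstrek near 47.
```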

I have more to say about power and fuel economy, but that will have to wait for a future blog.

Saturday, October 19, 2013

Inventory of Physics Simulations in HTML5/JavaScript

When I discovered last winter how useful JavaScript and the HTML5 canvas element can be for physics simulations, I was astonished that there seemed to be so few examples of such simulations out there. That situation is rapidly changing. Here’s an inventory of the examples I’m aware of at this time.

My own portfolio of three simulations is unchanged, except for a few bells and whistles added to each of them:
Over the summer, with support from the Weber State University Beishline Fellowship, our student Nathaniel Klemm ported five of the simulations that my colleague Farhang Amiri uses in his general education physics course:
(The original versions of these simulations were written by Farhang Amiri and Brad Carroll in Adobe Director, and runnable through the Shockwave browser plugin. These were part of a much larger collection of simulations that they developed in the 1990s. Unfortunately, support for the Shockwave plugin is becoming problematic.)

Andrew Duffy of Boston University has created some simple mechanics simulations:
These would be good examples for a beginning HTML5 developer to learn from, since their code is short and straightforward. (Andrew and I will be giving a workshop on beginning HTML5 simulation development at the 2014 AAPT summer meeting next July.)

Dan Cross of Haverford College has created two extremely elegant simulations that I especially recommend:
John Denker has a simulation that draws hydrogen wavefunction scatter plots:
Dan Styer and Noah Morris of Oberlin College have created a nice demonstration of two-slit interference:
This simulation uses the jQuery UI library for its slider controls, which unfortunately makes them unusable on devices that rely on touch events.

GlowScript is a 3D graphics library built on WebGL, created by David Scherer and Bruce Sherwood who modeled it on their earlier VPython system. GlowScript is accompanied by a web-based development environment that eliminates the need to write HTML, although it can also be used as an ordinary JavaScript library. Some collections of GlowScript examples are posted here:
Unfortunately, the use of WebGL makes GlowScript simulations runnable only under certain browsers. As of this writing they will not run on most mobile devices, although some mobile devices offer partial support.

Francisco Esquembre of the University of Murcia has created a high-end development environment for quickly creating physics simulations, called Easy Java Simulations. Although EJS is a Java program, the new version 5 beta release can output stand-alone HTML5/JavaScript code. More than a dozen examples are now posted here:
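(This anchor is replaced below; see the separate edit at L125.)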
The PhET group at Colorado has recently gone public with its first six HTML5 simulations for introductory physics and chemistry:
As you would expect from PhET, these simulations are extremely professional and hence the code, which relies on a vast collection of libraries, is unreadable by mortals.

All of the simulations listed above were created by (or under the supervision of) academic physicists, primarily for the purpose of physics education. But of course the vast majority of graphics-intensive HTML5 development is being done by game developers. Here are a couple of these efforts that contain good physics and are fun to play with:
And that’s my list for now. If any readers out there would like to add to this list, feel free to leave links (noncommercial, please) in the comments.

Tuesday, August 27, 2013

College Tuition Has Outpaced Inflation by 237% Since 1978

Everyone from President Obama on down seems to be talking about how expensive college has become. Amidst all this talk you hear plenty of statistics, usually quoted without much context, by people who have a political agenda. The Democrats want to make college more affordable for the poor, while the Republicans want to help the rich tap into the tuition gravy train. College professors and administrators want to protect their own salaries and budgets. Everyone, it seems, is an expert, and indeed, there is a vast body of literature on the economics of higher education.

So, being a typical curious physicist, I decided to ignore all this literature and try to get the big picture directly from the most obvious place: Consumer Price Index data from the U.S. Bureau of Labor Statistics. The CPI has included a college tuition (and fees) component since 1978, and it’s easy to download the data and compare it to the prices of other goods and services. The results are striking.

To visualize what has happened since 1978, I chose several other CPI data sets and divided each by its 1978 value to get a consistent baseline. Then, to more or less cancel out the effects of overall inflation, I divided each number for a specific CPI category by the “all items” value. Here, without further ado, are the results:
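In code, the normalization looks like this. (The index values below are made up purely for illustration; the real series come from the BLS CPI tables.)

```python
# Sketch of the normalization: rebase each series to 1978, then divide by
# the similarly rebased all-items CPI, so 1.0 means "kept pace with
# overall inflation".

def relative_price(series, all_items):
    base, all_base = series[1978], all_items[1978]
    return {yr: (series[yr] / base) / (all_items[yr] / all_base)
            for yr in series}

# Hypothetical example: a category whose index tripled while overall
# prices doubled has outpaced inflation by 50%.
all_items = {1978: 100.0, 2013: 200.0}
category  = {1978: 100.0, 2013: 300.0}
print(relative_price(category, all_items)[2013])  # 1.5
```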


College tuition has risen far more quickly than any other CPI component that I looked at, with the exception of pre-college tuition (which tracks college tuition very closely). You think medical care has gotten more expensive? In the last 35 years the medical care CPI has exceeded overall inflation by only 92%, while college tuition has outpaced inflation by 237%.

Shelter (buying or renting a home) has risen in price only a little faster than the overall CPI during this time. The price of energy has been quite volatile, also rising somewhat on average. Food prices have not quite kept pace with the overall CPI. Virtually all categories of manufactured goods, from apparel to household furnishings to new cars, have become much more affordable than they were in 1978.

In fact, this one graph tells much of the story of the U.S. economy over the last 35 years. Manufactured goods are now cheap because the manufacturing has been either outsourced or automated—and the retailers who sell these goods don’t pay high wages. The money is in professional services like law and finance and medicine and education that can’t easily be outsourced or automated. These professions require a college education, so the demand for college has risen, further driving up its price.

But where is all that tuition money going? That’s an excellent question, which I’ll try to address in a subsequent post.

[Addendum: Of course I’m not the first to produce a graph like the one above. Here’s one that appeared online just yesterday, although it doesn’t show as wide a variety of CPI categories and it doesn’t divide by the overall CPI as I did.]

Wednesday, July 24, 2013

Java vs. JavaScript vs. Python


At last week’s AAPT meeting I presented a poster showing how fun and easy it is to do fluid dynamics simulations using current personal computers and software. These simulations are computationally intensive, so every bit of performance counts. On the other hand, students and hobbyists and other nonexperts like myself rarely have time to write code that will squeeze every last drop of performance out of our machines. Also, it’s hard to share code written in C or Fortran—the languages of professionals—with other nonprofessionals.

So in recent years I’ve been writing all my physics simulations in Java or Python or JavaScript. And for my poster presentation I decided to clock these languages against each other. I had already written two-dimensional lattice-Boltzmann fluid simulations in all three languages (code posted here), so all I had to do was run them with the same lattice dimensions and measure the number of calculation steps per second. Here are the benchmark results:


This algorithm performs about a hundred basic arithmetic calculations per lattice site per time step, so the Java version, at over 1000 steps per second for 16,000 lattice sites, is performing well over a billion calculations per second on my 2.3 GHz i7 MacBook Pro. I suspect that C or Fortran would run about twice as fast.
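The back-of-envelope arithmetic behind "well over a billion":

```python
# Calculations per second = steps/s x lattice sites x operations per site.
steps_per_second = 1000   # measured for the Java version
lattice_sites = 16_000    # lattice size used in the benchmark
ops_per_site = 100        # rough count for the lattice-Boltzmann update
total = steps_per_second * lattice_sites * ops_per_site
print(total)  # 1600000000, i.e., 1.6 billion
```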

All three versions of this simulation include animated graphics, but I was careful to do the graphics in ways that didn’t significantly slow down the computation. In Java, the graphics automatically runs in a separate thread, presumably on a separate processor core.

Since my laptop has four processor cores, I could have sped up the Java version about threefold by parallelizing most of the computation. But that would violate the spirit of this test, which is to see what can be done without getting too fancy with the code. Perhaps Python can also be configured to make better use of my hardware, but I have no clue how to do so (I’m simply using the EPD Free Python installation). I did use NumPy to vectorize everything in Python, since Python loops are glacially slow.

The JavaScript results are shown for only the two fastest browsers, Chrome (version 27) and Firefox (version 22). It seems that Chrome has gotten slightly faster since my earlier tests in March, while Firefox has sped up significantly since then but still hasn’t caught up to Chrome. Both, in any case, offer remarkably good performance at more than half the speed of Java. Safari (6.0.5) is no faster than the previous version, coincidentally about the same speed as Python/NumPy. Opera (12.16) is still far too slow to bother with for this kind of simulation.

Of course, these three languages serve rather different purposes. JavaScript is for web deployment, while Python is for tinkering around on one’s own workstation. Java is fading away as a web platform, but I still like it for personal use when speed and/or interactivity is important.

Friday, June 7, 2013

Meter That Water!


One of the bewildering ironies of life in Utah is that although we live in the nation’s second driest state, many of us have access to unmetered water for our lawns and other summer irrigation needs. We pay a fixed (per acre) fee on our annual property tax bill, and then use as much or as little of the water as we like, from mid-April through mid-October.

The widespread availability of unmetered secondary water in suburban neighborhoods is apparently unique to Utah. Of course it’s also ironic that this bit of socialism is found in one of the nation’s most Republican states.

Most of our secondary water systems are remnants of the agricultural irrigation systems that were here generations ago, when our suburban neighborhoods were still farms and orchards. I’m not sure why similar systems aren’t found in the suburbs of other Western cities, but it could be simply because our water sources are relatively close to our population centers, making our water infrastructure inherently less expensive. Or it could be due to peculiar laws and practices that grew out of Utah’s early Mormon settlements.

The obvious down-side of unmetered water is that people have no economic incentive to use less of it. So Utahns plant enormous green lawns as if they lived in Kentucky, and then give the grass even more water than it needs. Those who use little or no water end up subsidizing those who waste it.

But the lack of a meter has another disadvantage: It deprives us even of the knowledge of how much water we’re using. As a scientist and educator, I live by the principle that knowledge is inherently beneficial, even when it has no practical consequences.

So, of course, I found a way to measure my secondary water use. First I tried holding my lawn sprinkler over a bucket and measuring how long it took to fill. Then, for convenience, I bought a little $29 meter that screws into a hose line. After making just a couple of measurements I had a very good idea of how many gallons per minute were coming out of the hose, so from then on it was just a matter of timing how long the sprinklers were on and doing a bit of arithmetic.

Here are my results, from last summer’s watering season: I used 1900 gallons outdoors during June, 3200 gallons in July, 2500 in August, and 1600 in September. The total for the season was 9200 gallons, and about 90% of that went onto the patch of buffalo grass in my back yard. The rest was for spot-watering various trees, shrubs, and perennials. I tried to water about twice a week during the hot part of the summer, but sometimes it ended up being less, due to laziness or forgetfulness or being out of town.
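The season total is just the sum of the monthly estimates:

```python
# Outdoor water use estimates (gallons), from timing the sprinklers
# and multiplying by the measured flow rate:
monthly = {"June": 1900, "July": 3200, "August": 2500, "September": 1600}
total = sum(monthly.values())
print(total)  # 9200
# About 90% of that (roughly 8300 gallons) went onto the buffalo grass.
print(round(0.90 * total, -2))
```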

For comparison, my indoor water use at home averages about 800 gallons per month.

About half of the city of Ogden—mostly the older neighborhoods—doesn’t have secondary water. Residents in those areas irrigate with metered culinary water, so we know how much they’re using. The amounts vary enormously, with many households using only a few thousand gallons per month but a few dozen customers using hundreds of thousands of gallons each. The use distribution from August 2012 is shown below, with the largest users (about 7% of them) omitted in order to show the rest at a reasonable scale. The median use among these customers during that month was 14,100 gallons, while the average, skewed by the high-use customers, was 20,500 gallons. One customer used a whopping 513,900 gallons.


These customers, of course, have a financial incentive to limit their water use (although thanks to relatively high base rates, the per-gallon fees aren’t very noticeable for those who use only a few thousand gallons per month). We can only presume that the other half of the city, with no such incentive, uses considerably more irrigation water per household.

For their culinary water, customers in Ogden receive a bill that shows not only the amount charged, but also the gallons used. In recent years the city has put additional useful information on the bill, such as the monthly amounts used over the past year (shown as a column graph) and a breakdown of how the water bill was calculated (which I personally lobbied for in early 2012). What’s still missing is any information comparing your usage to that of other city residents.

Along these lines, take eight minutes to watch this fantastic TED lecture. The speaker is Alex Laskey, president of a young company called Opower that helps utilities conserve energy by keeping their customers better informed. As Laskey describes, Opower’s work was motivated by an experiment done a decade ago in southern California, in which researchers found that the way to motivate people to use less electricity was to inform them about the conservation efforts of their neighbors.

Wouldn’t it be fantastic if all our utilities told customers how their usage compares to the average use of their neighbors? We could start with culinary water customers, right here in Ogden.

For our unmetered secondary water, though, it won’t be so easy. The first step is obviously to install meters. That will take a long time, but already the process has begun. Our area’s largest supplier of secondary water now has a pilot program to meter its customers’ usage in a few small neighborhoods. Based on the success of this program, I’m told that they’re hoping to install meters throughout their service area within as little as a decade. At that point they can, if they choose, begin to charge customers based on usage. But even before that controversial step, I suspect they’ll see significant water savings merely from telling customers whether their usage is excessive by local standards.

Friday, March 1, 2013

JavaScript and HTML5 for Physics


Like a number of other physics educators, I’ve invested a fair amount of time and energy over the last decade creating educational software in the form of Java applets.

I love Java for several reasons. It’s a reasonably easy language to learn and use, with logical rules and few exceptions. It gives very good performance, typically within a factor of 2 of native code. And, crucially, for a long time it was installed on nearly everyone’s computers, so most students and others could run my applets immediately, without any huge downloads or configuration hassles.

Unfortunately, that last advantage is now disappearing. For one thing, more and more people are replacing their computers with mobile devices that can’t run Java. Compounding the problem, security concerns have recently prompted many people (and companies) to disable Java on their computers. Some tech pundits have gone so far as to pronounce client-side Java dead.

While it’s likely that Java can be kept on life-support for several more years, its long-term prospects appear grim. So it’s time for us Java applet programmers to abandon this obsolescent technology and find an alternative. But is there one?

Until very recently, I thought the answer was no. Sure, I was aware of the new HTML5 canvas element, which allows drawing directly to the screen via JavaScript, right inside any web page, with no plugins. But this technology has mostly been described as a replacement for Flash, not Java, and I assumed that, like Flash, it would mean sacrificing a lot of performance. I couldn’t imagine that a scripting language could ever approach Java’s speed. Indeed, in an early test of a canvas animation a few years ago, I found the performance to be so poor that computationally intensive physics simulations were unthinkable.

I then avoided thinking about the issue until this winter, when the repeated Java security alerts provided an abrupt wake-up call. Apparently knowledgeable geeks all over the internet were telling the public to disable Java, insisting that nobody should ever need to use it again.

So I decided it was time to give JavaScript and the HTML5 canvas another try—and I was simply stunned. At least under Chrome, I found that graphics-intensive physics simulations run about half as fast in JavaScript as they do in Java. I can absolutely live with that factor of two.

My specific tests were with three simulations that I’ve spent a lot of time with over the years. First I coded a basic Ising model simulation to go with the corresponding section in my Thermal Physics textbook. Then I tried a molecular dynamics simulation, similar to the computational physics project I’ve assigned many times and the applet I wrote a few years ago. Finally, encouraged by these successes, I coded a fluid dynamics simulation using the lattice-Boltzmann algorithm, which I learned with the prodding and help of a student, Cooper Remkes, as he worked on a class project in late 2011.
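To give a flavor of what these simulations look like in JavaScript, here is a stripped-down sketch of the heart of the Ising benchmark: one Metropolis Monte Carlo sweep over a 2D lattice. The variable names are my own, and the real simulation also colors a lattice cell on the canvas after each accepted flip:

```javascript
// One Metropolis sweep of a size x size 2D Ising lattice with periodic
// boundaries. s is a flat Int8Array of +1/-1 spins; T is the temperature
// in units of J/k; rand is a function returning a uniform number in [0,1).
function metropolisSweep(s, size, T, rand) {
  for (let step = 0; step < size * size; step++) {
    const i = Math.floor(rand() * size);
    const j = Math.floor(rand() * size);
    // Sum of the four nearest-neighbor spins, with periodic wrap-around:
    const nbrs = s[((i + 1) % size) * size + j]
               + s[((i - 1 + size) % size) * size + j]
               + s[i * size + (j + 1) % size]
               + s[i * size + (j - 1 + size) % size];
    const dE = 2 * s[i * size + j] * nbrs;  // energy cost of flipping (i,j)
    if (dE <= 0 || rand() < Math.exp(-dE / T)) {
      s[i * size + j] *= -1;                // accept the flip
      // In the real simulation: recolor this cell with context.fillRect.
    }
  }
}
```

Even this little loop exercises most of what the benchmark measures: arithmetic, if-else logic, array access, random numbers, and the exponential function.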

As the following graphs show, the choice of browser can make a big difference. Each graph shows the relative number of calculation steps per unit time, so higher is better in all cases. I ran the simulations on my new MacBook Pro with a 2.3 GHz i7 processor, and also on one of the Windows 7 desktop PCs in the student computer lab down the hall from my office, with a 2.93 GHz i3 chip. Comparing these two machines to each other wouldn’t be fair at all, and comparisons of the three different simulations wouldn’t be meaningful either, so I’ve normalized each group of results to the highest (fastest) value in the group. The browser versions were Chrome 25, Firefox 19, Safari 6, Opera 12, and Internet Explorer 9; all but the last of these are current, as far as I can tell.  (I tried I.E. 10 on a different Windows machine and found it to be only a little faster than I.E. 9, in comparison to Chrome. I couldn’t easily find a Windows PC with Safari or Opera installed.)
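The normalization itself is trivial; each group of raw steps-per-unit-time numbers is simply divided by the group's maximum, so the fastest browser in each group scores 1.0. A sketch (not my actual analysis code):

```javascript
// Normalize a group of benchmark results to its fastest entry,
// so the best performer scores 1.0 and higher is better.
function normalizeGroup(results) {
  const best = Math.max(...Object.values(results));
  const out = {};
  for (const [browser, steps] of Object.entries(results)) {
    out[browser] = steps / best;
  }
  return out;
}
```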



The Ising model benchmark tests a mix of tasks including basic arithmetic, if-else logic, accessing a large two-dimensional array, random number generation, evaluating an exponential function, and drawing to the canvas via context.fillRect. By contrast, the molecular dynamics (MD) and fluid dynamics (FD) simulations heavily emphasize plain old arithmetic. The fluid simulation, however, does more calculation between animation frames, uses much larger arrays, and uses direct pixel manipulation (context.putImageData) to ensure that graphics isn’t a bottleneck. The MD simulation seems to be limited, at least on the fastest platforms, by the targeted animation frame rate.
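Direct pixel manipulation means writing raw RGBA bytes into an ImageData buffer and blitting the whole frame with a single putImageData call, rather than issuing thousands of fillRect calls. A sketch of the idea; the density-to-color mapping here is my own simplified illustration, not the fluid simulation's actual color scheme:

```javascript
// Write one RGBA pixel per lattice site into an ImageData-style buffer.
// image.data is a Uint8ClampedArray of length 4 * width * height;
// density holds values in [0,1], one per site.
function renderDensity(image, density) {
  const d = image.data;
  for (let i = 0; i < density.length; i++) {
    const shade = Math.min(255, Math.round(255 * density[i]));
    d[4 * i]     = shade;        // red: high density
    d[4 * i + 1] = 0;            // green
    d[4 * i + 2] = 255 - shade;  // blue: low density
    d[4 * i + 3] = 255;          // fully opaque
  }
}

// Browser usage:
//   const image = ctx.createImageData(width, height);
//   renderDensity(image, density);
//   ctx.putImageData(image, 0, 0);   // one call blits the whole frame
```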

As you can see, the performance of Opera and I.E. on the MD and FD simulations is a major disappointment. Let’s hope the JavaScript engines in these browsers get some big speed boosts in the near future. Safari and Firefox seem to give acceptable performance in all cases, though neither measures up to Chrome on the demanding FD benchmark. Chrome is the clear all-around winner, although it’s a bit disappointing on the Ising simulation under Windows.

And how does this performance compare to Java? It’s hard to make a comparison that’s completely fair, because of differences in the languages and, especially, the available graphics APIs. But in general, I’ve found that similar simulations in Java run about twice as fast as the best of these JavaScript benchmarks.

The bottom line is that if you choose your browser carefully, JavaScript on a new computer is significantly faster than Java was on the computers we were using during its heyday (a decade ago). And of course, for simulations that don’t require quite as much computational horsepower, JavaScript and canvas can also reach the proliferating swarms of smartphones and tablets. I’m therefore a bit puzzled by the apparent shortage, at least so far, of physics educators who are creating JavaScript/canvas simulations. I’m sure this shortage won’t last much longer.

[Update: See the comments below for further details on the benchmarks, especially for the Ising model simulation.]