Archive for August, 2007
After many late nights, we’ve finally released our human-consensus hurricane forecasting feature. To make this feature possible, we had to add quite a bit of functionality to the underbelly of the site, and while we were at it we took the time to insert a few more enhancements we think you’ll enjoy. Here’s the final list of fixes, updates, and enhancements:
- Sign-up/registration. It’s the way to join the Forecasting Team here at Stormpulse.com. You fill out a small form and presto, instant entry into our grand experiment: can a group of amateurs, weather enthusiasts, and professionals outperform the computer models in forecasting the path and intensity of tropical cyclones? We’ve also written a page explaining how this will work.
- Create an identity at Stormpulse.com using a personalized profile. We’ve created some basic fields to allow you to flesh out who you are and what brings you to the site. From the server’s end, having a profile allows us to keep track of your forecasting performance over time, and also offer you geography-specific features in the future (read: localized hurricane forecasts).
- We’re now processing and displaying Public and Intermediate Advisories. During the life of a storm, forecast advisories are the primary source of position and intensity observations from the National Hurricane Center. However, as a storm progresses, Public and Intermediate Advisories, with advisory numbers like ’19’ and ’19A’, can also contain this information. Our systems now process the data contained in these advisories as well, ensuring that you stay completely up-to-date on the life of the storm.
- The National Hurricane Center’s Tropical Weather Outlook appears during active periods. For some reason it made sense to us to have the Tropical Weather Outlook only visible when the Atlantic Basin was quiet. We’ve realized the error of our ways and are now showing the Outlook during periods of activity as well.
- You’ll probably never notice, but we’ve improved our response times and overall speed. I could list a bunch of nerdy acronyms, but suffice it to say that we’ve made the site a whole lot faster. My hat’s off to Brad.
- We’ve added storm descriptions for storms in 2007. Now you can read the Wikipedia entries alongside this year’s storms, like the soon-to-be-retired Dean.
- Bug fixes by the gallon. Well, okay, maybe by the pint. There were a few annoyances in the tracking map and a handful of problems in our server code that got much-needed patches that you’ll probably never notice. I guess I don’t know why I’m telling you, except that if I didn’t, how could you appreciate it? ;-)
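For the curious, telling a regular advisory apart from an intermediate one like ’19A’ boils down to splitting the sequence number from its letter suffix. Here’s a minimal sketch in Python (the function name and regular expression are our own illustration, not the actual Stormpulse code):

```python
import re

# Advisory numbers look like "19" (regular) or "19A" (intermediate).
ADVISORY_RE = re.compile(r"^(\d+)([A-C]?)$")

def parse_advisory_number(adv):
    """Split an advisory number into its sequence and optional intermediate suffix."""
    m = ADVISORY_RE.match(adv.strip().upper())
    if not m:
        raise ValueError(f"unrecognized advisory number: {adv!r}")
    seq, suffix = m.groups()
    return int(seq), suffix or None

print(parse_advisory_number("19A"))  # (19, 'A')
print(parse_advisory_number("20"))   # (20, None)
```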
Once we have an active storm, it’ll be time to enter your forecasts. Then the real fun begins. Spaghetti model anyone?
“Through computer advances, model forecasts very likely will continue to improve, assuming we remember one fundamental problem with tropical cyclone forecasting: maximizing observations.”
“Numerical prediction of tropical cyclone tracks has improved tremendously since the early models of the 1950s and 1960s. Ironically, today’s reliance on model guidance has possibly led to the decline in skill of subjective tropical cyclone forecasts. It is hard to imagine that landfall forecasts in the 1970s were about as good as they are today and watch/warning areas were smaller. Back then forecasters relied very much on subjective forecast techniques. Today they rely heavily on model forecasts. Revitalizing and improving subjective analysis and forecast skills without inhibiting numerical model advances could provide significant improvements in track forecasts.”
— Dr Steve Lyons, “Hurricane Forecasting Considerations”
This week we’re going to be adding some significant enhancements to the Stormpulse website. One of these is the ability to create an account (free and painless, we promise!), which you’ll need to do if you want to participate in our forecasting system. Then, the next time there’s an active storm (post-Dean), you’re going to tell us where you think the storm is going to go by filling out a slick little form on the Stormpulse home page (just below the map) that asks you where the storm will be in 12, 24, 36, 48, 72, and 120 hours, as well as what the intensity (maximum wind speed) of the storm will be at those points in time. Then we’re going to take everyone’s forecasts and aggregate them in order to see if we can accurately forecast the movement and strengthening (or weakening) of tropical cyclones.
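To make the aggregation step concrete, here’s a simplified Python sketch of an unweighted consensus over a few lead times (the forecaster names, numbers, and data layout are invented for illustration; the real system will be more involved):

```python
from statistics import mean

# Each entry: forecaster -> {lead_time_hours: (lat, lon, max_wind_kt)}
forecasts = {
    "alice": {12: (18.0, -77.5, 125), 24: (19.2, -80.0, 130)},
    "bob":   {12: (18.2, -77.9, 115), 24: (19.6, -80.6, 120)},
    "carol": {12: (17.8, -77.3, 120), 24: (19.0, -79.8, 135)},
}

def consensus(forecasts, lead_times=(12, 24)):
    """Unweighted consensus: average position and intensity at each lead time."""
    out = {}
    for t in lead_times:
        points = [f[t] for f in forecasts.values() if t in f]
        # zip(*points) regroups the tuples into (lats, lons, winds)
        out[t] = tuple(round(mean(vals), 2) for vals in zip(*points))
    return out

print(consensus(forecasts))
```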
Wait a minute! Aren’t there all kinds of forecasting models out there already?
Yes. But this will be, as far as we know, the world’s first human-consensus hurricane-forecasting model.
Do you guys think you are some kind of experts?
No. In fact, that’s the point of the system—to de-emphasize the individual experts and discover the collective expert (and more importantly, the collective expert’s forecasts).
Why might this work?
The idea to do this hit us on the head in July of 2006. Since then, we’ve had it mostly under wraps, sharing the idea with close acquaintances and friends while gathering insight wherever we could find it. And all of our research pushes us toward the conclusion that this just might work.
For example, in June of this year (2007), we attended the Governor’s Hurricane Conference in Ft. Lauderdale, Florida. While there, we attended a few classes on tropical meteorology. In those classes, our suspicions about the very small world of professional hurricane forecasting were affirmed, insofar as it has several characteristics that make it ripe for disruption by a more democratic process:
- It is a world currently dominated by a few experts. Dr Gray of Colorado State University, Dr Steve Lyons of The Weather Channel, the Hurricane Research Team at NASA, the forecasters at the National Hurricane Center . . . all of these folks are considered experts, and rightly so. They are. But if the research behind the wisdom of crowds theory is correct, it would stand to reason that none of these experts in isolation will consistently perform better than a diverse, independent group. 
- It contains subjectivity, for better or for worse. For better, that subjectivity represents valuable intuition and insight—”I can feel it in my bones!” For worse, that subjectivity represents the unavoidable flaws of human judgment.
- It contains traces of bureaucracy. Don’t get me wrong—this is not a criticism of the National Hurricane Center or any other group of professionals whose great challenge it is to produce accurate, timely, and responsible forecasts. But, nevertheless, any group of professionals wherein there is some order, structure, and authority will contain some amount of bureaucracy that could hinder its performance. Is that going too far? I don’t think so. All we’re saying is, the existing system necessarily contains rules and politics that make it imperfect. I am pointing out that imperfection to underscore the opportunity to improve.
- Satellite data and images are underutilized in existing computer models. Current computer models have a limited ability to digest data gathered via satellite. This is unfortunate since satellite images are the best views we have of what’s going on inside a storm. Having tens, hundreds, or even thousands of human interpreters of satellite data provide their input into a human-consensus model should boost the accuracy of a resulting, synthesized forecast.
- Existing models are weak in predicting storm intensity and size. While track guidance has improved greatly due to advances in computing power, intensity predictions have not seen the same increase in accuracy. Trying something completely different (calling on an army of human forecasters instead of depending heavily on computers) could prove to be a breakthrough in this area. Even if a human-consensus model fails completely at track guidance, what it provides in forecasting intensity, storm size, and storm surge could prove beneficial for years to come over computer-only calculations.
- Computer-consensus models have performed well. A 2006 National Hurricane Center verification report showed that the consensus models GUNA and CONU provide the best track guidance. At the conference, a forecaster from the National Hurricane Center told the audience that “for some reason it would seem that the [statistical/dynamic] models have offsetting biases in them that cancel each other out when you average them together.” Our thoughts exactly, which brings us to the next and most subtle point:
- You are a forecasting model. If that last point is true, why stop at making a consensus out of only the computer models? Why not attempt to aggregate and synthesize all of the available models, computer and human alike, to produce one unified forecast?
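A toy example shows why averaging models with offsetting biases helps. Assuming made-up cross-track errors for two hypothetical models, one biased left of track (negative) and one biased right (positive):

```python
from statistics import mean

# Hypothetical cross-track errors (nautical miles) for two models over the
# same verification cases: one tends to err left, the other right.
model_a = [-40, -55, -30, -50, -45]   # left-biased
model_b = [ 35,  60,  25,  55,  40]   # right-biased

# A simple two-member consensus: average the two forecasts case by case.
blend = [(a + b) / 2 for a, b in zip(model_a, model_b)]

def mean_abs_error(errors):
    return mean(abs(e) for e in errors)

print(mean_abs_error(model_a))  # 44.0
print(mean_abs_error(model_b))  # 43.0
print(mean_abs_error(blend))    # 2.5 -- the opposite biases mostly cancel
```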
Won’t all of the novices or intentional saboteurs spoil the system?
No. We are going to keep track of users’ performance and weight the credibility of their forecasts accordingly. So, if Bob is consistently off by 500 miles at 12 hours out, we’re going to take his forecasts with a grain of salt in our final equation. On the other hand, if someone, no matter what his position in the world of weather, continually proves to be an accurate forecaster, we are going to weight his forecasts more heavily.
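As a sketch of how such weighting might work (all names and error figures here are invented), one simple scheme weights each forecaster inversely to their historical track error:

```python
# Hypothetical historical mean track errors (miles) and current 12-hour
# intensity forecasts (kt) for three forecasters.
past_error_miles = {"alice": 60.0, "bob": 500.0, "carol": 75.0}
wind_forecast_kt = {"alice": 120, "bob": 95, "carol": 125}

# Weight each forecaster inversely to past error: chronically inaccurate
# forecasts (like Bob's) count for much less.
weights = {name: 1.0 / err for name, err in past_error_miles.items()}
total = sum(weights.values())

weighted = sum(weights[n] * wind_forecast_kt[n] for n in weights) / total
simple = sum(wind_forecast_kt.values()) / len(wind_forecast_kt)

print(round(simple, 1))    # 113.3 -- plain average, dragged down by Bob
print(round(weighted, 1))  # pulled toward the reliable forecasters
```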
What about privacy?
If you choose to participate, whether your name shows up anywhere on our site or is ever shared outside of Stormpulse, Inc. will be up to you. For those who don’t mind their identity being attached to their performance, we are planning to publish rankings that show where you stand against the rest of the participants.
Where can this go?
Near the end of 2008 there is going to be a re-opening of the Joint Hurricane Testbed, a government program wherein the National Hurricane Center carefully considers suggestions as to how they can improve their forecasting process. If this works in any measure, that’s one possible outcome.
Are you serious? / That won’t work. / Wow, that’s cool!
Yes. / OK. / Thanks, we’ll see.
It’s noteworthy that the National Hurricane Center already embraces this truth insofar as forecasters work on rotation. Also, there is a rule that a forecast may not differ drastically from its predecessor. While on the one hand this may suppress a moment of brilliance (and that’s where there’s an opportunity to improve), it avoids public outcry and fear over flip-floppy forecasts and is in effect a mechanism to bend toward a consensus.
We’ve added a few new cities to the map:
- Kingston, Jamaica
- Cozumel, Mexico
- Port of Spain, Trinidad and Tobago
- Havana, Cuba
- San Juan, Puerto Rico
To see what this does for you, load up the site and click ‘Kingston’. You’ll get a handy pop-up with a breakdown of wind probabilities over the next 5 days. And no, it doesn’t really look good.
Brad and I will be on the radio with meteorologists Tony Pann and Justin Berk this Sunday on WeatherTalkRadio, a show that airs out of Baltimore, Maryland from 3:05pm-4:00pm ET.
It can be heard in their coverage area (Baltimore) on AM 680 WCBM, or online, and later as a podcast.
We are looking forward to using this opportunity to share our “top secret” feature with you, and we’re eager for your feedback (we have a feeling it will be mildly controversial). We’ll have a simultaneous blog post describing the idea in more detail (and, if there’s an active storm out there, the ability to try it out).
Join us for a fun time of geeking out over all things weather (and especially the tropics)!