>[!abstract]
>Probabilistic forecasting summarizes what is known about, or opinions about, future events. In contrast to single-valued forecasts (such as forecasting that the maximum temperature at a given site on a given day will be 23 degrees Celsius, or that the result in a given football match will be a no-score draw), probabilistic forecasts assign a probability to each of a number of different outcomes, and the complete set of probabilities represents a probability forecast. Thus, probabilistic forecasting is a type of probabilistic classification ("Probabilistic forecasting", 2025).

### Calibration
---

>[!abstract]
>In probabilistic forecasting (i.e., when predicting the chances of an event), **calibration** is the degree to which predicted probabilities match observed frequencies over many repeated forecasts. Calibration is perfect when the percentage predicted matches the percentage of occurrence. If the percentage predicted (e.g., 20% of the time) is lower than the percentage of occurrence (e.g., 50% of the time), the forecaster is said to be underconfident; if it is higher, overconfident.

>[!quote]
>"We cannot rerun history so we cannot judge one probabilistic forecast — but everything changes when we have many probabilistic forecasts. If a meteorologist says there is a 70% chance of rain tomorrow, that forecast cannot be judged, but if she predicts the weather tomorrow, and the day after, and the day after that, for months, her forecasts can be tabulated and her track record determined. If her forecasting is perfect, rain happens 70% of the time when she says there is 70% chance of rain, 30% of the time when she says there is 30% chance of rain, and so on. This is called calibration." (Tetlock & Gardner, 2015)

### Resolution
---

>[!abstract]
>Separately, **resolution** is the ability to predict with high confidence whether something will or won't happen. If the forecaster predicts an event with a 40% chance and it happens 40% of the time, then predicts a 60% chance and it happens 60% of the time, the calibration is perfect but the resolution is low, because probabilities between 40% and 60% are close to a coin flip. However, if the forecaster predicts that something will (or won't) happen with 80% certainty, and the observations match the predictions, then the calibration is still perfect *and* the resolution is high, too.
>
>A fair coin toss is a good example. A long-term probabilistic forecast of 50% for each side is well-calibrated but has poor resolution, because it does not make better-than-chance predictions for any individual toss. (Both quantities are computed in the code sketch after the references.)

## References

- Probabilistic forecasting. (2025, January 3). In *Wikipedia*. https://en.wikipedia.org/w/index.php?title=Probabilistic_forecasting&oldid=1193680029
- Tetlock, P. E., & Gardner, D. (2015). *Superforecasting: The art and science of prediction*. Crown Publishers.
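
The relationship between calibration and resolution can be made concrete with the Murphy decomposition of the Brier score, which splits the mean squared error of binary probability forecasts into a reliability (miscalibration) term, a resolution term, and the irreducible uncertainty of the events. Neither source above prescribes an implementation; the sketch below is a minimal NumPy version, assuming forecasts are issued from a small set of probability values, and the function name and simulated meteorologist/coin data are purely illustrative.

```python
import numpy as np

def brier_decomposition(forecasts, outcomes):
    """Murphy decomposition of the Brier score for binary events.

    forecasts: predicted probabilities (e.g., 0.3, 0.7, ...)
    outcomes:  0/1 observations
    Returns (reliability, resolution, uncertainty), where
    Brier score = reliability - resolution + uncertainty.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    n = len(forecasts)
    base_rate = outcomes.mean()

    reliability = 0.0  # calibration term: 0 means perfectly calibrated
    resolution = 0.0   # how far per-forecast hit rates sit from the base rate
    for p in np.unique(forecasts):
        mask = forecasts == p
        n_k = mask.sum()
        hit_rate = outcomes[mask].mean()        # observed frequency for this forecast value
        reliability += n_k * (p - hit_rate) ** 2
        resolution += n_k * (hit_rate - base_rate) ** 2
    reliability /= n
    resolution /= n
    uncertainty = base_rate * (1.0 - base_rate)  # fixed by the events, not the forecaster
    return reliability, resolution, uncertainty


# The meteorologist from the quote: 70% forecasts that verify about 70% of the
# time and 30% forecasts that verify about 30% of the time are well calibrated
# (reliability near 0) and carry positive resolution.
rng = np.random.default_rng(0)
forecasts = np.repeat([0.7, 0.3], 500)
outcomes = rng.random(1000) < forecasts       # simulate rain with the stated probabilities
rel, res, unc = brier_decomposition(forecasts, outcomes)
print(f"reliability={rel:.4f} resolution={res:.4f} uncertainty={unc:.4f}")

# The fair coin: always forecasting 50% is well calibrated but has zero resolution.
coin_forecasts = np.full(1000, 0.5)
coin_outcomes = rng.random(1000) < 0.5
print(brier_decomposition(coin_forecasts, coin_outcomes))
```

Run as-is, the simulated meteorologist's reliability term comes out near zero (well calibrated) with positive resolution, while the constant 50% coin forecast shows near-zero reliability *and* exactly zero resolution, matching the note above.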