> [!abstract]
> Order statistics are the values of a sample arranged in ascending order (from minimum to maximum), such that the $k$th order statistic is the $k$th smallest value of the sample. By definition, then, the 1st order statistic is the minimum and the $n$th order statistic is the maximum of a sample containing $n$ observations.
>
> This is different from **low- and high-order statistics**, however.
> - First-order statistics summarize the data at the most basic level. They include the arithmetic mean (or average), the median, and the mode.
> - Second-order statistics describe relationships between data points. They include the variance (the mean squared deviation from the mean), the covariance (between two variables), and the correlation (the covariance normalized to the range -1 to +1).
> - Third-order statistics involve the third power of deviations from the mean and capture asymmetry in the data distribution. They include skewness (the lopsidedness of a distribution to the left or the right).
> - Fourth-order statistics involve the fourth power of deviations from the mean. They include kurtosis (the "tailedness" of the distribution).
> - Higher-order statistics exist but are rarely used outside of specialized fields such as signal processing, due to their complexity and diminishing returns in insight.
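The definitions above can be sketched in a few lines of Python using only the standard library. The sample values here are made up purely for illustration, and the moment formulas use population (divide-by-$n$) normalization:

```python
import statistics

sample = [4.0, 1.0, 7.0, 7.0, 2.0, 5.0]
n = len(sample)

# Order statistics: sort ascending; the k-th order statistic is the
# k-th smallest value (1-indexed), so the 1st is the minimum and the
# n-th is the maximum.
ordered = sorted(sample)
assert ordered[0] == min(sample) and ordered[-1] == max(sample)

# First-order summaries: mean, median, mode.
mean = statistics.fmean(sample)
median = statistics.median(sample)
mode = statistics.mode(sample)

# Second-order: variance (mean squared deviation from the mean).
variance = sum((x - mean) ** 2 for x in sample) / n
std = variance ** 0.5

# Second-order, two variables: covariance, and correlation as its
# normalized version in [-1, +1]. A perfectly linear y gives a
# correlation of 1.0 (up to floating-point error).
ys = [2.0 * x + 1.0 for x in sample]  # hypothetical second variable
mean_y = statistics.fmean(ys)
cov = sum((x - mean) * (y - mean_y) for x, y in zip(sample, ys)) / n
std_y = (sum((y - mean_y) ** 2 for y in ys) / n) ** 0.5
corr = cov / (std * std_y)

# Third-order: skewness from the third power of deviations,
# normalized by the standard deviation cubed.
skewness = sum((x - mean) ** 3 for x in sample) / (n * std ** 3)

# Fourth-order: kurtosis from the fourth power of deviations,
# normalized by the standard deviation to the fourth power.
kurtosis = sum((x - mean) ** 4 for x in sample) / (n * std ** 4)
```

Note that the skewness and kurtosis here are the normalized (dimensionless) versions, which is what libraries such as SciPy report by default.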

> [!note]
> I like to use this statistics analogy when explaining first- and second-order thinking, which are too often over-simplified as "reacting to immediate circumstances" vs. "considering the broader implications", respectively (or as "thinking fast" vs. "thinking slow", in Kahneman parlance).