Statistics

A graph of a bell curve in a normal distribution showing statistics used in educational assessment, comparing various grading methods. Shown are standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nine, and percent in stanine.

Statistics is a mathematical science pertaining to the collection, analysis, interpretation, and presentation of data. It is applicable to a wide variety of academic disciplines, from the physical and social sciences to the humanities, as well as to business, government, and industry.

Given a collection of data, statistics may be employed to summarize or describe the data; this use is called descriptive statistics. In addition, patterns in the data may be modeled, in a way that accounts for randomness and uncertainty in the observations, in order to draw inferences about the larger population; this use is called inferential statistics. Both of these uses may be termed applied statistics. There is also a discipline of mathematical statistics concerned with the theoretical basis of the subject.

The word statistics is also the plural of statistic (singular), which refers to the result of applying a statistical algorithm to a set of data.

Historical overview

The word statistics ultimately derives from the modern Latin term statisticum collegium ("council of state") and the Italian word statista ("statesman" or "politician"). The German Statistik, first introduced by Gottfried Achenwall (1749), originally designated the analysis of data about the state, signifying the "science of state". It acquired the meaning of the collection and classification of data generally in the early 19th century. It was introduced into English by Sir John Sinclair.

Thus, the original principal purpose of statistics was the collection of data for use by governmental and (often centralized) administrative bodies. The collection of data about states and localities continues, largely through national and international statistical services; in particular, censuses provide regular information about the population.

Statistics eventually merged with the more mathematically oriented field of inverse probability, referring to the estimation of a parameter from experimental data in the experimental sciences (most notably astronomy). Today the use of statistics has broadened far beyond the service of a state or government, to include such areas as business, natural and social sciences, and medicine, among others.

Because of its history and wide applicability, statistics is generally regarded not as a subfield of mathematics but as a distinct, albeit allied, field. Many large universities maintain separate mathematics and statistics departments. Statistics is also taught in departments as diverse as psychology, education, and public health.

Important contributors to statistics

  • Carl Gauss
  • Blaise Pascal
  • Sir Francis Galton
  • William Sealy Gosset (known as "Student")
  • Karl Pearson
  • Sir Ronald Fisher
  • Gertrude Cox
  • Charles Spearman
  • Pafnuty Chebyshev
  • Aleksandr Lyapunov
  • Isaac Newton
  • Abraham de Moivre
  • Adolphe Quetelet
  • Florence Nightingale
  • John Tukey
  • George Dantzig
  • Thomas Bayes

See also list of statisticians.

Conceptual overview

In applying statistics to a scientific, industrial, or societal problem, one begins with a population to be studied. This might be a population of people in a country, of crystal grains in a rock, or of goods manufactured by a particular factory. The population may even consist of a single process measured at various times; data collected about this kind of "population" constitute what is called a time series.

For practical reasons, rather than compiling data about the entire population, one instead studies a chosen subset of the population, called a sample. Data are collected about the sample in some kind of experimental setting (cf. experimental design).

If the sample is representative of the population, then inferences and conclusions made from the sample can be extended to the population as a whole. A major problem lies in determining the extent to which the chosen sample is representative. Statistics offers methods to estimate and correct for randomness (uncertainty) in the sample and in the data collection procedure, and subsequently to extract useful information from the data.

The fundamental mathematical concept employed in understanding such randomness is probability. Mathematical statistics (also called statistical theory) is the branch of applied mathematics that uses probability theory and analysis to examine the theoretical basis of statistics.

However, this article focuses primarily on applied statistics, which is ultimately concerned with two related problems about data: description and inference.

  • Descriptive statistics deals with the description problem: Can the data be summarized in a useful way, either numerically or graphically, to yield insight about the population in question? Basic examples of numerical descriptors include the mean and standard deviation. Graphical summarizations include various kinds of charts and graphs.
  • Inferential statistics is used to model patterns in the data, accounting for randomness and drawing inferences about the larger population. These inferences may take the form of answers to yes/no questions (hypothesis testing), estimates of numerical characteristics (estimation), prediction of future observations, descriptions of association (correlation), or modeling of relationships (regression). Other modeling techniques include ANOVA, time series, and data mining. A brief computational sketch of both uses follows this list.
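
As a minimal illustration of these two uses, the short Python sketch below computes descriptive summaries for a small, invented sample and then an approximate 95% confidence interval for the population mean, assuming the sample is representative and roughly normally distributed.

  # Hypothetical sample of measurements drawn from a larger population.
  import math
  import statistics

  sample = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8, 4.4]

  # Descriptive statistics: summarize the sample itself.
  m = statistics.mean(sample)
  s = statistics.stdev(sample)  # sample standard deviation
  print(f"mean = {m:.2f}, standard deviation = {s:.2f}")

  # Inferential statistics: say something about the larger population.
  # An approximate 95% confidence interval for the population mean.
  half_width = 1.96 * s / math.sqrt(len(sample))
  print(f"95% CI for the population mean: ({m - half_width:.2f}, {m + half_width:.2f})")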

The concept of correlation is particularly noteworthy. Statistical analysis of a data set may reveal that two variables (that is, two properties of the population under consideration) tend to vary together, as if they were connected. For example, a study of annual income and age of death might find that poor people tend to have shorter lives, on average, than affluent people. The two variables are said to be correlated. However, one cannot immediately infer the existence of a causal relationship between the two variables; correlation does not imply causation.
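
As a concrete sketch of the income example above (with entirely invented numbers), the Python code below computes the Pearson product-moment correlation coefficient for two paired variables; a coefficient near +1 or -1 indicates that the variables vary together, but by itself it says nothing about causation.

  # Hypothetical paired observations: annual income (thousands) and age at death.
  import math

  income = [18, 25, 32, 40, 55, 70, 90, 120]
  age_at_death = [66, 68, 70, 71, 74, 76, 78, 80]

  def pearson_r(x, y):
      """Pearson product-moment correlation coefficient."""
      n = len(x)
      mx = sum(x) / n
      my = sum(y) / n
      cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
      sx = math.sqrt(sum((a - mx) ** 2 for a in x))
      sy = math.sqrt(sum((b - my) ** 2 for b in y))
      return cov / (sx * sy)

  r = pearson_r(income, age_at_death)
  print(f"r = {r:.2f}")  # close to +1: the two variables vary together
  # A high r shows association only; it does not establish causation.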

Furthermore, the use of any statistical method is valid only when the system or population under consideration satisfies the basic mathematical assumptions of the method. Misuse of statistics can produce subtle but serious errors in description and interpretation. Even when statistics is correctly applied, the results can be difficult to interpret for a non-expert; a common example is the concept of statistical significance. The set of basic statistical skills (and skepticism) needed by people to deal with information in their everyday lives is referred to as statistical literacy.

Statistical methods

Experimental and observational studies

A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion about the effect of changes in the values of predictors or independent variables on a response or dependent variable. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behaviour of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.

An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation may have modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead data are gathered and correlations between predictors and the response are investigated.

An example of an experimental study is the famous Hawthorne study, which attempted to test changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in whether increased illumination would increase the productivity of the assembly-line workers. They first measured productivity in the plant and then modified the illumination in an area of the plant to see whether changes in illumination would affect productivity. Owing to errors in experimental procedure, specifically the lack of a control group, the researchers were unable to do what they had planned, but they did provide the world with the Hawthorne effect.

An example of an observational study is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers and then compare the number of cases of lung cancer in each group.
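
A minimal sketch of the analysis such a study might perform is given below, in Python and with invented counts: a chi-square test of independence compares the observed numbers of lung-cancer cases among smokers and non-smokers with the counts expected if the two variables were unrelated.

  # Hypothetical 2x2 table of counts from an observational study:
  #                lung cancer   no lung cancer
  # smokers              90            910
  # non-smokers          20            980
  observed = [[90, 910],
              [20, 980]]

  row_totals = [sum(row) for row in observed]
  col_totals = [sum(col) for col in zip(*observed)]
  grand_total = sum(row_totals)

  # Chi-square test of independence: compare observed counts with the
  # counts expected if smoking and lung cancer were unrelated.
  chi_square = 0.0
  for i, row in enumerate(observed):
      for j, obs in enumerate(row):
          expected = row_totals[i] * col_totals[j] / grand_total
          chi_square += (obs - expected) ** 2 / expected

  print(f"chi-square statistic = {chi_square:.1f} (1 degree of freedom)")
  # A value far above about 3.84 (the 5% critical value for 1 df) suggests
  # that the two variables are associated in the sampled population.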

The basic steps for an experiment are to:

  1. plan the research including determining information sources, research subject selection, and ethical considerations for the proposed research and method,
  2. design the experiment concentrating on the system model and the interaction of independent and dependent variables,
  3. summarize a collection of observations to feature their commonality by suppressing details (descriptive statistics),
  4. reach consensus about what the observations tell us about the world we observe (statistical inference),
  5. document and present the results of the study.

Levels of measurement

There are four types of measurements, or measurement scales, used in statistics. The four levels of measurement (nominal, ordinal, interval, and ratio) have different degrees of usefulness in statistical research. Ratio measurements, where both a true zero value and the distances between different measurements are defined, provide the greatest flexibility in the statistical methods that can be used for analysing the data. Interval measurements have meaningful distances between measurements but no meaningful zero value (as with IQ scores or temperature measurements in degrees Celsius). Ordinal measurements have imprecise differences between consecutive values but a meaningful order to those values. Nominal measurements have no meaningful rank order among values.
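
The practical consequence of these levels is which summaries and operations are meaningful for a given variable. The Python sketch below (with made-up data) illustrates a typical choice of summary at each level: modes for nominal data, medians or ranks for ordinal data, means and differences for interval data, and ratios only for ratio data.

  import statistics

  # Nominal: categories with no order -- only counts and modes are meaningful.
  blood_types = ["A", "O", "O", "B", "O", "AB", "A"]
  print("most common:", statistics.mode(blood_types))

  # Ordinal: ordered categories -- medians and ranks are meaningful,
  # but the "distance" between adjacent grades is not defined.
  grades = ["poor", "fair", "good", "good", "excellent"]
  order = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}
  print("median grade rank:", statistics.median(order[g] for g in grades))

  # Interval: differences are meaningful, but there is no true zero,
  # so ratios are not (20 degrees C is not "twice as hot" as 10 degrees C).
  temps_c = [10.0, 15.5, 20.0, 22.5]
  print("mean temperature:", statistics.mean(temps_c))

  # Ratio: a true zero exists, so ratios make sense
  # (a 4 kg mass really is twice a 2 kg mass).
  masses_kg = [2.0, 4.0, 4.0, 8.0]
  print("heaviest / lightest:", max(masses_kg) / min(masses_kg))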

Statistical techniques

Some well-known statistical tests and procedures for research observations are listed below (a short computational sketch of a few of them follows the list):

  • Student's t-test
  • chi-square
  • analysis of variance (ANOVA)
  • Mann-Whitney U
  • regression analysis
  • correlation
    • Pearson product-moment correlation coefficient
    • Spearman's rank correlation coefficient
  • Fisher's Least Significant Difference test
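
As a brief, non-authoritative sketch of a few of these procedures, the Python code below applies Student's t-test, the Pearson and Spearman correlation coefficients, and a simple linear regression using the SciPy library (assumed to be installed); all data values are invented for illustration.

  from scipy import stats

  control = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9]
  treated = [5.2, 4.9, 5.6, 5.1, 4.8, 5.4]

  # Student's t-test: do the two group means differ?
  t_stat, p_value = stats.ttest_ind(control, treated)
  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

  x = [1, 2, 3, 4, 5, 6]
  y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

  # Pearson product-moment and Spearman rank correlation coefficients.
  r, _ = stats.pearsonr(x, y)
  rho, _ = stats.spearmanr(x, y)
  print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")

  # Simple linear regression of y on x.
  result = stats.linregress(x, y)
  print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")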