# Probability, its interpretation, and statistics

*Just an introduction to the main concepts: what are probability and statistics?*

Given a *random variable* $x$ which can take values in a space $\Omega$ (the *event space*), an *event* is the occurrence of one of the values allowed for $x$, and its *probability* gives the mathematical measure of how likely the event is to occur. It is a number between 0 and 1 (extremes included), where 0 means that the event does not occur at all and 1 that it occurs with certainty. For example, in the throw of a (fair) die, the space of events is given by all the faces the die can show when thrown (there are 6 of them), and if we measure the probability of each event we find $\frac{1}{6}$.

In the *frequentist* approach, the probability is calculated as the ratio of occurrences of the event to the total number of trials, that is, as the relative frequency of the event. This assumes a sufficiently large number of trials in the first place, and that this frequency asymptotically converges to the probability of our event when said number of trials goes to $\infty$. Also note that this approach inherently entails the concept of repeatability of the process (experiment).
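As a quick illustration of the frequentist view, here is a minimal simulation sketch (function name and seed are arbitrary choices, not from the text) estimating the probability of a given die face as a relative frequency over many throws:

```python
import random

def estimate_probability(n_trials: int, face: int = 6, seed: int = 42) -> float:
    """Estimate the probability of rolling `face` on a fair six-sided die
    as the relative frequency of that outcome over `n_trials` throws."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials) if rng.randint(1, 6) == face)
    return hits / n_trials

# The relative frequency should approach 1/6 ≈ 0.1667 as trials grow.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_probability(n))
```

With few trials the estimate fluctuates noticeably; it stabilises near $\frac{1}{6}$ only for large $n$, which is exactly the asymptotic convergence the frequentist definition relies on.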

In the *Bayesian* interpretation, the probability measures a degree of belief. Bayes' theorem links the degree of belief in a proposition before and after accounting for the evidence, that is, the result of observing the data. In some sense, this interpretation is closer to the layman's one: the probability encompasses the belief in something, the prior knowledge of the phenomenon at hand. An example illustrating the difference between the two approaches, carried out using a coin flip, can be found in this blog. A really good read.
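As a sketch of Bayesian updating on a coin flip (the Beta prior and the specific counts below are illustrative assumptions, not from the text: a Beta prior on the head probability is conjugate to the binomial likelihood, so the update is a simple parameter shift):

```python
def update_beta(alpha: float, beta: float, heads: int, tails: int) -> tuple[float, float]:
    """Bayesian update of a Beta(alpha, beta) prior on the head probability
    of a coin after observing `heads` heads and `tails` tails.
    Because the Beta prior is conjugate to the binomial likelihood,
    the posterior is again a Beta with updated parameters."""
    return alpha + heads, beta + tails

# Start from a uniform prior Beta(1, 1): no initial preference for any bias.
a, b = update_beta(1.0, 1.0, heads=8, tails=2)
posterior_mean = a / (a + b)   # updated degree of belief in "heads"
print(a, b, posterior_mean)    # 9.0 3.0 0.75
```

The prior encodes what we believe before seeing data; the posterior mean (here 0.75 after 8 heads in 10 flips) is the revised degree of belief, which is the quantity a Bayesian calls the probability.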

Statistics is the branch of mathematics dealing with the analysis of data, the testing of the reliability of experimental results, and the building of models that describe patterns and trends in the observations.

*Descriptive statistics* describes the main features of a collection of data quantitatively, that is, it describes a sample without learning anything about the underlying population. It does not make use of probability theory.

*Inferential statistics* learns from a sample of data in order to draw inferences about the population.

The *probability* expresses the fraction of successes over the total (we are using a frequentist interpretation) and is a number between 0 and 1. The *odds* of something quantify the fraction of successes to failures instead, a concept mostly used in the context of gambling.

If you have possible events $e_1, \ldots, e_n$, the probability $P(e_x)$ of one of them occurring (the bars indicate the cardinality of the set of occurrences of the event) can be written as $P(e_x) = \frac{|e_x|}{\sum_{i=1}^n |e_i|}$, while the *odds in favour* are defined as $o(e_x) = \frac{|e_x|}{\sum_{i=1, i \neq x}^n |e_i|}$, that is, as the fraction of occurrences of the event to the occurrences of all other events.

The *odds against* are defined as the reciprocal: $o(\neg e_x) = \frac{\sum_{i=1, i \neq x}^n |e_i|}{|e_x|}$.

In most cases, odds are reported in the notation $|e_x| : \sum_{i=1, i \neq x}^n |e_i|$ (successes : failures) rather than as a single ratio, or, very often, as $p : 1-p$, where $p$ is the probability of success, or as $\frac{p}{1-p} : 1$. In a (fair) coin flip, the odds in favour of a head are 1:1, where the notation uses the third way outlined above.
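The probability/odds relations above can be sketched in a few lines (function names are illustrative choices); exact fractions keep the ratios readable:

```python
from fractions import Fraction

def odds_in_favour(p: Fraction) -> Fraction:
    """Odds in favour of an event with probability p: successes to failures,
    i.e. p / (1 - p)."""
    return p / (1 - p)

def probability_from_odds(odds: Fraction) -> Fraction:
    """Invert the relation: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# Fair coin: p = 1/2 gives odds in favour 1:1; odds against (the
# reciprocal) are also 1:1.
p_head = Fraction(1, 2)
print(odds_in_favour(p_head))           # 1
print(1 / odds_in_favour(p_head))       # 1

# A single die face: p = 1/6 gives odds in favour 1:5, odds against 5:1.
print(odds_in_favour(Fraction(1, 6)))   # 1/5
```

Note that odds of 1:5 and a probability of $\frac{1}{6}$ describe the same die face: the odds compare the one favourable face to the five unfavourable ones, while the probability compares it to all six.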