When we want to draw conclusions about a whole population, it helps a great deal to know the **different types** of calculation used in **inferential statistics**.

Many techniques, methods, and types of calculation are used in inferential statistics, and here we will explain the most popular of them.

On this page:

- What is inferential statistics? Definition.
- Inferential statistics types of calculation: explained with examples.
- Infographic in PDF.

Definition:

Inferential statistics is a technique used to draw **conclusions** and **trends** about a large population based on a sample taken from it.

**For example**, let’s say you need to know the average weight of all the women in a city with a population of a million people. It isn’t easy to get the weight of each woman.

This is where inferential statistics comes into play. It can draw conclusions about the whole population of women using data from a sample or samples of it.

Inferential statistics is one of the two main types of statistical analysis. The other type, descriptive statistics, describes basic information about a data set under study (you can find more details in our post on descriptive statistics examples).

Inferential statistics studies the relationships between variables within a sample and then makes generalizations, and even predictions, about the relationships between those variables within the whole population.

To do that, inferential statistics relies on a number of techniques, **methods, and types of calculation**. Now, let’s look at some of the most important of them.

**1. Linear Regression Analysis**

Linear regression models show the relationship between two variables with a linear equation.

Linear regression is a statistical method for studying relationships between **one or more independent variables (X) and one dependent variable (Y)**.

In other words, it is a mathematical model that lets you **make predictions** for the value of Y depending on different values of X.

There are two main types of linear regression:

- Simple linear regression – when there is only one independent variable X, whose changes lead to different values of Y. You can see some simple linear regression examples.
- Multiple linear regression – used to show the relationship between one dependent variable and two or more independent variables.

Linear regression is usually represented graphically by a scatter plot, but it can be shown with other linear chart types too.
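To make the idea concrete, here is a minimal sketch of simple linear regression in Python, fitting a least-squares line to made-up data (the "hours studied vs. exam score" data and variable names are hypothetical, not from the text):

```python
# Simple linear regression via least squares -- a minimal sketch.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by variance of x.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: hours studied (X) vs. exam score (Y).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 68]

slope, intercept = fit_line(hours, scores)
predicted = slope * 6 + intercept  # predict the score for 6 hours of study
print(round(slope, 2), round(intercept, 2), round(predicted, 2))
```

Once the line is fitted, predicting Y for a new X is just plugging the value into the equation, which is exactly the "make predictions" role described above.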

**2. Logistic Regression Analysis**

Logistic regression (also known as logit regression) is a regression model where the dependent variable is categorical (to learn what categorical data is, see our post about categorical data examples).

Logistic regression is conducted when the dependent variable is **dichotomous** (i.e., the dependent variable has only two possible values).

Examples of dichotomous (binary) variables are: 0 and 1, Yes and No.

Like other regression models, logistic regression is a predictive analysis. It aims to find the best-fitting model to describe the relationship between the dichotomous characteristic of a dependent variable and a set of independent variables.

**Example:**

A real-life example of a logistic regression problem is the answer to the question: “Does body weight have an effect on the probability of having a heart attack?” (only two possible outcomes: Yes vs. No).
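The heart-attack example can be sketched in Python with a tiny logistic regression fitted by plain gradient descent. Everything here (the standardized weights, the outcomes, the learning rate) is made-up illustration, not real medical data:

```python
import math

# Logistic regression by gradient descent -- a minimal sketch.
# Made-up data: standardized body weight (X) vs. heart attack (1 = Yes, 0 = No).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit P(y = 1) = sigmoid(w*x + b) by minimizing cross-entropy loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # prediction error for this point
            grad_w += err * x / n
            grad_b += err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical sample: heavier patients had more heart attacks.
weights = [-1.5, -1.0, -0.5, 0.5, 1.0, 1.5]
attacks = [0, 0, 0, 1, 1, 1]

w, b = fit_logistic(weights, attacks)
risk = sigmoid(w * 1.2 + b)  # estimated probability at standardized weight 1.2
print(round(w, 3), round(risk, 3))
```

The fitted coefficient `w` comes out positive here, matching the assumed pattern that higher weight raises the predicted probability of the Yes outcome.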

**3. Analysis of Variance (ANOVA)**

Analysis of Variance (ANOVA) is a popular statistical method used to test and analyze differences between two or more means (averages). It looks for **significant differences between means**.

**Example:**

**For example,** let’s say you have to study the education level of athletes in a given geographical area. You need to survey people on a variety of teams.

You need to find out whether the education level differs among the football team, the baseball team, and the basketball team. This is where ANOVA helps you determine if the mean education level is different among the different sports teams.

ANOVA compares numerous groups on the same variable. In the above case, the variable is education level.

You can find more ANOVA examples on onlinestatbook.
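A one-way ANOVA for the sports-team example can be sketched by computing the F statistic by hand. The "years of education" numbers below are made up for illustration; a full test would compare F against the F distribution to get a p-value:

```python
# One-way ANOVA F statistic -- a minimal sketch.

def one_way_anova_f(*groups):
    """Return F = between-group mean square / within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k = len(groups)      # number of groups
    n = len(all_vals)    # total number of observations
    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of values around their own group mean.
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical years of education for members of three teams.
football = [12, 13, 12, 14]
baseball = [14, 15, 16, 15]
basketball = [16, 17, 16, 18]

f_stat = one_way_anova_f(football, baseball, basketball)
print(round(f_stat, 2))
```

A large F statistic, as in this made-up data, suggests the group means differ by more than within-group noise alone would explain.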

**4. Analysis of Covariance (ANCOVA)**

When a continuous covariate is included in an ANOVA, we have ANCOVA (just as a reminder, a covariate is a continuous independent variable). The continuous covariates enter the model as regression variables.

To put it another way, ANCOVA **blends ANOVA and regression**.

ANCOVA is a type of inferential statistical modeling used to study differences in the mean values of a dependent variable that relate to the impact of controlled independent variables, while taking into consideration the influence of uncontrolled independent variables.

**Example:**

For example, ANCOVA can be used to find out how customers’ intention to buy a given product varies across different price levels, while taking into account each customer’s attitude towards that product.
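One core step of ANCOVA is computing covariate-adjusted group means: each group's mean on the dependent variable is corrected using the pooled within-group regression slope on the covariate. Here is a minimal sketch for the pricing example, with made-up "attitude" (covariate X) and "purchase intention" (Y) scores for two hypothetical price levels:

```python
# ANCOVA covariate-adjusted group means -- a minimal sketch.

def pooled_slope(groups):
    """Pooled within-group regression slope of Y on the covariate X."""
    num = den = 0.0
    for xs, ys in groups:
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        num += sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den += sum((x - mx) ** 2 for x in xs)
    return num / den

def adjusted_means(groups):
    """Group means of Y after removing the covariate's influence."""
    b = pooled_slope(groups)
    all_x = [x for xs, _ in groups for x in xs]
    grand_mx = sum(all_x) / len(all_x)
    # Shift each group mean as if every group had the same average covariate.
    return [sum(ys) / len(ys) - b * (sum(xs) / len(xs) - grand_mx)
            for xs, ys in groups]

# (attitude scores, intention scores) for two hypothetical price levels.
low_price = ([2, 3, 4, 5], [5, 6, 7, 8])
high_price = ([4, 5, 6, 7], [4, 5, 6, 7])

print([round(m, 2) for m in adjusted_means([low_price, high_price])])
```

In this toy data the adjustment widens the gap between the price groups, because the high-price group happened to have more favorable attitudes; that is precisely the "blend of ANOVA and regression" at work.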

**5. Statistical Significance (T-Test)**

The t-test **compares two means** (averages of 2 groups) and tells us if they are different from each other. The t-test also tells us how significant the differences are.

The t-test is used when comparing two groups on a given dependent variable.

**Example:**

For example, you want to know whether the average Californian spends more than the average Texan per month on movies. You ask a sample of 200 respondents from each state about their spending on movies. You might observe a big or small difference in averages.

Another good example: a drug company wants to know if its new cancer drug improves life expectancy. In an experiment, there are two groups: a control group (who are not given the new drug) and a group taking the new drug.

Let’s say the control group shows an average life expectancy gain of 3 years, while the group taking the new drug shows a gain of 4 years. At first sight it might seem that the new drug works, but the difference could be due to chance. To test this, you can use a t-test to determine whether the result is likely to hold for the whole population.
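The core of the t-test is the t statistic itself. Here is a minimal sketch computing Welch's version (which does not assume equal variances) for the movie-spending example; the dollar amounts are made up, and a full test would compare t against the t distribution to obtain a p-value:

```python
import math

# Two-sample t statistic (Welch's version) -- a minimal sketch.

def welch_t(a, b):
    """Return Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Unbiased sample variances (divide by n - 1).
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    # Difference in means, scaled by its estimated standard error.
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical monthly movie spending ($) for two small samples.
california = [24, 30, 28, 26, 32]
texas = [20, 22, 25, 21, 27]

t = welch_t(california, texas)
print(round(t, 2))
```

The larger the absolute value of t, the less plausible it is that the two group means differ only by chance.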

**6. Correlation Analysis**

Correlation analysis studies the strength of a relationship between two variables. It is useful when you want to find out **if there are possible connections** between variables.

Correlation analysis also shows whether two or more variables have a strong (high) correlation or a weak (low) one.

Correlation is designed to test relationships between quantitative variables or categorical variables.

**Examples:**

- Example of a **high correlation**: people’s caloric intake and their weight.
- Example of a **low correlation**: people’s educational level and the type of bread they eat.

The correlation coefficient tells you how strong a relationship between 2 variables might be.

Correlation coefficients can range from -1.00 to +1.00. A “0” means there is no relationship at all. -1 means there is a perfect negative correlation. 1 means there is a perfect positive correlation.

A positive correlation means that when the value of one variable increases, the other increases too. A negative correlation means that when one variable increases, the other decreases.
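The coefficient described above is straightforward to compute. Here is a minimal sketch of the Pearson correlation coefficient applied to the caloric-intake example, with made-up numbers chosen to show a strong positive correlation:

```python
import math

# Pearson correlation coefficient -- a minimal sketch.

def pearson_r(xs, ys):
    """Return the correlation coefficient, which ranges from -1.0 to +1.0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: daily caloric intake vs. body weight (kg).
calories = [1800, 2000, 2200, 2500, 2800]
weight = [60, 64, 70, 76, 85]

print(round(pearson_r(calories, weight), 2))
```

With this toy data the coefficient lands very close to +1, i.e. a near-perfect positive correlation, matching the "high correlation" example in the list above.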

7. Structural equation modeling

8. Survival analysis

9. Factor analysis

10. Multidimensional scaling

11. Cluster analysis

12. Discriminant function analysis, and many others.

Download the following infographic in PDF

Silvia Vylcheva has more than 10 years of experience in the digital marketing world – which gave her a wide business acumen and the ability to identify and understand different customer needs.

Silvia has a passion and knowledge in different business and marketing areas such as inbound methodology, data intelligence, competition research and more.