Experimental research design is one of the classic approaches to empirical research—gathering research data in a way that is verifiable by observation or experience. But what exactly is an experimental research design, and how can you use one in your own research? In this in-depth guide, we’ll give you an overview of experimental research, describe the different types of experimental design, discuss the advantages and disadvantages of this approach, and walk you through the four steps for completing experimental research.
Experimental research is scientifically driven, quantitative research involving two sets of variables. The first set of variables, known as the independent variables, is manipulated by the researcher in order to determine the impact on the second set of variables—the dependent variables. Using the experimental method, you can test whether, and how, the independent variables impact the dependent variables, which can help support a wide range of decisions in areas such as ad testing, concept and packaging testing, and customer experience research.
These are just a few different areas of consumer research that are suitable for experimental research. However, not all experimental research designs are equivalent. Let's take a look at the three different types of experimental design you might consider using, and some of the types of research questions they could be used for.
The simplest type of experimental design is called a pre-experimental research design, and it takes several forms. In a pre-experiment, some factor or treatment that is expected to cause change is applied to one or more groups of research subjects, and the subjects are observed over a period of time.
Different types of pre-experimental research design include:
In a one-shot case study design, some type of treatment is applied to a single sample group. The group is then studied to determine whether the treatment caused change, by comparing observations to general expectations of what the case would have looked like had the treatment not been implemented. There is no control or comparison group.
In a one-group pretest-posttest design, one group is again observed with no control or comparison group, but the group is observed at two points in time: once before the intervention is applied and once after. For instance, if you want to determine whether concentration increases in a group of students after they take part in a study skills course, you might employ this type of design. Any observed changes in the dependent variable are assumed to be a consequence of the intervention or treatment.
A static-group comparison design compares two groups: one that has experienced some intervention or treatment and one that has not. Any differences observed between the two groups are presumed to be the result of the treatment.
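To make these comparisons concrete, here is a minimal Python sketch of how data from the last two designs might be analyzed using the scipy library. The scores, group sizes, and variable names are purely illustrative assumptions, not data from a real study.

```python
# Minimal sketch: analyzing hypothetical data from two pre-experimental designs.
# All scores below are made-up, illustrative values.
from scipy import stats

# One-group pretest-posttest design: the same subjects are measured before and
# after the intervention, so a paired comparison is appropriate.
pre_scores = [52, 61, 48, 55, 60, 57, 49, 63]
post_scores = [58, 66, 50, 59, 65, 60, 54, 70]
t_paired, p_paired = stats.ttest_rel(post_scores, pre_scores)
print(f"Pretest-posttest: t = {t_paired:.2f}, p = {p_paired:.3f}")

# Static-group comparison design: two separate groups, one treated and one not,
# so an independent-samples comparison is appropriate.
treated = [7, 8, 6, 9, 7, 8, 8, 9]
untreated = [5, 6, 7, 5, 6, 5, 7, 6]
t_ind, p_ind = stats.ttest_ind(treated, untreated)
print(f"Static-group comparison: t = {t_ind:.2f}, p = {p_ind:.3f}")
```

Because neither design includes randomization, a significant difference in either test still cannot rule out other explanations for the change.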
A true experimental research design involves testing a hypothesis in order to determine whether there is a cause-effect relationship between two or more sets of variables. Although there are a few established ways to conduct experimental research designs, all share four characteristics: a testable hypothesis, at least one treatment group and one control group, manipulation of the independent variable by the researcher, and random assignment of subjects to groups.
This type of approach might be used in concept testing, such as comparing perceptions of a new packaging design among a treatment group with those of a control group that receives the original packaging.
Finally, a quasi-experimental research design follows some of the same principles as a true experimental design, but the research subjects are not randomly assigned to the control or treatment group. This type of research design often occurs in natural settings, where it is not possible for the researcher to control the assignment of subjects. For example, a researcher might greet Saturday shoppers at a grocery store with a welcome banner and compare their perceptions of how welcoming the store is with the perceptions of shoppers who visit on a Tuesday, when the banner is not present.
Now that you know what kinds of experimental designs are available, let’s focus on the steps you should take to set up your design.
In the first stage, establish your research question, and use it to distinguish between dependent and independent variables.
Independent vs. dependent variables
Independent variables are the variables that will be subjected to some kind of manipulation, and which are expected to impact the outcome. In contrast, the dependent variables are not manipulated, but represent the outcome and are expected to be impacted by the independent variables. For instance, if you are performing ad testing, you might have a research question like this: “How do different marketing messages affect the appeal of our product?”
From this research question, the independent variable will be different marketing messages, while the dependent variable will be product appeal.
Next, you should state your hypothesis. This should be a specific and testable statement that outlines what you expect to find; it should emerge from your research question and be informed by the results of any previous research. For example, if you are comparing the impact of two different marketing messages on product appeal, you might state a hypothesis like this: “Consumers who see marketing message A will rate the product as more appealing than consumers who see marketing message B.”
When stating hypotheses, there are a number of best practices to follow: the hypothesis should be specific and testable, should follow directly from your research question, and should be informed by the results of any previous research.
Third, design your experimental treatments. This means manipulating your independent variable(s) in such a way that different groups of research subjects are exposed to different levels of that variable, or the same group of subjects is exposed to different levels at different times. For instance, if you’re interested in learning about whether trying a new eco laundry detergent impacts people’s views towards sustainability, you might provide some subjects (the treatment group) with the laundry detergent to use for a certain period of time, while a control group continues to use their regular detergent.
It is important to note that manipulation of the independent variable must involve the active intervention of the researcher. If differences in the variable occur naturally (e.g. if a researcher compares views on sustainability among households that already use eco detergents and those that use regular detergents), then an experiment has not been conducted. In this case, observed differences between the two groups might be because of some third, unknown variable that could affect the cause-effect relationship. For instance, households that include a green activist may already use eco detergent, which makes it impossible to determine whether using the eco detergent impacts views on sustainability (or whether the relationship is, in fact, the other way around). In some experiments, the independent variable can only be manipulated indirectly or incompletely, and in that case it may be necessary to perform a manipulation check before analyzing the results: a statistical test that confirms the manipulation worked as expected.
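As a rough illustration, a manipulation check can be as simple as comparing responses to a check item between the treatment and control groups. The sketch below assumes a hypothetical survey item (“How environmentally friendly is the detergent you used?”, rated 1 to 7) with made-up responses, and uses a two-sample t-test from scipy to confirm the manipulation registered before the main analysis is run; the 0.05 threshold is an assumption chosen for the example.

```python
# Minimal sketch of a manipulation check: did the treatment group actually
# perceive their detergent as more eco-friendly than the control group?
# The ratings are hypothetical 1-7 responses to a check item in the survey.
from scipy import stats

eco_group_ratings = [6, 7, 5, 6, 7, 6, 5, 7]       # given the eco detergent
regular_group_ratings = [3, 4, 2, 3, 4, 3, 5, 3]   # kept their regular detergent

t_stat, p_value = stats.ttest_ind(eco_group_ratings, regular_group_ratings)
if t_stat > 0 and p_value < 0.05:
    print("Manipulation check passed: proceed to test the main hypothesis.")
else:
    print("Manipulation check failed: the treatment may not have registered.")
```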
When manipulating your variables, you should be aware of the impact on internal validity and external validity. Internal validity can be understood as credibility and is largely concerned with answering questions such as, “Do the findings of the study make sense?” and “Are the findings credible?” External validity, on the other hand, concerns whether the research findings can be transferred to another setting or context in which data collection did not take place. In other words, externally valid research findings are generalizable beyond the parameters of the research setting.
A key question that you will need to address when constructing your variables is how broadly or finely you should measure them. For instance, if you are measuring the appeal of a product, you could ask survey respondents to assess appeal on a broad three-point measure (Appealing, Neither appealing nor unappealing, Unappealing) or on a finer-grained 10-point scale. Both approaches have benefits and drawbacks, and the approach you should take will depend on what you want to get out of the research. If you are only interested in whether a product is appealing (or not), and not by how much, it makes sense to use the broader measure.
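As a small illustration, here is a minimal Python sketch of how hypothetical 10-point appeal ratings could be collapsed into the broader three-category measure. The ratings and the cut-off points (7 and above counts as appealing, 4 to 6 as neutral) are assumptions chosen purely for the example.

```python
# Minimal sketch: collapsing hypothetical 10-point appeal ratings into the
# broader three-category measure described above.
ratings_10pt = [9, 3, 7, 5, 10, 2, 6, 8, 4, 7]  # made-up survey responses

def to_three_point(score):
    # Cut-off points are illustrative assumptions, not a standard.
    if score >= 7:
        return "Appealing"
    if score >= 4:
        return "Neither appealing nor unappealing"
    return "Unappealing"

collapsed = [to_three_point(s) for s in ratings_10pt]
print(collapsed)
```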
In the next stage of the experimental research design, you should assign your survey subjects to the appropriate treatment groups. There are many ways to do this, but you should be aware that the approach you use can impact the validity and reliability of the results.
There are two main approaches to randomization: a completely randomized design and a randomized block design.
A completely randomized design assigns subjects to the treatment or control group at random. The rationale for randomization is that, on average, potentially confounding variables will affect each condition equally, so any significant differences observed between the treatment and control conditions can reasonably be attributed to the independent variable.
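As a simple illustration, a completely randomized design can be implemented with nothing more than a shuffle. The sketch below uses hypothetical respondent IDs and only the Python standard library to assign each subject to the treatment or control group at random; the group size and fixed seed are assumptions made for the example.

```python
# Minimal sketch of a completely randomized design: shuffle the subject list
# and split it in half, so assignment to treatment or control is random.
import random

subjects = [f"respondent_{i:03d}" for i in range(1, 41)]  # hypothetical IDs

random.seed(42)           # fixed seed so the assignment is reproducible
random.shuffle(subjects)

midpoint = len(subjects) // 2
treatment_group = subjects[:midpoint]
control_group = subjects[midpoint:]

print(f"{len(treatment_group)} subjects in treatment, {len(control_group)} in control")
```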