
Evidence-Based Curriculum

‘Evidence-based practice’ means a practice, treatment, or set of tools “that has been scientifically tested and subjected to expert or clinical judgment and determined to be appropriate for the intervention or treatment of a given individual, population, or problem area.”

Evidence-based practices are developed, implemented, further researched, and refined over time, and in the process they accumulate additional evidence of their reliability and effectiveness.

Evidence-based practices are important because they help ensure decisions about curriculum, tools, and assessments are based on strong research rather than assumptions, historical use, or opinions.

The Hierarchy of Evidence is a way of ranking different types of information based on how reliable and trustworthy they are. Some types of evidence are stronger because they use careful methods, draw on large amounts of data, and reduce bias, while others are weaker because they rely on small samples, uncontrolled observation, or opinion.

This hierarchy is important because it helps providers, programs, and practitioners make better decisions about which interventions, curricula, and tools to implement, and how to implement them, given their populations and settings. By understanding which evidence is stronger, researchers, clinicians, and policymakers can place more confidence in conclusions based on high-quality studies and be more cautious with weaker evidence. In simple terms, it helps one separate what is most likely to be true and most effective from what is less reliable.

As organizations, providers, programs, and practitioners select the best curriculum and tools for their population and setting, the question is not just “Is your curriculum evidence-based?” but “Where does it fall on the hierarchy?”

The Hierarchy of Evidence: Types and Categories

Let’s start with the basics. The primary types of evidence include the following, listed from highest to lowest strength:

Systematic Review – A systematic review comprehensively collects, evaluates, and synthesizes all relevant research on a specific practice, treatment, or intervention. Because it applies rigorous methods and critically appraises its sources, it is consistently ranked at the top of the evidence hierarchy.

Meta-Analysis – Often conducted as part of a systematic review, a meta-analysis statistically combines data from multiple studies. By aggregating results, it increases statistical power and produces conclusions that are more robust than those of individual studies. Its data-driven nature also helps reduce bias compared to purely observational research.

Randomized Controlled Trial (RCT) – RCTs are structured experiments in which participants are randomly assigned to receive one of two or more interventions. Randomization minimizes selection bias and other sources of error. Although often placed just below systematic reviews and meta-analyses, some hierarchies rank RCTs at the highest level.

Cohort Study – A cohort study is an observational design in which researchers follow a group of participants who share a common characteristic and compare outcomes with a group that does not share that characteristic. Researchers observe outcomes without intervening.

Case-Control Study – In a case-control study, researchers compare individuals who have experienced a specific outcome (cases) with those who have not (controls). The groups are examined for prior exposure to potential risk factors. A higher exposure rate among cases suggests an association.

Case Report – A case report provides a detailed description of a single patient or a small group, including symptoms, diagnosis, treatment, and outcomes. While considered lower-quality evidence due to limited sample size, case reports often serve as early indicators of emerging practices or effects.

Expert Opinion and Background Information – At the base of the evidence hierarchy are expert opinions and background sources. This category typically reflects professional consensus or guidelines issued by authoritative organizations rather than the views of a single individual.


Evidence-based and best practices occur at the intersection of three equal considerations:

  • Best research evidence provides reliable findings from high-quality studies to inform what works.

  • Best clinical experience brings professional judgment and practical expertise gained through real-world practice.

  • Consistency with patient values ensures that decisions respect individual preferences, needs, and goals.

When these three elements are considered together, practices are more effective, appropriate, and meaningful.

The R1 Learning System and curriculum are derived from the best research evidence and constructed with flexibility to allow practitioners to adapt them to their role, knowledge, skill, experience, and specific considerations of their populations and settings. Below you will find a selection of research evidence in several core areas.


References:

Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71

OCEBM Levels of Evidence Working Group. (2011). The Oxford 2011 levels of evidence. Oxford Centre for Evidence-Based Medicine. https://www.cebm.ox.ac.uk/resources/levels-of-evidence

Melnyk, B. M., & Fineout-Overholt, E. (2019). Evidence-based practice in nursing and healthcare: A guide to best practice (4th ed.). Wolters Kluwer.

Polit, D. F., & Beck, C. T. (2021). Nursing research: Generating and assessing evidence for nursing practice (11th ed.). Wolters Kluwer.

Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.). (2023). Cochrane handbook for systematic reviews of interventions (2nd ed.). Wiley-Blackwell. https://doi.org/10.1002/9781119536604