
What is ‘Implementation’? And why is it important for sports medicine and physical activity?


– Written by Ivana Matic and Heather McKay, Qatar


“… developing effective interventions is only the first step toward improving the health and well-being of populations.”

- Durlak and DuPre, 2008



Billions of dollars are invested in health sciences research annually. This generates abundant new evidence that, ideally, would result in a systematic transfer of effective research findings into practice and the widespread implementation of new, evidence-based programmes and policies. However, this is most often not the case.


Original research findings generally take 1 to 2 decades to become a part of routine clinical practice. As a result, as many as 45% of patients do not receive the ‘best’ care supported by scientific evidence. Alarmingly, 25% of care provided is potentially harmful1,2. Bridging the gap between research discovery and implementation of relevant findings should improve healthcare substantially. However, the road from research to implementation is a long and winding one – embedded within a socio-ecological approach to human behaviour. This approach reminds us that individuals are immersed in a sociocultural milieu that influences their behaviour. Conversely, individuals influence the settings where they live, learn, work and play.


The goals of this paper are to:

  1. Introduce basic concepts and models related to implementation.
  2. Stress the importance of and need for implementation evaluation.
  3. Provide two illustrations of implementation concepts and models applied to injury prevention and physical activity research.



There is a gap between theory and practice that must be overcome to effectively implement health promotion programmes. Even when this gap is bridged, theory alone is insufficient to fully guide practice3. Hence, the role of theory is not to offer universal explanations or predictions but rather to seek a better understanding of complex situations by acknowledging specific contextual factors and experiences of practitioners and communities that influence these situations. Feedback loops are critical as they provide an indicator of when change is needed so that adjustments can be made along the way to improve the uptake and effectiveness of interventions. Indeed, theory should provide a foundation for all interventions. However, successful implementation of a programme can only be achieved through practical strategies that consider different contextual factors.


We distinguish between two types of theory common to health promotion.

  1. Explanatory theory focuses on ‘the problem’ and helps identify relevant factors to target.
  2. Change theory informs the development and implementation of intervention strategies.

As there are numerous theories to draw upon, selecting a relevant theory and applying it in practice can be challenging for any health researcher. Furthermore, single theories may inadequately encompass the complexity of ecological views of health. Thus, some authors highlight the need to incorporate multiple theories.


In the context of evidence-based health promotion, it is necessary to reflect on the importance of different methods when designing effective interventions or developing an evaluation plan. Qualitative approaches alone may yield outcomes that are not generalisable to a wider population and may be subject to researcher bias. On the other hand, quantitative approaches on their own provide little insight into causal mechanisms that lead to desired outcomes. Further, purely quantitative approaches are unable to identify barriers and enablers to implementation that may ultimately be responsible for the success or failure of any programme. Therefore, integrating quantitative and qualitative research methodologies may well be the ‘perfect marriage’ in health promotion. Indeed, the World Health Organization called for evidence-based approaches to health promotion policy/practice and advocates using multiple methods.



Health promotion programmes are most likely to ‘succeed’ if they are founded upon valid theories adapted appropriately to the specific context. We direct those who wish to have physical activity or injury prevention programmes (for example) adopted to Rogers’ Diffusion of Innovations theory4. This theory “seeks to explain how, why and at what rate new ideas or technologies spread through cultures”4. Innovations ‘diffuse’ as members of a group or a social system choose to adopt an innovation through a five-step process:

  1. knowledge,
  2. persuasion,
  3. decision,
  4. implementation and
  5. confirmation (Figure 1).


Adoption therefore speaks to an individual’s process – from when a person first learns about a programme to when they adopt it; diffusion speaks to how the innovation spreads across cultures. Diffusion is influenced by individual but also societal-level factors such as communication channels, opinion leaders and change agents.



Simply speaking, evaluation is measuring the extent to which a programme achieves its desired health outcome. Evaluation also focuses on discerning the contribution of different processes that influence the extent to which outcomes are achieved5. Although theory retains an important place in the design of evaluation strategies, a balance must exist between using theoretical analyses and being open to unanticipated outcomes.


We note three key types of health promotion evaluations:

  1. impact,
  2. outcome,
  3. implementation.


All have distinct and key roles to play in health research. Impact and outcome evaluations both assess the effect of an intervention – but at different levels. Impact evaluations focus on immediate, short-term effects of an intervention that correspond to programme objectives. Outcome evaluations examine long-term effects of an intervention and/or the ability of the intervention to achieve programme goals.


For the rest of this article we focus on the importance of, and need for, implementation evaluation – a valuable yet under-utilised approach.



Implementation is defined as a specified set of activities designed to put into practice an activity or programme of known dimensions6. As per our opening quote – ‘… designing evidence-based interventions is only the first step toward improving the health of the population’. Ask any practitioner and they will likely agree that implementation efforts are more challenging than developing effective practices and programmes.


Implementation evaluation monitors all aspects of the process of programme delivery including:

  1. the extent to which a programme and its elements are implemented as planned (fidelity),
  2. how much of the programme was delivered (dose),
  3. how well different programme components are delivered (quality),
  4. how responsive participants were to the programme (responsiveness),
  5. programme theory and practices that distinguish it from other programmes (differentiation) and
  6. changes made to the programme as designed during implementation (adaptation).


Thus, implementation evaluation is sometimes referred to as a process evaluation. We deem it essential to monitor implementation to assess internal and external validity of an intervention and to better understand how and why a programme did (or did not) work (Figure 2). This is key as implementation challenges are unavoidable and can otherwise diminish the impact of an innovative programme.


Consider the example of a rigorous, evidence-based cardiac rehabilitation programme implemented over 12 months (Figure 2). In this example, imagine that at final measurement the intervention did NOT improve cardiovascular fitness. Upon closer examination, you note from your implementation evaluation that only 40% of participants were compliant with the intervention and only 30% of instructors delivered the intervention as designed (poor dose and fidelity). Is the intervention ineffective or is the negative result the product of poor implementation of a ‘good’ intervention?
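The dose and compliance figures in this scenario are simple proportions of sessions received and participants meeting a threshold. As a purely illustrative sketch (all participant identifiers, the 24-session schedule and the 80% compliance threshold are invented for this example, not taken from any study), such metrics might be tabulated as:

```python
# Illustrative only: hypothetical attendance records for the cardiac
# rehabilitation scenario, used to compute simple implementation metrics.

def dose(sessions_attended: int, sessions_planned: int) -> float:
    """Proportion of the planned programme a participant actually received."""
    return sessions_attended / sessions_planned

def fidelity(components_delivered: int, components_designed: int) -> float:
    """Proportion of core components an instructor delivered as designed."""
    return components_delivered / components_designed

# Hypothetical data: 5 participants, 24 planned sessions each.
attendance = {"p1": 24, "p2": 10, "p3": 8, "p4": 22, "p5": 6}
participant_dose = {p: dose(a, 24) for p, a in attendance.items()}

# Call a participant 'compliant' if they received at least 80% of sessions.
compliant = [p for p, d in participant_dose.items() if d >= 0.8]
compliance_rate = len(compliant) / len(attendance)

print(f"Compliant participants: {compliance_rate:.0%}")  # 2 of 5 = 40%
```

A process evaluation would report such figures alongside the outcome analysis, so that a null result can be attributed to the intervention itself or to its delivery.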


Scenarios where an intervention is not delivered as planned can lead to a “type III” error: results are interpreted based on how the intervention was designed rather than how it was actually delivered. Thus, implementation evaluation takes a close look at how core elements of an intervention were implemented to provide the ‘real’ rather than the ‘intended’ context for interpreting study outcomes. Durlak and DuPre7 examined 542 interventions that focused on prevention and health promotion programmes for children and youth across a wide range of topics (e.g. physical health and academic performance); 483 were summarised in 5 meta-analyses, and 59 additional studies that assessed the impact of implementation on outcomes were also included in the analysis. Higher levels of implementation led to outcomes with mean effect sizes four to five times higher compared with studies where implementation was poor. A second take-away message from this report was that perfect or near-perfect implementation of any intervention is unrealistic: positive results were achieved with implementation levels of around 60%, and very few studies attained levels greater than 80%.


Given the vagaries of health research, changes to an intervention at the implementation stage are impossible to avoid. Although undesirable changes pose a threat to programme fidelity, an intervention must be flexible, with a contingency plan in place that allows the research team to adapt. Thus, we and others raise the question of how best to balance programme fidelity and adaptability – both deemed essential components that contribute to sustained implementation7. That is, when implementing an intervention in a different or changed setting, how do we maintain fidelity to core intervention components while adapting implementation processes and strategies to account for the changed context8?



It is a commonly shared view that an ecological perspective contributes to a more comprehensive understanding and interpretation of implementation models. Socio-ecological approaches are effective for solving problems that require comprehensive multi-factorial solutions (e.g. physical inactivity). These models address the interaction between individuals and multiple settings and levels of influence on behaviour and the interdependency between settings and levels9-12. Using a socio-ecological framework (Figure 3), practitioners or researchers target one or more levels for change and develop relationships or broad partnerships across sectors and disciplines, necessary to facilitate the desired change.


Among a number of implementation frameworks, one that we have cited previously stands out7. This framework categorises key groups of variables that influence successful implementation across five levels:

  1. Community (e.g. politics, funding and policy).
  2. Provider (e.g. perceived need and benefits).
  3. Innovation (characteristics of the intervention e.g. adaptability).
  4. Prevention delivery system (organisational capacity e.g. communication, shared decision making).
  5. Prevention support system (training and technical assistance e.g. resources available and necessary skill levels to support the intervention)7.


Twenty-three factors were associated with these five categories and these are discussed in detail elsewhere7. There is no universal way of identifying specific factors that have the greatest influence on an intervention in any given setting or across levels. However, 11 factors surface most often and are considered key to successful implementation of a programme or practice: funding, providers’ skill proficiency, positive work climate, shared decision-making, co-ordination with other agencies, formulation of tasks, leadership, programme champions, administrative support, training and technical assistance7.



Finch and colleagues have been at the forefront of applying implementation models to injury prevention13 and outlined the Translating Research into Injury Prevention Practice (TRIPP) framework. They emphasised that ‘stakeholder groups’ such as coaches, athletes and sporting bodies must be engaged as key translation partners for research outcomes in injury prevention to be implemented widely. We briefly describe an effective injury prevention programme that adhered to many elements of a sound implementation model14.


Example 1: Implementation of an ACL injury prevention model

In 1999-2000, a group of clinician-scientists from the Oslo Sports Trauma Research Center designed and implemented an effective ACL prevention programme for Norwegian female handball players. The programme was designed to be delivered by coaches. However, coach compliance (one element of implementation fidelity) with delivering the programme was low. Therefore, the delivery mechanism was adapted and physical therapists were trained to implement the programme during the second season (2000-2001). This demonstrates the concept of maintaining fidelity to core intervention components while adapting the delivery strategy based on feedback. Further, in this first phase, the impact evaluation showed that the programme was effective for reducing risk of injury. Effectiveness was further enhanced in elite teams that complied with the prevention programme.


During 2004-2005, the formal programme, and its delivery by physical therapists supported by the research project, were withdrawn. Ongoing injury surveillance showed a steep escalation in ACL injury rate (Figure 4). Unfortunately, as implementation was not formally evaluated across this timeframe, researchers were unable to discern factors that influenced this negative outcome. However, informal interviews with players suggested non-compliance with the programme by coaches or players. Therefore, in 2005 the model was again adapted to refocus on coaches through a series of free regional coach seminars. There was a subsequent substantial reduction in the ACL injury rate following the 2005 to 2006 information campaign.


This highlights one opportunity where a formal implementation evaluation would provide insight into specific implementation factors that influenced outcomes, so that targeted action might be taken to address these factors.


Finally, at the next phase, as part of the wide-scale implementation/dissemination process, the implementation team enacted three important initiatives. They:

  1. utilised attention from the popular media to highlight the programme and its effectiveness,
  2. launched a new website in May 2008 that targeted key stakeholder groups (coaches and athletes) to provide information on injury prevention and
  3. added formal questions on compliance to the prevention programme.


We illustrate the implementation framework15 applied to implementation of the ACL injury prevention programme (Figure 5). This framework can be applied to most injury prevention models (e.g. evaluation of a country-wide campaign that implemented ‘The FIFA 11+’ to prevent soccer injuries16).


Example 2: Implementation of a school-based health promotion model

The final example relates to health promotion and a whole school-based physical activity intervention, ‘Action Schools! BC’, that embraced the socio-ecological framework and adopted many of the essential elements of sustained implementation17. Action Schools! BC (AS! BC) was developed in 2005 in response to disturbing health trends, including low levels of physical activity among children in British Columbia, Canada. A confluence of events (e.g. hosting the Vancouver-Whistler 2010 Olympic Games) spurred on a partnership between several government Ministries, a multidisciplinary research team and physical activity, community health and education stakeholders who together invested in this initiative to promote and enhance opportunities for physical activity in schools across British Columbia. Goals of AS! BC were to:

a) provide a school environment where students had more opportunities to be more active and make healthy choices more often and

b) facilitate a supportive community and provincial-level environment.


Stakeholders from across disciplines and sectors partnered to design and implement a series of studies to evaluate i) feasibility, ii) efficacy, iii) effectiveness and iv) broad-scale implementation of the model18. Partners adopted a long-term view of success and cycled through phases of knowledge and product development, knowledge and product transmission (efficacy study) and knowledge and product dissemination (effectiveness study) embedded within a continuous process of stakeholder input and adaptation of the model (Figure 6)9.


Efficacy and effectiveness trials showed that teacher compliance was moderate, fidelity to the model was excellent and teachers provided more opportunities for students to be physically active. The impact evaluation showed that schools that adopted the AS! BC intervention enhanced physical activity, bone health and cardiovascular fitness and did not compromise academic performance of elementary school students (Grades 4 to 7). The process evaluation showed that AS! BC contributed to macro-level changes; 50% of provincial-level stakeholders noted that implementation of the AS! BC model positively affected their strategic approach and collaboration on other physical activity initiatives.


Dissemination of AS! BC in British Columbia has achieved unprecedented success, reaching 1,455 schools (92%), more than 62,000 teachers, administrators and other key stakeholders, and approximately 450,000 children (as of 30 June 2012). In 2011, researchers assessed characteristics of teachers and schools and attributes of the innovation that were associated with implementation of the AS! BC model at a province-wide level, 4 years after it was scaled up and disseminated more broadly18. Self-efficacy, outcome expectations, training received, organisational climate/support, level of institutionalisation, environmental influence and attributes of the innovation were significantly associated with implementation. Taken together, these outcomes are encouraging and reflect what is possible when a region comes together to implement an effective strategy to address concerns confronting a generation of children and adolescents, their parents, teachers and health practitioners. Importantly, the implementation process and the model can be adapted to benefit the health of children in other countries around the world.



Although implementation research is increasingly recognised as an important means to evaluate health promotion programmes, it has received little attention in sports medicine or sports injury prevention research. One author challenged traditional approaches currently adopted in sports medicine as not fully appropriate and encouraged a shift in the design and reporting of implementation in research studies in this field19. Furthermore, there is a need to better understand the broad ecological context where injury interventions are delivered as an integral part of assessing whether, and in what context, these programmes are effective13. A relatively vast and high-quality literature speaks to the development of injury prevention models and their efficacy in controlled settings. However, very few studies assessed whether these models were effective once transferred into ‘real-world’ settings. Of almost 12,000 injury prevention articles published from 1938 to 2010, fewer than 1% assessed implementation effectiveness20. Sports medicine, injury prevention and physical activity intervention research may advance in this regard by collaborating with other relevant disciplines (e.g. health promotion). Incorporating qualitative methodologies and concepts more commonly found in the social sciences and drawing on available examples from other fields may serve as effective starting points to better understand ‘process’ and ‘implementation’ in research studies.


Ultimately, our collective goal is to implement effective programmes into sport medicine, injury prevention, physical activity and health promotion settings. To succeed in doing so, it is imperative to better understand implementation factors that influence adoption and longer-term sustainability of these programmes within complex ecological settings.



We would like to thank Dr Grethe Myklebust for providing her feedback on the implementation model applied to the ACL injury prevention programme. We have borrowed generously from the published work of Dr PJ Naylor and are grateful for her insights on implementation and for a decade of collaboration. The tireless leadership of Action Schools! BC by Ms Bryna Kopelow and Ms Jennifer Fenton is reflected in the glow of its success.


Ivana Matic

Senior Health Promotion Officer


Heather McKay PhD




  1. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care 2001; 39:46-54.
  2. McGlynn E, Asch S, Adams J, Keesey J, Hicks J, DeCristofaro A et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348:2635-2645.
  3. Green J. The role of theory in evidence-based health promotion practice. Health Educ Res 2000; 15:125-129.
  4. Rogers EM. Diffusion of innovations, 5th ed. Free Press, New York 2003.
  5. Nutbeam D, Bauman A. Evaluation in a nutshell. A practical guide to the evaluation of health promotion programs. McGraw-Hill, Sydney 2006.
  6. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: a synthesis of the literature. University of South Florida, Tampa 2005.
  7. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol 2008; 41:327-350.
  8. Bierman KL, Coie JD, Dodge KA, Greenberg MT, Lochman JE, McMahon RJ et al. The implementation of the Fast Track Program: an example of a large-scale prevention science efficacy trial. J Abnorm Child Psychol 2002; 30:1-14.
  9. Naylor PJ, Macdonald HM, Reed KE, McKay HA. Action Schools! BC: a socioecological approach to modifying chronic disease risk factors in elementary school children. Prev Chronic Dis 2006; 3.
  10. McLeroy K, Bibeau D, Steckler A, Glanz K. An ecological perspective on health promotion programs. Health Educ Q 1988; 15:351-377.
  11. Stokols D. Translating social ecological theory into guidelines for community health promotion. Am J Health Promot 1996; 10:282-298.
  12. Stokols D. Bridging the theoretical and applied facets of environmental psychology. American Psychologist 1996; 51:1188-1189.
  13. Finch CF, Donaldson A. A sports setting matrix for understanding the implementation context for community sport. Br J Sports Med 2010; 44:973-978.
  14. Myklebust G, Skjølberg A, Bahr R. ACL injury incidence in female handball 10 years after the Norwegian ACL prevention study: important lessons learned. Br J Sports Med 2013; 47:476-479.
  15. Dubois N, Wilkerson T, Hall C. A frame-work for enhancing the dissemination of best practices. Ontario (CA): Heart Health Resource Centre, Ontario Public Health Association; 2003.
  16. Junge A, Lamprecht M, Stamm H, Hasler M, Tschopp M, Reuter H et al. Countrywide campaign to prevent soccer injuries in Swiss amateur players. Am J Sports Med 2011; 39:57-63.
  17. Naylor PJ, Macdonald HM, Warburton DE, Reed KE, McKay HA. An active school model to promote physical activity in elementary schools: action schools! BC. Br J Sports Med 2008; 42:338-343.
  18. Mâsse L, McKay H, Valente M, Brant R, Naylor P. Physical activity implementation in school: a 4-year follow-up. Am J Prev Med 2012; 43:369-377.
  19. Finch CF. No longer lost in translation: the art and science of sports injury prevention implementation research. Br J Sports Med 2011; 45:1253-1257.
  20. Klügl M, Shrier I, McBain K, Shultz R, Meeuwisse WH, Garza D et al. The prevention of sports injury: an analysis of 12,000 published manuscripts. Clin J Sport Med 2010; 20:407-412.


Copyright © Aspetar Sports Medicine Journal 2023