Want to see my report, coach?
Written by Martin Buchheit, France
16-Feb-2017
Category: Sports Science

Volume 6 | Targeted Topic - Straight Science | 2017

Sport science reporting in the real world

 


 

On 9 March 2013, Sir Alex Ferguson delivered in the Irish Times probably one of the most encouraging messages ever for sport scientists in football: “Sports science, without question, is the biggest and most important change in my lifetime. It has moved the game onto another level that maybe we never dreamt of all those years ago. Sports science has brought a whole new dimension to the game”. While such statements are gold for universities advertising sport science courses all over the world and for young students willing to embrace a career in elite clubs, the actual value of sport science may not always be rated as highly in some elite clubs or federations1. Having an impact on the training programme, as a sport scientist, is anything but easy1. The way coaches and athletes understand, accept and use sport science is highly variable and unpredictable. The path leading to effective sport science support is a long and winding road, with frequent stops and constant redirections required. Historically, many mistakes have been made while we learned about the veracity and usefulness of our data and the best ways to report and implement sport science in the elite sport setting. Among the different components of effective sport science support, the three most important steps are likely the following:

  1. Having an appropriate understanding and analysis of the data; i.e. using the right metrics and statistics. The first consideration is the choice of the best variables, i.e. those that can be trusted in terms of validity and reliability and that can be used to answer the questions that are actually asked by coaches and players. Second, working with relatively small numbers of athletes within a team setting, as well as being unable to effectively control for many variables, makes interpretation difficult with traditional analytical approaches such as Null Hypothesis Significance Testing (NHST, which includes ‘p values’ and ‘t-tests’, for example). Over the last decade or so, however, great strides have been made in understanding and reporting the effects we have on our athletes, and more valid and relevant approaches now exist which are much easier to interpret clinically2. The modern practitioner working oblivious to these useful variables and analytical approaches could be considered incompetent, in my opinion, whereas a practitioner aware of these approaches but clinging to the past borders on disingenuous.
  2. Offering attractive and informative reports via improved data presentation/visualisation. Effectiveness in this step likely depends more on artistic skills and a creative mind than on proper scientific knowledge, and this is often overlooked in sport science programmes. Day-to-day trial and error is likely key in the search for the optimal data visualisation strategies.
  3. Having the appropriate communication skills and personal attitude to efficiently deliver these data and reports to coaches and athletes. This step is without doubt the most important of the process; there is, however, no training offered at universities for this. Nothing replaces experience, high personal standards and humility at this stage, all of which are generally developed over time.

 

The following sections will detail each of these three components.

 

COLLECTING AND UNDERSTANDING THE (RIGHT) DATA

The first important step in building a successful sport science system is to choose and work with the right data3. With the exponential rise in (micro)technology, collecting data from athletes has never been so easy. For every training session it is relatively easy to fully characterise both the external (e.g. tracking systems, encoders, force plates) and the internal load (e.g. heart rate, muscle oxygenation, sweat rate) placed on each athlete. However, technology per se might not be the solution; the foundations of successful sport science support are probably laid on the pitch first: in the type of data practitioners select to answer the questions that coaches and athletes have actually asked, in the way they collect these data, in how they understand the limitations of each variable and in how they analyse, report and utilise all this information3. While validity/reliability studies are important in the search for the best variables, their practical usefulness should also not be overlooked, i.e. their ability to be used to impact the training programme. This relates to ‘interesting vs important’ types of data: for example, measurement of maximal oxygen uptake vs maximal aerobic speed; only the latter can be used for training prescription.

 

Statistics are probably one of the most important aspects of sport science when it comes to using data to make decisions. Unfortunately, the statistical proficiency of most practitioners in the field is often insufficient to maximise the use of their data and, in turn, impact meaningfully on training programmes. One of the main reasons for practitioners’ lack of statistical proficiency is that statistics lectures at university have, to date, exclusively sung the praises of NHST, which is:

  • Not appropriate to answer the types of questions that arise from the field: as detailed in Table 1, the magnitude of an effect is what matters the most to practitioners – P values don’t inform this4.
  • Not appropriate to assess individuals, which is the core of elite athlete monitoring. In fact, conventional statistics allow analysis of population-based responses only (Table 1)4.
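This first limitation is easy to demonstrate in a few lines of code. The sketch below uses entirely hypothetical jump-height numbers, and a normal approximation stands in for a full t-test; it holds the magnitude of an improvement constant while the squad size grows, so the p value drifts towards 'significance' even though the effect itself never changes:

```python
import math

def normal_p_two_sided(z: float) -> float:
    """Two-sided p value for a z statistic (normal approximation to a t-test)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def cohens_d(mean_change: float, sd: float) -> float:
    """Standardised magnitude of the change: mean change / SD."""
    return mean_change / sd

# A fixed, small improvement: +1.0 cm in jump height, between-athlete SD = 4.0 cm
mean_change, sd = 1.0, 4.0

for n in (8, 12, 30, 100):
    se = sd / math.sqrt(n)                 # standard error shrinks as n grows...
    p = normal_p_two_sided(mean_change / se)
    d = cohens_d(mean_change, sd)          # ...while the magnitude never moves
    print(f"n={n:3d}  p={p:.3f}  standardised effect={d:.2f}")
```

With these numbers the standardised effect stays at 0.25 for every squad size, while the p value falls below 0.05 only once n reaches 100: the p value is telling us about the sample, not about the athletes.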

 

As a valid alternative to NHST, clear analytical advances can be achieved using magnitude-based inferences (MBI, Table 1). This ‘new’ statistical approach, driven largely by Will G. Hopkins’ efforts over the past 15 years, has changed my life, both as an academic and as a practitioner in elite sport11. I personally hope that MBI will be as influential for other scientists as it has been for me. While the debate will likely continue, MBI is today a well-established analytical approach in sports science and in other fields, particularly clinical medicine, where practical/clinical significance often takes priority over statistical significance4.

 

MBI is based on two simple concepts:

  1. Changes/differences in any variable are systematically compared to a typical threshold representative of a smallest important or meaningful change (later to be termed the smallest worthwhile change, SWC12).
    1. Why? Not all changes are worthwhile/meaningful. It is the magnitude of the change/difference that matters first: ‘is the change larger/greater than the SWC? If yes, how many times greater?’ In this context, changes/differences of 1x, 3x, 6x and 10x the SWC can be considered as small, moderate, large and very large, respectively4.
    2. How? The most appropriate method to define the SWC is, however, variable-dependent, which forces researchers to adopt a conscious process when analysing their data. “NHST is easy, but misleading. MBI is hard, but honest” (W.G. Hopkins, personal communication)4. Recommendations to calculate the SWC are provided in Table 2.
  2. Instead of a classic ‘yes or no’ type response (NHST), the probabilities for these changes/differences to be ‘real’ (greater than the SWC) are reported.
    1. More precisely: chances are reported both quantitatively (e.g. 75/25/0 for percentage chances of greater/similar/smaller magnitude than the SWC) and qualitatively (e.g. possibly, likely, very likely – Figures 1 and 2, and Table 3).
    2. How? These percentage chances and associated qualitative interpretations are generally set a priori (e.g. <1%, almost certainly not; 1 to 5%, very unlikely; 5 to 25%, probably not; 25 to 75%, possible; 75 to 95%, likely; 95 to 99%, very likely; >99%, almost certain).
    3. Practically: these percentage chances can be obtained with only a few copy and paste manoeuvres using a specifically-designed spreadsheet freely available online13,14. Final decisions can then be translated into plain language when chatting with coaches: ‘This attacker has very likely increased his sprinting speed. The magnitude of improvement should be enough for him to win a few more balls during matches.’
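Under the common assumption that the true change is normally distributed around the observed change with a standard deviation equal to the typical error, these percentage chances can be approximated in a few lines of code. The sketch below is only an illustration of the logic behind the spreadsheets cited above, not a replacement for them, and the sprint-test numbers are hypothetical:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mbi_chances(observed_change: float, swc: float, typical_error: float):
    """Percentage chances that the true change is greater than the SWC,
    trivial (within +/- SWC), or smaller than -SWC, assuming the true change
    is normally distributed around the observed value (SD = typical error)."""
    p_greater = 1.0 - phi((swc - observed_change) / typical_error)
    p_smaller = phi((-swc - observed_change) / typical_error)
    p_trivial = 1.0 - p_greater - p_smaller
    return tuple(round(100.0 * p) for p in (p_greater, p_trivial, p_smaller))

def qualify(pct: float) -> str:
    """The a priori qualitative scale from the text (e.g. 75-95% -> 'likely')."""
    for cutoff, label in [(99, "almost certain"), (95, "very likely"),
                          (75, "likely"), (25, "possible"),
                          (5, "probably not"), (1, "very unlikely")]:
        if pct > cutoff:
            return label
    return "almost certainly not"

# Hypothetical sprint-speed test: observed change +0.5 km/h,
# SWC = 0.2 km/h, typical error = 0.25 km/h
greater, trivial, smaller = mbi_chances(0.5, 0.2, 0.25)
print(f"{greater}/{trivial}/{smaller} -> improvement {qualify(greater)}")
# -> 88/11/0 -> improvement likely
```

The output maps directly onto the plain-language delivery described above: an 88/11/0 split becomes ‘this attacker has likely increased his sprinting speed’.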

 

PRESENTING THE DATA

Similar to the aphorism that all roads lead to (and therefore from) Rome, the same set of data and results can be presented in many ways (Figure 3). Once the relevant questions have been identified, the best variables have been selected and the appropriate statistics applied, the greatest challenge for sport scientists is to find the most efficient type of data visualisation and reporting to get their message across. Several considerations to optimise tables, graphs and content presentation are discussed below and illustrated in Table 3 and Figure 3.

 

  1. Reports should be as simple and as informative as possible (‘simple but powerful’):
    1. Limited to a few ‘important’ variables (those that can be used to answer the questions that coaches and athletes have actually asked and can have an impact on the programme).
    2. Extra decimals and ‘noise’ removed for clarity (Table 3).
    3. All text written horizontally for readability (Figure 3b).
    4. Labels added to graphs so that exact values can be seen too (graph for patterns, numbers for details, if required) (Figure 3b).
    5. Meaningful changes or differences highlighted to be seen at a glance (Figure 2) – with different possible levels of data analysis. Microsoft Excel’s conditional formatting depicting MBI is a useful example (Table 2).
    6. Including error bars where possible to acknowledge uncertainty (typical error of the measurement and confidence intervals for individual and average values, respectively) (Figures 2 and 3).
    7. Using advanced visualisation tools such as Tableau or Microsoft Power BI. Although these require some training, they may be helpful to create aesthetically pleasing and advanced reports that may be more likely to catch coaches’ and athletes’ attention.
  2. The format of the message should match coach and athlete expectations, preferences and habits (which is linked to the search for the best delivery path, see below):
    1. Visual vs verbal information.
    2. Paper vs digital reports.
    3. Quantitative vs qualitative interpretation.
    4. Tables vs graphs (and types of graphs, e.g. bars vs radars etc.)
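Several of the ‘simple but powerful’ rules above (few decimals, meaningful changes flagged at a glance) can be automated when generating reports. A minimal sketch, assuming the asterisk convention used in the tables of this article (* possibly, ** likely, *** very likely substantial) and hypothetical heart-rate data:

```python
def report_cell(change_pct: float, likelihood_pct: float) -> str:
    """One report cell: value rounded to a single decimal (extra 'noise'
    removed), flagged with asterisks for the likelihood that the change
    is substantial (* possibly, ** likely, *** very likely)."""
    stars = "*" * sum(likelihood_pct > cutoff for cutoff in (25, 75, 95))
    return f"{change_pct:+.1f}%{stars}"

# Hypothetical submaximal heart-rate changes for one player across three tests
for test, (change, likelihood) in enumerate(
        [(-0.4, 12), (-1.8, 78), (-3.1, 97)], start=1):
    print(f"test {test}: {report_cell(change, likelihood)}")
```

A coach scanning such a column sees at a glance which changes deserve attention (the starred cells) without wading through raw decimals or probabilities.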

 

DELIVERING THE DATA

This last section is definitely less scientific than the previous two: it rather reflects personal views based on experiences and discussions with peers in the ‘industry’. These ideas were recently summarised in an editorial for the International Journal of Sports Physiology and Performance1. While delivering the data is only one of the three steps highlighted in the present paper, it may be the most important. If sport scientists can’t communicate with the coach, if they can’t create interest and interactions with the coaches and players, then they obviously won’t manage to get the message through (i.e. deliver) and their fancy reports with high-quality stats will end up in the bin. The editorial concludes: “Masters’ degrees and PhD qualifications often are of little benefit in the quest of creating such a collaborative and productive environment. Understanding the specific codes of a sport or a very specific community of athletes takes many years. Having the respect and trust from high-profile athletes is often more a matter of personality and behaviour than scientific knowledge and skills. As described by the fantastic Dave Martin, we, sport scientists (monkeys) and coaches and athletes (felines and big cats) don’t belong to the same species. We have different expectations, behave differently and tend to make our decisions based on evidence and facts, while they rely on feelings and experience. Creating these links, building these bridges requires time and effort.
Since the majority of coaches, supporting staff and athletes often don’t know what to expect from scientific support at the club, it is only by sitting right next to them during training sessions and team debriefs, by sharing meals and coffees, being with them in the 'trenches' that sport scientists can appreciate what coaches and athletes may find useful and which information they rely on to make their decisions1.” Leaving a report on a desk or a bench is not impactful; it is the conversation that makes the data meaningful, and that can only occur once a relationship has been developed. Also, while having a strong character is often compulsory to survive in most places, open-mindedness, humility and a form of kindness are probably some of the most important personality traits to develop to make an impact in this world. With these personal and social engagement skills in mind, it is not surprising that the majority of the most renowned researchers, sport scientists and performance managers to date have, in parallel to their academic journeys, exposed themselves deeply to elite sport culture, either directly (as athletes) or indirectly (as coaches)1. Only they may have the ability to properly deliver data reports and influence decisions accordingly.

 

CONCLUSION

The value and importance of sport science varies greatly between elite clubs and federations. Among the different components of effective sport science support, the three most important elements are likely the following:

  1. Appropriate understanding and analysis of the data; i.e. using only the most important and useful metrics, and using magnitude-based inferences for statistical analysis. In fact, traditional null hypothesis significance testing (p values) is appropriate neither to answer the types of questions that arise from the field (i.e. assessing the magnitude of effects and examining small sample sizes) nor to assess changes in individual performances.
  2. Attractive and informative reports via improved data presentation/visualisation (‘simple but powerful’).
  3. Appropriate communication skills and personality traits that help to deliver data and reports to coaches and athletes. Developing such an individual profile requires time, effort and most importantly, humility.

 

Martin Buchheit Ph.D.

Head of Performance

Paris Saint Germain Football Club

Paris, France

Contact: mbuchheit@psg.fr

 

References

  1. Buchheit M. Chasing the 0.2. Int J Sports Physiol Perform 2016; 11:417-418.
  2. Batterham AM, Hopkins WG. Making meaningful inferences about magnitudes. Int J Sports Physiol Perform 2006; 1:50-57.
  3. Buchheit M, Simpson B. Player tracking technology: half-full or half-empty glass? Int J Sports Physiol Perform 2016 [In press].
  4. Buchheit M. The numbers will love you back in return – I promise. Int J Sports Physiol Perform 2016; 11:551-554.
  5. McCormack J, Vandermeer B, Allan GM. How confidence intervals become confusion intervals. BMC Med Res Methodol 2013; 13:134.
  6. McGuigan MR, Cormack SJ, Gill ND. Strength and power profiling of athletes: selecting tests and how to use the information for program design. Strength Cond J 2013; 35:7-14.
  7. Pettitt RW. The standard difference score: a new statistic for evaluating strength and conditioning programs. J Strength Cond Res 2010; 24:287-291.
  8. Al Haddad H, Simpson BM, Buchheit M. Monitoring changes in jump and sprint performance: best or average values? Int J Sports Physiol Perform 2015; 10:931-934.
  9. Hopkins WG. How to interpret changes in an athletic performance test. Sportscience 2004; 8:1-7.
  10. Cohen J. Things I have learned (so far). Am Psychol 1994; 45:1304-1312.
  11. Buchheit M. Any Comments? 2013. Available from: www.herearemycomments.wordpress.com/. [Accessed 16 March 2016].
  12. Hopkins WG. Statistical vs clinical or practical significance [Powerpoint presentation]. Sportscience 2002; 6. Available from: www.sportsci.org/jour/0201/Statistical_vs_clinical.ppt
  13. Hopkins WG. Precision of the estimate of a subject's true value [Excel spreadsheet]. In: Internet Society for Sport Science. Sportscience 2000. Available from: www.sportsci.org/resource/stats/xprecisionsubject.xls2000 [Accessed November 2016].
  14. Hopkins WG. A spreadsheet for deriving a confidence interval, mechanistic inference and clinical inference from a P value. Sportscience 2007; 11:16-20. Available from: www.sportsci.org/2007/wghinf.htm [Accessed November 2016].
  15. Buchheit M, Allen A, Poon TK, Modonutti M, Gregson W, Di Salvo V. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies. J Sports Sci 2014; 32:1844-1857.
  16. Buchheit M, Morgan W, Wallace J, Bode M, Poulos N. Physiological, psychometric, and performance effects of the Christmas break in Australian football. Int J Sports Physiol Perform 2015; 10:120-123.
  17. Haugen T, Buchheit M. Sprint running performance monitoring: methodological and practical considerations. Sports Med 2016; 46:641-656.
  18. Hopkins WG, Marshall SW, Batterham AM, Hanin J. Progressive statistics for studies in sports medicine and exercise science. Med Sci Sports Exerc 2009; 41:3-13.
  19. Buchheit M. Monitoring training status with HR measures: do all roads lead to Rome? Front Physiol 2014; 27:73.
Table 1: Reasons why academics and practitioners should abandon null hypothesis significance testing (NHST) and embrace magnitude-based inferences (MBI) (adapted from Buchheit, 20164). SWC=smallest worthwhile change.
Table 2: Suggested methods to derive the smallest worthwhile change4. For an exhaustive list of SWCs for different performance measures see the work of Hopkins9 and Buchheit16,17. Changes/differences of 1x, 3x, 6x and 10x SWC can be considered as small, moderate, large and very large, respectively4. SWC=smallest worthwhile change, CMJ=countermovement jump, MAS=maximal aerobic speed, SD=standard deviation.
Table 3: Effect of a nutritional supplement on jumping ability, used to illustrate the misleading nature of p values. In the present case, the inclusion of two more subjects (players 13 and 14), which doesn’t even affect the group mean and standard deviation, induces a 180° change in the study conclusion using null hypothesis significance testing (not beneficial vs beneficial). In contrast, both the small magnitude of the effect (standardised changes >0.218, i.e. pre-post/pooled SD) and the overall data interpretation (inferences, % chances for the supplement to have a beneficial effect) remain unchanged; they show the effectiveness of the nutritional supplement, irrespective of the sample size.
Table 4: Example of various levels of data reporting using changes in submaximal heart rate responses to a standardised submaximal run. The level of clarity and usefulness increases from left to right. Individual changes in submaximal heart rate in a professional soccer player when running at 12 km/h throughout two competitive seasons (% of maximal heart rate). Adapted from Buchheit, 20164. SWC=smallest worthwhile change (1%)19, TE=typical error of measurement (3%)19. A change that is >SWC+TE has a 75% likelihood of being true4. The number of * indicates the likelihood for the changes to be substantial, with ** referring to likely changes and *** to very likely changes, using a specifically designed spreadsheet freely available on the internet12. Data in the far right column are displayed in Figure 2.
Figure 1: Example of possible decisions when interpreting changes using magnitude-based inferences. Note the clear vs unclear cases (based on confidence limits, in relation to the shaded trivial area), which is, firstly, the beauty of magnitude-based inferences and, secondly, not possible via null hypothesis significance testing. Note also how, for clear effects, the likelihood of changes increases as the confidence limits shrink. Reprinted with permission from McCormack et al5.
Figure 2: Individual changes in submaximal heart rate in a professional soccer player when running at 12 km/h throughout two competitive seasons (% of maximal heart rate). The shaded area represents trivial changes (1%)3. The error bars represent the typical error of measurement (3%)3. The number of * indicates the likelihood for the changes to be substantial, with ** referring to likely changes and *** to very likely changes. The magnitudes of the changes are set as multiples of the smallest worthwhile change (SWC); i.e. 1-3x (small), 3-6x (moderate) and >6x (large) SWC. Adapted from McCormack et al5.
Figure 3: Illustration of various levels of data visualisation using distance covered during soccer matches as an example. Compared with (a), (b) is likely easier to read, since all text is displayed horizontally, and more informative: distance labels are provided beside each of the bars for more precision, error bars (typical error of the measurement, 1%) are added to reflect the uncertainty of measurement and the shaded area represents the team average ± standard deviation, which helps to visualise between-player differences. (c) highlights within-player differences for a given match of interest (red cross, players’ top technical performance/impact on match result as rated by coaches) vs individual historical data (circle, with 90% CI). Since the most appropriate method to derive a smallest worthwhile change is still debated for such data (Table 2), the magnitude of the difference is provided in the actual unit (distance covered in metres outside the 90% confidence interval; 90% CI, right part of the graph) and its interpretation is left to the practitioner. The take-home message from the graph is that there is no clear association between overall match outcome and total distance covered. CI=confidence intervals.
