
Table 2 Some comparisons between static and dynamic meta-analysis. In dynamic meta-analysis, many decisions are made by users, not researchers. However, these decisions are informed by researchers, who provide the metadata on which the decisions are based. In a static meta-analysis, most decisions are made by researchers. However, these decisions are often informed by users, who may be consulted when the protocol for a meta-analysis is being developed. Thus, both researchers and users can be involved in both static and dynamic meta-analysis, but only in dynamic meta-analysis can users interact with the methods and results.

From: Dynamic meta-analysis: a method of using global evidence for local decision making

| Questions | Static | Dynamic | Strengths (+) and weaknesses (−) of dynamic meta-analysis |
| --- | --- | --- | --- |
| Which interventions should be reviewed? Which outcomes should be reviewed? | Researchers decide | Users decide | + Users can decide whether interventions and outcomes should be split or lumped (e.g. as comparisons of "apples and oranges")<br>− Researchers may not have classified interventions and outcomes in a way that is relevant to users |
| Which studies should be included? High-quality studies only? Low-quality studies that are locally relevant? | Researchers decide | Users decide | + Users can include/exclude studies based on relevance and study quality<br>+ Users can weight studies based on relevance and study quality<br>− Users may not understand the limitations of study quality (e.g. blocking, controls, correlation vs. causation, etc.)<br>− Researchers may not have classified study quality or described methods in a way that is relevant to users (poor reporting of methods or missing metadata) |
| Which results are informative? | Researchers decide | Users decide | + Users can explore results that researchers may not have explored (e.g. cover crops that are brassicas, in the USA, with irrigation)<br>− Users may not understand, or may be overwhelmed by, the analysis methods and results (e.g. multiple options)<br>− Researchers may not have classified metadata in a way that is relevant to users |
| Which results are credible? | Researchers decide | Users decide | + Users can select, deselect, and adjust settings to control the assumptions<br>+ Users can permute settings for sensitivity analysis<br>− Users may not understand the limitations of the analysis methods and results (e.g. model validity)<br>− Results may be vulnerable to cherry picking, data dredging, and other biases, if protocols for evidence use are not developed |
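To make the "users decide" rows concrete, the following is a minimal sketch of how user-supplied filters and weights could feed a summary effect, and how permuting inclusion settings supports sensitivity analysis. It is not the authors' implementation: the `Study` fields, the quality/relevance multipliers on inverse-variance weights, and all numbers are illustrative assumptions.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Study:
    effect: float     # effect size (e.g. log response ratio)
    variance: float   # sampling variance of the effect size
    country: str      # metadata used for relevance filtering
    irrigated: bool   # metadata used for relevance filtering
    quality: float    # user-assigned quality weight in (0, 1]
    relevance: float  # user-assigned relevance weight in (0, 1]

def weighted_mean_effect(studies):
    """Fixed-effect summary: inverse-variance weights scaled by the
    user's quality and relevance weights (a common weighting scheme,
    assumed here; not necessarily the model used by the authors)."""
    weights = [s.quality * s.relevance / s.variance for s in studies]
    return sum(w * s.effect for w, s in zip(weights, studies)) / sum(weights)

# Illustrative data (fabricated for this example).
studies = [
    Study(effect=0.30, variance=0.04, country="USA", irrigated=True, quality=1.0, relevance=1.0),
    Study(effect=0.10, variance=0.02, country="USA", irrigated=True, quality=0.5, relevance=1.0),
    Study(effect=0.50, variance=0.09, country="Brazil", irrigated=False, quality=1.0, relevance=0.2),
]

# "Which results are informative?": users subset to locally relevant studies.
local = [s for s in studies if s.country == "USA" and s.irrigated]
print(f"Local (USA, irrigated) estimate: {weighted_mean_effect(local):.3f}")

# "Which results are credible?": permute inclusion settings for sensitivity analysis.
for min_quality, usa_only in itertools.product([0.0, 0.6], [False, True]):
    included = [s for s in studies
                if s.quality >= min_quality and (not usa_only or s.country == "USA")]
    if included:
        print(f"min_quality={min_quality}, usa_only={usa_only}: "
              f"{weighted_mean_effect(included):.3f}")
```

Scaling inverse-variance weights by user-supplied quality and relevance multipliers keeps a familiar estimator while exposing the subjective inclusion and weighting choices as explicit, adjustable parameters; it also illustrates why a protocol for evidence use matters, since freely permuting these settings is what makes cherry picking possible.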