From: Dynamic meta-analysis: a method of using global evidence for local decision making
| Questions | Static | Dynamic | Strengths (+) and weaknesses (−) of dynamic meta-analysis |
| --- | --- | --- | --- |
| Which interventions should be reviewed? Which outcomes should be reviewed? | Researchers decide | Users decide | + Users can decide whether interventions and outcomes should be split or lumped (e.g. as comparisons of "apples and oranges") <br> − Researchers may not have classified interventions and outcomes in a way that is relevant to users |
| Which studies should be included? High-quality studies only? Low-quality studies that are locally relevant? | Researchers decide | Users decide | + Users can include or exclude studies based on relevance and study quality <br> + Users can weight studies based on relevance and study quality <br> − Users may not understand the limitations of study quality (e.g. blocking, controls, correlation vs causation) <br> − Researchers may not have classified study quality or described methods in a way that is relevant to users (poor reporting of methods or missing metadata) |
| Which results are informative? | Researchers decide | Users decide | + Users can explore results that researchers may not have explored (e.g. cover crops that are brassicas, in the USA, with irrigation) <br> − Users may not understand, or may be overwhelmed by, the analysis methods and results (e.g. multiple options) <br> − Researchers may not have classified metadata in a way that is relevant to users |
| Which results are credible? | Researchers decide | Users decide | + Users can select, deselect, and adjust settings to control the assumptions <br> + Users can permute settings for sensitivity analysis <br> − Users may not understand the limitations of the analysis methods and results (e.g. model validity) <br> − Results may be vulnerable to cherry picking, data dredging, and other biases, if protocols for evidence use are not developed |
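The user-controlled weighting and sensitivity-analysis options described in the table can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes inverse-variance weighting scaled by a hypothetical user-assigned relevance weight, and a simple leave-one-out loop as one way of "permuting settings" for sensitivity analysis. All study data, function names, and parameters here are invented for illustration.

```python
# Illustrative sketch (not from the paper): user-adjustable study weights
# and a leave-one-out sensitivity analysis, as a dynamic meta-analysis
# interface might compute them behind the scenes.

def weighted_mean_effect(effects, variances, relevance=None):
    """Inverse-variance weighted mean effect size, optionally scaled by
    user-assigned relevance weights (0 = exclude, 1 = full weight)."""
    if relevance is None:
        relevance = [1.0] * len(effects)
    weights = [r / v for r, v in zip(relevance, variances)]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, effects)) / total

def leave_one_out(effects, variances, relevance=None):
    """Recompute the pooled effect with each study excluded in turn --
    a basic sensitivity check on the influence of any single study."""
    results = []
    for i in range(len(effects)):
        e = effects[:i] + effects[i + 1:]
        v = variances[:i] + variances[i + 1:]
        r = None if relevance is None else relevance[:i] + relevance[i + 1:]
        results.append(weighted_mean_effect(e, v, r))
    return results

# Hypothetical studies: effect sizes and their sampling variances.
effects = [0.30, 0.10, 0.50]
variances = [0.04, 0.01, 0.09]

pooled = weighted_mean_effect(effects, variances)
# A user down-weights study 3 as locally irrelevant (relevance = 0.2),
# which pulls the pooled estimate away from that study's large effect.
adjusted = weighted_mean_effect(effects, variances, relevance=[1, 1, 0.2])
sensitivity = leave_one_out(effects, variances)
```

In a dynamic meta-analysis, choices like the relevance weights above would be exposed as interactive settings rather than fixed by the researchers; the weakness noted in the table is that a user can permute these settings without a protocol, which is where cherry picking becomes possible.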