
Africa: Eight Common Problems With Science Literature Reviews and How to Fix Them


Researchers regularly review the literature that’s generated by others in their field. This is an integral part of day-to-day research: finding relevant research, reading and digesting the main findings, summarising across papers, and making conclusions about the evidence base as a whole.

However, there is a fundamental difference between brief, narrative approaches to summarising a selection of studies and attempting to reliably, comprehensively summarise an evidence base to support decision-making in policy and practice.

So-called "evidence-informed decision-making" relies on rigorous systematic approaches to synthesising the evidence. Systematic review has become the highest standard of evidence synthesis. It is well established in the pipeline from research to practice in several fields including health, the environment and social policy. Rigorous systematic reviews are vital for decision-making because they help to provide the strongest evidence that a policy is likely to work (or not). They also help to avoid expensive or dangerous mistakes in the choice of policies.

But systematic review has not yet entirely replaced traditional methods of literature review. These traditional reviews may be susceptible to bias and so may end up providing incorrect conclusions. This is especially worrying when reviews address key policy and practice questions.

The good news is that the limitations of traditional literature reviews could be remedied relatively easily with a few key procedures. Many of these are not prohibitively costly in terms of skill, time or resources. That’s particularly important in African contexts, where resource constraints are a daily reality but should not compromise the continent’s need for rigorous, systematic and transparent evidence to inform policy.

In our recent paper in Nature Ecology and Evolution, we highlighted eight common problems with traditional literature review methods. We gave examples for each problem, drawing from the field of environmental management and ecology. Finally, we outlined practical solutions.


These are the eight problems we identified in our paper.

First, traditional literature reviews can lack relevance. This is because limited stakeholder engagement can lead to a review that is of limited practical use to decision-makers.

Second, reviews that don’t publish their methods in an a priori protocol (one published before the review work begins) may suffer from mission creep. In our paper we give the example of a 2019 review that initially stated it was looking at all population trends among insects but ended up focusing only on studies that showed insect population declines. This could have been prevented by publishing, and sticking to, methods outlined in a protocol.

Third, a lack of transparency in the review methods may mean that the review cannot be replicated. Replicability is a central tenet of the scientific method.

Selection bias is another common problem. Here, the studies that are included in a literature review are not representative of the evidence base. A lack of comprehensiveness, stemming from an inappropriate search method, can also mean that reviews end up with the wrong evidence for the question at hand.

Traditional reviews may also exclude grey literature. This is defined as any document "produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers, i.e., where publishing is not the primary activity of the producing body".

It includes organisational reports and unpublished theses or other studies. Traditional reviews may also fail to test for evidence of publication bias; both these issues can result in incorrect or misleading conclusions. Another common error is to treat all evidence as equally valid. The reality is that some research studies are more valid than others. This needs to be accounted for in the synthesis.

Inappropriate synthesis is another common issue. This includes methods like vote-counting: tallying studies according to whether their results were statistically significant, while ignoring the size and precision of each study’s effect. Finally, a lack of consistency and error checking, as when a single reviewer makes all decisions alone without consensus, can introduce errors and biases.
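To illustrate why vote-counting can mislead, here is a minimal sketch with invented numbers. It compares a simple tally of positive versus negative results with an inverse-variance-weighted mean of effect sizes, the core idea behind a fixed-effect meta-analysis. All study values below are hypothetical.

```python
# Hypothetical effect sizes and their variances from five imaginary studies.
# A large variance means an imprecise study.
effects = [0.8, 0.6, 0.7, -0.1, -0.2]
variances = [0.04, 0.05, 0.04, 0.9, 1.0]

# Vote-counting: tally studies by the sign of their result,
# ignoring how precise each study is.
positive = sum(1 for e in effects if e > 0)
negative = sum(1 for e in effects if e <= 0)
print(f"vote count: {positive} positive vs {negative} negative")

# Inverse-variance weighting: precise studies count for more.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(f"precision-weighted mean effect: {pooled:.2f}")
```

Here the vote count (3 vs 2) makes the evidence look contested, while the weighted synthesis shows the two "negative" studies are so imprecise that the pooled effect is clearly positive.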

All of these common problems can be solved, though. Here’s how.


Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets. Best-practice guidelines for this process already exist.

Researchers can carefully design and publish an a priori protocol that outlines planned methods for searching, screening, data extraction, critical appraisal and synthesis in detail. Organisations like the Collaboration for Environmental Evidence have existing protocols from which people can draw.

Researchers also need to be explicit and use high-quality guidance and standards for review conduct and reporting. Several such standards already exist.

Another useful approach is to carefully design a search strategy with an information specialist; to trial the search strategy against a benchmark list of known relevant studies; and to use multiple bibliographic databases, languages and sources of grey literature. Researchers should then publish their search methods in an a priori protocol for peer review.
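Trialling a search strategy against a benchmark list can be as simple as measuring recall: what fraction of the known relevant papers does the search actually retrieve? A minimal sketch, using invented paper identifiers:

```python
# Hypothetical benchmark of papers known to be relevant (invented IDs),
# and the set of papers a draft search strategy retrieved.
benchmark = {"smith2018", "jones2020", "lee2019", "okafor2021"}
retrieved = {"smith2018", "jones2020", "okafor2021", "patel2017", "kim2022"}

# Recall: share of benchmark papers found by the search.
recall = len(benchmark & retrieved) / len(benchmark)
print(f"benchmark recall: {recall:.0%}")
```

A low recall signals that the search terms or databases need broadening before the review proper begins.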