Systematic Review Literature Search

People involved:

About this project

This project seeks to develop methods and tools that improve how experts search for literature when conducting systematic reviews. So far, research in this area by this team has produced tools for visually assisting Boolean query formulation, fully automatic methods for Boolean query refinement in the systematic review domain, domain-specific retrieval models, and a test collection.

See below for a list of relevant publications, tools, description of the task, background information, and challenges.

In this page

Relevant Publications

Automatic Query Transformation

Query Formulation

Test Collections

Retrieval Models

    What are systematic reviews?

    Systematic reviews are a type of literature review that synthesises all available relevant evidence for a highly focused research question. In the medical domain, systematic reviews are the foundation of evidence-based medicine and are regarded as the highest quality of evidence. Systematic reviews in the medical domain not only inform health care practitioners about decisions on diagnosis and treatment, but have also been used to inform governmental policy making. The main type of systematic review systematically searches for, critically appraises, and synthesises evidence from clinical studies (i.e., randomised controlled trials). There are, however, also a number of other types of systematic reviews, such as rapid reviews (where time is a more important factor), scoping reviews (which synthesise a broad range of literature), and umbrella reviews (which can be thought of as systematic reviews of systematic reviews).

    Cost factors

    There are a number of considerations when conducting a systematic review. The most important are the time and cost involved. A systematic review has a number of steps which must be completed in a systematic manner. These steps are usually defined well in advance and strictly adhered to during the construction of the review. At a high level, these steps are as follows:

    1. Identification of research question.
    2. Construction of study protocol.
    3. Formulation of search strategy.
    4. Screening and appraisal of studies.
    5. Synthesis of studies.
    6. Publication and distribution of review.

    The main cost of a systematic review arises in step 4, where studies are screened and appraised against the inclusion and exclusion criteria to determine whether they proceed to the following step. Often, a search strategy retrieves thousands, if not tens of thousands, of results. The systematic nature of these reviews calls for inspecting each and every result retrieved. It is also common for this screening and appraisal to be performed in parallel by multiple reviewers to reduce bias (increasing the cost further). This screening process can often take months, and sometimes a year or more (depending on how many results are retrieved).
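
    As a rough illustration of why screening dominates the cost, the sketch below estimates the total screening effort from the number of retrieved results, an assumed per-abstract screening time, and the number of reviewers screening in parallel. All of the figures are assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope estimate of screening effort (all figures are assumptions).
retrieved_results = 20_000      # studies returned by the Boolean search
seconds_per_abstract = 30       # assumed time to screen one title/abstract
reviewers_in_parallel = 2       # dual screening to reduce bias doubles the work

total_decisions = retrieved_results * reviewers_in_parallel
total_hours = total_decisions * seconds_per_abstract / 3600

# Assuming roughly 6 hours of focused screening per working day.
working_days = total_hours / 6

print(f"{total_decisions:,} screening decisions ≈ {total_hours:,.0f} hours "
      f"≈ {working_days:,.0f} working days of effort")
```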

    Reducing cost factors

    There has been much research into developing tools that help researchers reduce the monetary and time costs of undertaking a systematic review. Typically, these tools assist with preparing and maintaining reviews, re-ranking results through active learning, and automating evidence synthesis, among other tasks. There has also been much research into methods for automatically prioritising the results in the screening and appraisal phase. Systematic review literature search is unlike typical web search (e.g., Google) in that a Boolean retrieval model is used. Most research on ranking in the Information Retrieval domain has focused primarily on the "ad-hoc" task of ranking documents for a query similar to one that would be issued to a modern search engine. Ranking the results of a Boolean query cannot be performed with the same principles, and many studies have shown the ineffectiveness of Boolean queries compared with the types of queries used in modern search engines. Screening prioritisation for systematic review literature search therefore uses approaches such as active learning, rather than improving the retrieval model.
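
    To make the active learning idea concrete, the following sketch shows one common formulation (not any specific tool's implementation): a classifier is trained on the titles and abstracts screened so far and used to re-rank the remaining studies, retraining after each batch of new relevance judgements. The data, features, and model choices here are illustrative assumptions.

```python
# Minimal active-learning screening prioritisation sketch (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical candidate pool: study_id -> title + abstract text.
pool = {
    "s1": "beta blockers for hypertension randomised controlled trial",
    "s2": "qualitative study of nursing workload",
    "s3": "ace inhibitors versus beta blockers blood pressure trial",
    "s4": "machine learning for image segmentation",
}

# Relevance judgements collected so far during screening (1 = include, 0 = exclude).
judged = {"s1": 1, "s2": 0}

vectoriser = TfidfVectorizer()
X_all = vectoriser.fit_transform(pool.values())
ids = list(pool.keys())

# Train on the screened studies, then score the unscreened remainder.
judged_idx = [ids.index(i) for i in judged]
clf = LogisticRegression().fit(X_all[judged_idx], [judged[ids[i]] for i in judged_idx])

unscreened = [i for i in ids if i not in judged]
scores = clf.predict_proba(X_all[[ids.index(i) for i in unscreened]])[:, 1]

# Present the most likely relevant studies to the reviewer first,
# then retrain once their new judgements come back.
ranking = sorted(zip(unscreened, scores), key=lambda p: p[1], reverse=True)
print(ranking)
```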

    A Boolean query allows complete control over the search results. While it does not provide a mechanism for ranking, it lets experts specify exactly what should be retrieved through set-based operators, term matching, and field restrictions. Furthermore, many search engines used for systematic review literature search (e.g., PubMed) also incorporate medical ontologies (e.g., MeSH), explicit stemming, and complex Boolean operators such as adjacency. These features are also the reason the cost of a review is so high: the complexity involved in constructing a Boolean query that effectively satisfies the information need of the review is extremely high.
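
    For example, a simplified, made-up Boolean query combining MeSH headings, title/abstract terms, and publication-type restrictions can be issued to PubMed programmatically through the NCBI E-utilities esearch endpoint. The query itself is an assumption invented for this illustration; only the endpoint and its db/term parameters follow the standard E-utilities interface.

```python
# Issue a simplified, illustrative Boolean query to PubMed via the NCBI E-utilities.
import requests

query = (
    '("Hypertension"[Mesh] OR hypertens*[tiab]) '
    'AND ("Adrenergic beta-Antagonists"[Mesh] OR "beta blocker*"[tiab]) '
    'AND randomized controlled trial[pt]'
)

response = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 0},
)
response.raise_for_status()

# "count" is the size of the retrieved set that reviewers would have to screen.
print(response.json()["esearchresult"]["count"])
```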

    Improving Boolean query formulation

    There are significant time and cost savings to be had by improving the effectiveness of Boolean queries. A more effective Boolean query retrieves fewer irrelevant studies while maintaining the number of relevant studies. Screening prioritisation only helps to bubble the most relevant studies to the top of the list; reviewers must still screen all studies systematically. A more effective query therefore translates to fewer studies to screen overall. Even small decreases in the number of irrelevant studies retrieved can significantly reduce the cost and time of systematic review construction. Reducing the time it takes to construct systematic reviews can lead to more accurate and up-to-date evidence-based medicine, improving the decisions made by health care professionals.
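
    The sketch below makes this trade-off concrete by comparing an assumed baseline query with a hypothetical more precise query that retrieves the same relevant studies but fewer irrelevant ones; the counts and per-abstract screening time are assumptions.

```python
# Illustrative comparison of two queries with equal recall but different precision.
def precision(relevant_retrieved, total_retrieved):
    return relevant_retrieved / total_retrieved

baseline = {"retrieved": 20_000, "relevant": 200}   # assumed baseline query
improved = {"retrieved": 8_000, "relevant": 200}    # same relevant set, fewer irrelevant

seconds_per_abstract = 30  # assumed screening time per study

for name, q in (("baseline", baseline), ("improved", improved)):
    hours = q["retrieved"] * seconds_per_abstract / 3600
    print(f"{name}: precision={precision(q['relevant'], q['retrieved']):.3f}, "
          f"screening ≈ {hours:.0f} hours")
```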

    Our Tools

    We have developed a number of tools to assist with the construction of systematic reviews.

    searchrefiner


    searchrefiner is an interactive interface for visualising and understanding queries used to retrieve medical literature for systematic reviews. searchrefiner is an open-source project; the source is available on GitHub. It is currently in development; however, you may preview the interface at this demo link (note that users must be approved prior to use).

    MeSHSuggester (MeshMate)


    MeSHSuggester (MeshMate) is a web-based MeSH term suggestion prototype system integrated into tera-tools. It allows users to obtain suggestions from a number of underlying methods, including the BERT-based neural suggestion methods Atomic-BERT, Semantic-BERT, and Fragment-BERT. To use it, first create an account at tera-tools and then access the MeSHMate tool.

    DenseReviewer


    DenseReviewer is a screening prioritisation tool for systematic reviews, leveraging dense retrieval and relevance feedback to rank studies efficiently during title and abstract screening. It dynamically updates rankings based on user assessments, optimising the screening process. The tool includes a web-based interface for interactive screening and a Python library for integration and experimentation. It supports structured PICO queries, allows self-hosting via Docker, and improves efficiency in identifying relevant studies.
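
    DenseReviewer's exact pipeline is described in its own documentation; purely as a conceptual illustration, the sketch below shows the general idea of relevance-feedback re-ranking over dense embeddings (a Rocchio-style update), with random vectors standing in for real title/abstract embeddings.

```python
# Conceptual sketch of dense relevance-feedback re-ranking (not DenseReviewer's API).
import numpy as np

rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(1000, 384))   # stand-in embeddings for candidate studies
query_vec = rng.normal(size=384)          # stand-in embedding of the PICO query

def cosine_rank(query, docs):
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return np.argsort(-sims)

ranking = cosine_rank(query_vec, doc_vecs)

# After the reviewer judges a few studies, nudge the query towards the relevant ones
# and away from the irrelevant ones (Rocchio-style), then re-rank the remainder.
relevant_ids, irrelevant_ids = ranking[:3], ranking[3:10]   # assumed feedback
alpha, beta, gamma = 1.0, 0.75, 0.25
updated_query = (alpha * query_vec
                 + beta * doc_vecs[relevant_ids].mean(axis=0)
                 - gamma * doc_vecs[irrelevant_ids].mean(axis=0))

new_ranking = cosine_rank(updated_query, doc_vecs)
print(new_ranking[:10])
```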

    AiReview


    AiReview is an open platform designed to accelerate systematic reviews (SRs) using large language models (LLMs). It provides an extensible framework and a web-based interface for LLM-assisted title and abstract screening. AiReview enables researchers to leverage LLMs transparently by offering different roles for AI involvement—pre-reviewer, co-reviewer, and post-reviewer—to support decision-making, live collaboration, and quality control. The tool integrates open-source and commercial LLMs, allowing users to customize screening criteria, interaction levels, and model settings. It aims to improve efficiency, transparency, and accessibility in SRs.
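
    As an illustration of what LLM-assisted title and abstract screening can look like in general (not AiReview's actual interface, prompts, or roles), the sketch below builds a screening prompt from assumed inclusion criteria and sends it to an OpenAI chat model; the criteria, prompt wording, and model name are all assumptions.

```python
# Illustrative LLM-assisted screening call (not AiReview's actual prompts or API).
from openai import OpenAI  # assumes the openai Python package and an API key are configured

client = OpenAI()

criteria = "Include randomised controlled trials of beta blockers in adults with hypertension."
abstract = "We conducted a randomised trial of atenolol versus placebo in 400 hypertensive adults..."

prompt = (
    "You are assisting with title and abstract screening for a systematic review.\n"
    f"Inclusion criteria: {criteria}\n"
    f"Abstract: {abstract}\n"
    "Answer with INCLUDE or EXCLUDE and a one-sentence justification."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```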
