A whitepaper on faster and more effective literature reviews, powered by AI.
Systematic literature reviews are time consuming, therefore relatively costly, and involve much repetitive work. Despite several attempts by others, effective computer assistance has so far proved difficult to develop. We have developed a tool, based on artificial intelligence technology, that assists with selecting relevant titles and abstracts: Selectical.
This whitepaper explains the challenges faced during systematic literature reviews in the biomedical field, and the solutions Selectical provides. Selectical reduces the workload of reviewers by 66% using new, real-time, self-learning AI technology, while still identifying over 99% of relevant papers.
The selection of relevant scientific articles for a specific literature review is relatively time consuming and requires the input of skilled researchers. It is important that the selected titles and abstracts include all articles that are eventually relevant for data extraction. Therefore each title/abstract requires scrutiny. Since literature reviews covering thousands of titles/abstracts are no exception, the workload is huge.
Automating the title/abstract selection task seems a logical step, but machine-assisted selection has so far struggled to yield satisfactory results, for several reasons:
Applying Artificial Intelligence (AI) is difficult under the above-mentioned constraints, among other things because AI models are usually only effective if they have been trained in similar situations. It requires a special type of AI technology to create a model that is able to replace (part of) the human work under those constraints.
Selectical is an AI-driven tool that automatically learns, in every literature review project, which papers are relevant, based on the researcher's selection input. After a short while Selectical has learned enough to do the rest of the title/abstract selection job. On average, Selectical finds over 99% of all relevant papers in 34% of the manual selection time.
Selectical works in every browser and is easy to use:
There are several applications of Selectical within the literature review process: as a 'second reviewer' in case the title/abstract selection needs to be done in duplicate; as a 'control tool'; and as a 'primary selector'.
After uploading article information into Selectical, this information (title, abstract, additional info from databases such as PubMed and Embase) is processed and optimized by Selectical in order to set up efficient processing of the title/abstract selection. This initial step requires some, but limited, time and computing power, and no time from the reviewer. After this set-up, the reviewer can start labelling the titles/abstracts.
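As an illustration of this set-up step, the sketch below shows how titles and abstracts could be turned into numerical features for a classifier. The TF-IDF representation and the scikit-learn library are illustrative assumptions; the whitepaper does not describe Selectical's actual feature pipeline.

```python
# Minimal sketch of title/abstract preprocessing, assuming a TF-IDF
# bag-of-words representation (an illustrative choice, not Selectical's
# documented pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical records as exported from a database such as PubMed or Embase.
records = [
    {"title": "Vaccine effectiveness in adults", "abstract": "We studied ..."},
    {"title": "Natural history of disease X", "abstract": "A cohort of ..."},
]

# Combine title and abstract into one text field per article.
texts = [f"{r['title']} {r['abstract']}" for r in records]

# Turn the free text into numeric feature vectors a classifier can use.
vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
features = vectorizer.fit_transform(texts)  # sparse matrix: one row per article
```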
When the reviewer starts selecting articles in Selectical, the AI starts learning. The AI is trained on the selections made by the reviewer to discriminate between 'relevant', 'non-relevant' and 'not sure'. This process is called Active Learning (see the explanation box). Eventually the AI has learned which articles are relevant and which are not, without further involvement of the reviewer.
If not all articles are labelled by the human reviewer, how can we be sure that all relevant articles will be selected by the AI tool? That is the challenge in developing a tool for the literature selection task. Selectical uses an innovative strategy to quantify the uncertainty about unseen articles. Once no measurable uncertainty is left for the remaining, unseen articles, the human selection work stops. An export of the results for all articles can then be made.
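As an illustration, the sketch below shows one possible stopping rule of this kind, assuming the classifier produces a probability of relevance for every unseen article. The specific rule and the tolerance value are assumptions made for illustration, not Selectical's actual criterion.

```python
# Sketch of an uncertainty-based stopping rule, assuming calibrated
# relevance probabilities for the unseen articles (illustrative only).
import numpy as np

def should_stop(probs_unseen: np.ndarray, max_expected_missed: float = 0.5) -> bool:
    """Stop screening when the expected number of relevant articles
    remaining in the unseen pool drops below a small tolerance."""
    expected_missed = probs_unseen.sum()  # sum of P(relevant) over unseen articles
    return expected_missed < max_expected_missed

# Example: probabilities predicted for articles the reviewer has not seen.
probs_unseen = np.array([0.01, 0.003, 0.02, 0.0005])
print(should_stop(probs_unseen))  # True: hardly any relevance left in the pool
```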
Active Learning means that the self-learning Artificial Intelligence is actively adjusted by the input of the human user (in our case, the reviewer). The AI learns how the task should be executed by 'looking over the shoulder' of the human user.
This is possible because the AI algorithm attaches a level of certainty to each decision it makes. In other words, the AI can be certain or uncertain about its automated decisions.
In the case of title/abstract selection, the AI has to learn what is a relevant article and what isn't. To do so:
By repeating these steps several times, the AI can eventually execute the work of the human user with a high level of certainty. How often the steps must be repeated depends on the problem at hand.
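The sketch below illustrates one round of such an active learning loop, assuming a logistic-regression classifier and uncertainty sampling. Both are illustrative choices, not necessarily the components Selectical uses.

```python
# Sketch of one active-learning round: train on the reviewer's labels so far,
# then ask about the article the model is least certain about.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(features, labels, labelled_idx):
    """Return a model trained on the labelled articles and the index of the
    unseen article whose label would teach the model the most.

    `labelled_idx` must contain at least one relevant and one
    non-relevant article for the classifier to be trainable.
    """
    model = LogisticRegression(max_iter=1000)
    model.fit(features[labelled_idx], labels[labelled_idx])

    # Articles the reviewer has not labelled yet.
    unseen_idx = np.setdiff1d(np.arange(features.shape[0]), labelled_idx)
    probs = model.predict_proba(features[unseen_idx])[:, 1]

    # Uncertainty sampling: a probability near 0.5 means the model is unsure.
    query = unseen_idx[np.argmin(np.abs(probs - 0.5))]
    return model, query
```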
We use two criteria to judge the results of Selectical's work:
Selectical was challenged on these two criteria by testing it on literature review projects that had already been fully labelled by human reviewers. We simulated how Selectical would have performed in these projects and compared the results. We tested 36 literature review projects, each simulated 25 times with Selectical; each simulation round used different random initial parameters. The literature review projects varied in health/disease area (e.g. infectious diseases, chronic diseases, rare diseases, nutrition, alcohol use), covered both focused subjects (e.g. the effectiveness of a certain vaccine) and wider subjects (e.g. the natural history of a disease), and differed in size (100 to over 7000 titles/abstracts).
The 36 literature review projects together included 80,000 titles/abstracts, of which 2,000 were labelled by the human reviewers as relevant.
The average results of these simulations with respect to efficiency and quality were:
| Criterion | Result |
|---|---|
| Amount of work saved | 66% |
| Quality (share of relevant articles identified) | 99.3% |
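For readers who want to reproduce this kind of evaluation, the sketch below shows how the two criteria can be computed for a single simulated project. The numbers in the example are made up for illustration; they are not results from our tests.

```python
# Sketch of how the two criteria can be computed in a retrospective simulation,
# assuming the full set of human labels is known for the project.
def evaluate_simulation(n_total, n_screened_by_human, relevant_ids, found_ids):
    """Work saved: share of articles the reviewer never had to screen.
    Quality: share of truly relevant articles that the tool returned."""
    work_saved = 1 - n_screened_by_human / n_total
    quality = len(set(found_ids) & set(relevant_ids)) / len(relevant_ids)
    return work_saved, quality

# Hypothetical numbers for one simulated project (not from the whitepaper).
work_saved, quality = evaluate_simulation(
    n_total=2000, n_screened_by_human=680,
    relevant_ids=range(50), found_ids=range(49),
)
print(f"work saved: {work_saved:.0%}, quality: {quality:.0%}")  # 66%, 98%
```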
For the detailed results, please feel free to send us an email at hello@wearelandscape.nl.
High-quality, time-saving automated selection of research papers has long been a challenge for Artificial Intelligence. With real-time, self-learning AI, however, Selectical returns more than 99% of the relevant articles while saving 66% of the reviewer's time.
Simulations and the experience of users show that Selectical can assist in literature review projects with a wide range of review objectives. In addition, Selectical shows better performance than comparable tools.
If you are curious about how Selectical performs on your type of literature reviews, we would be happy to run test simulations on some of your recent projects.