FAQ
Frequently asked questions about biologit MLM-AI

Reviews

Why have some abstracts in my weekly review already been screened?

Newly generated reviews may contain pre-screened abstracts that were tagged at the duplicate detection stage. Each duplicate abstract corresponds to another abstract in the same review, or in a previous review for the same product.
The "Duplicates" tab indicates pre-screened articles.

How does duplicate detection work?

MLM-AI performs duplicate detection by content, linking duplicate articles with the same title and abstract. This is useful when the same article is published in multiple venues with different dates and IDs.
Articles are flagged as duplicates only when:
  • They appear in the same review results. One of the articles will be automatically reviewed as "duplicate". If the duplicate articles come from different sources, PubMed is given preference as the "primary" source and the others are flagged as duplicates.
  • An article is a duplicate of another article seen in a past review for the same monitor.
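As an illustration of these rules, here is a minimal sketch of content-based duplicate grouping in Python. The field names ('id', 'title', 'abstract', 'source') and the exact matching logic are assumptions for illustration, not the actual MLM-AI implementation:

```python
from collections import defaultdict

def flag_duplicates(articles):
    """Group articles sharing the same title and abstract, keeping one as primary.

    Illustrative sketch only: each article is assumed to be a dict with
    'id', 'title', 'abstract' and 'source' keys.
    """
    groups = defaultdict(list)
    for article in articles:
        key = (article["title"].strip().lower(), article["abstract"].strip().lower())
        groups[key].append(article)

    duplicate_ids = []
    for group in groups.values():
        if len(group) < 2:
            continue
        # When duplicates span sources, PubMed is preferred as the "primary" source.
        primary = next((a for a in group if a["source"] == "PubMed"), group[0])
        duplicate_ids.extend(a["id"] for a in group if a is not primary)
    return duplicate_ids
```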

When will my review move from "In Progress" to "Completed"?

Reviews are categorized into tabs by status:
Reviews stay "In Progress" while there are still abstracts to be screened, and move to "Completed" once all abstracts have been screened (i.e. saved with a decision by a user).
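As a rough sketch of that rule (the 'decision' field name is an assumption, not the actual data model), the status of a review could be computed as:

```python
def review_status(abstracts):
    # A review is "Completed" only when every abstract has a saved decision;
    # otherwise it remains "In Progress".
    return "Completed" if all(a.get("decision") for a in abstracts) else "In Progress"
```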

Data Sources and Searching Abstracts

What literature sources are present in MLM-AI?

MLM-AI ingests data from established and open access databases to achieve a broad literature reach. Currently, we automatically scan and upload the following sources:
  • PubMed - Comprehensive repository of biomedical and life sciences literature comprising over 33 million citations.
  • Crossref - Worldwide open access repository of scientific literature.
  • The Directory of Open Access Journals (DOAJ) - Worldwide open access literature.
  • SciELO - Scientific life sciences literature focusing on Latin America and the Caribbean.
  • DialNet - Open access repository focused on literature from Spain.

How does MLM-AI search literature sources?

In MLM-AI:
  • Abstracts from all sources are ingested daily into our database.
  • Abstracts are searched for every active monitor, according to the specified product keywords and synonyms and the sources selected for searching.
  • All retrieved results are de-duplicated and presented as a review.
Note that searches use the medication keywords only, as specified by your monitor configuration; no additional keyword-based filtering occurs.
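The sketch below summarises this daily flow. Function names, field names, and the matching logic are assumptions made for illustration only, not the actual MLM-AI implementation:

```python
def matches_keywords(abstract, keywords):
    # Simple keyword match over the title and abstract text (medication terms only).
    text = (abstract["title"] + " " + abstract["text"]).lower()
    return any(kw.lower() in text for kw in keywords)

def deduplicate(hits):
    # Collapse articles sharing the same title and abstract text.
    seen, unique = set(), []
    for a in hits:
        key = (a["title"].lower(), a["text"].lower())
        if key not in seen:
            seen.add(key)
            unique.append(a)
    return unique

def run_daily_reviews(monitors, ingested_abstracts):
    # For each active monitor, search the day's abstracts against its configured
    # keywords and sources, then de-duplicate the results into a review.
    reviews = {}
    for monitor in monitors:
        if not monitor["active"]:
            continue
        hits = [
            a for a in ingested_abstracts
            if a["source"] in monitor["sources"]
            and matches_keywords(a, monitor["keywords"])
        ]
        reviews[monitor["name"]] = deduplicate(hits)
    return reviews
```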

What is the search strategy employed by MLM-AI?

MLM-AI will match any of the key terms specified in the monitor configuration.
In the example below, the key terms associated with "Betamethasone" are OR'ed together, and searched across all sources specified in the monitor.
Example key terms from Medication Monitor
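As a simple illustration of how OR'ed key terms translate into a query, the snippet below builds a boolean expression from a hypothetical synonym list (the terms shown are examples, not an actual monitor configuration):

```python
# Key terms for a medication monitor are OR'ed together, so an abstract
# matching any single term is retrieved. Terms below are illustrative only.
key_terms = ["Betamethasone", "Betamethasone valerate", "Betamethasone dipropionate"]

query = " OR ".join(f'"{term}"' for term in key_terms)
print(query)
# "Betamethasone" OR "Betamethasone valerate" OR "Betamethasone dipropionate"
```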

Why search sources like Crossref, DOAJ, etc?

These additional sources broaden the reach of scientific literature available to you, and can contribute significantly to identifying relevant safety information.
We have conducted a comparative study showing that additional safety information can be found in these sources. Learn more here:

AI Models

Will MLM-AI detect Suspected Adverse Event abstracts for my product?

MLM-AI models are designed to be drug-agnostic, having been trained with a cross-section of scientific literature covering a broad range of drug classes.
There are various strategies that can be employed for the safe implementation of AI-based screening. In AI-Enabled Screening Workflows we discuss techniques to leverage AI into your workflows.

How accurate are MLM-AI models?

All MLM-AI models are configured for high recall on the target category of interest (Suspected AE, Suspected Case etc). This means AI tags produced by MLM-AI are "conservative": they tolerate some false positives so that missing a relevant abstract is highly unlikely.
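To make the recall-oriented configuration concrete, here is a conceptual sketch of choosing a decision threshold so that a target recall is met; the 0.98 target and the function itself are illustrative assumptions, not the actual MLM-AI model code:

```python
def pick_high_recall_threshold(scores, labels, target_recall=0.98):
    """Return the largest score threshold that still achieves the target recall.

    Conceptual sketch only: `scores` are model probabilities for the category
    of interest (e.g. Suspected AE) and `labels` are ground-truth booleans.
    """
    positive_scores = [s for s, y in zip(scores, labels) if y]
    if not positive_scores:
        return 0.0
    # Lowering the threshold trades extra false positives for fewer misses.
    for threshold in sorted(set(scores), reverse=True):
        recall = sum(s >= threshold for s in positive_scores) / len(positive_scores)
        if recall >= target_recall:
            return threshold
    return min(scores)
```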
There are many ways in which users can take advantage of AI Tags produced by MLM-AI. Please see the section on AI-Enabled Screening Workflows to learn more and choose the approach that best suits your needs.
For performance results against experimental datasets, see our published study on:

Are AI-based tags safe to use? How can I take advantage of them?

The best resource on how to safely integrate AI tags is our Guide to AI-Enabled Screening Workflows.

How many abstracts can I expect to be categorized as Suspected Adverse Event?

Based on our experiments you can expect 30-50% of inbound articles to be tagged as Suspected AE by MLM-AI. However, the results will vary from product to product.

Why does MLM-AI detect "suspected" adverse events, and other "suspected" categories?

MLM-AI models were designed so that relevant safety information is not missed. With this in mind, we have labeled and curated datasets that capture how likely it is that safety information is discussed either in the abstract or in the full text of an article.
Consider the samples below, extracted from abstracts. The most ambiguous cases may still be relevant and would require further review of the full text of the article:
By looking for "suspected" safety information, MLM-AI can still tag abstracts with incomplete information.

Where can I learn more about the AI models powering AI Tags?

See our article discussing the use of AI for literature screening: