FAQ
Frequently asked questions about biologit MLM-AI
Newly generated reviews may include pre-screened abstracts that were tagged at the duplicate detection stage. A duplicate abstract corresponds to another abstract in the same review, or in a previous review for the same product.
Reviews always use the monitor settings in effect at the time of submission. This means any monitor changes made later will not be reflected. This is by design, to ensure traceability: the system's behavior reflects the monitor settings that were current when the review was created.
Note the same applies to scheduled reviews: once a review is scheduled (i.e. it appears in the "Processing" tab), the settings are fixed.
MLM-AI performs duplicate detection by ID, DOI and content. This is further detailed in the page:
Articles are flagged as duplicates only when:
- They appear in the same review results. One of the articles will be automatically reviewed as "duplicate".
- An article is a duplicate of another article seen in a past review for the same Monitor.
Learn more about the techniques used in automated duplicate detection in: Feature spotlight: Duplicate Detection in Biologit MLM-AI
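As a rough illustration of these criteria, the sketch below flags two articles as duplicates when their source IDs or DOIs match, or when their title and abstract are nearly identical. It is a simplified example only, not biologit's actual implementation.

```python
# Illustrative sketch: duplicate detection by ID, DOI and content similarity.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Article:
    source_id: str   # e.g. a PubMed ID
    doi: str
    title: str
    abstract: str

def is_duplicate(a: Article, b: Article, threshold: float = 0.95) -> bool:
    # 1. Exact match on the source identifier
    if a.source_id and a.source_id == b.source_id:
        return True
    # 2. Exact match on DOI (case-insensitive)
    if a.doi and a.doi.lower() == b.doi.lower():
        return True
    # 3. Near-identical content on normalised title + abstract
    text_a = f"{a.title} {a.abstract}".lower()
    text_b = f"{b.title} {b.abstract}".lower()
    return SequenceMatcher(None, text_a, text_b).ratio() >= threshold
```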
Reviews stay "In Progress" while there are still abstracts to be screened, and move to "Completed" once all abstracts have been screened, i.e. every abstract was saved with a decision either by a user or by MLM-AI via an automated action.
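The completion rule can be summarised with a minimal sketch (the field name `decision` is an assumption, not the actual data model):

```python
# A review is "Completed" only when every abstract has a saved decision,
# whether made by a user or by an MLM-AI automated action.
def review_status(abstracts: list[dict]) -> str:
    if all(a.get("decision") is not None for a in abstracts):
        return "Completed"
    return "In Progress"
```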
Search results obtained from PubMed, EMBASE, EBSCO, Adis, and ProQuest can be directly uploaded into MLM-AI for screening.
The supported formats and help with the export steps are detailed in:
Duplicate detection is a feature of Monitors and is active only when Reviews are requested. Note that you can create monitors based on the query you formulate in the Article Search screen, and benefit from duplicate detection this way.
The query syntax of the MLM-AI database allows for wildcards that can expand common suffixes, for example:
toxic* -> toxic, toxicity, toxicities
pregna* -> pregnant, pregnancy, pregnancies
In certain cases the - is ignored to facilitate matching of commonly (non-)hyphenated terms like "where by" and "where-by".
Yes, a maximum of 1000 hits can be retrieved from a single query. Try reducing the date range or refining the search criteria.
Yes, once E2B settings are configured for your system, users can produce E2B XML files in R2 format for articles that qualify for ICSR.
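To illustrate the wildcard behaviour, a trailing * can be thought of as matching any suffix of the term. This is only a sketch of the general idea, not the MLM-AI query engine:

```python
# Expand a trailing "*" wildcard into a suffix-matching regular expression.
import re

def wildcard_to_regex(term: str) -> re.Pattern:
    # "toxic*" -> matches "toxic", "toxicity", "toxicities", ...
    if term.endswith("*"):
        return re.compile(rf"\b{re.escape(term[:-1])}\w*\b", re.IGNORECASE)
    return re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)

pattern = wildcard_to_regex("toxic*")
print(bool(pattern.search("Hepatic toxicity was observed")))  # True
```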
Quality control with various levels of sampling is supported. Users can also configure their workflows to clearly indicate QC status.
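As an illustration of sampling-based QC, a fraction of screened abstracts can be drawn at random for a second review. The level names and percentages below are assumptions for the example, not biologit defaults:

```python
# Randomly select a subset of screened abstracts for quality control.
import random

QC_SAMPLING_LEVELS = {"low": 0.05, "medium": 0.10, "high": 0.25}

def sample_for_qc(article_ids: list[str], level: str = "medium", seed: int = 42) -> list[str]:
    if not article_ids:
        return []
    rng = random.Random(seed)
    k = max(1, round(len(article_ids) * QC_SAMPLING_LEVELS[level]))
    return rng.sample(article_ids, k)
```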
MLM-AI ingests data from established and open access databases to achieve a broad literature reach. Currently, we automatically scan and upload the following sources:
- IBECS and LILACS - open access repositories with regional coverage in Spain, Latin America and the Caribbean
There are two options:
(1) When creating a monitor using product name and synonyms, MLM-AI will match any of the key terms specified in the monitor configuration.
In the example below, the key terms associated with "etanercept" are OR'ed together, and searched across all sources specified in the monitor.

(2) Monitors can also be created from a search string created by the user. From Article Search, create a query string to suit your needs and proceed to create a monitor using the "pill" button. The monitor query string is exactly the one created by the user.
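For option (1), the effect is roughly that of OR'ing all key terms into a single query, as in this simplified sketch (the exact query syntax used by MLM-AI may differ):

```python
# Build a single OR'ed query string from a product's key terms.
def build_monitor_query(key_terms: list[str]) -> str:
    # Quote multi-word terms and join everything with OR
    quoted = [f'"{t}"' if " " in t else t for t in key_terms]
    return " OR ".join(quoted)

print(build_monitor_query(["etanercept", "Enbrel"]))
# etanercept OR Enbrel
```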
Including more repositories will broaden the reach of scientific literature available to you and can help identify more relevant safety information.
Additional databases can also help in meeting your regulatory requirements.
See also these resources for further information on this topic:
MLM-AI models are product agnostic, having been trained with a cross-section of scientific literature covering a broad range of abstracts.
There are various strategies that can be employed for the safe implementation of AI-based screening. In AI-Enabled Screening Workflows we discuss techniques to leverage AI into your workflows.
All MLM-AI models are configured for high recall on the target category of interest (Suspected AE, Suspected Case, etc.). This means AI tags produced by MLM-AI are "conservative": the models tolerate some false positives so that missing a relevant abstract is highly unlikely.
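The general idea behind a high-recall configuration can be sketched as a low decision threshold. The value below is illustrative only, not MLM-AI's actual configuration:

```python
# Lowering the decision threshold favours recall over precision: borderline
# abstracts are still tagged (more false positives, very few misses).
def tag_suspected_ae(probability: float, threshold: float = 0.2) -> bool:
    return probability >= threshold
```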
There are many ways in which users can take advantage of AI Tags produced by MLM-AI. Please see the section on AI-Enabled Screening Workflows to learn more and use one that best suits your needs.
For performance results please refer to the latest fact sheet:
The best resource on how to safely integrate AI tags is our Guide to AI-Enabled Screening Workflows.
Title-only citations may sometimes appear, depending on how journals decide to publish their data.
- Without an abstract there is insufficient information to reliably make model predictions
- Hence, all "No Abstract" articles are tagged as "Suspected AE", indicating they should be screened (see the sketch after this list)
- "No Abstract" citations are also presented in a separate "No Abstract" tab in Review results, to facilitate a separate screening workflow
MLM-AI models were designed so that relevant safety information is not missed. With this in mind, we have labeled and curated datasets that capture how likely safety information is present either in the abstract or in the full text of an article.
Consider the samples below extracted from abstracts. The most ambiguous cases may still be relevant, and will likely need review of the full text of the article:

By looking for "suspected" safety information, MLM-AI can still tag abstracts with incomplete information without missing important safety data.
See our article discussing the use of AI for literature screening:
See also the model technical specification and intended uses in the FactSheet:
Finally, our technical paper goes into more detail on our AI development methodology and experimental results:
Medication synonyms presented in the Monitor Configuration pages are periodically (at least quarterly) collected from the following public sources: MeSH, ChemID, OpenFDA and the EMA list of marketed products.
Any modification to monitors applies only to Reviews created after the change.
Existing reviews always use the monitor configuration that was in place at the time of review creation. This ensures consistent behavior with the then-current settings. This also applies to "Scheduled" reviews that have been submitted and are already in the "Processing" tab.
- Maximum number of synonyms used in a monitor: 600
- Maximum configured monitors per account: 100
- Maximum configured special situation options: 50
- Maximum configured exclusion options: 50
Workflow
- Maximum configurable workflow decisions (system settings or monitor): 50
- Maximum number of teams assigned to a single user: 10
- Maximum configured teams per account: 50
- Maximum number of days in a review: 95
- Oldest start date of a review (number of days from today): 450
- Date where data starts in the MLM-AI article database: 01-JAN-2020
Attachments
- Maximum size of uploaded attachment: 50 MB
- Maximum results in a request: 1000
By default every user is subject to the following password rules:
- Minimum password length: 10 characters
- At least 2 numbers in password
- At least 1 lowercase letter
- At least 1 uppercase letter
- At least 1 special character ($&?, etc.)
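These rules can be expressed as a simple validator; the sketch below is illustrative only, not biologit's actual implementation:

```python
# Check a password against the default policy listed above.
import re

def meets_default_policy(password: str) -> bool:
    return (
        len(password) >= 10                                    # minimum length: 10
        and len(re.findall(r"\d", password)) >= 2              # at least 2 numbers
        and re.search(r"[a-z]", password) is not None          # at least 1 lowercase letter
        and re.search(r"[A-Z]", password) is not None          # at least 1 uppercase letter
        and re.search(r"[^a-zA-Z0-9]", password) is not None   # at least 1 special character
    )
```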