Users registered on the platform can request a temporary password to log in and then update the forgotten password. This is covered in Forgot password?.
Reviews and Monitors
Why have some abstracts in my weekly review already been screened?
Newly generated reviews may contain pre-screened abstracts that were tagged at the duplicate detection stage. A duplicate abstract corresponds to another abstract in the same review, or to an abstract in a previous review for the same product.
I updated a monitor, but the changes did not take effect on the monitor's reviews
Reviews always use the monitor settings in effect at the time of submission. This means any monitor changes made later will not be reflected. This is by design to ensure traceability: the system behaves according to the monitor settings that were current at the time the review was created.
Note that the same applies to scheduled reviews: once a review is scheduled (i.e. it appears in the "Processing" tab), its settings are fixed.
How does Duplicate Detection Work?
MLM-AI performs duplicate detection by ID, DOI and content. This is further detailed on the page:
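As a rough illustration of the idea (a minimal sketch with hypothetical field names, not the MLM-AI implementation; content-based matching is a further step not shown), ID/DOI duplicate detection can be pictured as checking new abstracts against identifiers that were already screened:

```python
# Minimal sketch of ID/DOI duplicate detection (illustrative only; field
# names are hypothetical and content-based matching is not shown).
def find_duplicates(new_abstracts, previously_screened):
    """Return abstracts whose ID or DOI matches one already screened."""
    seen_ids = {a["id"] for a in previously_screened}
    seen_dois = {a["doi"] for a in previously_screened if a.get("doi")}
    duplicates = []
    for abstract in new_abstracts:
        if abstract["id"] in seen_ids or abstract.get("doi") in seen_dois:
            duplicates.append(abstract)  # the earlier screening decision can be carried over
    return duplicates
```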
When will my review move from "In Progress" to "Completed"?
Reviews stay "In Progress" while there are still abstracts to be screened, and move to "Completed" once all abstracts have been screened, i.e. every abstract has been saved with a decision either by a user or by MLM-AI via an automated action.
Finally, if required, users can directly assign articles for QC using the assignment feature.
How does sampling of articles work?
Sampling Criteria
Sampling will return at least one article, irrespective of the sampling rate
Otherwise, the sample size is rounded up or down to the nearest integer, according to the sampling rate
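As an illustration of the two rules above (a hedged sketch only, not the platform's exact code), the sample size could be computed like this:

```python
# Illustrative sample-size rule: round to the nearest integer,
# but always return at least one article.
def sample_size(total_articles: int, sampling_rate: float) -> int:
    if total_articles == 0:
        return 0
    return max(1, round(total_articles * sampling_rate))

print(sample_size(37, 0.10))  # 4  (3.7 rounds up to 4)
print(sample_size(3, 0.10))   # 1  (at least one article is always sampled)
```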
Sampling and Reviews
In each review, articles are randomly shuffled into a list using a pre-defined method, and sampling draws articles from the shuffled list
This ensures sampling produces consistent results, allowing subsequent QC sessions to retrieve the same articles
In Find Articles, sampling applies to each review separately (following the sampling approach above), and the sampled articles are then aggregated and returned to the user
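The behaviour described above can be pictured with a small sketch (assuming the shuffle is seeded per review; the actual pre-defined shuffling method used by MLM-AI is not documented here):

```python
import random

# Illustrative only: a review-specific seed makes the shuffle repeatable,
# so subsequent QC sessions draw the same sample.
def sample_review_articles(article_ids, sampling_rate, review_id):
    rng = random.Random(review_id)   # same review -> same shuffle order
    shuffled = list(article_ids)
    rng.shuffle(shuffled)
    n = max(1, round(len(shuffled) * sampling_rate)) if shuffled else 0
    return shuffled[:n]

# Running this twice with the same review_id returns the same articles.
print(sample_review_articles(["A1", "A2", "A3", "A4", "A5"], 0.4, review_id=42))
```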
Data Sources and Searching Articles
What literature sources are present in MLM-AI?
MLM-AI ingests data from established and open access databases to achieve a broad literature reach. Currently, we automatically scan and upload the following sources:
PubMed - Comprehensive repository of biomedical and life sciences literature
Crossref - Worldwide open access repository of scientific literature
SciELO - Scientific life sciences literature focusing on Latin America and the Caribbean. The following SciELO repositories are ingested:
SciELO Brazil
SciELO Spain (from 01-JAN-2025)
DialNet - Open access repository focused on literature from Spain
IBECS and LILACS - Open access repositories with regional coverage in Spain, Latin America and the Caribbean
Note: As of May/2024, IBECS is no longer being updated with new publications at the source
Data start date for LILACS is Nov/2023
What is the search strategy employed by MLM-AI?
There are two options:
(1) When creating a monitor using a product name and synonyms, MLM-AI will match any of the key terms specified in the monitor configuration.
In the example below, the key terms associated with "etanercept" are OR'ed together, and searched across all sources specified in the monitor.
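For instance, a monitor for "etanercept" with two hypothetical synonyms would produce a single OR'ed query along these lines (an illustrative sketch only; the exact query syntax used by MLM-AI may differ):

```python
# Illustrative only: monitor key terms are combined with OR and searched
# across all sources configured in the monitor.
key_terms = ["etanercept", "Enbrel", "TNFR-Fc fusion protein"]  # hypothetical synonym list
query = " OR ".join(f'"{term}"' for term in key_terms)
print(query)  # "etanercept" OR "Enbrel" OR "TNFR-Fc fusion protein"
```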
(2) Monitors can also be created from a user-defined search string. From Article Search, build a query string to suit your needs and proceed to create a monitor using the "pill" button. The monitor query string is exactly the one defined by the user.
Should I include additional sources available in the Biologit Database in my searches?
Including more repositories will broaden the reach of scientific literature available to you, and can contribute to identifying more relevant safety information.
Additional databases can also help in meeting your regulatory requirements.
See also these resources for further information on this topic:
Will MLM-AI detect Suspected Adverse Event abstracts for my product?
MLM-AI models are product agnostic, having been trained with a cross-section of scientific literature covering a broad range of abstracts.
There are various strategies that can be employed for the safe implementation of AI-based screening. In AI-Enabled Screening Workflows we discuss techniques to incorporate AI into your workflows.
How accurate are MLM-AI models?
All MLM-AI models are configured for high recall on the target category of interest (Suspected AE, Suspected Case, etc.). This means AI tags produced by MLM-AI are "conservative": they tolerate some false positives so that missing a relevant abstract is highly unlikely.
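As a worked example of this trade-off (the numbers below are invented for illustration and are not MLM-AI performance figures; see the fact sheet referenced below for actual results):

```python
# Invented numbers, for illustration only (not MLM-AI performance figures).
true_positives = 98   # relevant abstracts correctly tagged (e.g. "Suspected AE")
false_negatives = 2   # relevant abstracts missed
false_positives = 30  # irrelevant abstracts tagged anyway: the "conservative" trade-off

recall = true_positives / (true_positives + false_negatives)     # 98 / 100 = 0.98
precision = true_positives / (true_positives + false_positives)  # 98 / 128 ≈ 0.77
print(f"recall={recall:.2f}, precision={precision:.2f}")
```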
There are many ways in which users can take advantage of the AI Tags produced by MLM-AI. Please see the section on AI-Enabled Screening Workflows to learn more and choose the approach that best suits your needs.
For performance results please refer to the latest fact sheet:
What is the Behavior of Tags in Articles where only the Title is Available?
Title-only citations may sometimes appear, depending on how journals decide to publish their data.
Without an abstract there is insufficient information to reliably make model predictions
Hence, all "No Abstract" articles are tagged as "Suspected AE", indicating they should be screened
"No Abstract" citations are also presented in a separate "No Abstarct" tab in Review results, to facilitate a separate screening workflow
Why are MLM-AI Tags Called "Suspected" Adverse Event, and Other "Suspected" Categories?
MLM-AI models were designed so that relevant safety information is not missed. With this in mind, we have labeled and curated datasets that capture how likely it is that safety information is present either in the abstract or in the full text of an article.
Consider the samples below extracted from abstracts. The most ambiguous cases may still be relevant, and will likely need review of the full text of the article:
By looking for "suspected" safety information, MLM-AI can still tag abstracts with incomplete information without missing important safety data.
Where can I learn more about the AI models powering AI Tags?
See our article discussing the use of AI for literature screening:
I made changes to a monitor configuration. When will I see them in Review results?
Any modifications to monitors apply only to Reviews created after the change.
Existing reviews always use the monitor configuration present at the time of review creation. This ensures consistent behavior with the then-current settings. This also applies to "Scheduled" reviews that have been submitted and are already in the "Processing" tab.
System Limits
Monitors
Maximum number of synonyms used in a monitor: 600
Maximum configured monitors per account: 100
Maximum configured special situation options: 50
Maximum configured exclusion options: 50
Workflow
Maximum configurable workflow decisions (system settings or monitor): 50
User Management
Maximum number of teams assigned to a single user: 10
Maximum configured teams per account: 50
Review Date Range
Maximum number of days in a review: 95
Oldest start date of a review (number of days from today): 450
Data Starting Point
Date from which data is available in the Biologit Database: 01-JAN-2020
By default every user is subject to the following password rules:
Minimum password length: 10 characters
At least 2 numbers in password
At least 1 lowercase letter
At least 1 uppercase letter
At least 1 special character ($, &, ?, etc.)
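As a quick illustration of these defaults (a hedged sketch only; the platform performs its own validation and the accepted special characters may differ):

```python
import re

# Illustrative check of the default password rules (not the platform's own validation).
def meets_default_policy(password: str) -> bool:
    return (
        len(password) >= 10                                    # minimum length: 10 characters
        and len(re.findall(r"\d", password)) >= 2              # at least 2 numbers
        and re.search(r"[a-z]", password) is not None          # at least 1 lowercase letter
        and re.search(r"[A-Z]", password) is not None          # at least 1 uppercase letter
        and re.search(r"[^A-Za-z0-9]", password) is not None   # at least 1 special character
    )

print(meets_default_policy("Password12!"))  # True
print(meets_default_policy("password12"))   # False (no uppercase, no special character)
```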
Can the default password policy change?
Yes, contact support to request different password rules for your account.
How do I configure password auto-lock after too many failed logins?
Password auto-lock prevents account break-in attempts by temporarily locking the account after too many failed logins. This is a system configuration that administrators can set up via the Settings page. Learn more on Security Settings.