FAQ

Frequently asked questions about biologit MLM-AI


Reviews and Monitors

Why have some abstracts in my weekly review already been screened?

Newly generated reviews may contain pre-screened abstracts that were tagged at the duplicate detection stage. Duplicate abstracts relate to a corresponding abstract in the same review, or in a previous review for the same product.

I updated a monitor, but the changes did not take effect on the monitor reviews

Reviews always use the monitor settings in effect at the time of submission; any monitor changes performed later will not be reflected. This is by design, to ensure traceability: the system behaves according to the monitor settings current at the time the review was created.

Note the same applies to scheduled reviews: once a review is scheduled (i.e. it appears in the "Processing" tab), its settings are fixed.

How does Duplicate Detection Work?

MLM-AI performs duplicate detection by ID, DOI and content. This is further detailed in the Duplicate Detection page.

Articles are flagged as duplicates only when:

  • They appear in the same review results. One of the articles will be automatically reviewed as "duplicate".

  • An article is a duplicate of another article seen in a past review for the same Monitor.

Learn more about the techniques used in automated duplicate detection in: Feature spotlight: Duplicate Detection in Biologit MLM-AI
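
For intuition only, here is a minimal sketch of what matching by ID, DOI and content can look like. The record fields, the similarity measure and the threshold are assumptions for illustration, not biologit's actual implementation:

```python
from difflib import SequenceMatcher

def normalize_doi(doi: str) -> str:
    # DOIs are case-insensitive; strip a resolver prefix if present.
    return doi.lower().removeprefix("https://doi.org/").strip()

def is_duplicate(a: dict, b: dict, content_threshold: float = 0.95) -> bool:
    """Flag two article records as duplicates by ID, DOI, or content.

    `a` and `b` are hypothetical records with 'source_id', 'doi',
    'title' and 'abstract' keys; the real matching rules differ.
    """
    # 1. Same source identifier (e.g. a PubMed ID).
    if a.get("source_id") and a.get("source_id") == b.get("source_id"):
        return True
    # 2. Same normalized DOI.
    if a.get("doi") and normalize_doi(a["doi"]) == normalize_doi(b.get("doi", "")):
        return True
    # 3. Near-identical title + abstract text.
    text_a = f'{a.get("title", "")} {a.get("abstract", "")}'.lower()
    text_b = f'{b.get("title", "")} {b.get("abstract", "")}'.lower()
    return SequenceMatcher(None, text_a, text_b).ratio() >= content_threshold
```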

When will my review move from "In Progress" to "Completed"?

Reviews stay "In Progress" while there are still abstracts to be screened, and move to "Completed" once all abstracts have been screened, i.e. every abstract was saved with a decision by a user or by MLM-AI via an automated action.

See also the Review Lifecycle page for more details.

Uploading Data

What data can I upload to MLM-AI for screening?

Search results obtained from PubMed, EMBASE, EBSCO, Adis, and ProQuest can be directly uploaded into MLM-AI for screening.

The supported formats and help with the export steps are detailed in Uploading Review Data.

Article Search

Why do I see duplicates on my query results?

Duplicate detection is a feature of Monitors and is active only when Reviews are requested. Note that you can create monitors based on the query you formulate in the Article Search screen, and benefit from duplicate detection this way.

How do I perform term expansion?

The query syntax of the MLM-AI database allows for wildcards that can expand common suffixes, for example:

  • toxic* -> toxic, toxicity, toxicities

  • pregna* -> pregnant, pregnancy, pregnancies
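
As a rough illustration of the idea (not the search engine's implementation, and the vocabulary is invented), a trailing "*" behaves like a prefix match over indexed terms:

```python
def expand_wildcard(term: str, vocabulary: list[str]) -> list[str]:
    """Expand a trailing-asterisk wildcard into matching vocabulary terms."""
    if term.endswith("*"):
        prefix = term[:-1].lower()
        return [w for w in vocabulary if w.lower().startswith(prefix)]
    return [term]

# Toy vocabulary standing in for the indexed terms.
vocab = ["toxic", "toxicity", "toxicities", "pregnant", "pregnancy", "pregnancies"]
print(expand_wildcard("toxic*", vocab))   # ['toxic', 'toxicity', 'toxicities']
print(expand_wildcard("pregna*", vocab))  # ['pregnant', 'pregnancy', 'pregnancies']
```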

I am using "-" in my search terms, and got more results than expected

In certain cases the "-" is ignored to facilitate matching of commonly (non-)hyphenated terms such as "where by" and "where-by".

Are there limits to results in Article Search?

Yes, a maximum of 1000 hits can be retrieved from a single query. Try reducing the date range or refining the search criteria.

Screening Articles

Does MLM-AI support E2B files?

Yes. Once E2B settings are configured for your system, users can produce E2B XML files in R2 format for articles that qualify for ICSR.

Does MLM-AI support quality control (QC) workflows?

Quality control with various levels of sampling is supported. Users can also configure their workflows to clearly indicate QC status.

Finally, if required, users can directly assign articles for QC using the assignment feature.

Data Sources and Searching Abstracts

What literature sources are present in MLM-AI?

MLM-AI ingests data from established and open access databases to achieve a broad literature reach. Currently, we automatically scan and upload the following sources:

  • PubMed - Comprehensive repository of biomedical and life sciences literature

  • Crossref - Worldwide open access repository of scientific literature

  • The Directory of Open Access Journals (DOAJ) - Worldwide Open Access literature

  • SciELO - Scientific life sciences literature focusing on Latin America and the Caribbean

  • DialNet - Open access repository focused on literature from Spain

  • IBECS and LILACS - Open access repositories with regional coverage in Spain, Latin America and the Caribbean

    • Note: As of May 2024, the IBECS source is no longer updated with new publications at the source.

What is the search strategy employed by MLM-AI?

There are two options:

(1) When creating a monitor using product name and synonyms, MLM-AI will match any of the key terms specified in the monitor configuration.

In the example below, the key terms associated with "etanercept" are OR'ed together, and searched across all sources specified in the monitor.
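
A minimal sketch of this OR'ing, with an invented synonym list (illustrative only, not the product's query builder):

```python
# Hypothetical key terms configured for an "etanercept" monitor.
synonyms = ["etanercept", "Enbrel", "TNR-001"]

# OR the key terms together into one query string.
query = " OR ".join(f'"{term}"' for term in synonyms)
print(query)  # "etanercept" OR "Enbrel" OR "TNR-001"
```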

(2) Monitors can also be created from a search string created by the user. From Article Search, create a query string to suit your needs and proceed to create a monitor using the "pill" button. The monitor query string is exactly the one created by the user.

Should I include additional sources available in the Biologit Database in my searches?

Including more repositories will broaden the reach of scientific literature available to you, and can help identify more relevant safety information.

Additional databases can also help in meeting your regulatory requirements.

See also these resources for further information on this topic:

  • Case study using DOAJ and Crossref for safety surveillance

  • GVP-VI Guidelines and biologit MLM-AI

AI Tags

Will MLM-AI detect Suspected Adverse Event abstracts for my product?

Yes. MLM-AI models are product-agnostic, having been trained on a cross-section of scientific literature covering a broad range of abstracts.

How accurate are MLM-AI models?

All MLM-AI models are configured for high recall on the target category of interest (Suspected AE, Suspected Case, etc.). This means the AI tags produced by MLM-AI are "conservative": the models tolerate some false positives so that missing a relevant abstract is highly unlikely.
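
As a generic illustration of what tuning for high recall means (not MLM-AI's actual procedure, and the data below is invented): pick the decision threshold that keeps recall at or above a target, accepting the false positives that come with it.

```python
import math

def threshold_for_recall(scores, labels, target_recall=0.99):
    """Highest decision threshold whose recall is >= target_recall.

    scores: model probabilities for the positive class (e.g. Suspected AE);
    labels: 0/1 ground truth from a labeled validation set.
    """
    positive_scores = sorted(
        (s for s, y in zip(scores, labels) if y == 1), reverse=True
    )
    needed = math.ceil(len(positive_scores) * target_recall)
    return positive_scores[needed - 1]

# Invented toy data: 4 relevant abstracts, 4 irrelevant ones.
scores = [0.95, 0.90, 0.40, 0.30, 0.80, 0.60, 0.20, 0.10]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
# Keeping recall at 99% forces the threshold down to 0.30, which also
# lets two false positives (0.80, 0.60) through -- the accepted trade-off.
print(threshold_for_recall(scores, labels))  # 0.30
```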

For performance results please refer to the latest fact sheet: https://www.biologit.com/post/towards-ai-transparency-with-model-factsheets

Are AI-based tags safe to use? How can I take advantage of them?

There are various strategies that can be employed for the safe implementation of AI-based screening, and many ways in which users can take advantage of the AI tags produced by MLM-AI. In AI-Enabled Screening Workflows we discuss techniques to leverage AI in your workflows; use the approach that best suits your needs.

The best resource on how to safely integrate AI tags is our Guide to AI-Enabled Screening Workflows.

What is the Behavior of Tags in Articles where only Title is Available?

Title-only citations may sometimes appear, depending on how journals decide to publish their data.

  • Without an abstract there is insufficient information to reliably make model predictions

  • Hence, all "No Abstract" articles are tagged as "Suspected AE", indicating they should be screened

"No Abstract" citations are also presented in a separate "No Abstract" tab in Review results, to facilitate a separate screening workflow.

Why are MLM-AI Tags Called "Suspected" Adverse Event, and Other "Suspected" Categories?

MLM-AI models were designed so that relevant safety information is not missed. With this in mind, we have labeled and curated datasets that capture how likely it is that safety information is present either in the abstract or in the full text of an article.

Consider the samples below extracted from abstracts. The most ambiguous cases may still be relevant, and will likely need review of the full text of the article:

By looking for "suspected" safety information, MLM-AI can still tag abstracts with incomplete information without missing important safety data.

Where can I learn more about the AI models powering AI Tags?

See our article discussing the use of AI for literature screening: https://www.biologit.com/post/fast-mlm-with-ai

See also the model technical specification and intended uses in the FactSheet: https://www.biologit.com/post/towards-ai-transparency-with-model-factsheets

Finally, our technical paper goes into more detail on our AI development methodology and experimental results: Validation and Transparency in AI systems for pharmacovigilance: a case study applied to the medical literature monitoring of adverse events.

Monitor Configuration

How does the synonyms expansion work?

Medication synonyms presented in the Monitor Configuration pages are periodically (at least quarterly) collected from the following public sources: MeSH, ChemID, OpenFDA, and the EMA list of marketed products.

I made changes to a monitor configuration. When will I see them in Review results?

Any modification to monitors applies only to Reviews created after the change.

Existing reviews always use the monitor configuration present at the time of review creation. This ensures consistent behavior with the then-current settings. This also applies to "Scheduled" reviews that have been submitted and are already in the "Processing" tab.
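
As a design illustration only (not MLM-AI's actual code), this behavior matches a snapshot-on-creation pattern: the review stores its own frozen copy of the monitor settings, so later monitor edits cannot affect it.

```python
from copy import deepcopy
from dataclasses import dataclass

@dataclass
class Monitor:
    name: str
    synonyms: list[str]

@dataclass
class Review:
    # Frozen copy of the monitor settings taken at creation time,
    # so later monitor edits cannot change how this review behaves.
    monitor_snapshot: Monitor

def create_review(monitor: Monitor) -> Review:
    return Review(monitor_snapshot=deepcopy(monitor))

monitor = Monitor("etanercept", ["etanercept", "Enbrel"])
review = create_review(monitor)
monitor.synonyms.append("added-later")   # edit after the review exists...
print(review.monitor_snapshot.synonyms)  # ...snapshot is still ['etanercept', 'Enbrel']
```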

System Limits

Monitors

  • Maximum number of synonyms used in a monitor: 600

  • Maximum configured monitors per account: 100

  • Maximum configured special situation options: 50

  • Maximum configured exclusion options: 50

Workflow

  • Maximum configurable workflow decisions (system settings or monitor): 50

User Management

  • Maximum number of teams assigned to a single user: 10

  • Maximum configured teams per account: 50

Review Date Range

  • Maximum number of days in a review: 95

  • Oldest start date of a review (number of days from today): 450

Note: this limit does not apply to uploaded reviews.

Data Starting Point

  • Date where data starts in the MLM-AI article database: 01-JAN-2020

Attachments

  • Maximum size of uploaded attachment: 50 MB

Article Search

  • Maximum results in a request: 1000

Security

What is the default password policy?

By default every user is subject to the following password rules:

  • Minimum password length: 10 characters

  • At least 2 numbers in password

  • At least 1 lowercase letter

  • At least 1 uppercase letter

  • At least 1 special character ($, &, ?, etc.)
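
For illustration, the rules above expressed as a single check (a sketch; the exact set of accepted special characters is an assumption, and MLM-AI enforces these rules server-side):

```python
import re

def meets_default_policy(password: str) -> bool:
    """Check a password against the default rules listed above."""
    return (
        len(password) >= 10
        and len(re.findall(r"\d", password)) >= 2      # at least 2 numbers
        and re.search(r"[a-z]", password) is not None  # at least 1 lowercase
        and re.search(r"[A-Z]", password) is not None  # at least 1 uppercase
        and re.search(r"[^A-Za-z0-9]", password) is not None  # 1 special char
    )

print(meets_default_policy("Str0ng&Pa55"))  # True
print(meets_default_policy("weakpass12"))   # False: no uppercase or special
```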

Can the default password policy change?

Yes, contact support to request different password rules for your account.

How to configure password auto-lock after too many failed logins?

Password auto-lock prevents account break-in attempts by temporarily locking the account after too many failed logins. This is a system configuration that administrators can set up via the Settings page. Learn more on Security Settings.
