# FAQ

## Login

### I forgot my password

Registered users can request a temporary password to log in and then set a new password. This is covered in [#forgot-password](https://docs.biologit.com/application/navigation#forgot-password "mention").

## Reviews and Monitors

### Why have some abstracts in my weekly review already been screened?

Newly generated reviews may contain pre-screened abstracts that were tagged at the [duplicate detection](https://docs.biologit.com/topics/duplicate-detection) stage. A duplicate abstract corresponds to another abstract in the same review, or in a previous review for the same product.

### I updated a monitor, but the changes did not take effect on the monitor reviews

Reviews always use the monitor settings in effect at the time of submission, so any monitor changes made later are not reflected. This is by design to ensure traceability: the system reflects the monitor settings that were current when the review was created.

Note the same applies to *scheduled* reviews: once a review is scheduled (i.e. it appears in the "[Processing](https://docs.biologit.com/application/reviews#the-processing-tab)" tab), the settings are fixed.

### How does Duplicate Detection Work?

MLM-AI performs duplicate detection **by ID, DOI and content**. This is further detailed in the page:

* [Duplicate Detection](https://docs.biologit.com/topics/duplicate-detection)

Articles are flagged as duplicates **only when**:

* They appear in the same review results. One of the articles will be automatically reviewed as "duplicate".
* An article is a duplicate of another article seen in a past review for the *same Monitor*.
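The scoping rules above can be sketched in a few lines. This is an illustrative sketch only, assuming DOI-keyed matching; the real system also matches by ID and content, as described in the linked page:

```python
def flag_duplicates(review_articles, monitor_history_dois):
    """Illustrative sketch of the flagging scope described above:
    an article is flagged when its DOI was already seen earlier in
    the same review, or appeared in a past review of the same Monitor."""
    seen = set()
    flagged = []
    for art in review_articles:
        if art["doi"] in seen or art["doi"] in monitor_history_dois:
            flagged.append(art["id"])
        seen.add(art["doi"])
    return flagged

history = {"10.1000/abc"}               # DOIs from past reviews of the same Monitor
review = [
    {"id": 1, "doi": "10.1000/xyz"},
    {"id": 2, "doi": "10.1000/xyz"},    # duplicate within the same review
    {"id": 3, "doi": "10.1000/abc"},    # duplicate of a past review
]
print(flag_duplicates(review, history))  # [2, 3]
```

Note that article `1` is not flagged: the first occurrence in a review is kept, and only its repeats (or matches against the monitor's history) are marked as duplicates.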

{% hint style="info" %}
Learn more about the techniques used in automated duplicate detection in: [**Feature spotlight: Duplicate Detection in Biologit MLM-AI**](https://www.biologit.com/post/feature-spotlight-duplicate-detection-in-biologit-mlm-ai)
{% endhint %}

### When will my review move from "In Progress" to "Completed"?

Reviews stay "In Progress" while there are still abstracts to be screened, and move to "Completed" once *all abstracts* have been screened, i.e. every abstract has been saved with a decision by a user or by an MLM-AI automated action.

* See also the [Review Lifecycle](https://docs.biologit.com/application/reviews#the-review-lifecycle) for more details

## Uploading Data

### What data can I upload to MLM-AI for screening?

Search results obtained from **PubMed, EMBASE, EBSCO, Adis, and ProQuest** can be directly uploaded into MLM-AI for screening.

The supported formats and help with the export steps are detailed in:

{% content-ref url="../application/upload-data" %}
[upload-data](https://docs.biologit.com/application/upload-data)
{% endcontent-ref %}

## Article Search

### Why do I see duplicates on my query results?

Duplicate detection is a feature of [Monitors](#monitor-configuration) and is active only when Reviews are requested. Note that [you can create monitors based on the query you formulate in the Article Search](#article-search) screen, and benefit from duplicate detection this way.

### How do I perform term expansion?

The query syntax of the MLM-AI database allows for wildcards that can expand common suffixes, for example:

* `toxic*` -> toxic, toxicity, toxicities
* `pregna*` -> pregnant, pregnancy, pregnancies
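Conceptually, a trailing wildcard works as a prefix (stem) match. The sketch below is illustrative only and is not the actual MLM-AI query engine:

```python
def wildcard_match(pattern: str, term: str) -> bool:
    """Illustrative sketch: a trailing '*' matches any term that
    begins with the given stem; otherwise match the term exactly."""
    if pattern.endswith("*"):
        return term.startswith(pattern[:-1])
    return term == pattern

terms = ["toxic", "toxicity", "toxicities", "toxin", "pregnancy"]
print([t for t in terms if wildcard_match("toxic*", t)])
# ['toxic', 'toxicity', 'toxicities']
```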

### I am using "-" in my search terms, and got more results than expected

In certain cases the `-` is ignored, to facilitate matching of hyphenated and non-hyphenated variants of the same term, such as "where by" and "where-by".
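One way to picture this behavior is folding the hyphen to a space before matching, so both variants normalize to the same form. This is an illustrative sketch of the idea, not the actual matching logic:

```python
def fold_hyphens(term: str) -> str:
    """Illustrative: treating '-' as a space folds hyphenated and
    non-hyphenated variants to the same form, which is why a '-'
    in a query can match more results than expected."""
    return term.replace("-", " ").lower()

print(fold_hyphens("where-by") == fold_hyphens("where by"))  # True
```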

### Are there limits to results in Article Search?

Yes, a maximum of `1000` hits can be retrieved from a single query. Try reducing the date range or refining the search criteria.

## Screening Articles

### Does MLM-AI support E2B files?

Yes, once [E2B settings](https://docs.biologit.com/configuration/settings#e2b-settings) are configured for your system, users [can produce E2B XML files in R2 format](https://docs.biologit.com/application/article-screening#e2b) for articles that qualify for ICSR.

### Does MLM-AI support quality control (QC) workflows?

Quality control with [various levels of sampling](https://docs.biologit.com/application/review-details#tab-filter-sampling-and-sorting-choices-follow-user-during-screening) is supported. Users can also [configure their workflows](https://docs.biologit.com/configuration/custom-workflows) to clearly indicate QC status.

Finally, if required users can directly assign articles for QC using the [assignment feature](https://docs.biologit.com/application/review-details/work-assignment).

### How does sampling of articles work?

**Sampling Criteria**

* Sampling will return at least one article, irrespective of sampling rate
* Otherwise, sampling will round up or down to the nearest integer value, according to sampling rate

**Sampling and Reviews**

* In each review, articles are randomly shuffled into a list using a pre-defined method, and sampling draws articles from the shuffled list
  * This ensures sampling produces consistent results, allowing subsequent QC sessions to retrieve the same articles
* In Find Articles, sampling applies to each review separately (following the sampling approach above); the sampled articles are then aggregated and returned to the user
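The rules above can be sketched as follows. This is a minimal sketch under stated assumptions: the seed value, stable pre-sort, and `round()` tie-breaking are illustrative choices, not the platform's actual shuffle method:

```python
import random

def sample_articles(article_ids, rate, review_seed):
    """Illustrative sketch of the sampling rules described above:
    - shuffle deterministically (seeded per review) so repeat QC
      sessions retrieve the same articles
    - round the sample size to the nearest integer, but never below 1"""
    shuffled = sorted(article_ids)            # start from a stable order
    random.Random(review_seed).shuffle(shuffled)
    n = max(1, round(len(shuffled) * rate))
    return shuffled[:n]

ids = list(range(1, 21))                      # 20 articles in the review
first = sample_articles(ids, 0.10, review_seed=42)
second = sample_articles(ids, 0.10, review_seed=42)
print(first == second)                        # True: consistent across QC sessions
print(len(first))                             # 2 articles at a 10% rate
```

Note how a 5% rate over 3 articles would round to zero, but the "at least one article" rule still returns a single article.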

## Data Sources and Searching Articles

### What literature sources are present in MLM-AI?

MLM-AI ingests data from established and open access databases to achieve a broad literature reach. Currently, we automatically scan and upload the following sources:

* [**PubMed** ](https://pubmed.ncbi.nlm.nih.gov/)- Comprehensive repository of biomedical and life sciences literature
* [**Crossref** ](https://www.crossref.org/services/)- Worldwide open access repository of scientific literature
* The [**Directory of Open Access Journals**](https://doaj.org/) (DOAJ) - Worldwide Open Access literature
* [**SciELO** ](https://www.scielo.br/)- Scientific life sciences literature focusing on Latin America and the Caribbean. The following Scielo repositories are ingested:
  * SciELO Brazil
  * SciELO Spain (from 01-JAN-2025)
* [**DialNet** ](https://dialnet.unirioja.es/)- open access repository focused on literature from Spain
* [**IBECS and LILACS**](https://red.bvsalud.org/en/vhl-search-portal-collection/) - open access repositories with regional coverage in Spain, Latin America and the Caribbean
  * **Note**: As of May/2024 the IBECS source is no longer being updated with new publications at the source
  * Data start date for LILACS is Nov/2023

### What is the search strategy employed by MLM-AI?

There are two options:

(1) When creating a monitor using product name and synonyms, MLM-AI will match any of the key terms specified in the [monitor configuration](https://docs.biologit.com/configuration/monitor-configuration).

In the example below, the key terms associated with "etanercept" are **OR**'ed together, and searched across all sources specified in the monitor.

<figure><img src="https://1171269993-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F6pIe8UPxEvsbrxTVXkGi%2Fuploads%2FgzYQVfQ8EIeROk8W5Miz%2Fimage.png?alt=media&#x26;token=f8f9e95e-ab8e-4311-abbc-5f92b5493bf5" alt=""><figcaption></figcaption></figure>
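The effect of OR'ing key terms can be sketched as below. This is an illustrative sketch only, not the actual MLM-AI query builder, and the two-term synonym list is a shortened example (Enbrel is a marketed brand name of etanercept), not a full configured set:

```python
def build_monitor_query(key_terms):
    """Illustrative sketch: the monitor's key terms are OR'ed together
    into one expression, searched across all sources in the monitor."""
    return " OR ".join(f'"{term}"' for term in key_terms)

print(build_monitor_query(["etanercept", "Enbrel"]))
# "etanercept" OR "Enbrel"
```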

(2) Monitors can also be created from a user-defined search string. From [Article Search](https://docs.biologit.com/application/article-search), create a query string to suit your needs and proceed to create a monitor using the "pill" button. The monitor then uses exactly the query string you created.

{% embed url="https://youtu.be/B0WYiHC30Hk" %}

### Should I include additional sources available in the Biologit Database in my searches?

Including more repositories broadens the reach of scientific literature available to you and can help identify more relevant safety information.

Additional databases can also help in meeting your regulatory requirements.

See also these resources for further information on this topic:

{% hint style="info" %}
[Case study using DOAJ and Crossref for safety surveillance](https://www.biologit.com/post/broader-searches-for-adverse-event-articles-a-case-study-with-doaj-and-crossref)
{% endhint %}

{% hint style="info" %}
[GVP-VI Guidelines and biologit MLM-AI](https://www.biologit.com/post/gvp-vi-guidelines-and-biologit-mlm-ai)
{% endhint %}

## AI Tags

### Will MLM-AI detect suspected adverse event abstracts for my product?

MLM-AI models are product agnostic, having been trained with a cross-section of scientific literature covering a broad range of abstracts.

There are various strategies that can be employed for the safe implementation of AI-based screening. In [AI-Enabled Screening Workflows](https://docs.biologit.com/topics/ai-enabled-screening-workflows) we discuss techniques to leverage AI into your workflows.

### How accurate are MLM-AI models?

All MLM-AI models are configured for **high recall** on the target category of interest (Suspected AE, Suspected Case, etc.). This means AI tags produced by MLM-AI are "conservative": they tolerate some false positives so that missing a relevant abstract is highly unlikely.

There are many ways in which users can take advantage of AI Tags produced by MLM-AI. Please see the section on [AI-Enabled Screening Workflows](https://docs.biologit.com/topics/ai-enabled-screening-workflows) to learn more and use one that best suits your needs.

For performance results please refer to the latest fact sheet:

* <https://www.biologit.com/post/towards-ai-transparency-with-model-factsheets>

### Are AI-based tags safe to use? How can I take advantage of them?

The best resource on how to safely integrate AI tags is our [Guide to AI-Enabled Screening Workflows](https://docs.biologit.com/topics/ai-enabled-screening-workflows).

### What is the Behavior of Tags in Articles where only Title is Available?

Title-only citations may sometimes appear, depending on how journals decide to publish their data.

* Without an abstract there is insufficient information to reliably make model predictions
* Hence, all "No Abstract" articles are tagged as "Suspected AE", indicating they should be screened
* "No Abstract" citations are also presented in [a separate "No Abstract" tab](https://docs.biologit.com/application/review-details#tabs) in Review results, to facilitate a separate screening workflow

### Why are MLM-AI Tags Called "Suspected" adverse event, and other "Suspected" categories?

MLM-AI models were designed so that relevant safety information is not missed. With this in mind, we have labeled and curated datasets that capture how likely safety information is present either in the abstract or in the full text of an article.

Consider the samples below extracted from abstracts. The most ambiguous cases may still be relevant, and will likely need review of the full text of the article:

![](https://1171269993-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MWAcqCJbPL5YYXAvC98%2F-MX-FvF3AzhxIWyNHE3O%2F-MX1GKiMkmcIbdwB0luY%2Fimage.png?alt=media\&token=b647e4d8-045a-454f-b851-6d6a4cb49aa4)

By looking for "suspected" safety information, MLM-AI can still tag abstracts with incomplete information without missing important safety data.

### Where can I learn more about the AI models powering AI Tags?

See our article discussing the use of AI for literature screening:

* <https://www.biologit.com/post/fast-mlm-with-ai>

See also the model technical specification and intended uses in the FactSheet:

* <https://www.biologit.com/post/towards-ai-transparency-with-model-factsheets>

Finally, our technical paper goes into more detail on our AI development methodology and experimental results:

* [Validation and Transparency in AI systems for pharmacovigilance: a case study applied to the medical literature monitoring of adverse events](https://arxiv.org/ftp/arxiv/papers/2201/2201.00692.pdf)

## Monitor Configuration

### How does the synonyms expansion work?

Medication synonyms presented in the [Monitor Configuration](#monitor-configuration) pages are periodically (at least quarterly) collected from the following public sources: [MeSH](https://www.nlm.nih.gov/databases/download/mesh.html), [ChemID](https://www.nlm.nih.gov/databases/download/chemidplus.html), [OpenFDA](https://open.fda.gov/data/downloads/) and the [EMA list of marketed products](https://www.ema.europa.eu/en/human-regulatory/post-authorisation/data-medicines-iso-idmp-standards/public-data-article-57-database).

### I made changes to a monitor configuration. When will I see them in Review results?

Any modification to a monitor applies *only* to Reviews created *after* the change.

Existing reviews always use the monitor configuration present *at the time of review creation*. This ensures consistent behavior with the then-current settings. This also applies to "Scheduled" reviews that have been submitted and are already in the "Processing" tab.

## System Limits

#### Monitors

* Maximum number of synonyms used in a monitor: `600`
* Maximum configured monitors per account: `100`
* Maximum configured special situation options: `50`
* Maximum configured exclusion options: `50`

#### Workflow

* Maximum configurable workflow decisions (system settings or monitor): `50`

#### User Management

* Maximum number of teams assigned to a single user: `10`
* Maximum configured teams per account: `50`

#### Review Date Range

* Maximum number of days in a review: `95`
* Oldest start date of a review (number of days from today): `450`

#### Data Starting Point

* Earliest date covered by the Biologit Database: `01-JAN-2020`
  * Note: exceptions apply. [See also FAQ on available sources](#data-sources-and-searching-articles)
* Note: this limit does not apply to [uploaded reviews](https://docs.biologit.com/application/upload-data).

#### Attachments

* Maximum size of uploaded attachment: `50 MB`

#### Article Search

* Maximum results in a request: `1000`

## Security

### What is the default password policy?

By default every user is subject to the following password rules:

* Minimum password length: `10` characters
* At least `2` numbers in password
* At least `1` lowercase letter
* At least `1` uppercase letter
* At least `1` special character (`$`, `&`, `?`, etc.)
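The default rules above can be expressed as a simple pre-check. This is an illustrative sketch only, useful for understanding the policy; it assumes `string.punctuation` approximates the platform's special-character set and is not the platform's actual enforcement logic:

```python
import string

RULES = [
    ("at least 10 characters",  lambda p: len(p) >= 10),
    ("at least 2 numbers",      lambda p: sum(c.isdigit() for c in p) >= 2),
    ("at least 1 lowercase",    lambda p: any(c.islower() for c in p)),
    ("at least 1 uppercase",    lambda p: any(c.isupper() for c in p)),
    ("at least 1 special char", lambda p: any(c in string.punctuation for c in p)),
]

def password_violations(password):
    """Return the default-policy rules a candidate password fails."""
    return [name for name, check in RULES if not check(password)]

print(password_violations("Str0ng&Pa55"))  # [] - satisfies all default rules
print(password_violations("Ab1!"))         # fails length and 2-numbers rules
```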

### Can the default password policy change?

Yes, [contact support](https://docs.biologit.com/more-help/support-and-platform-details) to request different password rules for your account.

### How to configure password auto-lock after too many failed logins?

Password auto-lock prevents account break-in attempts by temporarily locking the account after too many failed logins. This is a system configuration that administrators can set up via the Settings page. Learn more on [Security Settings](https://docs.biologit.com/configuration/settings#security).
