Learn more about how MLM-AI works and key concepts that deliver more efficient workflows.
Literature Monitoring with MLM-AI
To introduce how MLM-AI makes Medical Literature Monitoring more efficient, consider the diagram below, which outlines a traditional workflow for this process:
A typical literature monitoring workflow comprises these stages:
One or more databases (PubMed, Embase, Google Scholar, etc.) are searched, each with their specific query language.
Results (title and abstract) are extracted and collated together.
Articles are de-duplicated, removing both repeated articles and articles already seen on past runs.
Abstracts are then screened for relevant safety events.
If the screen is positive, the article may then proceed to a full-text review.
Results are QA'ed and exported for further processing downstream.
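The stages above can be sketched as a small pipeline. This is purely illustrative: the function and field names are hypothetical and do not correspond to any database API or to MLM-AI itself.

```python
# Illustrative sketch of the traditional workflow; all names are hypothetical.

def deduplicate(articles):
    """Remove repeats by (title, abstract), keeping the first occurrence."""
    seen, unique = set(), []
    for a in articles:
        key = (a["title"].strip().lower(), a["abstract"].strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append(a)
    return unique

def screen(article):
    """Toy abstract screen: flag abstracts mentioning safety-related terms."""
    terms = ("adverse", "toxicity", "overdose")
    return any(t in article["abstract"].lower() for t in terms)

def run_monitoring(search_results_per_source):
    """Collate results from all sources, de-duplicate, and screen."""
    collated = [a for results in search_results_per_source for a in results]
    unique = deduplicate(collated)
    # Positives would proceed to full-text review, QA, and export.
    return [a for a in unique if screen(a)]
```

In practice each stage (querying, screening, QA) is far more involved; the sketch only shows how the stages chain together.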
All of the above steps were taken into account when building MLM-AI's integrated workflow. MLM-AI updates the traditional workflow as follows:
MLM-AI automates searching and de-duplication of results from multiple sources, presenting a single, clean view of all retrieved abstracts.
MLM-AI searches from a growing number of sources automatically, including PubMed, DOAJ, and Crossref. See the FAQ on data sources and the latest release notes for more information.
Alternatively, users can also upload their own search results from sources that support exporting into a reference management format, and perform the remaining workflow from inside MLM-AI.
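As an illustration of such an upload, RIS is one widely used reference-management export format (whether MLM-AI accepts RIS specifically is an assumption here, to be confirmed against the product). A minimal sketch of pulling titles and abstracts out of a RIS export:

```python
def parse_ris(text):
    """Minimal RIS parser: collects TI (title) and AB (abstract) per record.

    Real RIS files carry many more tags (AU, PY, DO, ...); this sketch keeps
    only what an abstract-screening workflow needs.
    """
    records, current = [], {}
    for line in text.splitlines():
        if line.startswith("TI  - "):
            current["title"] = line[6:].strip()
        elif line.startswith("AB  - "):
            current["abstract"] = line[6:].strip()
        elif line.startswith("ER  -"):  # end-of-record marker
            records.append(current)
            current = {}
    return records
```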
Using AI Tags (see below), MLM-AI also enables workflows that rank and filter out irrelevant articles based on predictions from our AI models. This further reduces the time spent processing large volumes of inbound articles.
Finally, the remaining steps of the workflow are all performed from within a single tool, with all actions recorded into an audit log and with full visibility by the entire team.
Monitors and Reviews
In MLM-AI, the screening process has the following steps:
Users configure a monitor defining the search criteria for a product, literature sources to search, and workflow details.
Once configured, results are produced periodically, de-duplicated and presented to users as Reviews. They are also tagged by our AI models for faster screening (see AI Tags below).
Depending on the desired customer workflow, results may also be filtered according to their AI tag.
Users can then screen abstracts in reviews according to their workflow, and export results upon screening completion.
The next section describes how you can leverage the AI models in MLM-AI to achieve a more effective screening process.
AI Tags
Every abstract presented to users receives a number of tags based on predictions made by MLM-AI. These tags can be used to filter and rank results, facilitating more efficient workflows.
Example tags associated with an abstract:
The abstract or full text may contain the description of an adverse event; MLM-AI therefore flags the abstract as a candidate for further screening.
The abstract or full text may contain the description of a case (identifiable patient).
The abstract describes animal or in-vitro studies.
Any detected special situation (elderly, pediatric, pregnancy) is also flagged as a tag.
A confirmed patient mention (e.g., "a 72-year-old woman").
Why "suspected" adverse events?
MLM-AI models were built to deliver efficiencies while maintaining a high hit rate on potentially relevant events. A "suspected" event indicates that an event may be described in the full text, or alluded to in the abstract, and that MLM-AI considered it worth further inspection.
Also note that suspected events are drug-agnostic: they can refer to any drug or treatment mentioned in the abstract.
MLM-AI performs duplicate detection by content, linking duplicate articles that share the same title and abstract. This is useful when the same article is published in multiple venues with different dates and IDs.
Articles may be flagged as duplicates inside your review only when:
They appear in the same review results. One of the articles will be automatically reviewed as "duplicate". If the duplicate articles come from different sources, PubMed is given preference as the "primary" source (i.e., the source not flagged as a duplicate).
The article duplicates an article that appeared in the results of a past review for the same monitor.
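A content-based linking step of this kind could be sketched as follows. This is an illustrative approximation, not MLM-AI's actual implementation, and the field names are hypothetical:

```python
def link_duplicates(articles):
    """Group articles by normalized (title, abstract) content.

    Within each group one article is kept as "primary" and the rest are
    marked as duplicates; when a group spans sources, the PubMed copy wins.
    """
    def key(a):
        return (a["title"].strip().lower(), a["abstract"].strip().lower())

    groups = {}
    for a in articles:
        groups.setdefault(key(a), []).append(a)

    for group in groups.values():
        # Prefer a PubMed copy as the primary source when present.
        primary = next((a for a in group if a["source"] == "PubMed"), group[0])
        for a in group:
            a["duplicate"] = a is not primary
    return articles
```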
"Duplicate" tabs in review details.
Filtering by Tags
Monitors can be configured to suit more automated workflows, for example by automatically filtering out articles based on their AI tags.
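For illustration, a tag-based filter might look like the sketch below. The tag names here are hypothetical, not MLM-AI's actual labels:

```python
def filter_by_tags(abstracts, exclude=(), require=()):
    """Toy tag filter: drop abstracts carrying any excluded tag and keep
    only those carrying all required tags, shrinking the review queue."""
    kept = []
    for a in abstracts:
        tags = set(a["tags"])
        if tags & set(exclude):
            continue  # carries an excluded tag
        if not set(require) <= tags:
            continue  # missing a required tag
        kept.append(a)
    return kept
```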
How MLM-AI Handles Article Dates
The start and end dates of search results are important parameters when performing periodic MLM. In general, there are two important dates to consider:
The publication date of an article - the date the article was published, according to the source journal.
The date an article becomes visible in a database - i.e., the article's indexation date, or "first seen" date.
For example, an article may be published by the journal today, but it may take a few days or weeks before it makes its way to the PubMed database, where it would appear in search results. The publication date, however, remains unchanged.
The diagram below illustrates the effects of this publication delay: a weekly search strategy that uses only the publication date may miss this article entirely.
To address this issue, MLM-AI always searches for articles using the "first seen" date at the source repository. This ensures that an article is retrieved if it first appeared in the source within the required date range, even when its indexation is delayed.
In addition, MLM-AI provides monitoring configuration options to fine-tune the strategy:
Include abstracts found by publication date - also retrieves articles whose publication date falls within the same date range. This option is useful for maintaining correspondence with existing manual search strategies.
Discard publication dates older than 60 days - if the difference between an article's first-seen date and its publication date exceeds 60 days, the result is discarded. This option is useful for avoiding spurious results from "blanket" republication of journal back catalogues going back many years.
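The date logic described above can be sketched as follows. This is an illustrative approximation with hypothetical field names, not MLM-AI's actual implementation:

```python
from datetime import date, timedelta

def select_articles(articles, window_start, window_end,
                    include_by_publication_date=False,
                    discard_older_than_days=60):
    """Match articles primarily on their first-seen (indexation) date,
    optionally also on publication date, and optionally discard results
    whose indexation lag exceeds `discard_older_than_days`."""
    selected = []
    for a in articles:
        in_by_seen = window_start <= a["first_seen"] <= window_end
        in_by_pub = (include_by_publication_date
                     and window_start <= a["published"] <= window_end)
        if not (in_by_seen or in_by_pub):
            continue
        lag = a["first_seen"] - a["published"]
        if (discard_older_than_days is not None
                and lag > timedelta(days=discard_older_than_days)):
            continue  # "blanket" republication of an old article
        selected.append(a)
    return selected
```

Note how an article published just before the window but indexed inside it is still caught, which is the point of searching by first-seen date.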