Deep Learning Identifies PPMS Patients Who May Respond to Treatment


Deep learning identified subgroups of patients with primary progressive multiple sclerosis (PPMS) who may respond to disease-modifying therapy, researchers said.

A trained model separated responders and non-responders across a range of predicted effect sizes, reported Jean-Pierre Falet, MD, of McGill University in Montreal, Canada, at the ACTRIMS Forum 2022, the annual meeting of the Americas Committee for Treatment and Research in Multiple Sclerosis.

Confirmed disability progression over 24 weeks was significantly different for anti-CD20 drugs versus placebo when the analysis was restricted to the 25% of patients the model predicted to be most responsive (HR 0.442, P=0.0497), compared with an HR of 0.787 (P=0.292) for the entire group.

The model also identified responders to laquinimod, a drug studied in PPMS, finding a significant treatment effect in the top 20% most responsive individuals (HR 0.275, P=0.028).

“Very few drugs have been found to slow disability progression in the progressive subtypes of MS,” Falet said. Ocrelizumab (Ocrevus), an anti-CD20 drug, has shown an effect in PPMS, while siponimod (Mayzent), a selective sphingosine-1-phosphate receptor modulator, has shown an effect in secondary progressive MS, he noted.

“Part of the problem lies in the slow progression of disability,” Falet pointed out. “It’s difficult to identify a treatment effect in the timeframe of a phase II or III trial.”

In their study, Falet and colleagues used a multitask multi-layer perceptron (MLP), a type of neural network, to predict the conditional average treatment effect for people with PPMS taking anti-CD20 monoclonal antibodies or laquinimod. They included baseline imaging and clinical data from three placebo-controlled trials: the ORATORIO trial of ocrelizumab, the OLYMPUS trial of the anti-CD20 drug rituximab (Rituxan), which is used off-label to treat MS, and the ARPEGGIO trial of laquinimod.

A shuffled mix of data from 1,020 participants in the two anti-CD20 trials, ORATORIO and OLYMPUS, was split into training (70%) and testing (30%) sets. Data from ARPEGGIO served as additional external validation. A multitask MLP was trained to predict the rate of disability progression, assessed by changes in Expanded Disability Status Scale (EDSS) scores, in both active treatment and placebo groups.
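The presentation did not detail the network architecture, but the general approach can be illustrated with a minimal sketch: a shared trunk learns a representation of the baseline imaging and clinical features, and two task-specific heads predict the progression outcome under treatment and under placebo, with their difference serving as the predicted individual treatment effect. Layer sizes, the loss function, and all variable names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a multitask MLP for conditional
# average treatment effect (CATE) estimation. All hyperparameters are assumed.
import torch
import torch.nn as nn

class MultitaskMLP(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Shared trunk over baseline imaging + clinical features
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One head per task: predicted progression under treatment vs. placebo
        self.head_treated = nn.Linear(hidden, 1)
        self.head_placebo = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.trunk(x)
        return self.head_treated(z), self.head_placebo(z)

def predicted_cate(model, x):
    """Predicted individual treatment effect: treated minus placebo outcome."""
    y_treated, y_placebo = model(x)
    return (y_treated - y_placebo).squeeze(-1)

def loss_fn(model, x, y, t):
    """Each participant contributes a loss only for the arm they were
    randomized to (t = 1 for active treatment, 0 for placebo)."""
    y_treated, y_placebo = model(x)
    pred = torch.where(t.bool().unsqueeze(-1), y_treated, y_placebo)
    return nn.functional.mse_loss(pred.squeeze(-1), y)
```

Ranking patients by the predicted effect is what allows the "top 25%" or "top 20% most responsive" subgroups described above to be selected.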

People who were anti-CD20 antibody responders tended to be younger and male, and had higher disability scores, shorter disease duration, and more T2 lesions at baseline. Only a small proportion of the population received benefit, which may explain why the treatment effect appears diluted when the entire trial population is analyzed, Falet pointed out.

By identifying people who experience a greater treatment benefit, modeling could allow for predictive enrichment, increasing the power of future phase II clinical trials, Falet noted. This, in turn, could identify more candidate drugs for phase III studies.

"We can estimate the sample size needed for a future 1-year clinical trial," he said. Randomizing only the 50% of patients predicted to be most responsive would require a sample size of 497, whereas randomizing everyone would require 3,068, he said.
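The presentation did not specify how these sample sizes were derived, but the underlying logic can be illustrated with a standard power calculation for a time-to-progression endpoint: a larger expected effect (smaller hazard ratio) in an enriched cohort sharply reduces the number of participants needed. The sketch below uses Schoenfeld's approximation for a 1:1 randomized trial; the hazard ratios and event probability are made-up values for illustration only.

```python
# Illustrative sample-size calculation (Schoenfeld approximation), not the
# authors' method. Shows why predictive enrichment shrinks the required N.
from math import log
from scipy.stats import norm

def schoenfeld_sample_size(hr, event_prob, alpha=0.05, power=0.8):
    """Approximate total N for a 1:1 trial powered to detect hazard ratio `hr`,
    given the probability that a participant has a progression event."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    events_needed = 4 * (z_alpha + z_beta) ** 2 / log(hr) ** 2
    return events_needed / event_prob  # convert required events to participants

# Enriched cohort with a larger predicted effect vs. unselected cohort with a
# diluted effect (hypothetical numbers):
print(round(schoenfeld_sample_size(hr=0.5, event_prob=0.3)))  # ~200 participants
print(round(schoenfeld_sample_size(hr=0.8, event_prob=0.3)))  # ~2,100 participants
```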

In the clinic, a trained model also could help make more informed treatment recommendations, Falet added.

The approach has some limitations, including challenges with interpreting the algorithm, he noted. It's also unclear what the implications may be for clinical trial design for regulatory approval.

Disclosures

Falet disclosed no relationships with industry.
