Search results
Congratulations to NDM's Sergi Padilla-Parra, who has been successful in obtaining one of the nine new ERC Consolidator Grants awarded to the institution, the most of any institution in the UK and the second most in Europe.
[Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension].
The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions that they should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human-AI interaction and analysis of error cases. SPIRIT-AI will help promote transparency and completeness for clinical trial protocols for AI interventions. Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias of a planned clinical trial.
[Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension].
The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.
Differentially Private Statistical Inference through β-Divergence One Posterior Sampling
Differential privacy guarantees allow the results of a statistical analysis involving sensitive data to be released without compromising the privacy of any individual taking part. Achieving such guarantees generally requires the injection of noise, either directly into parameter estimates or into the estimation process. Instead of artificially introducing perturbations, sampling from Bayesian posterior distributions has been shown to be a special case of the exponential mechanism, producing consistent and efficient private estimates without altering the data generative process. The application of current approaches has, however, been limited by their strong bounding assumptions, which do not hold for basic models such as simple linear regressors. To ameliorate this, we propose βD-Bayes, a posterior sampling scheme from a generalised posterior targeting the minimisation of the β-divergence between the model and the data generating process. This provides private estimation that is generally applicable without requiring changes to the underlying model and consistently learns the data generating parameter. We show that βD-Bayes produces more precise estimates for the same privacy guarantees and, for the first time, facilitates differentially private estimation via posterior sampling for complex classifiers and continuous regression models such as neural networks.
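The exponential-mechanism view of posterior sampling described above can be illustrated with a minimal sketch: for a conjugate Gaussian model, the private estimate is simply one draw from the posterior, with no noise added to the data or the estimate. This is illustrative only, a standard Normal-Normal model with known noise variance rather than the paper's βD-Bayes scheme, and the privacy level such a sample achieves depends on boundedness assumptions not enforced here.

```python
import random

def one_posterior_sample(data, prior_mean=0.0, prior_var=1.0,
                         noise_var=1.0, seed=None):
    """Release a single draw from the Gaussian conjugate posterior.

    Under suitable bounding assumptions on the log-likelihood, releasing
    one posterior sample is an instance of the exponential mechanism, so
    the sample itself serves as a differentially private estimate.
    """
    rng = random.Random(seed)
    n = len(data)
    # Conjugate Normal-Normal update for the mean, known noise variance.
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return rng.gauss(post_mean, post_var ** 0.5)
```

With 100 observations the posterior is tight, so a single released sample lands close to the non-private estimate while the sampling noise supplies the perturbation.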
Blood transfusion in the care of patients with visceral leishmaniasis: a review of practices in therapeutic efficacy studies.
Blood transfusion remains an important aspect of patient management in visceral leishmaniasis (VL). However, the transfusion triggers used in practice are poorly documented. This review summarises the transfusion practices adopted in VL efficacy studies indexed in the Infectious Diseases Data Observatory (IDDO) VL clinical trials library. Of the 160 studies (1980-2021) in the IDDO VL library, 16 (10.0%; n = 3459 patients) described blood transfusion. Transfusion was initiated solely on the basis of haemoglobin (Hb) measurement in nine studies, on Hb measurement combined with an additional condition (epistaxis, poor health or clinical instability) in three studies, and the criteria were not stated in four studies. Hb thresholds for triggering transfusion ranged from 3 to 8 g/dL. The number of patients receiving transfusion was explicitly reported in 10 studies (2421 patients enrolled, 217 transfused). The median proportion of patients transfused per study was 8.0% (interquartile range: 4.7% to 47.2%; range: 0-100%; n = 10 studies). Of the 217 transfusions, 58 occurred before VL treatment initiation, 46 during the treatment/follow-up phase, and the timing was not stated for 113. This review describes the variation in clinical practice and is an important initial step towards policy/guideline development, in which both the patient's Hb concentration and clinical status must be considered.
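The two styles of trigger described above, a hard Hb cut-off alone versus a higher Hb cut-off combined with a clinical condition, can be expressed as a small composite rule. The thresholds below are hypothetical values chosen from within the reported 3-8 g/dL range, not criteria from any specific study.

```python
def needs_transfusion(hb_g_dl, clinically_unstable=False,
                      hb_only_threshold=4.0, conditional_threshold=6.0):
    """Composite transfusion trigger (illustrative thresholds only).

    Mirrors the two rule styles seen in the reviewed studies: a hard Hb
    cut-off, and a higher Hb cut-off combined with a clinical condition
    such as epistaxis, poor health or instability.
    """
    if hb_g_dl < hb_only_threshold:
        return True
    return hb_g_dl < conditional_threshold and clinically_unstable
```

A patient at Hb 5 g/dL would be transfused only if clinically unstable, while any patient below 4 g/dL triggers transfusion regardless of clinical status.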
Analyses of human vaccine-specific circulating and bone marrow-resident B cell populations reveal benefit of delayed vaccine booster dosing with blood-stage malaria antigens
We have previously reported primary endpoints of a clinical trial testing two vaccine platforms for delivery of the Plasmodium vivax malaria antigen DBPRII: viral vectors (ChAd63, MVA), and protein/adjuvant (PvDBPII with 50µg Matrix-M™ adjuvant). Delayed boosting was necessitated by trial halts during the pandemic and provides an opportunity to investigate the impact of dosing regimens. Here, using flow cytometry, including agnostic definition of B cell populations with the clustering tool CITRUS, we report enhanced induction of DBPRII-specific plasma cell and memory B cell responses in protein/adjuvant versus viral vector vaccinees. Within protein/adjuvant groups, delayed boosting further improved B cell immunogenicity compared to a monthly boosting regimen. Consistent with this, delayed boosting also drove more durable anti-DBPRII serum IgG. In an independent vaccine clinical trial with the P. falciparum malaria RH5.1 protein/adjuvant (50µg Matrix-M™) vaccine candidate, we similarly observed enhanced circulating B cell responses in vaccinees receiving a delayed final booster. Notably, a higher frequency of vaccine-specific (putatively long-lived) plasma cells was detected in the bone marrow of these delayed boosting vaccinees by ELISPOT and correlated strongly with serum IgG. Finally, following controlled human malaria infection with P. vivax parasites in the DBPRII trial, in vivo growth inhibition was observed to correlate with DBPRII-specific B cell and serum IgG responses. In contrast, the CD4+ and CD8+ T cell responses were impacted by vaccine platform but not dosing regimen and did not correlate with in vivo growth inhibition in the challenge model. Taken together, our DBPRII and RH5 data suggest an opportunity for protein/adjuvant dosing regimen optimisation in the context of rational vaccine development against pathogens where protection is antibody-mediated.
Feasibility of wearable monitors to detect heart rate variability in children with hand, foot and mouth disease
Hand, foot and mouth disease (HFMD) is caused by a variety of enteroviruses and occurs in large outbreaks in which a small proportion of children deteriorate rapidly with cardiopulmonary failure. Determining which children are likely to deteriorate is difficult, and health systems may become overloaded during outbreaks as many children require hospitalization for monitoring. Heart rate variability (HRV) may help distinguish those with more severe disease but requires simple, scalable methods to collect ECG data. We carried out a prospective observational study to examine the feasibility of using wearable devices to measure HRV in 142 children admitted with HFMD at a children's hospital in Vietnam. ECG data were collected in all children. Calculated HRV indices were lower in those with enterovirus A71-associated HFMD than in those with other viral pathogens. HRV analysis from wearable devices is feasible in a low- and middle-income country (LMIC) and may help classify disease severity in HFMD.
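Time-domain HRV indices of the kind derived from wearable ECG recordings can be computed in a few lines from the series of RR intervals. SDNN and RMSSD below are two standard examples; the abstract does not specify which indices the study used, so this is an illustrative sketch.

```python
import math

def hrv_indices(rr_ms):
    """Two standard time-domain HRV indices from RR intervals (ms).

    SDNN:  standard deviation of all RR intervals.
    RMSSD: root mean square of successive RR-interval differences.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd
```

Lower values of either index indicate reduced beat-to-beat variability, the pattern reported here for enterovirus A71-associated HFMD.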
Facilitating Safe Discharge Through Predicting Disease Progression in Moderate Coronavirus Disease 2019 (COVID-19): A Prospective Cohort Study to Develop and Validate a Clinical Prediction Model in Resource-Limited Settings
Background: In locations where few people have received coronavirus disease 2019 (COVID-19) vaccines, health systems remain vulnerable to surges in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. Tools to identify patients suitable for community-based management are urgently needed. Methods: We prospectively recruited adults presenting to 2 hospitals in India with moderate symptoms of laboratory-confirmed COVID-19 to develop and validate a clinical prediction model to rule out progression to supplemental oxygen requirement. The primary outcome was defined as any of the following: SpO2 < 94%; respiratory rate > 30 breaths per minute; SpO2/FiO2 < 400; or death. We specified a priori that each model would contain 3 clinical parameters (age, sex, and SpO2) and 1 of 7 shortlisted biochemical biomarkers measurable using commercially available rapid tests (C-reactive protein [CRP], D-dimer, interleukin 6 [IL-6], neutrophil-to-lymphocyte ratio [NLR], procalcitonin [PCT], soluble triggering receptor expressed on myeloid cells-1 [sTREM-1], or soluble urokinase plasminogen activator receptor [suPAR]), to ensure the models would be suitable for resource-limited settings. We evaluated discrimination, calibration, and clinical utility of the models in a held-out temporal external validation cohort. Results: In total, 426 participants were recruited, of whom 89 (21.0%) met the primary outcome; 257 participants comprised the development cohort and 166 the validation cohort. The 3 models containing NLR, suPAR, or IL-6 demonstrated promising discrimination (c-statistics: 0.72-0.74) and calibration (calibration slopes: 1.01-1.05) in the validation cohort and provided greater utility than a model containing the clinical parameters alone. Conclusions: We present 3 clinical prediction models that could help clinicians identify patients with moderate COVID-19 suitable for community-based management. The models are readily implementable and of particular relevance for locations with limited resources.
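The c-statistics quoted for these models measure discrimination: the probability that a randomly chosen patient who met the outcome was assigned a higher predicted risk than a randomly chosen patient who did not. This is straightforward to compute by pairwise concordance; the sketch below is a generic implementation, not code from the study.

```python
def c_statistic(risks, outcomes):
    """Concordance statistic (c-statistic / AUC) by pairwise comparison.

    risks:    predicted risks from the model.
    outcomes: 1 if the patient met the primary outcome, else 0.
    Ties in predicted risk count as half-concordant.
    """
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return concordant / (len(pos) * len(neg))
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect; the reported 0.72-0.74 sits in the range usually described as acceptable for clinical triage.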
Differences in 5' untranslated regions highlight the importance of translational regulation of dosage-sensitive genes.
Untranslated regions (UTRs) are important mediators of post-transcriptional regulation. The length of UTRs and the composition of regulatory elements within them are known to vary substantially across genes, but little is known about the reasons for this variation in humans. Here, we set out to determine whether this variation, specifically in 5'UTRs, correlates with gene dosage sensitivity. We investigate 5'UTR length, the number of alternative transcription start sites (TSSs), the potential for alternative splicing, the number and type of upstream open reading frames (uORFs) and the propensity of 5'UTRs to form secondary structures. We explore how these elements vary by gene tolerance to loss-of-function (LoF; using the LOEUF metric), and in genes where changes in dosage are known to cause disease. We show that LOEUF correlates with 5'UTR length and complexity. Genes that are most intolerant to LoF have longer 5'UTRs, greater TSS diversity, and more upstream regulatory elements than their LoF-tolerant counterparts. We show that these differences are evident in disease gene sets, but not in recessive developmental disorder genes where LoF of a single allele is tolerated. Our results confirm the importance of post-transcriptional regulation through 5'UTRs in tight regulation of mRNA and protein levels, particularly for genes where changes in dosage are deleterious and lead to disease. Finally, to support gene-based investigation we release a web-based browser tool, VuTR, that supports exploration of the composition of individual 5'UTRs and the impact of genetic variation within them.
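One of the 5'UTR features counted above, upstream open reading frames, can be detected with a simple scan: an ATG in the UTR followed in-frame by a stop codon before the UTR ends. The sketch below is a minimal illustration of that definition; it ignores uORFs that overlap the main coding sequence and other subtleties a production pipeline would handle.

```python
def find_uorfs(utr5):
    """Find upstream ORFs fully contained in a 5'UTR sequence.

    A uORF here is an ATG followed, in the same reading frame, by a stop
    codon (TAA, TAG or TGA) before the end of the UTR. Returns a list of
    (start_index, end_index) pairs, end exclusive.
    """
    stops = {"TAA", "TAG", "TGA"}
    utr5 = utr5.upper()
    uorfs = []
    for i in range(len(utr5) - 2):
        if utr5[i:i + 3] == "ATG":
            for j in range(i + 3, len(utr5) - 2, 3):
                if utr5[j:j + 3] in stops:
                    uorfs.append((i, j + 3))
                    break
    return uorfs
```

Each uORF found this way is a candidate repressor of main-ORF translation, which is why uORF counts feature in the dosage-sensitivity comparison.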
Image-based consensus molecular subtyping in rectal cancer biopsies and response to neoadjuvant chemoradiotherapy.
The development of deep learning (DL) models to predict the consensus molecular subtypes (CMS) from histopathology images (imCMS) is a promising and cost-effective strategy to support patient stratification. Here, we investigate whether imCMS calls generated from whole slide histopathology images (WSIs) of rectal cancer (RC) pre-treatment biopsies are associated with pathological complete response (pCR) to neoadjuvant long course chemoradiotherapy (LCRT) with single agent fluoropyrimidine. DL models were trained to classify WSIs of colorectal cancers stained with hematoxylin and eosin into one of the four CMS classes using a multicentric dataset of resection and biopsy specimens (n = 1057 WSIs) with paired transcriptional data. Classifiers were tested on a held-out RC biopsy cohort (ARISTOTLE) and correlated with pCR to LCRT in an independent dataset merging two RC cohorts (ARISTOTLE, n = 114 and SALZBURG, n = 55 patients). DL models predicted CMS with high classification performance in multiple comparative analyses. In the independent cohorts (ARISTOTLE, SALZBURG), cases with WSIs classified as imCMS1 had a significantly higher likelihood of achieving pCR (OR = 2.69, 95% CI 1.01-7.17, p = 0.048). Conversely, imCMS4 was associated with lack of pCR (OR = 0.25, 95% CI 0.07-0.88, p = 0.031). Classification maps demonstrated pathologist-interpretable associations with high stromal content in imCMS4 cases, associated with poor outcome. No significant association was found for imCMS2 or imCMS3. imCMS classification of pre-treatment biopsies is a fast and inexpensive solution to identify patient groups that could benefit from neoadjuvant LCRT. The significant associations between imCMS1/imCMS4 and pCR suggest the existence of predictive morphological features that could enhance standard pathological assessment.
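Odds ratios with Wald-type confidence intervals, as reported for imCMS1 and imCMS4 above, derive from a 2x2 table of subtype call versus pCR status. The sketch below shows the standard Woolf logit method; the counts in the usage example are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald (Woolf logit) confidence interval.

    2x2 table: a = exposed with outcome,   b = exposed without,
               c = unexposed with outcome, d = unexposed without.
    z = 1.96 gives a 95% interval. All cells must be non-zero.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval whose lower bound exceeds 1 (as for imCMS1) indicates a significantly increased odds of pCR, while an interval entirely below 1 (as for imCMS4) indicates significantly decreased odds.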
Spatio-temporal spread of artemisinin resistance in Southeast Asia
Current malaria elimination targets must withstand a colossal challenge: resistance to the current gold-standard antimalarial drugs, artemisinin derivatives. If artemisinin resistance expands significantly into Africa or India, malaria cases and malaria-related deaths are set to increase substantially. Spatial information on the changing levels of artemisinin resistance in Southeast Asia is therefore critical for health organisations to prioritise malaria control measures, but available data on artemisinin resistance are sparse. We use a comprehensive database from the WorldWide Antimalarial Resistance Network on the prevalence of non-synonymous mutations in the Kelch 13 (K13) gene, which are known to be associated with artemisinin resistance, and a Bayesian geostatistical model to produce spatio-temporal predictions of artemisinin resistance. Our maps of estimated prevalence show an expansion of K13 mutations across the Greater Mekong Subregion from 2000 to 2022. Moreover, the period between 2010 and 2015 showed the most spatial change across the region. Our model and maps provide important insights into the spatial and temporal trends of artemisinin resistance in a way that is not possible using data alone, thereby enabling improved spatial decision support systems at an unprecedentedly fine spatial resolution. By predicting spatio-temporal patterns and extents of artemisinin resistance at the subcontinent level for the first time, this study provides critical information for supporting malaria elimination goals in Southeast Asia.
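The quantity being mapped here is a prevalence: the proportion of sampled parasites carrying a K13 mutation at a given place and time. A deliberately simplified, non-spatial analogue of the Bayesian model is the conjugate Beta-Binomial update for a single site; the full geostatistical model additionally shares information across sites and years through a spatio-temporal covariance structure, which this sketch omits.

```python
def k13_prevalence_posterior(mutant, sampled, prior_a=1.0, prior_b=1.0):
    """Beta-Binomial posterior for K13 mutation prevalence at one site.

    With a Beta(a, b) prior and k mutant parasites out of n sampled, the
    posterior is Beta(a + k, b + n - k). Returns the posterior mean and a
    central 95% interval via a normal approximation, clipped to [0, 1].
    """
    a = prior_a + mutant
    b = prior_b + sampled - mutant
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    sd = var ** 0.5
    return mean, max(0.0, mean - 1.96 * sd), min(1.0, mean + 1.96 * sd)
```

With a uniform Beta(1, 1) prior, 30 mutant parasites out of 100 sampled gives a posterior mean just over 0.30, with the interval width shrinking as sample size grows, which is the per-site behaviour the geostatistical model smooths across space and time.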
Specific plasma microRNAs are associated with CD4+ T-cell recovery during suppressive antiretroviral therapy for HIV-1.
Objective: This study investigated the association of plasma microRNAs before and during antiretroviral therapy (ART) with poor CD4+ T-cell recovery during the first year of ART. Design: MicroRNAs were retrospectively measured in stored plasma samples from people with HIV (PWH) in sub-Saharan Africa who were enrolled in a longitudinal multicountry cohort and who had a plasma viral load of less than 50 copies/ml after 12 months of ART. Methods: First, the levels of 179 microRNAs were screened in a subset of participants from the lowest and highest tertiles of CD4+ T-cell recovery (ΔCD4) (N = 12 each). Next, 11 discordant microRNAs were validated in 113 participants (lowest tertile ΔCD4: n = 61, highest tertile ΔCD4: n = 52). For microRNAs discordant in the validation, a pathway analysis was conducted. Lastly, we compared microRNA levels of PWH with those of HIV-negative controls. Results: Poor CD4+ T-cell recovery was associated with higher levels of hsa-miR-199a-3p and hsa-miR-200c-3p before ART, and of hsa-miR-17-5p and hsa-miR-501-3p during ART. Signaling by VEGF and MET, and RNA polymerase II transcription pathways were identified as possible targets of hsa-miR-199a-3p, hsa-miR-200c-3p, and hsa-miR-17-5p. Compared with HIV-negative controls, we observed lower hsa-miR-326, hsa-miR-497-5p, and hsa-miR-501-3p levels before and during ART in all PWH, and higher hsa-miR-199a-3p and hsa-miR-200c-3p levels before ART in all PWH and during ART only in PWH with poor CD4+ T-cell recovery. Conclusion: These findings add to the understanding of pathways involved in persistent HIV-induced immune dysregulation during suppressive ART.
Interpretations of Studies on SARS-CoV-2 Vaccination and Post-acute COVID-19 Sequelae.
This article discusses causal interpretations of epidemiologic studies of the effects of vaccination on sequelae after acute severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. To date, researchers have tried to answer several different research questions on this topic. While some studies assessed the impact of post-infection vaccination on the presence of, or recovery from, post-acute coronavirus disease 2019 (COVID-19) syndrome, others quantified the association between pre-infection vaccination and post-acute sequelae conditional on becoming infected. However, the latter analysis does not have a causal interpretation except under the principal stratification framework; that is, this comparison can be interpreted as causal only for a non-discernible stratum of the population. As the epidemiology of COVID-19 is now nearly entirely dominated by reinfections, including in vaccinated individuals, and possibly caused by different Omicron subvariants, it has become even more important to design studies on the effects of vaccination on post-acute sequelae that address precise causal questions and quantify effects corresponding to implementable interventions.