
June 2019: Challenges in application of artificial intelligence in healthcare

Featuring Dr. Ashish Khanna, Rakesh Shiradkar, PhD, and Professor Sharona Hoffman

01 June, 2019

Watch the video of this event on our YouTube channel.

An engaging discussion was held on the challenges in the application of artificial intelligence in healthcare. Various perspectives were highlighted, including challenges with data quality, machine learning modeling, and, most importantly, ethical and legal challenges.

Many thanks to the team of the Center for Computational Imaging and Personalized Diagnostics (CCIPD) at Case Western Reserve University for hosting and co-organizing this event with BrainX Community.

Abstracts from our 3 eminent speakers are available below.

Dr. Ashish Khanna

Title: Solving healthcare challenges with machine learning and artificial intelligence: clinical challenges

Big data analytics fit well with the many thousands of numbers generated by patient monitors in the ICU and on the general care floor. As anesthesiologists and perioperative physicians, we have been tasked with taking care of patients as they transition from the critical care unit to the general care floor. It is essential that we are able to accurately determine and predict the nature, scope and extent of cardiopulmonary deterioration in all of these patient care environments. Dr. Khanna’s talk was centered on clinical questions, research methodology and the dearth of granular data to drive predictive analytics models as they exist today. He gave examples from the intensive care unit, where at best one blood pressure reading (verified and entered in the EMR) is available for a data pull, whereas continuous streaming chunks of unverified data stored in monitoring systems are purged every 3 days or so.

Hypotension in the postoperative critically ill patient is of critical importance. The conventional blood pressure to defend in the ICU has long been stated to be a MAP of at least 65 mmHg, based on RCT-verified data and other observational work. Our recent work across 9,000 ICU patients in 110 ICUs in the United States showed that, in fact, higher pressures may be needed in the critically ill patient. Herein, we showed that the earliest sign of harm for myocardial injury, acute kidney injury and mortality begins at a MAP of 85 mmHg, and that the risk of AKI and mortality increases progressively as MAP falls to 55 mmHg. We subsequently examined a 3,000-patient cohort of postoperative critically ill patients admitted to the surgical ICU from the operating room. We identified a strong non-linear association between the lowest MAP on any given day in the ICU and a composite of myocardial injury and/or mortality, as well as a secondary outcome of acute kidney injury. These associations were identified at blood pressure thresholds that were previously considered normal.
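The kind of threshold analysis described above can be illustrated with a minimal sketch: bin each patient's lowest MAP into bands and compare observed event rates across bands. All data here are simulated and all numbers are hypothetical; this is not the study's actual methodology.

```python
# Illustrative sketch (simulated data, hypothetical risk model): how event
# rates can be examined across bands of lowest MAP to reveal a threshold.
import random

random.seed(0)

def simulate_patient():
    """Simulate (lowest_map, outcome) with risk rising as MAP falls below ~85."""
    lowest_map = random.uniform(45, 110)
    risk = min(0.9, max(0.05, (85 - lowest_map) * 0.02 + 0.05))
    outcome = 1 if random.random() < risk else 0
    return lowest_map, outcome

patients = [simulate_patient() for _ in range(5000)]

# Bin lowest MAP into bands and compute the observed event rate per band.
bins = [(45, 55), (55, 65), (65, 75), (75, 85), (85, 95), (95, 110)]
for lo, hi in bins:
    group = [o for m, o in patients if lo <= m < hi]
    rate = sum(group) / len(group)
    print(f"MAP {lo}-{hi} mmHg: n={len(group)}, event rate={rate:.2f}")
```

In this simulation the event rate climbs steadily in the lower MAP bands, mimicking the non-linear dose-response pattern the abstract describes; the real analyses used multivariable models on clinical data, not simple binning.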

Similarly, current data on the general care floor are those entered via conventional spot-check monitoring protocols, which limit us to one set of vital signs every 4-6 hours. We know that postoperative cardiorespiratory events are preceded by 6-12 hours of vital-sign abnormalities, and some may yet be largely unpredictable. Knowing the extent of the problem and its lack of predictability, the only logical solution may be continuous, automated, multi-parameter monitoring of cardiorespiratory parameters for all patients on the regular nursing floor. It remains to be seen whether this practice will make a difference in the early detection of patient decline and in clinical outcomes, along with the utilization of emergency response teams in hospital systems. Keeping in mind that alarm fatigue secondary to the many false alarms generated by these systems may be a real problem, an initial scientific question should also investigate the extent of nursing responses to continuous monitoring systems on the ward. Moreover, the amount of big data generated from continuous monitoring will need to be curated and fitted into AI platforms that can create a strong efferent limb of provider response, based on mitigation of false alarms and creation of monitoring stations that allow continuous, proactive surveillance for deterioration.
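One simple false-alarm mitigation strategy in the spirit of the paragraph above is a persistence filter: only raise an alarm when a threshold breach lasts for several consecutive readings, so transient artifacts are ignored. The threshold and window below are hypothetical illustrations, not clinical recommendations.

```python
# Illustrative persistence filter for continuous-monitoring alarms.
# A single aberrant sample is ignored; a sustained breach fires once.
def persistent_alarm(samples, threshold, min_consecutive):
    """Return indices where `samples` has stayed below `threshold`
    for at least `min_consecutive` consecutive readings."""
    alarms = []
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if value < threshold else 0
        if run == min_consecutive:  # fire once, when the run reaches the window
            alarms.append(i)
    return alarms

# A transient dip (one low SpO2 reading) is ignored; a sustained dip alarms.
spo2 = [97, 96, 88, 97, 96, 89, 88, 87, 86, 95]
print(persistent_alarm(spo2, threshold=90, min_consecutive=3))  # -> [7]
```

Real ward-monitoring platforms combine many such signal-level rules with multi-parameter models, but even this toy filter shows how the efferent limb can be protected from single-sample artifacts.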

Reference: Khanna AK, et al. Automated continuous noninvasive ward monitoring: future directions and challenges. Crit Care. 2019.

Rakesh Shiradkar, PhD

Title: Machine Learning Challenges in Medical Image Analysis

 

Machine Learning is increasingly gaining prominence in the Medical Imaging community due to its potential benefits in numerous clinical tasks. There are, however, challenges at a number of levels that machine learning scientists in medical image analysis face and that need to be addressed.

This talk brings out some of the commonly faced challenges, including those with respect to data acquisition, pre-processing, choice of learning strategies, validation, communication with clinicians, and interpretation of results. Potential strategies for addressing some of these are also touched upon.
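One concrete validation pitfall in medical image analysis is splitting at the image level when a patient contributes multiple images: patient-specific appearance then leaks into the test set, inflating performance. A minimal sketch of the standard remedy, splitting by patient, is shown below; the patient IDs and counts are hypothetical.

```python
# Illustrative patient-level train/test split to avoid data leakage when
# each patient contributes multiple images. IDs and counts are hypothetical.
import random

random.seed(42)

# Each image is tagged with the patient it came from (3 images per patient).
images = [(f"patient_{p}", f"image_{p}_{k}") for p in range(10) for k in range(3)]

def patient_level_split(images, test_fraction=0.3):
    """Assign whole patients (not individual images) to train or test."""
    patients = sorted({pid for pid, _ in images})
    random.shuffle(patients)
    n_test = int(len(patients) * test_fraction)
    test_patients = set(patients[:n_test])
    train = [img for img in images if img[0] not in test_patients]
    test = [img for img in images if img[0] in test_patients]
    return train, test

train, test = patient_level_split(images)
train_patients = {pid for pid, _ in train}
test_patients = {pid for pid, _ in test}
print(train_patients & test_patients)  # empty: no patient appears in both sets
```

Grouped splitting utilities in common ML libraries implement the same idea; the essential design choice is that the unit of randomization is the patient, not the image.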

Professor Sharona Hoffman

Title: Artificial Intelligence and Ethics

 

Artificial intelligence (AI) is generating significant excitement and holds promise to improve treatment outcomes, but it also has important ethical and legal implications. First, there are privacy and discrimination concerns. Data generated by AI will be incorporated into electronic health records and thus may be vulnerable to privacy breaches or be obtained by third parties through disclosure authorizations. Employers, insurers, and others may use AI data for discriminatory purposes and make adverse decisions regarding data subjects.

Second, AI may exacerbate health disparities because resource-poor facilities may not have access to it. Use of racial or ethnic identification as a variable in AI analysis may also lead to stigmatization of certain groups as more diseased or biologically inferior to others. Third, clinicians should be wary of causing psychological harm by disclosing predictions about patients’ long-term health outlook (e.g. future cognitive decline). The potential for erroneous predictions resulting from training data that are of poor quality, biased, or otherwise flawed augments this concern. Finally, physicians worry about the impact AI will have on the physician-patient relationship and about its liability ramifications. This talk analyzed all of these concerns and outlined strategies to address them.

Reference: “What Genetic Testing Teaches about Predictive Health Analytics,” to be published in the North Carolina Law Review in late 2019.