Citation: DeMasi O, Kording K, Recht B (2017) Meaningless comparisons lead to false optimism in medical machine learning.
Accepted: August 2017. Published: September 26, 2017.
Copyright: © 2017 DeMasi et al.

A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to explain over 80% of the variance of some mood measures in the population simply by guessing that each patient has their own average mood: the patient-specific baseline. Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, but is, obviously, entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose "user lift", which reduces these systematic errors in the evaluation of personalized medical monitoring.
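To see concretely why the population baseline misleads, here is a minimal sketch in Python on synthetic data (not the paper's datasets, and not the authors' exact "user lift" formula): each simulated patient's mood fluctuates mildly around a stable personal mean, and a trivial "algorithm" simply predicts each patient's historical average.

# A sketch of the baseline problem on synthetic data. Assumptions (not from
# the paper): Gaussian per-patient mean moods, Gaussian daily noise, and R^2
# measured against two different baseline predictors.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_obs = 50, 100

# Each patient's mood hovers around a stable personal mean; the spread of
# means across patients (scale=3) dwarfs the day-to-day noise (scale=1).
patient_means = rng.normal(loc=0.0, scale=3.0, size=n_patients)
moods = patient_means[:, None] + rng.normal(scale=1.0, size=(n_patients, n_obs))

# Split each patient's history: fit on the first half, test on the second.
train, test = moods[:, : n_obs // 2], moods[:, n_obs // 2 :]

# Trivial "algorithm": always predict the patient's own historical average.
preds = np.repeat(train.mean(axis=1, keepdims=True), test.shape[1], axis=1)

def r_squared(y_true, y_pred, baseline):
    # Fraction of variance explained relative to a chosen baseline predictor.
    return 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - baseline) ** 2)

y, yhat = test.ravel(), preds.ravel()
pop_baseline = np.full_like(y, train.mean())  # one global mean for everyone
user_baseline = yhat                          # each patient's own mean

print("R^2 vs population mean:       %.2f" % r_squared(y, yhat, pop_baseline))
print("R^2 vs patient-specific mean: %.2f" % r_squared(y, yhat, user_baseline))

Against the population mean, the useless predictor looks excellent (roughly 0.9 here, echoing the "over 80%" figure in the abstract); against the patient-specific baseline its lift is exactly zero, which is the comparison the paper argues for.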