Former $MELA CEO Dr. Joseph Gulfo's book "Innovation Breakdown" is so enlightening (and well-written) that I invited him to join the 10x faculty in May. Was his experience extreme, biased, or common?

BACKGROUND

• Melanoma is the only cancer you can "see coming" and diagnose in time. But diagnosis is so difficult even dermatologist Liz Tanzi didn't have her mole biopsied. A year later, she had Stage 1 melanoma.

• Mela Sciences' MelaFind (now on the market) performs a multi-spectral analysis and gives dermatologists a highly sensitive tool to detect melanoma at a curable stage.

MELA'S EXPERIENCE

• MELA completed the largest prospective clinical trial ever performed in melanoma detection and met every endpoint set with FDA in the binding agreement.

• During the PMA review, FDA indicated a favorable opinion and MELA expected an imminent advisory panel.

• Instead, after a new FDA director took over, the very next communication from FDA was a Not Approvable Letter!

• FDA eventually acquiesced and gave MELA a panel meeting. Then, three days before the meeting, FDA dumped deceitful and flawed analyses of the MelaFind pivotal trial data.

• In all, it took TWELVE YEARS for MelaFind to be cleared for commercialization.

+++

Dr. Gulfo said, "If something this non-invasive and potentially life-saving can have so much difficulty getting through FDA and small-company public financing, what does that portend for other true scientific breakthroughs?"

+++

Does Dr. Gulfo's story surprise you? Do you believe his is an outlying case, or endemic to FDA? Is it Division dependent?

Joe's $27 book (only $18 including shipping, code MEDBIZ) at http://medgroup.biz/breakdown

Karl Schulmeisters
The training dataset is the set of inputs, both "positive" and "negative" examples, that is used to "teach" the system what to diagnose. Particularly for optical-based systems, this pretty much means a set of images that have been proven and vetted as such. And because of how machine learning works, you ideally want a dataset of a couple of hundred thousand examples. For most medical conditions, such datasets do not exist.

The second aspect is the "supervisory control": namely, how, along the way during the iterative "teaching" of the system, you identify the false positives/negatives and feed them back into the system. Ideally your "test" set is different from your "training" set, so that you can show the actual accuracy of the diagnosis. So now you are talking about an even larger dataset.

Karl Schulmeisters
Both are outcomes with potentially fatal or injurious consequences. OTOH, an intrusive device for a new kind of ortho surgery, or even a new kind of drug-eluting stent, relies on MD skills and expertise in application. So I think this speaks less to innovation and a lot more to the fairly high barrier of proof that the FDA is going to require for "big data"/"automated analytics"-based diagnostics. And this is not necessarily a bad thing. I know a fair bit about machine learning (which is what these systems mostly are) and have friends who have done things like build the child pornography image detection software for one of the large search engines.

Karl Schulmeisters
Finally, the number of folks who understand how this big data/machine learning stuff works is a very limited and very expensive group. So it's unlikely the FDA will have a great deal of expertise in this area in the near term.
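[Editor's note: a minimal sketch of the train/test discipline Karl describes above, in Python with scikit-learn on purely synthetic data. The feature count, class balance, and model choice are illustrative assumptions, not anything from the MelaFind program.]

```python
# Sketch: hold out a test set the model never trains on, then measure
# sensitivity/specificity on that held-out data (synthetic stand-in for
# vetted "positive"/"negative" lesion examples).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced dataset: ~10% "melanoma" positives (assumed ratio).
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# The "test" set is kept separate from the "training" set, per Karl's point,
# so reported accuracy reflects generalization rather than memorization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The held-out false positives/negatives are the "supervisory control"
# signal you would feed back into the next training iteration.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```

Quoting performance on the training images alone would overstate accuracy, which is exactly why the vetted dataset has to be large enough to split; both halves must still represent the condition.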
Joe Hage
Maybe it was misinformation by the FDA, but my understanding was that the sensitivity and specificity were not great at the time. Medical devices need to aid a clinician in decision making or they might as well roll the dice. I didn't read the book, but imaging technology has evolved greatly since then, and our ability to get a "strong signal" from the skin has as well. Let's be careful not to throw the baby out with the bathwater. It makes great headlines, but I'd really love to know the facts. What was the "real" sensitivity and specificity in 2004?

Gordon McKenzie
As I understand it, and I would be happy to be corrected, Mela's evidence demonstrated that their device worked a little better than a relatively inexperienced dermatologist working with less-than-ideal data (images on screen). It was an improvement, but not a massive one. MM kills, so any improvement is worthwhile, but cost and biopsy rates (absolute and false-positive) do matter in the real world. More problematically, the need to have 100% sensitivity (so that they didn't miss anything) drove their specificity down lower than it would have had to be if it had genuinely been [used as] a guidance tool to help doctors, rather than a simple yes/no. Mela's later moves to include images or numeric scales support this. Doctors and the FDA were understandably wary of a 'black box' result. I do agree that Mela were treated harshly, but the situation was more nuanced than at first glance, with real questions around clinical benefit.
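[Editor's note: a toy illustration of Gordon's trade-off, using made-up score distributions rather than MelaFind trial data. If the decision threshold must sit low enough that no melanoma is missed, every benign lesion scoring above that threshold becomes a false positive.]

```python
# Sketch: forcing 100% sensitivity by threshold choice, and what it
# costs in specificity. Score distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Assumed classifier scores: benign lesions lower on average, melanomas
# higher, with substantial overlap (the realistic, hard case).
benign = rng.normal(0.3, 0.15, 900)
melanoma = rng.normal(0.6, 0.15, 100)

# For 100% sensitivity the threshold must fall below the lowest-scoring
# true melanoma, sweeping in many benign lesions as false positives.
threshold = melanoma.min()
specificity = np.mean(benign < threshold)
print(f"threshold = {threshold:.3f}, sensitivity = 1.000, "
      f"specificity = {specificity:.3f}")
```

This is the argument for reporting a guidance score a dermatologist can weigh against other findings, rather than a binary yes/no pinned at a forced operating point.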