For example, only a human doctor can make a "diagnosis" or write a prescription. Even just making "treatment recommendations" (e.g., a more effective treatment or one with fewer side effects than the current one), which would then be reviewed and approved by an appropriately licensed physician, is a problem.
Not having worked in the radiology space, I have a naive hunch that it has to be framed as a "screening tool" that "helps the physicians find cancers." (This can be justified to regulators because ML tends to have higher recall than humans on these tasks, albeit lower precision.)
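To make the recall/precision point concrete, here's a toy sketch with entirely made-up numbers (not from any real study): a hypothetical screening model that catches more of the actual cancers (higher recall) while also flagging far more healthy cases (lower precision).

```python
# Toy illustration of the recall-vs-precision tradeoff described above.
# All counts are hypothetical, chosen only to show the shape of the tradeoff.

def recall(tp: int, fn: int) -> float:
    """Fraction of actual cancers the reader flagged (sensitivity)."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of flagged cases that were actually cancer (PPV)."""
    return tp / (tp + fp)

# Hypothetical outcomes on the same 10,000 screening images,
# 100 of which actually contain a cancer.
human = {"tp": 80, "fn": 20, "fp": 40}   # misses more cancers, flags fewer cases
model = {"tp": 95, "fn": 5,  "fp": 400}  # misses fewer cancers, flags many more

for name, r in [("human", human), ("model", model)]:
    print(f"{name}: recall={recall(r['tp'], r['fn']):.2f}, "
          f"precision={precision(r['tp'], r['fp']):.2f}")

# Output:
# human: recall=0.80, precision=0.67
# model: recall=0.95, precision=0.19
```

Under these assumed numbers, the model's extra false positives are exactly the cases a physician would still have to review, which is what makes the "screening tool" framing palatable to regulators.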
In theory, ML helps doctors spot cancers they wouldn't ordinarily see. In practice, it probably means that the physicians still technically "review" all images, but only give a cursory glance to those not flagged by the system.
I am a doctor and an AI practitioner. A radiologist's job is far, far more than pathology detection. No AI will replace the image recognition aspect of their job in my lifetime, and that's the most automatable aspect of their job. The human and legal aspects of their job are a whole different beast. At best, these technologies will allow radiologists to work better or more efficiently.
My impression was that medical data access for training and eval was the really big thing holding it back... You wind up with deep learning folks obsessing over datasets with tens of examples.
I know I read press releases ages ago saying "hey look, we can do this with machine learning," but I haven't really heard anything about it since.