Making clinicians worthy of medical AI: Lessons from Tesla

Boston, Massachusetts – October 21, 2021

Tesla is conducting an unprecedented social experiment: testing its cars’ drivers to see whether they are safe enough to receive the company’s Full Self-Driving (FSD) beta software update, which extends the cars’ autonomy, especially on city streets.

The company automatically evaluates these humans with a Safety Score composed of five factors, including forward-collision warnings per 1,000 miles driven, aggressive turning, and forced Autopilot disengagements.
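Tesla publishes the exact formula behind its Safety Score; the sketch below is not that formula. It is a minimal illustration, with assumed factor names and penalty weights, of how per-mile driving events might be combined into a single composite score.

```python
# Illustrative sketch only -- NOT Tesla's published Safety Score formula.
# Factor names and penalty weights below are assumptions for demonstration.

def composite_safety_score(events_per_1000_miles: dict) -> float:
    """Combine per-1,000-mile driving events into a 0-100 score (higher = safer)."""
    weights = {  # hypothetical penalty per event
        "forward_collision_warnings": 2.0,
        "hard_braking": 1.5,
        "aggressive_turning": 1.0,
        "unsafe_following": 1.0,
        "forced_autopilot_disengagements": 3.0,
    }
    penalty = sum(weights[k] * events_per_1000_miles.get(k, 0.0) for k in weights)
    return max(0.0, 100.0 - penalty)

# Example: a driver with 1.2 forward-collision warnings and 0.5 aggressive
# turns per 1,000 miles scores 100 - (2.4 + 0.5) = 97.1.
print(composite_safety_score({"forward_collision_warnings": 1.2,
                              "aggressive_turning": 0.5}))
```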

Public conversation about artificial intelligence tends to focus on the capabilities of machines, but Tesla’s experiment puts the spotlight on humans: Are drivers responsible enough to be given a superpower?


As medical researchers, we see this question as being at the heart of an intriguing paradigm for the success of AI-assisted medicine. But it raises additional questions: Is the safety score accurate and fair? Will human improvement persist once the incentive has been earned and the evaluation period has ended? After all, the gains from interventions evaluated in the pristine setting of clinical studies, whether in medication adherence or in maintaining weight loss, often fade afterward.

Like self-driving cars, medical AI will not stop errant doctors from making egregious, out-of-context mistakes. A car may happily drive itself between clearly marked lanes on the wrong side of the road, and without an oncoming vehicle, the safety score may not even penalize the driver for this grievous error.


In medicine, naive machine-learning models are no substitute for human attention and common sense, which are needed to understand how medical AI can misuse context: where the data come from, when measurements were taken, or whether problematic labels such as race are used, even when those labels appear hidden to a human expert.

Far from being a sought-after expert, much of today’s AI is like an eager and loyal medical student who hangs on every decision made by an expert clinician and then predicts the clinician’s next step anyway. Such behavior may serve as a tool for explanation and education, but it underscores that context is everything.

AI is good at being constantly alert, remembering everything it sees, performing highly technical but often narrow tasks, and relentlessly exploiting contextual information to improve its performance. Given these traits, where and for whom in medicine should AI be expected to shine, and what would effective human-machine collaboration look like?

The experience with self-driving cars suggests that AI may improve those areas of medical practice where doctors are lazy, tired, forgetful, or only intermittently attentive: regulating ventilator pressures in the intensive care unit, individualizing drug dosing, and predicting side effects. That experience also suggests, perhaps counterintuitively at first, that it may be best to equip only those doctors who have demonstrated they can work safely with medical AI. It would be unwise to hand over to AI the parts of the work that demand context awareness and common sense, and AI is also poor at understanding human motives and values. Instead, medical AI may provide a safety net only if the physician plays his or her part in the human-machine partnership.

Tesla’s experiment also demonstrates the power of incentivizing humans, such as with the FSD beta software update, at least in the short term. For overburdened doctors, the equivalent of the next self-driving update could be an AI that automatically writes the clinical note after listening to the encounter between patient and doctor, or that largely handles the claims-adjudication process with an insurance company. Such benefits deliver immediate, short-term rewards rather than vague or fantastical long-term promises.

On a dystopian path, a system of human performance scores and rewards could, in the hands of self-interested bureaucracies and governments, be used to exploit or abuse clinicians. It is all too easy to imagine a scenario in which a doctor who spends more than 10 minutes with a patient is effectively penalized when the AI system stalls its communications with the insurance company or reduces the doctor’s reimbursement. A physician’s “safety score” might likewise encourage extensive and unnecessary overtesting of patients with low pretest probabilities of disease.
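To see why testing low-pretest-probability patients is wasteful, consider a quick Bayes’ rule calculation. The prevalence, sensitivity, and specificity below are assumed values for illustration, not figures from the article: even a good test yields mostly false positives when the prior is low.

```python
# Illustrative Bayes' rule calculation; prevalence, sensitivity, and
# specificity below are assumed values, not from the article.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test with 90% sensitivity and 95% specificity, applied to patients with
# a 1% pretest probability: only ~15% of positive results are true positives.
print(round(positive_predictive_value(0.01, 0.90, 0.95), 3))  # 0.154
```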

In medicine, the time to shape the medical version of the full self-driving performance-and-reward system, so that doctors do not become mere cogs in a machine, is now. Doing so is essential to ensure that effective human-machine collaboration benefits patients, physicians, and the economics of health care.

Arjun K. Manrai is a faculty member in the Computational Health Informatics Program at Boston Children’s Hospital and an assistant professor of pediatrics and biomedical informatics at Harvard Medical School. Isaac S. Kohane is a professor and the founding chairman of the Department of Biomedical Informatics at Harvard Medical School.


