The Five Key Takeaways from This Blog

I, Robot, M.D.

If any readers got through the key takeaways section above with some unanswered questions, allow this writer to try to answer a few of the anticipated ones.

For one, this project gets a significant amount of funding from Wellcome Leap, a nonprofit that funds ambitious health research programs, including work aimed at mitigating America's quite horrific addiction epidemic.

The A.I. does not have the power to handle the treatment itself, either. That will, and should, remain in the hands of doctors, even though the costs of American privatized medicine will likely keep many at-risk individuals away from medical treatment. In this writer's considered opinion, at least.

But, according to an article about the research project, patients under the care of a medical A.I. model will “still be able to receive pain medicine, along with resources like extra monitoring and check-ins.” 

And another thing: why would doctors be using this technology in the first place? Well, there are some good reasons beyond simply generating a general prediction about the risk to a patient's health.

The Key Application of This Technology

For one, if the predictions are indeed trustworthy, they could help the physician figure out the best route to take in prescribing pain medicine.

That, really, is the key application for this technology, since a concerning number of people develop addictions to opiates after being prescribed them. We all saw or read Dopesick, did we not?

And so, if this technology succeeds in making accurate predictions, then doctors could indeed mitigate the risk of writing a prescription that ends up causing a lot of harm in a patient's life, no matter what pain it may relieve.
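To make that concrete, here is a minimal sketch in Python of how a clinician-facing tool might translate a model's risk estimate into prescribing guidance. The thresholds, tiers, and function name below are this writer's invented illustrations, not the actual design of the research project.

```python
# A hypothetical sketch: mapping a model's predicted addiction-risk
# probability to a suggested prescribing action. Thresholds and wording
# are invented for illustration only.

def prescribing_guidance(risk_probability: float) -> str:
    """Translate a 0-1 risk probability into a coarse recommendation."""
    if risk_probability >= 0.7:
        return "High risk: consider non-opioid alternatives and a specialist consult."
    if risk_probability >= 0.3:
        return "Moderate risk: reduced dose, with extra monitoring and check-ins."
    return "Low risk: standard prescription with routine follow-up."

print(prescribing_guidance(0.45))
# -> Moderate risk: reduced dose, with extra monitoring and check-ins.
```

The point of a tiered mapping like this is that the final call stays with the doctor; the model only nudges the conversation.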

So, what does this risk assessment by the A.I. involve?

Tracking Patients’ Risk Levels

One of the key areas this technology will focus on is assessing how a patient's risk factors for opioid addiction may fluctuate in relevance over time.

Of course, no one has a metaphysically assigned addiction risk factor, at least not to the best of this writer’s knowledge. 

So the point here, then, is that the A.I. is not going to give clinicians a one-and-done stat about how likely a patient is to develop an addiction.

Instead, changes in, say, the recovery from a fractured femur could affect a specific patient's likelihood of developing an addiction to the prescribed pain meds. That risk may be higher in the early stages of treatment, when both the pain and the drugs are stronger. Later on, as the pain subsides and the dosage tapers, the risk may be much lower.
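As a toy illustration of that fluctuation, consider a risk score that gets recomputed as recovery progresses. The features, weights, and cap below are made up for the sake of the sketch; a real model would learn such things from clinical data.

```python
# A toy, time-varying risk score. The features and weights are invented
# for illustration; they are not from the actual research project.

def weekly_risk(pain_level: float, daily_morphine_mg: float,
                weeks_on_opioids: int) -> float:
    """Combine a few signals that change over recovery into a 0-1 score."""
    score = (0.04 * pain_level            # pain on a 0-10 scale
             + 0.005 * daily_morphine_mg  # current opioid dose
             + 0.05 * weeks_on_opioids)   # cumulative exposure
    return min(score, 1.0)

# Early recovery: severe pain, strong dose -> elevated risk
print(round(weekly_risk(pain_level=8, daily_morphine_mg=60, weeks_on_opioids=1), 2))  # 0.67
# Later on: mild pain, tapered dose -> lower risk
print(round(weekly_risk(pain_level=2, daily_morphine_mg=10, weeks_on_opioids=6), 2))  # 0.43
```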

As such, a doctor whose patient the A.I. identifies as high risk could better tailor the prescription plan. Some at-risk patients may need a smaller dosage of a pain med, or may need to avoid a particular prescription drug altogether.

Potential Limits of This Technology

Of course, any medical A.I., like all A.I., will not suddenly be some infallible oracle that completely surpasses human reasoning in the accuracy and trustworthiness of its predictions.

Part of the reason is that there are some gaps in gathering pertinent information that neither a doctor nor the doctor's A.I. companion or assistant will be able to overcome.

One of them is that some patients may be disinclined to disclose certain risk factors, or may not even know about them.

Imagine, if you will, a teenage football star much like one of those wunderkinds in Varsity Blues. This adolescent runs the ball in for a big W during the district playoffs but gets tackled hard in the end zone. Besides the glory of victory, this star comes away with a fractured tibia.

So the family pediatrician and the A.I. okay him for some pretty strong meds, as both rate his addiction risk as low. But here is the issue: he is actually at huge risk, because both of his parents are high-functioning opiate addicts who have managed to keep their almost-all-consuming addiction a secret from their son. There are genetic and environmental risk factors unknown to the A.I. and doctor alike, and the parents, out of shame, would fain not reveal them to the pediatrician, either.
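Here is a small sketch of why that matters to a model: if a risk factor like family history never makes it into the record, most scoring schemes will quietly treat its absence as "no risk". The feature names and weights here are, again, hypothetical.

```python
# Hypothetical weights for a handful of risk factors. Missing features
# default to 0.0, which looks identical to "known to be absent".

WEIGHTS = {"pain_level": 0.05, "family_history": 0.45, "prior_substance_use": 0.30}

def risk_score(features: dict) -> float:
    """Weighted sum of known features, capped at 1.0."""
    return min(sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS), 1.0)

recorded = {"pain_level": 7}                       # what the chart actually shows
reality = {"pain_level": 7, "family_history": 1}   # the parents' hidden addiction

print(round(risk_score(recorded), 2))  # 0.35 -> the model calls him low risk
print(round(risk_score(reality), 2))   # 0.8  -> the risk the model never sees
```

The point is not the arithmetic but the blind spot: a clean-looking score can sit on top of a missing variable.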

Cases like this, where there are simply unknowns, are something human doctors should keep in mind when using a medical A.I., as its apparently strong predictive power may not be founded on all the relevant data. So, keep a healthy skepticism, and always account for the risks of the unknown.