Are you prepared for your doctor or nurse practitioner to be replaced by an algorithm? Well, H.R. 238, introduced by Rep. David Schweikert (R-Ariz.), could do just that. Here is the bill as it has been introduced:
To amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration, and for other purposes.
Reducing the human equation in medical care may seem like a good idea since all of us, at one time or another, have fought with our medical insurance companies, and everyone has more than enough reason to distrust Big Pharma. But if you are tired of dealing with faceless entities that see you only as a number, wait until you try to appeal a decision by AI. Wait until you have to sue AI for malpractice.
Just for kicks (and a little dose of irony), I asked ChatGPT if there would be any problems with an AI prescriber. Why not ask AI about AI? Well, ChatGPT did not think an AI doctor would be everything Schweikert seems to believe it is cracked up to be. Here is the response:
1. **Lack of Human Judgment**: AI may not consider the nuances of a patient's unique medical history, emotional state, or lifestyle factors that a human healthcare provider would.
2. **Data Bias**: If the AI is trained on biased datasets, it may not make accurate recommendations for all patient populations, leading to health disparities.
3. **Errors in Dosage or Interaction**: AI might miscalculate dosages or fail to recognize harmful drug interactions, potentially putting patients at risk.
4. **Over-reliance on Technology**: Patients and providers might become overly dependent on AI, neglecting the importance of clinical intuition and personalized care.
5. **Legal and Ethical Issues**: Questions around accountability arise if an AI's prescription leads to negative outcomes. This could complicate medical liability.
6. **Patient Trust**: Some patients may distrust AI, preferring the reassurance and empathy of a human provider, which could impact adherence to prescribed treatments.
7. **Cybersecurity Risks**: Systems used for AI prescriptions could be vulnerable to hacking, potentially exposing sensitive patient information or leading to incorrect prescriptions.
8. **Limited Contextual Understanding**: AI may not fully understand the context of a patient's symptoms or the subtlety of their condition, leading to inappropriate treatments.
9. **Inadequate Monitoring**: AI systems might lack the capability to monitor a patient's progress and adapt prescriptions accordingly, which is essential for effective treatment.
10. **Lack of Empathy**: AI cannot provide the emotional support and understanding that human providers offer, which can be vital for patient recovery and compliance.
11. **Resistance from Healthcare Professionals**: Some healthcare providers may resist using AI tools, fearing job loss or feeling that their expertise is undervalued.
12. **Quality of Data**: The efficacy of AI is heavily dependent on the quality of input data. Poor data can lead to incorrect recommendations.
13. **Inconsistent Regulations**: The lack of standardized regulations and guidelines governing AI in healthcare can lead to variability in practice and patient safety issues.
14. **Integration with Existing Systems**: AI systems may face challenges integrating with current healthcare technologies, creating potential disruptions in care.
15. **Patient Privacy Concerns**: The use of AI often involves collecting sensitive patient data, raising concerns about privacy and data security.
These issues highlight the importance of cautious and well-regulated implementation of AI in healthcare settings.
Even AI has reservations about AI practicing medicine.
AI runs off the data it has been given. Yes, it has the capacity to learn, but who teaches it? Whoever that may be will be the one making the choices about your healthcare. We already have too many decision-makers on both sides of the aisle whose leashes are held by corporations and Big Pharma. To whom or to what will these new people or algorithms be beholden? I was under the impression that healthcare needed to be less institutionalized and more personal, not the reverse.
Right now, many conservatives are giddy over the fact that we are entering a new "Golden Age." But there is no guarantee that things will stay that way. Let us say that sometime in the future, the opposition regains control of the government and America embraces euthanasia with the same vigor as Canada. We already know how the Left feels about abortion. What decisions will AI make about who does and does not receive life-saving care?
Most of all, any good practitioner will tell you there is more to a diagnosis and treatment plan than cold data. There is the human equation, a sense of empathy, and even an instinct that AI cannot replicate.
Finally, in the wake of COVID-19 and the videos of dancing nurses, people may be tempted to say, "The hell with doctors and nurses; replace them all." But keep this in mind: if AI can replace your doctor, replacing you will be child's play. AI is all fun and games until you get your pink slip.
At some point, all of this wondrous technology may only be available to people who can afford it. And that may be a precious few. Even if we all enter a glorious new age where AI meets all our needs and we do not need to do anything, we will stop being participants in our lives and become passengers, that is, until AI or whoever programs it decides we are excess baggage.