Imagine walking into a pharmacy, scanning your health records, and having AI prescribe your medication, without ever speaking to a doctor. Sounds futuristic, right? Well, a new bill in Congress, the Healthy Technology Act of 2025, could make this a reality. This bill would allow AI to qualify as a prescriber, as long as the system is approved by the FDA and authorized by the state where it's used. But is AI ready for such a big responsibility? Experts aren't so sure.
AI in Medicine: Where Are We Now?
Doctors already use AI to take notes, analyze patient data, and even suggest possible treatments. But no AI system today can safely prescribe medications without human oversight.
Researchers are developing AI models that:
- Predict how a patient will respond to certain drugs
- Create “digital twins” to test treatments on virtual patients
- Analyze medical records to flag potential health risks
While these tools sound promising, they still rely on human doctors to make the final call.
The Risks of AI Prescribing
Giving AI the power to prescribe comes with big challenges:
- Accuracy Issues – AI can make mistakes, misinterpret symptoms, or “hallucinate” (confidently make up false information).
- Lack of Human Judgment – Doctors weigh a patient's personal history, lifestyle, and emotional state when prescribing meds; AI doesn't.
- Bias in AI – If AI is trained on biased data, it could give worse recommendations for certain groups.
- Legal Questions – If AI prescribes the wrong drug, who's responsible? The patient who used it, the company that built it, or the government that approved it?
So, What’s Next?
For now, AI prescribing is just an idea, and this bill may never become law. But the discussion is heating up: How much should we trust AI in healthcare? Some experts believe AI could safely assist in low-risk cases—like routine prescriptions for common conditions. But full independence? We’re not there yet.