AI-driven decision tools are increasingly determining what post-acute care services patients receive, and what they don’t. As a health tech CEO working with hospitals, skilled nursing facilities (SNFs), and accountable care organizations (ACOs) nationwide, I’ve witnessed algorithms recommend against needed services in ways that raised red flags. In one striking case, an insurer’s software predicted that an 85-year-old patient would recover from a serious injury in precisely 16.6 days. On day 17, payment for her nursing home rehab was cut off, even though she was still in agony and unable to dress or walk on her own. A judge later blasted the decision as “speculative,” but by then she had drained her savings to pay for care she should have received. Sadly, this case is not an isolated incident. It underscores how algorithmic bias and rigid automation can creep into coverage determinations for home health aides, medical equipment, rehab stays, and respite care.
Researchers have found that some healthcare algorithms inadvertently replicate human biases. One widely used program for identifying high-risk patients was shown to systematically favor less-sick White patients over sicker Black patients, because it used health spending as a proxy for need. Fewer dollars are spent on Black patients with the same conditions, so the algorithm underrated their risk, effectively denying many Black patients access to extra care management until the bias was discovered. This kind of skew can easily translate into biased coverage approvals if algorithms rely on demographic or socioeconomic data.
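To see how a spending proxy skews results, consider a minimal sketch in Python. The numbers are hypothetical, chosen only to mirror the documented pattern of one group generating roughly 30% less spending at the same illness burden; this is not the actual study data or any vendor’s model.

```python
# Toy illustration of proxy bias (hypothetical numbers, not real study
# data): a cost-based cutoff decides who gets extra care management,
# but group B generates ~30% less spending at the same illness burden.

patients = [
    # (group, illness_burden, annual_spending_usd)
    ("A", 8, 8000), ("A", 5, 5000), ("A", 2, 2000),
    ("B", 8, 5600), ("B", 5, 3500), ("B", 2, 1400),
]

THRESHOLD = 4000  # predicted-spending cutoff for care management

for group, burden, spending in patients:
    flagged = spending >= THRESHOLD
    print(f"group={group} burden={burden} spending=${spending} flagged={flagged}")

# At burden 5, the group A patient is flagged ($5,000 >= $4,000) while
# the equally sick group B patient is not ($3,500 < $4,000):
# identical need, lower spending, lower "risk" score.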
I’ve observed AI-based coverage tools factor in non-clinical variables like a patient’s age, zip code, or “living situation,” which can be problematic. Including social determinants in algorithms is a double-edged sword: in theory it could improve care, but experts warn it often reproduces disparities. For example, using zip code or income data can reduce access to services for poorer patients if not handled carefully. In practice, I’ve seen patients from underserved neighborhoods get fewer home health hours approved, as if the software assumed those communities could make do with less. The bias may not be intentional, but when an algorithm’s design or data reflects systemic inequities, vulnerable groups pay the price.
Flawed assumptions in discharge planning
Another subtle form of bias comes from flawed assumptions baked into discharge planning tools. Some hospital case management systems now use AI predictions to recommend post-discharge care plans, but they don’t always get the human factor right.
A common issue with AI-driven decisions about discharge planning, respite care, and medical equipment is that algorithms make assumptions about family caregiving and other support. In theory, knowing a patient has family at home should help ensure support. In practice, these systems don’t know whether a relative is able or willing to provide care. We had a case where the discharge software tagged an elderly stroke patient as low risk because he lived with an adult son, implying someone would help at home. What the algorithm didn’t know was that the son worked two jobs and wasn’t home most days. The tool nearly sent the patient home with minimal home health support, which could have ended in disaster or an emergency hospital visit if our team hadn’t intervened. This isn’t just hypothetical: even federal care guidelines caution never to assume that a family member present in the hospital will be the caregiver at home. Yet AI overlooks that nuance.
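To make that failure mode concrete, here is a minimal sketch of how such an assumption gets encoded. This is hypothetical logic, not any vendor’s actual system: the rule only sees household composition, so it silently equates “not living alone” with “has a caregiver.”

```python
# Hypothetical discharge-planning rule (illustrative only): household
# composition is used as a stand-in for caregiver availability.

from dataclasses import dataclass

@dataclass
class Patient:
    lives_alone: bool
    caregiver_available: bool  # the real question, which the model never sees

def naive_home_health_hours(patient: Patient) -> int:
    # The rule equates "not alone" with "has a caregiver" and cuts support.
    return 10 if patient.lives_alone else 2

# Stroke patient who lives with an adult son who works two jobs:
stroke_patient = Patient(lives_alone=False, caregiver_available=False)
print(naive_home_health_hours(stroke_patient))  # 2 hours, despite no real caregiver
```

The bug is not in the code; it is in the premise. No amount of tuning fixes a feature that conflates cohabitation with caregiving.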
These tools lack the human context of family dynamics and cannot tell the difference between a willing, capable caregiver and one who is absent, elderly, or overwhelmed. A clinician can catch that distinction; a computer often cannot. The result is that some patients end up without the services they truly need.
Steps towards rectifying mistakes in algorithmic care
With advanced technology being adopted across the healthcare continuum at an accelerated rate, particularly in post-acute and critical care, mistakes like those described above are bound to happen. The difference is that their impact is felt most deeply by vulnerable and diverse patient populations that already face major challenges. Non-White patients are often at higher risk of hospital readmission, a risk further compounded by low income and lack of insurance.
If there’s a silver lining, it’s that the healthcare industry is starting to reckon with these issues. Shining a light on biased and opaque AI solutions has prompted calls for change, along with some concrete steps forward. Regulators, for one, have begun to step in. The Centers for Medicare & Medicaid Services recently proposed new rules limiting the use of black-box algorithms in Medicare Advantage coverage decisions. If adopted, the rules would require insurers, starting next year, to ensure that predictive tools account for each patient’s individual circumstances rather than blindly applying a generic formula. Qualified clinicians would also be required to review AI-recommended denials to ensure they square with medical reality. These proposed policy moves echo what front-line experts have been advocating: algorithms should assist, not override, sound clinical judgment. It’s a welcome step toward fixing the mistakes made thus far, though enforcement will be key.
We can and must do better to make sure our smart new tools actually see the individual, by making them as transparent, unbiased, and compassionate as the caregivers we would want for our own families. In the end, reimagining post-acute care with AI should be about improving outcomes and fairness, not saving money at the cost of vulnerable patients.

Dr. Afzal is a visionary in healthcare innovation who has dedicated more than a decade to advancing value-based care models. As the co-founder and CEO of Puzzle Healthcare, he leads a nationally recognized company that specializes in post-acute care coordination and reducing hospital readmissions. Under his leadership, Puzzle Healthcare has garnered praise from several of the nation’s top healthcare systems and ACOs for its exceptional patient outcomes, improved care delivery, and effective reduction in readmission rates.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
