Let’s imagine the possibilities and sketch a theoretical demo of the results, based on current knowledge:

  1. Yes, AI made the process fast and the patient did not die unnecessarily.

  2. Same, but the patient died well.

  3. Same, but the patient died.

  4. Same as either 1, 2, or 3, but AI made things slower.

Demo:

Pharmacy: The patient requires amoxicillin for a painful ear infection and is allergic to penicillin.

AI: Sure! You will find penicillin in Aisle 23, box number 5.

Pharmacy: The patient needs amoxicillin, actually.

AI: Sure! The patient must have an allergic reaction to more commonly used anti-inflammatory medications.

Pharmacy: Actually, amoxicillin is more of an antibiotic. Where can I find it?

AI: Sure! While you are correct that amoxicillin is an antibiotic, it is a well-studied result that inflammation is reduced after an infection. You can find the inflammation throughout the body, including the region where the infection is located.

Pharmacy: Amoxicillin. Location!

AI: Sure! Amoxicillin was developed at Beecham Research Laboratories.

  • Apytele@sh.itjust.works · 17 points · 4 months ago

    I’ve mostly found that smart alerts just overreact to everything and result in alarm fatigue, but one of the better features EPIC implemented was actually letting clinicians (like nurses and doctors) rate the alerts and comment on why the alert was or wasn’t helpful, so we can actually help train the algorithm, even for facility-specific policies.

    So, for instance, one thing I rated that actually turned out really well: we were getting suicide-watch alerts on pretty much all our patients and were told we needed to get a suicide-sitter order because their C-SSRS scores were high (a suicide-risk screening “quiz”). I work in inpatient psychiatry. Not only are half my patients suicidal, but a) I already know, and b) our environment is specifically designed to manage what would be moderate-to-high suicide risk on other units by making most of the implements restricted or completely unavailable. So I rated that alert poorly every time I saw it (which was every time I opened each patient’s chart for the first time that shift, then every 4 hours after; it was infuriating) and specified that that particular warning needed to not show for our specific unit. After the next update I never saw it again!

    So AI and other “smart” clinical tools can work, but they need frequent, high-quality input from the people actually using them (and the quality is important: most of my coworkers didn’t even know the feature existed, let alone that they would need to coherently comment a reason for their input to be actionable).
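
    Below is a minimal sketch of the kind of feedback-driven, unit-specific alert suppression described in this comment. Everything here is hypothetical: the class names, thresholds, and rules are illustrative only and are not EPIC’s actual API or logic.

    ```python
    # Hypothetical sketch: suppress a clinical alert on units where clinicians
    # consistently rate it unhelpful. Names and thresholds are illustrative only.
    from collections import defaultdict
    from dataclasses import dataclass, field


    @dataclass
    class AlertFeedback:
        alert_type: str   # e.g. "suicide_watch_cssrs"
        unit: str         # e.g. "inpatient_psychiatry"
        helpful: bool
        comment: str = ""


    @dataclass
    class AlertPolicy:
        """Suppresses an alert type on units where feedback is consistently negative."""
        min_ratings: int = 10             # require enough feedback before changing behavior
        unhelpful_threshold: float = 0.8  # fraction of "not helpful" ratings needed to suppress
        _ratings: dict = field(default_factory=lambda: defaultdict(list))

        def record(self, fb: AlertFeedback) -> None:
            self._ratings[(fb.alert_type, fb.unit)].append(fb)

        def should_fire(self, alert_type: str, unit: str) -> bool:
            ratings = self._ratings[(alert_type, unit)]
            if len(ratings) < self.min_ratings:
                return True  # not enough signal yet; keep alerting
            unhelpful = sum(not r.helpful for r in ratings) / len(ratings)
            return unhelpful < self.unhelpful_threshold


    # Usage: a psych unit rates the suicide-watch alert poorly every shift.
    policy = AlertPolicy()
    for _ in range(12):
        policy.record(AlertFeedback("suicide_watch_cssrs", "inpatient_psychiatry",
                                    helpful=False,
                                    comment="Unit is built to manage this risk level"))
    print(policy.should_fire("suicide_watch_cssrs", "inpatient_psychiatry"))  # False
    print(policy.should_fire("suicide_watch_cssrs", "med_surg"))              # True
    ```

    The design mirrors the point of the comment: a single poor rating does nothing, but consistent, unit-level feedback in sufficient volume changes what the system shows.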

    • zea@lemmy.blahaj.zone · 8 points · 4 months ago

      Listening to employees when making decisions, what a concept! It’s a shame many places don’t do that.