ETT blockages

🫁 Recently bronched a patient with a gunked-up ETT and wondered how often this happens?
369 ETTs were cut open and their degree of blockage measured (see pic)
📊 Moderate blockage common (CICU 28%, MICU 17%)
🔑 Risks: coagulopathy, longer ventilation, closed suction
⚠️ Ppeak ↑ on average, but not clinically useful: many blocked tubes had no elevation!
Full paper: https://lnkd.in/gKjXsmsj

What does ‘purulent sputum’ even mean?

Just how reliable are bedside sputum assessments anyway? 📄 Schuiteman et al in #journal_CHESTCritCare (American College of Chest Physicians), with Hayley Gershengorn: https://lnkd.in/gD4BQUJu

πŸ“ 10 ventilated pts, videos/photos shown to 383 ICU staff
πŸ“ Gold standard = gram stain PMNs

Results: ✅ Accuracy: 69% 🔍 Sensitivity: 58% 🔍 Specificity: 92%. No difference by role.
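
For readers who want the definitions behind those numbers, here's a minimal sketch (Python, with made-up counts, since the paper's actual 2x2 table isn't in this post) of how accuracy, sensitivity, and specificity fall out of a confusion matrix against the Gram-stain gold standard:

```python
# Hypothetical counts for illustration only -- not the study's data.
tp, fn = 58, 42   # Gram-stain purulent: called purulent / missed at the bedside
tn, fp = 92, 8    # Gram-stain non-purulent: correctly cleared / over-called

sensitivity = tp / (tp + fn)                # true purulence that gets caught
specificity = tn / (tn + fp)                # non-purulence correctly cleared
accuracy = (tp + tn) / (tp + fn + tn + fp)  # overall; depends on case mix

print(f"sens={sensitivity:.0%}  spec={specificity:.0%}  acc={accuracy:.0%}")
# -> sens=58%  spec=92%  acc=75% (the paper's 69% reflects its own prevalence of purulence)
```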

Agreement was poor: 🎨 Color α=0.40 💧 Viscosity α=0.21 📦 Volume α=0.17
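
The post doesn't name the agreement statistic, but with hundreds of raters scoring a handful of samples these α values are most plausibly Krippendorff's alpha. Assuming that, here's a rough sketch of how you'd compute it from a raters-by-samples matrix with the krippendorff Python package (toy ratings, not the study's data):

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Toy data: 4 raters x 10 sputum samples, nominal color categories
# (0 = clear, 1 = yellow, 2 = green); np.nan = rater skipped that sample.
ratings = np.array([
    [1, 2, 2, 0, 1, 2, 0, 1, np.nan, 2],
    [1, 2, 1, 0, 1, 2, 0, 0, 1,      2],
    [2, 2, 2, 0, 0, 2, 1, 1, 1,      np.nan],
    [1, 1, 2, 0, 1, 2, 0, 1, 1,      2],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")  # 1 = perfect agreement, 0 = chance
```

For context, Krippendorff's own rule of thumb treats α below ~0.67 as too unreliable even for tentative conclusions, so 0.17–0.40 is poor agreement indeed.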

Take-home: Bedside purulence checks = low accuracy + low consistency → risk of VAP overdiagnosis & unnecessary antibiotics!

More proning variation

Does it ever feel like some attendings prone everyone and some attendings prone no one? You're not imagining things! From #CLIFconsortium rockstar Anna Barker and #UMichMed:

1) 514 ICU pts eligible for proning (P/F ≤150, FiO₂ ≥60%, PEEP ≥5): only 17% were actually proned. (Why are we still so bad at this?)

2) 48 attendings analyzed → huge variation: 📊 Adjusted proning rates: 14.9%–74.2%. 📈 Median OR for being proned under one attending vs another = 2.6, a bigger effect than a 30 mmHg drop in P/F ratio (see the quick back-of-envelope after this list).

3) Variation persisted even with ARDS documented. Predictors of proning: COVID status, code status, lower P/F ratio.
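
Quick back-of-envelope on that median odds ratio, as promised: the MOR converts the between-attending variance from a multilevel (random-intercept) logistic model into an odds-ratio scale, i.e. the median OR you'd get repeatedly comparing two randomly drawn attendings for the same patient. Assuming the standard definition, an MOR of 2.6 implies a between-attending variance of about 1.0 on the logit scale:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def median_or(sigma2: float) -> float:
    """Standard MOR formula: exp( sqrt(2*sigma^2) * z_0.75 )."""
    return exp(sqrt(2 * sigma2) * norm.ppf(0.75))

# Invert it: what between-attending variance gives the reported MOR of 2.6?
mor = 2.6
sigma2 = (log(mor) / norm.ppf(0.75)) ** 2 / 2
print(f"implied between-attending variance ~ {sigma2:.2f}")  # ~1.00
print(f"check: MOR = {median_or(sigma2):.1f}")               # 2.6
```

In plain terms: put the same hypoxemic patient in front of two randomly chosen attendings and the odds of being proned are, at the median, 2.6-fold higher under one than the other.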

Take-home: Who your attending is may matter more than your oxygen level. #journal_CHESTCritCare https://www.chestcc.org/article/S2949-7884(25)00031-0/fulltext

OpenAI+Penda study

http://cdn.openai.com/pdf/a794887b-5a77-4207-bb62-e52c900463f1/penda_paper.pdf

🚨 New preprint from OpenAI + Penda Health, a large network of primary care clinics in Nairobi, Kenya.

Unlike most LLM research, which lives in theory or simulation (for example, testing on NEJM Challenge Cases or benchmark question sets), this was tested live across nearly 40,000 real patient visits.

They deployed AI Consult, an LLM that reviews clinician notes and flags potential issues (see example in screenshot). It uses traffic-light colors to indicate level of concern.
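
The post doesn't include implementation details, but the pattern described (an LLM reads the visit note and returns a traffic-light concern level) is easy to picture. A minimal, hypothetical sketch using the OpenAI Python SDK; the model name, prompt, and plain-text output format are my assumptions, not Penda's actual AI Consult code:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = (
    "You are a clinical safety reviewer. Read the visit note and reply with one line: "
    "GREEN (no concerns), YELLOW (minor gaps), or RED (potentially unsafe care), "
    "followed by a one-sentence reason."
)

def review_note(note_text: str) -> str:
    """Ask the model for a traffic-light concern level on a clinician note.
    Illustrative only: AI Consult's real prompts, model, and output schema
    are not described in this post."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": note_text},
        ],
    )
    return resp.choices[0].message.content.strip()

# e.g. review_note("5 y/o, fever and cough x3 days, no exam documented, given amoxicillin.")
```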

Half of the clinicians were randomly given access, half were not.

Key Results:

  • 🩺 32% reduction in history-taking errors
  • 🔍 16% reduction in diagnostic errors
  • 💊 14% reduction in treatment errors
  • ✅ 100% of clinicians who responded to the survey said it was helpful (though only 67% completed it)
  • ⚠️ No safety harms identified

The “left in red” rate (visits where the final AI Consult call was still red) dropped in the AI group to 20% from 35–40% at the start (similar to the non-AI group's starting rate), while the non-AI group's rate stayed ~40%. That suggests clinicians were actually acting on the most severe alerts.

This is one of the clearest real-world wins for LLMs in healthcare to date. Yes, OpenAI funded and helped analyze the study (so read with a grain of salt), but the results are promising and cool!