Recently bronched a patient with a gunked-up ETT and wondered how often this happens?
369 ETTs were cut open and their degree of blockage measured (see pic)
Moderate blockage was common (CICU 28%, MICU 17%)
Risk factors: coagulopathy, longer ventilation, closed suction
Ppeak was elevated, but this isn't clinically useful: many blocked tubes did not show this elevation!
Full paper: https://lnkd.in/gKjXsmsj
What does ‘purulent sputum’ even mean?
Just how reliable are bedside sputum assessments anyway? Schuiteman et al in #journal_CHESTCritCare (American College of Chest Physicians), with Hayley Gershengorn: https://lnkd.in/gD4BQUJu
10 ventilated pts; videos/photos shown to 383 ICU staff
Gold standard = Gram stain PMNs

Results:
Accuracy 69%, sensitivity 58%, specificity 92%. No difference by role.
Agreement was poor: color α=0.40, viscosity α=0.21, volume α=0.17
Take-home: Bedside purulence checks = low accuracy + low consistency → risk of VAP overdiagnosis & unnecessary antibiotics!
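For the metrics-curious, here's a minimal Python sketch of how those headline test characteristics fall out of a 2x2 confusion matrix against the Gram-stain gold standard. The counts are made up for illustration and are not the study's raw data.

```python
# Minimal sketch: accuracy, sensitivity, and specificity from a 2x2 table.
# Counts are hypothetical -- NOT the study's data.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 test characteristics against a gold standard."""
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,  # all correct calls / all calls
        "sensitivity": tp / (tp + fn),     # truly purulent samples correctly flagged
        "specificity": tn / (tn + fp),     # non-purulent samples correctly cleared
    }

# Hypothetical example: 100 bedside assessments scored against Gram-stain PMNs.
print(diagnostic_metrics(tp=35, fp=5, fn=25, tn=35))
```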
More proning variation
Does it ever feel like some attendings prone everyone and some attendings prone no one? You're not imagining things! From #CLIFconsortium rockstar Anna Barker and #UMichMed:
1) 514 ICU pts eligible for proning (P/F ≤150, FiO₂ ≥60%, PEEP ≥5): only 17% were actually proned. (why are we still so bad at this?)

2) 48 attendings analyzed → huge variation. Adjusted proning rates: 14.9%–74.2%. Median OR for being proned by one attending vs. another = 2.6, a greater effect than a 30 mmHg drop in P/F ratio (a sketch of what that median OR implies is at the end of this post).
3) Variation persisted even with ARDS documented. Predictors of proning: COVID status, code status, lower P/F ratio.
Take-home: Who your attending is may matter more than your oxygen level. #journal_CHESTCritCare https://www.chestcc.org/article/S2949-7884(25)00031-0/fulltext
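For the stats-curious, a minimal Python sketch of what a median OR of 2.6 implies, assuming the usual multilevel-logistic definition of the median odds ratio (Larsen et al.). The between-attending variance shown is back-calculated for illustration, not a number reported in the paper.

```python
# Median odds ratio (MOR) under the standard multilevel-logistic definition:
#   MOR = exp( sqrt(2 * var_between) * z_0.75 )
# where var_between is the attending-level random-intercept variance and
# z_0.75 ~= 0.6745 is the 75th percentile of the standard normal.

from math import exp, log, sqrt

Z_75 = 0.674489750196082  # Phi^{-1}(0.75)

def median_or(var_between: float) -> float:
    """MOR from the between-cluster (attending) variance."""
    return exp(sqrt(2.0 * var_between) * Z_75)

def var_from_mor(mor: float) -> float:
    """Invert the formula: what variance would give this MOR?"""
    return (log(mor) / Z_75) ** 2 / 2.0

v = var_from_mor(2.6)
print(f"implied between-attending variance ~ {v:.2f}")  # roughly 1.0
print(f"round-trip MOR = {median_or(v):.2f}")            # 2.6
```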
OpenAI+Penda study
http://cdn.openai.com/pdf/a794887b-5a77-4207-bb62-e52c900463f1/penda_paper.pdf
New preprint from OpenAI + Penda Health, a large network of primary care clinics in Nairobi, Kenya.
Unlike most LLM research that lives in theory or simulation (for example, testing on NEJM Challenge Cases or benchmarking questions), this was tested live across nearly 40,000 real patient visits.
They deployed AI Consult, an LLM that reviews clinician notes and flags potential issues (see example in screenshot). It uses traffic-light colors to indicate level of concern.
Half of the clinicians were randomly given access, half were not.
Key Results:
- 32% reduction in history-taking errors
- 16% reduction in diagnostic errors
- 14% reduction in treatment errors
- 100% of clinicians who responded to the survey (67% response rate) said it was helpful
- No safety harms identified
The "left in red" rate (visits where the final AI Consult call was red) fell in the AI group from 35-40% (similar to the non-AI group at the start) to 20%, while the non-AI group's rate stayed around 40%. This indicates clinicians were acting on the most severe alerts.
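As a toy illustration of that metric (my own sketch, not Penda's or OpenAI's code), here's how a "left in red" rate could be tallied from per-visit final traffic-light calls; the flag values and visit counts are hypothetical.

```python
# Hypothetical sketch: tally the share of visits whose final AI Consult
# call was still "red" (the most severe traffic-light level).

from collections import Counter

FLAGS = ("green", "yellow", "red")  # traffic-light severity levels

def left_in_red_rate(final_flags: list[str]) -> float:
    """Share of visits whose final call was still 'red'."""
    counts = Counter(final_flags)
    return counts["red"] / len(final_flags)

# Hypothetical month of visits for one clinic:
visits = ["green"] * 70 + ["yellow"] * 10 + ["red"] * 20
print(f"left-in-red rate: {left_in_red_rate(visits):.0%}")  # 20%
```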
This is one of the clearest real-world wins for LLMs in healthcare to date. Yes, OpenAI funded and helped analyze the study, so read with a grain of salt, but the results are promising and cool!