Late diagnoses, routine cultures, and more

Research summaries from ACP Hospitalist Weekly.


Heart failure may be missed despite frequent health care contacts before hospitalization

Patients hospitalized for acute heart failure often have increased contact with the health care system without a heart failure diagnosis in the period before their index admission, according to a recent study.

Researchers studied three matched population cohorts in Ontario, Canada, from 2006 to 2013 that were divided into patients with incident hospitalization for acute heart failure, incident hospitalization for chronic obstructive pulmonary disease (COPD), or stable heart failure. The goal of the study was to determine the pattern of health care contacts in patients hospitalized with acute heart failure versus patients in the other two groups. The primary outcome was the aggregate number of health care contacts, defined as total number of outpatient physician visits, hospitalizations for unrelated conditions, or ED visits, in each of the thirteen 28-day periods in the year before the index hospitalization. The results were published Nov. 11, 2020, by JACC: Heart Failure and appeared in the Dec. 1, 2020, issue.

Overall, 79,389 patients were included in the study, 26,463 in each population cohort. Each cohort included 51.1% women and had a mean age of approximately 75 years. Health care contacts increased significantly in patients with new hospitalization for acute heart failure as the index hospitalization approached, with a 28% increase in the last time period before hospitalization (adjusted rate ratio, 1.28; 95% CI, 1.25 to 1.31; P<0.001) versus matched COPD controls and a 75% increase (adjusted rate ratio, 1.75; 95% CI, 1.71 to 1.79; P<0.001) versus matched controls with stable heart failure. The rate of increase in health care contacts was greater among heart failure patients ages 20 to 40 years than among those 65 years of age or older (adjusted rate ratio, 1.18; 95% CI, 1.08 to 1.28; P<0.001).

The authors noted that their findings could have been affected by misclassification bias and that they could not account for clustering or for all potential confounders. They concluded that initial hospitalizations for heart failure are preceded by an increased rate of health care contacts that do not always result in a heart failure diagnosis. “Our study confirms that challenges exist in the timely recognition of HF [heart failure] before acute decompensation occurs, which is an important but poorly understood part of the HF patient trajectory,” they wrote. They noted that late recognition of heart failure will lead to more severe disease and higher costs and said that higher suspicion and timely access to tests and imaging could decrease heart failure deterioration and decompensation.

An accompanying editorial said the study is an early step in developing systems based on real-world evidence that can be used by health systems to identify at-risk patients. “The health system is well equipped to react to acute symptoms but less well positioned to be proactive in identifying symptoms across time that may ultimately lead to a health event that requires hospitalization and may even be life-threatening. But, in a digital data world, where clinical, patient-generated, and patient-reported data may flow into analytic engines, there is the prospect of producing platforms that can dynamically assess risk and trigger interventions as necessary to mitigate the threat,” the editorialists wrote. They called for efforts to engage clinicians in primary care, internal medicine, and emergency medicine, who are likely to be the first to evaluate patients, and to improve implementation of monitoring devices, wearables, and other digital tools.

Routine blood cultures at ICU admission may help increase detection of bloodstream infections

Routine blood culture collection for nonelective ICU admissions resulted in increased detection of bloodstream infections in a recent study.

Researchers in the Netherlands performed a before-after analysis of patients admitted to a tertiary care hospital's ICU between January 2015 and December 2018. On Jan. 1, 2017, automatic orders were implemented to collect a single set of blood cultures from each patient immediately upon ICU admission. The researchers compared blood culture results and rates of contamination between 2015 to 2016 (the before period) and 2017 to 2018 (the after period). Any positive blood cultures were classified as bloodstream infection or contamination. The results were published Nov. 9, 2020, by Critical Care Medicine and appeared in the January 2021 issue.

Overall, blood cultures were obtained in 573 of 1,775 patients (32.3%) in the before period and in 1,582 of 1,871 patients (84.5%) in the after period (P<0.0001). Mean age was 61 years, and most patients (61.4% in the before group and 63.2% in the after group) were men. Bloodstream infection diagnoses increased, from 95 patients (5.4%) in the before group to 154 patients (8.2%) in the after group (relative risk, 1.5; 95% CI, 1.2 to 2.0; P=0.0006). Based on the average number of blood cultures obtained per patient, an estimated 1,009 additional cultures were obtained in the after period. This yielded 59 bloodstream infections, corresponding to a number needed to culture of 17 to detect one additional infected patient. Forty patients in the before group (2.3%) and 180 patients in the after group (9.6%) had blood culture contamination (relative risk, 4.3; 95% CI, 3.0 to 6.0; P<0.0001). No difference was seen in rate of vancomycin use or presumed episodes of catheter-related bloodstream infections treated with antibiotics between the study periods.
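That figure follows directly from the reported counts: the roughly 1,009 additional cultures yielded 59 additional bloodstream infections, and 1,009 divided by 59 is about 17, so approximately 17 extra admission cultures were drawn for each additional infected patient identified.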

Among other limitations, the authors noted that they did not collect information on vancomycin use or catheter-related bloodstream infections after patients' ICU stays and that their increased detection of bloodstream infection likely reflected low baseline adherence to sepsis guidelines, meaning their findings may not be generalizable to settings with better adherence. They concluded that in a setting where blood culture collection for clinically suspected infection was suboptimal, universal cultures for ICU admissions increased detection of bloodstream infections as well as the proportion of patients with contaminated blood cultures, without increasing vancomycin use in the ICU. “Our findings illustrate the potential effect of obtaining blood cultures in every critically ill patient, even when a clinical suspicion of infection is not (yet) obvious,” they wrote.

Inpatient exercise improved functional and cognitive outcomes in elderly type 2 diabetes patients

An inpatient exercise intervention combated cognitive and functional decline among hospitalized elderly adults with type 2 diabetes.

The study was an ancillary analysis of a trial that randomized acutely hospitalized older adults (all ≥75 years of age, mean age ~87 years) to an exercise intervention or usual care. It included 103 patients with type 2 diabetes, 54 who exercised during their stay and 49 controls. The intervention consisted of 20-minute sessions twice a day, five to seven days per week. During morning sessions, patients did progressive resistance, balance, and walking exercises under supervision. In the evenings, they performed unsupervised functional exercises and walked. Results were published by the Journal of Clinical Endocrinology and Metabolism on Nov. 5, 2020, and appeared in the February 2021 issue.


The study's primary endpoint was change in functional status from baseline to hospital discharge as assessed with the Barthel Index and the Short Physical Performance Battery (SPPB). The intervention group saw improvements in these measures from baseline, which were significantly different from the declines seen among controls (between-group differences: SPPB, 2.7 points [95% CI, 1.8 to 3.5 points]; Barthel Index, 8.5 points [95% CI, 3.9 to 13.1 points]; P<0.001 for both comparisons). The intervention also showed benefits on the secondary endpoints of cognitive status, depression, and handgrip strength. Length of stay was similar between groups (median, 8 days), as were mortality and readmissions in the three months after discharge.

The results “confirm that inhospital interventions can be effective even in a population at particular risk of frailty and hospitalization-related negative outcomes such as the diabetic oldest old,” the authors said. They noted that the positive effects seen in this subgroup of patients were actually greater than those observed in the overall trial. It may have been important to the success of the intervention that it included both mobility and strengthening exercises, the authors said.

Limitations of the study include its small sample size and single-center setting. However, strengths include that it may be the first study to assess the effects of exercise during hospitalization on older patients with diabetes, and exercise trials rarely include patients with so many comorbidities (patients had a mean of nine in this study). The authors called for additional research into whether an intervention like this could have any effect on longer-term outcomes.

Study finds race, gender disparities in detection of substance use disorders in inpatients

Patient characteristics, such as race and gender, may impact clinicians' detection of substance use disorders during hospitalization, a recent study found.

To assess clinicians' detection of substance use disorders among medical inpatients, researchers used data from a cluster randomized controlled trial that tested the effectiveness of three strategies for screening patients for substance use disorders and delivering a brief intervention. Data sources included patient questionnaires, a diagnostic interview for substance use disorders that uses DSM-5 criteria, and medical records. Clinician detection was determined by diagnoses documented in medical records. Results were published online on Oct. 27, 2020, by the Journal of General Internal Medicine.

Overall, the study included 1,076 patients (mean age, 46.0 years; 54.5% men; 55.2% White; 31.3% Black) receiving care on 13 general medical units at a large teaching hospital. Most patients (73.8%) had a nicotine use disorder, 50.6% had an alcohol use disorder, and between 12.2% and 15.1% had cocaine, opioid, or cannabis use disorders. The detection rate was highest for nicotine use disorder (72.2%) and lowest for cannabis use disorder (26.4%). Across substances, rates of specificity for detection were high, with the lowest specificity for tobacco (80%) and highest for cannabis (94%). In accuracy analyses, tobacco and alcohol were detected with the lowest accuracy (74%) and cocaine with the highest (89%).

Detection of alcohol use disorder was more likely among male than female patients (odds ratio [OR], 4.0; 95% CI, 1.9 to 4.8). Compared with White patients, alcohol use disorder (OR, 0.4 [95% CI, 0.2 to 0.6]) and opioid use disorder (OR, 0.2; 95% CI, 0.1 to 0.7) were less likely to be detected among Black patients, while alcohol (OR, 0.3; 95% CI, 0.0 to 2.0) and cocaine (OR, 0.3; 95% CI, 0.1 to 0.9) use disorders were less likely to be detected among Hispanic patients. Clinicians were more likely to detect nicotine, alcohol, opioid, and other drug use disorders among inpatients with higher addiction severity (ORs, 1.20 [95% CI, 1.08 to 1.34]; 1.62 [95% CI, 1.48 to 1.78]; 1.46 [95% CI, 1.07 to 1.98]; and 1.38 [95% CI, 1.00 to 1.90], respectively) compared to those with lower addiction severity.

Limitations of the study include limited generalizability of the findings, as patients and clinicians were from a single urban academically affiliated hospital in the northeastern United States, the authors noted. They added that study procedures did not include searching progress notes for documentation of substance use.

“For the large proportion of people without a primary care physician, inpatient medical hospitalizations are one of few interactions patients have with the healthcare system and potentially the only opportunity to receive SUD [substance use disorder] screening,” they concluded. “Future research should examine whether implementing universal screening procedures improves provider detection of inpatients with SUD and reduces race and gender disparities in SUD detection rates.”

Rapid intermittent bolus therapy may be better for symptomatic severe hyponatremia

Patients with symptomatic severe hyponatremia may be less likely to need therapeutic relowering treatment and more likely to achieve target glucose-corrected serum sodium levels in one hour if they receive hypertonic saline via rapid intermittent bolus (RIB) versus slow continuous infusion (SCI), a study found.

In the SALSA (Efficacy and Safety of Rapid Intermittent Correction Compared With Slow Continuous Correction With Hypertonic Saline in Patients With Moderately Severe or Severe Symptomatic Severe Hyponatremia) trial, the risk of overcorrection with RIB versus SCI was compared in patients receiving hypertonic saline for symptomatic hyponatremia. Patients from three general hospitals in the Republic of Korea were included if they were older than age 18 years and had moderately severe to severe hyponatremia and glucose-corrected serum sodium levels of 125 mmol/L or less. The primary outcome measure was overcorrection at any given period, defined as an increase of 12 or 18 mmol/L in serum sodium level within 24 or 48 hours, respectively. The efficacy and safety of the treatment approaches were among the secondary and post hoc outcomes. Results were published Oct. 26, 2020, by JAMA Internal Medicine.


Mean age of the study patients was 73.1 years, 44.9% were men, and the mean serum sodium level was 118.2 mmol/L. Eighty-seven patients were assigned to the RIB group, and 91 were assigned to the SCI group; these 178 patients were included in the intention-to-treat analysis. Seventy-two and 71 patients, respectively, completed the study, with most withdrawals due to clinician error or protocol violation. Patients received RIB or SCI of hypertonic saline, 3%, for 24 to 48 hours according to symptom severity, and serum sodium concentrations were measured every six hours for two days. Hypertonic saline was initiated in the ED in 73.6% of patients and on the general ward in 25.8%.

Overcorrection occurred in 15 of 87 patients in the RIB group and 22 of 91 patients in the SCI group (17.2% vs. 24.2%; absolute risk difference, −6.9% [95% CI, −18.8% to 4.9%]; P=0.26). Relowering treatment was less common in the RIB group than in the SCI group (41.4% vs. 57.1%, respectively; absolute risk difference, −15.8% [95% CI, −30.3% to −1.3%]; P=0.04; number needed to treat, 6.3). No difference was seen between groups in efficacy or in symptom improvement. However, RIB appeared to have better efficacy for achieving a target correction rate within one hour. In the intention-to-treat analysis, 32.2% of RIB patients achieved target correction versus 17.6% of SCI patients (absolute risk difference, 14.6% [95% CI, 2% to 27.2%]; P=0.02; number needed to treat, 6.8), while in a per-protocol analysis, these percentages were 29.2% versus 16.4%, respectively (absolute risk difference, 12.7% [95% CI, −0.8% to 26.2%]; P=0.07).
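Both numbers needed to treat follow from the absolute risk differences: 1 divided by 0.158 is approximately 6.3 for relowering treatment, and 1 divided by 0.146 is approximately 6.8 for target correction within one hour, meaning roughly six to seven patients would need to receive RIB rather than SCI for one additional patient to benefit on each measure.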

The researchers noted that the patient withdrawal rate was higher than expected and that they did not adjust for secondary and post hoc outcomes, among other limitations. They concluded that while hypertonic saline therapy with either RIB or SCI is safe and effective for treating hyponatremia and the risk for overcorrection did not differ between the two, RIB led to less therapeutic relowering treatment and appeared to have better efficacy in achieving target serum sodium levels within an hour. “RIB therapy could be suggested as the preferred treatment of symptomatic hyponatremia, which is consistent with the current consensus guidelines,” the authors wrote.

Use of noninvasive ventilation at end of life rapidly increased in recent years

In the past two decades, the use of noninvasive ventilation in older patients hospitalized in the last 30 days of life rapidly increased, a recent study found.

The population-based cohort study assessed trends in the provision of ventilatory support using a 20% random sample of Medicare fee-for-service beneficiaries who had an acute care hospitalization in the last 30 days of life from 2000 through 2017. Researchers used validated ICD-9 and ICD-10 procedure codes to identify use of noninvasive ventilation, invasive mechanical ventilation, both, or neither in Medicare beneficiaries with chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), cancer, or dementia. Measures of end-of-life care included in-hospital death in an acute care setting, hospice enrollment at death, and hospice enrollment in the last three days of life. The researchers adjusted analyses for sociodemographic characteristics, admitting diagnosis, and comorbidities using Medicare claims data. Results were published online on Oct. 19, 2020, by JAMA Internal Medicine.

A total of 2,470,435 Medicare beneficiaries (55% women; mean age, 82.2 years) were hospitalized within 30 days of death. Compared with 2000, use of noninvasive ventilation was higher in 2005 (0.8% vs. 2.0%; adjusted odds ratio [AOR], 2.63 [95% CI, 2.46 to 2.82]) and in 2017 (0.8% vs. 7.1%; AOR, 11.84 [95% CI, 11.11 to 12.61]). Meanwhile, the change in invasive mechanical ventilation use was much smaller: 15.0% in 2000, 15.2% in 2005, and 18.2% in 2017 (AORs, 1.04 [95% CI, 1.02 to 1.06] and 1.63 [95% CI, 1.59 to 1.66], respectively).

In subanalyses comparing 2000 with 2017, similar trends found increased noninvasive ventilation use among patients with CHF (1.4% vs. 14.2%; AOR, 14.14 [95% CI, 11.77 to 16.98]) and COPD (2.7% vs. 14.5%; AOR, 8.22 [95% CI, 6.42 to 10.52]), with reciprocal stabilization in invasive mechanical ventilation use among patients with CHF (11.1% vs. 7.8%; AOR, 1.07 [95% CI, 0.95 to 1.19]) and COPD (17.4% vs. 13.2%; AOR, 1.03 [95% CI, 0.88 to 1.21]). Noninvasive ventilation use also went up in those with cancer (0.4% vs. 3.5%; AOR, 10.82 [95% CI, 8.16 to 14.34]) and dementia (0.6% vs. 5.2%; AOR, 9.62 [95% CI, 7.61 to 12.15]), while invasive mechanical ventilation use changed less dramatically (6.2% to 7.6% and 5.7% to 6.2%, respectively).

Patients on noninvasive ventilation differed significantly from those on invasive mechanical ventilation in the study's end-of-life care measures: in-hospital death (50.3% [95% CI, 49.3% to 51.3%] vs. 76.7% [95% CI, 75.9% to 77.5%]), hospice enrollment in the last three days of life (57.7% [95% CI, 56.2% to 59.3%] vs. 63.0% [95% CI, 60.9% to 65.1%]), and hospice enrollment at death (41.3% [95% CI, 40.4% to 42.3%] vs. 20.0% [95% CI, 19.2% to 20.7%]).

Among other limitations, the study used Medicare claims data and not clinical data (e.g., disease severity, patient preferences for end-of-life care), the authors noted. They added that Medicare claims files were included only for fee-for-service beneficiaries, and therefore results may not be generalizable to other populations or to patients in Medicare managed care plans.

While the use of noninvasive ventilation in older adults with terminal respiratory failure may seem to be a good option, there is a dearth of high-quality evidence supporting its use across serious illnesses, an accompanying commentary noted. “Broad [noninvasive ventilation] use at the end of life, even as a means of palliative ventilatory support, could have unintended consequences in certain subgroups, such as older adults with advanced cancer and dementia,” the editorialist wrote. “Future research on terminal respiratory failure in these vulnerable populations and their bereaved caregivers should explore in more detail the feasibility, acceptability, and palliative benefits of [noninvasive ventilation], especially in comparison to high-flow nasal cannula oxygenation.”

Algorithms predicted antibiotic resistance profiles of inpatients' bacterial infections

Machine learning algorithms used inpatients' electronic health records to accurately predict the antibiotic resistance profiles of bacterial infections and may aid in the decision to start empiric antibiotics, a recent study found.

Researchers looked at a dataset containing more than 16,000 antibiotic resistance tests in patients who had positive bacterial culture results at one hospital in Israel from May 2013 to December 2015. They applied three machine learning models, as well as an ensemble combining their results, to predict antibiotic resistance to five antibiotics commonly tested for resistance: ceftazidime (n=2,942), gentamicin (n=4,360), imipenem (n=2,235), ofloxacin (n=3,117), and sulfamethoxazole-trimethoprim (n=3,544). They trained the models on early samples and evaluated them on later distinct samples (the test set). They also compared the different variables most influencing antibiotic resistance prediction. Results were published online on Oct. 18, 2020, by Clinical Infectious Diseases.

The ensemble model outperformed the separate models and produced accurate predictions on test-set data. When no knowledge of the infecting bacterial species was assumed, the ensemble model yielded areas under the receiver-operating characteristic curve of 0.73 to 0.79 for the different antibiotics; when information on bacterial species was included, these values increased to 0.80 to 0.88. In analyses of which variables most influenced the ensemble model's predictions, the two variables with the highest average effect across all five antibiotics both involved the proportion of past antibiotic-resistant infections: previous resistance of the same bacterial species to the same antibiotic and to any antibiotic when species information was included, and previous resistance of any bacterial species to the same antibiotic and to any antibiotic when species information was excluded.
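As a rough illustration of the general approach described above (a sketch only, not the authors' code or models), the following Python snippet averages the predicted resistance probabilities of three common classifiers trained on earlier samples and evaluated on later ones; the synthetic data, feature count, and model choices are all placeholder assumptions, and scikit-learn is assumed to be available.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the EHR-derived features and resistance labels
# described in the study (hypothetical data, for illustration only).
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

# Train on earlier samples, evaluate on later distinct samples
# (approximated here by a chronological, non-shuffled split).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

# Simple ensemble: average each model's predicted probability of resistance.
test_probs = []
for model in models:
    model.fit(X_train, y_train)
    test_probs.append(model.predict_proba(X_test)[:, 1])
ensemble_prob = np.mean(test_probs, axis=0)

print("Ensemble AUROC:", round(roc_auc_score(y_test, ensemble_prob), 3))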

Among other limitations, the study design did not enable a direct comparison of the results to doctors' predictions of resistance, the authors noted. They added that some potentially important predictors of resistance (e.g., antibiotic use outside the hospital, place of residence, microbiome composition, diet, exercise) were unavailable.

The authors concluded that their machine learning approach can serve as a template for other hospitals. “The methods applied here should be generalizable to other healthcare facilities. . . . The implementation of such systems should be seriously considered by clinicians to aid correct empiric therapy and to potentially reduce antibiotic misuse,” they wrote.

ICU bundles do not appear to improve delirium but may help other outcomes

Bundles of recommended care reduced length of stay and mortality for ICU patients but did not affect their rates of delirium, according to a systematic review.

Researchers looked at studies published from January 2000 to July 2020 to evaluate the impact of bundle interventions on delirium prevalence, duration, and other adverse outcomes in the ICU. Studies were included if they were randomized clinical trials or cohort studies that examined prevalence and duration of ICU delirium, proportion of patient-days with coma, ventilator-free days, mechanical ventilation days, ICU or hospital length of stay, and ICU, in-hospital, or 28-day mortality in adults and incorporated at least three components of the ABCDEF bundle (Assess, prevent, and manage pain; Both spontaneous awakening trials for sedated patients and spontaneous breathing trials for patients on mechanical ventilation; Choice of analgesics and sedatives; Delirium monitoring or management; Early exercise/mobility; and Family engagement and empowerment). The results were published Dec. 16, 2020, by Critical Care Medicine and appeared in the February 2021 issue.

Overall, 11 studies with 26,384 adult patients were included in the meta-analysis. Five studies (three randomized clinical trials and two cohort studies) that included 18,638 patients found no reduction in prevalence of ICU delirium when bundles of care were used (risk ratio, 0.92; 95% CI, 0.68 to 1.24). Bundle interventions were also not associated with shorter duration of ICU delirium (mean difference, –1.42 days [95% CI, –3.06 to 0.22 days]; two randomized clinical trials and one cohort study), more ventilator-free days (mean difference, 1.56 days [95% CI, –1.56 to 4.68 days]; three randomized clinical trials), fewer days of mechanical ventilation (mean difference, –0.83 day [95% CI, –1.80 to 0.14 days]; four randomized clinical trials and two cohort studies), shorter ICU length of stay (mean difference, –1.08 day [95% CI, –2.16 to 0.00 days]; seven randomized clinical trials and two cohort studies), or lower in-hospital mortality (risk ratio, 0.86 [95% CI, 0.70 to 1.06]; five randomized clinical trials and four cohort studies). Bundle interventions did appear to be effective in reducing the proportion of patient-days with coma (risk ratio, 0.47 [95% CI, 0.39 to 0.57]; two cohort studies), hospital length of stay (mean difference, –1.47 days [95% CI, –2.80 to –0.15 days]; four randomized clinical trials and one cohort study), and 28-day mortality (risk ratio, 0.82 [95% CI, 0.69 to 0.99]; three randomized clinical trials).

The authors noted heterogeneity among the included studies and that only a few studies examined ICU mortality, among other limitations. They concluded that while bundle interventions did not appear to affect delirium in the ICU, there was clear evidence supporting their benefit in improving other outcomes, such as length of stay and 28-day mortality rates, in ICU patients. “The modifiable risk factors for ICU delirium were not fully addressed by interventions in the majority of the included studies, which may limit the effectiveness of bundle interventions to be shown on ICU delirium prevalence and duration,” the authors wrote. “Future studies, especially well and rigorously designed [randomized controlled trials] and full implementation of ABCDEF bundle intervention, should be considered to test the effect of bundle interventions on ICU delirium prevalence and duration, as well as other related adverse outcomes.”

Antiplatelet therapy linked to increased risk of post-pleural procedure bleeding

Antiplatelet therapy was associated with an increased risk of post-pleural procedure bleeding and serious bleeding in a recent study.

Researchers conducted the cohort study in 19 centers in France (eight respiratory care departments and 11 ICUs in 14 university teaching hospitals and five general hospitals). All adult patients who had bedside thoracentesis, closed pleural biopsy, or chest tube insertion from November 2011 to January 2014 were considered for inclusion. Patients with hemothorax or recent chest surgery and those receiving full anticoagulation or extracorporeal membrane oxygenation were excluded.

The main outcome was the occurrence of bleeding, defined as hematoma, hemoptysis, or hemothorax, during the 24 hours following the bedside pleural procedure. The secondary outcome was the occurrence of serious bleeding events during the 24 hours after the pleural procedure, defined as bleeding requiring blood transfusion, respiratory support, endotracheal intubation, embolization, or surgery, or when death occurred. Results were published online on Dec. 5, 2020, by CHEST and appeared in the April 1, 2021, issue.

Overall, 1,124 patients (median age, 62.6 years; 66% men) were included in the study, 182 who were on antiplatelet therapy and 942 who were not. Fifteen patients experienced a bleeding event, including eight serious bleeding events. The 24-hour incidence of bleeding was 3.23% (95% CI, 1.08% to 5.91%) in the antiplatelet group and 0.96% (95% CI, 0.43% to 1.60%) in the control group. The occurrence of bleeding events was associated with antiplatelet therapy in both univariate analysis (odds ratio [OR], 3.44 [95% CI, 1.14 to 9.66]; P=0.021) and multivariate analysis (OR, 4.13 [95% CI, 1.01 to 17.03]; P=0.044) after adjustment for demographic data and the main risk factors for bleeding. Antiplatelet therapy was also associated with serious bleeding in both univariate analysis (OR, 8.61 [95% CI, 2.09 to 42.3]; P=0.003) and multivariate analysis (OR, 7.27 [95% CI, 1.18 to 56.1]; P=0.032) after adjustment for the number of risk factors for bleeding.

The main limitation of the study was missing data, especially concerning renal function, the authors noted. They added that the low number of bleeding events limited the multivariate analysis and that the study was not designed to evaluate the risk of bleeding among antiplatelet agents.

The authors noted that current international guidelines for pleural procedures do not address antiplatelet drug management. They concluded that while clinicians should take these findings into account when considering the benefits and risks of a pleural procedure in a patient taking antiplatelet drugs, “[T]he rates of bleeding and severe bleeding were low, advocating that pleural procedures may be performed with an acceptable risk when antiplatelet therapy cannot be interrupted.”

Machine learning estimated individual effects of steroids in septic shock

A machine learning-derived estimation-based treatment strategy to decide which patients with septic shock to treat with corticosteroids yielded positive net benefit, regardless of potential steroid-related adverse effects, in a recent cohort study.

Researchers estimated the individual treatment effect of steroids in adults with septic shock in ICUs using an ensemble machine learning approach. They used individual-patient data from four trials on steroid supplementation in adults with septic shock, which were selected because they yielded conflicting results regarding the mortality benefit, as a training cohort to model the individual treatment effect.

In the trials, IV hydrocortisone (50 mg every six hours) for five to seven days, with or without enteral fludrocortisone, 50 µg daily, for seven days, was compared with placebo or usual care. The main outcome was all-cause 90-day mortality. The researchers also evaluated the net benefit of steroids when the decision to treat is based on the individual estimated absolute treatment effect. For external validation, they used data from a double-blinded, placebo-controlled randomized clinical trial comparing hydrocortisone with placebo. Results were published Dec. 10, 2020, by JAMA Network Open.

Overall, 2,548 participants (median age, 66 years; 65% men) were included in the development cohort. The median Simplified Acute Physiology Score (SAPS II) was 55 (interquartile range, 42 to 69), and the median Sepsis-related Organ Failure Assessment score on day one was 11 (interquartile range, 9 to 13). The crude pooled relative risk of death at 90 days was 0.89 (95% CI, 0.83 to 0.96) in favor of corticosteroids. According to the optimal individual model, the estimated median absolute risk reduction was 2.90% (95% CI, 2.79% to 3.01%). In the external validation cohort (n=75), the area under the curve of the optimal individual model was 0.77 (95% CI, 0.59 to 0.92).

For any number willing to treat (i.e., the number of patients who one is willing to treat and expose to potential harms in order to save one life) less than 25, the net benefit of treating all patients with hydrocortisone versus treating none was negative, meaning that treating all patients was worse than treating no one. At a number willing to treat of 25, the net benefit was 0.01 for the universal hydrocortisone strategy, −0.01 for a universal hydrocortisone and fludrocortisone strategy, 0.06 for a treat-by-SAPS II strategy, and 0.31 for the treat-by-optimal-individual model strategy (i.e., more beneficial). The net benefit of the SAPS II and the optimal individual model treatment strategies converged to zero for a smaller number willing to treat, but the individual model was consistently superior to the model based on the SAPS II score.
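As a concrete illustration of the threshold implied by that definition (an illustration, not a calculation reported in the study): a number willing to treat of 25 means accepting treatment of 25 patients to save one life, which corresponds to treating an individual patient only if the estimated absolute risk reduction is at least 1/25, or 4 percentage points.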

Among other limitations, the results may not be generalizable to all patients due to the trials' inclusion and exclusion criteria, the study authors noted. They added that to refine the individualized treatment strategy, one would need to choose the appropriate number willing to treat, accounting for the frequency and the severity of adverse effects.

For such machine-learning models to improve decisions in practice, they must be integrated within the clinical workflow, trusted by clinicians, and tested prospectively, an accompanying editorial comment noted. “In the meantime, this study highlights the challenge and nuance of applying clinical trial data to decision-making for individual patients,” the editorialists wrote.

Toolkit with patient, family engagement may help reduce falls during hospitalization

A fall-prevention toolkit that includes family and patient engagement throughout hospitalization may help reduce falls, according to a recent study.


Researchers performed a nonrandomized controlled trial to examine whether a nurse-led fall-prevention intervention with continuous engagement of patients and families was effective at preventing falls and injurious falls during hospitalization. The Fall Tailoring Interventions for Patient Safety (TIPS) toolkit was a nurse-led, evidence-based fall-prevention intervention that used bedside tools to communicate patient-specific risk factors for falls as well as a tailored prevention plan. A previous randomized controlled trial found that while the toolkit reduced falls by 25%, it did not reduce fall-related injuries, and a follow-up case-control study indicated that most falls were due to patient nonadherence. The researchers therefore conducted research with patients, families, and health care professionals to make the toolkit more patient-centered and more focused on engaging patients and families. The revised toolkit included high-tech and low-tech Fall TIPS modalities, could be used by nursing staff and integrated into various hospital workflows, and supported patient activation and engagement.

The current trial was conducted at 14 medical units in three academic medical centers in Boston and New York City from Nov. 1, 2015, through Oct. 31, 2018, and included all adult inpatients in the participating units. The trial used a stepped-wedge design and an interrupted time-series evaluation. Units had differing start dates for the intervention, but each included 21 months of preintervention data and 21 months of postintervention data, after a two-month implementation and wash-in period. Rate of falls per 1,000 patient-days in targeted units was the primary outcome, and rate of falls with injury per 1,000 patient-days was the secondary outcome. The results were published Nov. 17, 2020, by JAMA Network Open.

Of the 37,231 patients included in the study, 17,948 were evaluated before the intervention and 19,283 were evaluated after the intervention. The mean age was 60.56 years and 60.92 years, respectively, 54.17% and 53.54% were women, and 62.57% and 60.17% were White. Mean hospital length of stay was 7.53 days in the preintervention period and 7.39 days in the postintervention period. After implementation of the fall-prevention toolkit, there was an overall adjusted 15% reduction in falls (2.92 falls vs. 2.49 falls per 1,000 patient-days; adjusted rate ratio, 0.85 [95% CI, 0.75 to 0.96]; P=0.01) and an adjusted 34% reduction in injurious falls (0.73 vs. 0.48 injurious falls per 1,000 patient-days; adjusted rate ratio, 0.66 [95% CI, 0.53 to 0.88]; P=0.003).

Among other limitations, the authors noted that a larger study is needed to determine the generalizability of their results. They concluded that their study indicates a role for hospital-based fall-prevention interventions that routinely engage patients and families. “Various modalities of the tool kit allow for integration into existing clinical workflows in diverse hospital settings,” the authors wrote. “This tool kit appears to address the gap among nursing assessment of fall risk, tailored fall-prevention interventions, and engagement of patients throughout the fall-prevention process.”