PubTransformer

A site to transform PubMed publications into the following bibliographic reference formats: ADS, BibTeX, EndNote, ISI (used by the Web of Knowledge), RIS, MEDLINE, and Microsoft Word 2007 XML.
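As a minimal sketch of the kind of conversion such a site performs, the snippet below renders one hypothetical PubMed-style record as BibTeX and as RIS. The record fields and function names are illustrative, not PubTransformer's actual code:

```python
# Illustrative conversion of a PubMed-style record to BibTeX and RIS.
# The record and the helper names are hypothetical examples.

record = {
    "id": "tb_transmission_2017",
    "authors": ["Churchyard G", "Kim P"],
    "title": "What We Know About Tuberculosis Transmission: An Overview",
    "journal": "J Infect Dis",
    "year": "2017",
}

def to_bibtex(rec):
    """Render the record as a BibTeX @article entry."""
    return (
        f"@article{{{rec['id']},\n"
        f"  author  = {{{' and '.join(rec['authors'])}}},\n"
        f"  title   = {{{rec['title']}}},\n"
        f"  journal = {{{rec['journal']}}},\n"
        f"  year    = {{{rec['year']}}}\n"
        "}"
    )

def to_ris(rec):
    """Render the record as an RIS journal-article entry (TY ... ER)."""
    lines = ["TY  - JOUR"]
    lines += [f"AU  - {a}" for a in rec["authors"]]
    lines += [f"TI  - {rec['title']}",
              f"JO  - {rec['journal']}",
              f"PY  - {rec['year']}",
              "ER  - "]
    return "\n".join(lines)

print(to_bibtex(record))
print(to_ris(record))
```

The same record dictionary could feed further emitters (EndNote, MEDLINE, Word 2007 XML) in the same pattern: one function per target format.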

Public Health - Top 30 Publications

Evidence Supports Action to Prevent Injurious Falls in Older Adults.

What We Know About Tuberculosis Transmission: An Overview.

Tuberculosis remains a global health problem with an enormous burden of disease, estimated at 10.4 million new cases in 2015. To stop the tuberculosis epidemic, it is critical that we interrupt tuberculosis transmission. Further, the interventions required to interrupt tuberculosis transmission must be targeted to high-risk groups and settings. A simple cascade for tuberculosis transmission has been proposed in which (1) a source case of tuberculosis (2) generates infectious particles (3) that survive in the air and (4) are inhaled by a susceptible individual (5) who may become infected and (6) then has the potential to develop tuberculosis. Interventions that target these events will interrupt tuberculosis transmission and accelerate the decline in tuberculosis incidence and mortality. The purpose of this article is to provide a high-level overview of what is known about tuberculosis transmission, using the tuberculosis transmission cascade as a framework, and to set the scene for the articles in this series, which address specific aspects of tuberculosis transmission.

Drivers of Tuberculosis Transmission.

Measuring tuberculosis transmission is exceedingly difficult, given the remarkable variability in the timing of clinical disease after Mycobacterium tuberculosis infection; incident disease can result from either a recent (ie, weeks to months) or a remote (ie, several years to decades) infection event. Although we cannot identify with certainty the timing and location of tuberculosis transmission for individuals, approaches for estimating the individual probability of recent transmission and for estimating the fraction of tuberculosis cases due to recent transmission in populations have been developed. Data used to estimate the probable burden of recent transmission include tuberculosis case notifications in young children and trends in tuberculin skin test and interferon γ-release assays. More recently, M. tuberculosis whole-genome sequencing has been used to estimate population levels of recent transmission, identify the distribution of specific strains within communities, and decipher chains of transmission among culture-positive tuberculosis cases. The factors that drive the transmission of tuberculosis in communities depend on the burden of prevalent tuberculosis; the ways in which individuals live, work, and interact (eg, congregate settings); and the capacity of healthcare and public health systems to identify and effectively treat individuals with infectious forms of tuberculosis. Here we provide an overview of these factors, describe tools for measurement of ongoing transmission, and highlight knowledge gaps that must be addressed.

Research Roadmap for Tuberculosis Transmission Science: Where Do We Go From Here and How Will We Know When We're There?

High rates of tuberculosis transmission are driving the ongoing global tuberculosis epidemic, and there is a pressing need for research focused on understanding and, ultimately, halting transmission. The ongoing tuberculosis-human immunodeficiency virus (HIV) coepidemic and rising rates of drug-resistant tuberculosis in parts of the world add further urgency to this work. Success in this research will require a concerted, multidisciplinary effort on the part of tuberculosis scientists, clinicians, programs, and funders and must span the research spectrum from biomedical sciences to the social sciences, public health, epidemiology, cost-effectiveness analyses, and operations research. Heterogeneity of tuberculosis disease, both among individual patients and among communities, poses a substantial challenge to efforts to interrupt transmission. As such, it is likely that effective interventions to stop transmission will require a combination of approaches that will vary across different epidemiologic settings. This research roadmap summarizes key gaps in our current understanding of transmission, as laid out in the preceding articles in this series. We also hope that it will be a call to action for the global tuberculosis community to make a sustained commitment to tuberculosis transmission science. Halting transmission today is an essential step on the path to end tuberculosis tomorrow.

Designing and Evaluating Interventions to Halt the Transmission of Tuberculosis.

To reduce the incidence of tuberculosis, it is insufficient to simply understand the dynamics of tuberculosis transmission. Rather, we must design and rigorously evaluate interventions to halt transmission, prioritizing those interventions most likely to achieve population-level impact. Synergy in reducing tuberculosis transmission may be attainable by combining interventions that shrink the reservoir of latent Mycobacterium tuberculosis infection (preventive therapy), shorten the time between disease onset and treatment initiation (case finding and diagnosis), and prevent transmission in key settings, such as the built environment (infection control). In evaluating efficacy and estimating population-level impact, cluster-randomized trials and mechanistic models play particularly prominent roles. Historical and contemporary evidence suggests that effective public health interventions can halt tuberculosis transmission, but an evidence-based approach based on knowledge of local epidemiology is necessary for success. We provide a roadmap for designing, evaluating, and modeling interventions to interrupt the process of transmission that fuels a diverse array of tuberculosis epidemics worldwide.

Comparison of two cash transfer strategies to prevent catastrophic costs for poor tuberculosis-affected households in low- and middle-income countries: An economic modelling study.

Illness-related costs amounting to ≥20% of pre-illness annual household income for patients with tuberculosis (TB) predict adverse treatment outcomes and have been termed "catastrophic." Social protection initiatives, including cash transfers, are endorsed to help prevent catastrophic costs. With this aim, cash transfers may either be provided to defray TB-related costs of households with a confirmed TB diagnosis (termed a "TB-specific" approach) or to increase the income of households at high risk of TB to strengthen their economic resilience (termed a "TB-sensitive" approach). The impact of cash transfers provided under each of these approaches might vary. We undertook an economic modelling study from the patient perspective to compare the potential of these 2 cash transfer approaches to prevent catastrophic costs.
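The "catastrophic" threshold above is a simple ratio. A sketch of the classification, with hypothetical figures (the function and the numbers are illustrative, not the study's model):

```python
# Flag "catastrophic" TB-related costs: total illness-related costs
# >= 20% of pre-illness annual household income (threshold from the abstract).
# All monetary figures below are illustrative, not study data.

CATASTROPHIC_THRESHOLD = 0.20

def is_catastrophic(tb_costs, annual_household_income):
    """Return True if TB-related costs meet or exceed 20% of income."""
    return tb_costs / annual_household_income >= CATASTROPHIC_THRESHOLD

print(is_catastrophic(600, 2500))   # 24% of income -> True
print(is_catastrophic(300, 2500))   # 12% of income -> False
```

A cash transfer lowers `tb_costs` under the TB-specific approach, and raises `annual_household_income` under the TB-sensitive approach; either change moves the ratio away from the threshold.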

Reaching global HIV/AIDS goals: What got us here, won't get us there.

In a Perspective, Wafaa El-Sadr and colleagues discuss tailored approaches to treatment and prevention of HIV infection.

Initiation of a Transanal Total Mesorectal Excision Program at an Academic Training Program: Evaluating Patient Safety and Quality Outcomes.

Short-term results have shown that transanal total mesorectal excision is safe and effective for patients with mid to low rectal cancers. Transanal total mesorectal excision is considered technically challenging; thus, adoption has been limited to a few academic centers in the United States.

Sleep Apnea and Cardiovascular Disease: Lessons From Recent Trials and Need for Team Science.

Emerging research highlights the complex interrelationships between sleep-disordered breathing and cardiovascular disease, presenting clinical and research opportunities as well as challenges. Patients presenting to cardiology clinics have a high prevalence of obstructive and central sleep apnea associated with Cheyne-Stokes respiration. Multiple mechanisms have been identified by which sleep disturbances adversely affect cardiovascular structure and function. Epidemiological research indicates that obstructive sleep apnea is associated with increases in the incidence and progression of coronary heart disease, heart failure, stroke, and atrial fibrillation. Central sleep apnea associated with Cheyne-Stokes respiration predicts incident heart failure and atrial fibrillation; among patients with heart failure, it strongly predicts mortality. Thus, a strong literature provides the mechanistic and empirical bases for considering obstructive sleep apnea and central sleep apnea associated with Cheyne-Stokes respiration as potentially modifiable risk factors for cardiovascular disease. Data from small trials provide evidence that treatment of obstructive sleep apnea with continuous positive airway pressure improves not only patient-reported outcomes such as sleepiness, quality of life, and mood but also intermediate cardiovascular end points such as blood pressure, cardiac ejection fraction, vascular parameters, and arrhythmias. However, data from large-scale randomized controlled trials do not currently support a role for positive pressure therapies for reducing cardiovascular mortality. The results of 2 recent large randomized controlled trials, published in 2015 and 2016, raise questions about the effectiveness of pressure therapies in reducing clinical end points, although 1 trial supported the beneficial effect of continuous positive airway pressure on quality of life, mood, and work absenteeism. 
This review provides a contextual framework for interpreting the results of recent studies, key clinical messages, and suggestions for future sleep and cardiovascular research, which include further consideration of individual risk factors, use of existing and new multimodality therapies that also address adherence, and implementation of trials that are sufficiently powered to target end points and to support subgroup analyses. These goals may best be addressed through strengthening collaboration among the cardiology, sleep medicine, and clinical trial communities.

Whom to Treat: Postdiagnostic Risk Assessment with Gleason Score, Risk Models, and Genomic Classifier.

Management of prostate cancer presents unique challenges because of the disease's variable natural history. Accurate risk stratification at the time of diagnosis in clinically localized disease is crucial in providing optimal counseling about management options. To accurately distinguish pathologically indolent tumors from aggressive disease, risk groups are no longer sufficient. Rather, multivariable prognostic models reflecting the complete information known at time of diagnosis offer improved accuracy and interpretability. After diagnosis, further testing with genomic assays or other biomarkers improves risk classification. These postdiagnostic risk assessment tools should not supplant shared decision making, but rather facilitate risk classification and enable more individualized care.

Prediagnostic Risk Assessment with Prostate MRI and MRI-Targeted Biopsy.

Prostate MRI is commonly used in the detection of prostate cancer to reduce the detection of clinically insignificant disease; maximize the detection of clinically significant cancer; and better assess disease size, grade, and location. The clinical utility of MRI seems to apply to men with no prior biopsy, men who have had a previous negative biopsy, and men who are candidates for active surveillance. In conjunction with traditional clinical parameters and secondary biomarkers, MRI may allow more accurate risk stratification and assessment of the need for prostate biopsy.

Whom to Biopsy: Prediagnostic Risk Stratification with Biomarkers, Nomograms, and Risk Calculators.

This article describes markers used for prostate biopsy decisions, including prostate-specific antigen (PSA), free PSA, the prostate health index, 4Kscore, PCA3, and ConfirmMDx. It also summarizes the use of nomograms combining multiple variables for prostate cancer detection.

Choice of implant combinations in total hip replacement: systematic review and network meta-analysis.

Objective To compare the survival of different implant combinations for primary total hip replacement (THR). Design Systematic review and network meta-analysis. Data sources Medline, Embase, The Cochrane Library, ClinicalTrials.gov, WHO International Clinical Trials Registry Platform, and the EU Clinical Trials Register. Review methods Published randomised controlled trials comparing different implant combinations. Implant combinations were defined by bearing surface materials (metal-on-polyethylene, ceramic-on-polyethylene, ceramic-on-ceramic, or metal-on-metal), head size (large ≥36 mm or small <36 mm), and fixation technique (cemented, uncemented, hybrid, or reverse hybrid). Our reference implant combination was metal-on-polyethylene (not highly cross-linked), small head, and cemented. The primary outcome was revision surgery at 0-2 years and 2-10 years after primary THR. The secondary outcome was the Harris hip score reported by clinicians. Results 77 studies were included in the systematic review, and 15 studies (3177 hips) in the network meta-analysis for revision. There was no evidence that the risk of revision surgery was reduced by other implant combinations compared with the reference implant combination. Although estimates are imprecise, metal-on-metal, small head, cemented implants (hazard ratio 4.4, 95% credible interval 1.6 to 16.6) and resurfacing (12.1, 2.1 to 120.3) increased the risk of revision at 0-2 years after primary THR compared with the reference implant combination. Similar results were observed for the 2-10 years period. 31 studies (2888 patients) were included in the analysis of Harris hip score. No implant combination had a better score than the reference implant combination. Conclusions Newer implant combinations were not found to be better than the reference implant combination (metal-on-polyethylene (not highly cross-linked), small head, cemented) in terms of risk of revision surgery or Harris hip score. Metal-on-metal, small head, cemented implants and resurfacing increased the risk of revision surgery compared with the reference implant combination. The results were consistent with observational evidence and were replicated in sensitivity analysis but were limited by poor reporting across studies. Systematic review registration PROSPERO CRD42015019435.

Mechanisms of Very Late Bioresorbable Scaffold Thrombosis: The INVEST Registry.

Very late scaffold thrombosis (VLScT) occurs more frequently after bioresorbable scaffold (Absorb BVS 1.1, Abbott Vascular, Santa Clara, California) implantation than with metallic everolimus-eluting stents.

Outcomes With Transcatheter Mitral Valve Repair in the United States: An STS/ACC TVT Registry Report.

Post-market surveillance is needed to evaluate the real-world clinical effectiveness and safety of U.S. Food and Drug Administration-approved devices.

Progress in Childhood Vaccination Data in Immunization Information Systems - United States, 2013-2016.

In 2016, 55 jurisdictions in 49 states and six cities in the United States* used immunization information systems (IISs) to collect and manage immunization data and support vaccination providers and immunization programs. To monitor progress toward achieving IIS program goals, CDC surveys jurisdictions through an annual self-administered IIS Annual Report (IISAR). Data from the 2013-2016 IISARs were analyzed to assess progress made in four priority areas: 1) data completeness, 2) bidirectional exchange of data with electronic health record systems, 3) clinical decision support for immunizations, and 4) ability to generate childhood vaccination coverage estimates. IIS participation among children aged 4 months through 5 years increased from 90% in 2013 to 94% in 2016, and 33 jurisdictions reported ≥95% of children aged 4 months through 5 years participating in their IIS in 2016. Bidirectional messaging capacity in IISs increased from 25 jurisdictions in 2013 to 37 in 2016. In 2016, nearly all jurisdictions (52 of 55) could provide automated provider-level coverage reports, and 32 jurisdictions reported that their IISs could send vaccine forecasts to providers via Health Level 7 (HL7) messaging, up from 17 in 2013. Incremental progress was made in each area since 2013, but continued effort is needed to implement these critical functionalities among all IISs. Success in these priority areas, as defined by the IIS Functional Standards (1), bolsters clinicians' and public health practitioners' ability to attain high vaccination coverage in pediatric populations, and prepares IISs to develop more advanced functionalities to support state/local immunization services. 
Success in these priority areas also supports the achievement of federal immunization objectives, including the use of IISs as supplemental sampling frames for vaccination coverage surveys like the National Immunization Survey (NIS)-Child, reducing data collection costs, and supporting increased precision of state-level estimates.

Harmful Algal Bloom-Associated Illnesses in Humans and Dogs Identified Through a Pilot Surveillance System - New York, 2015.

Cyanobacteria, also known as blue-green algae, are photosynthetic, aquatic organisms found in fresh, brackish, and marine water around the world (1). Rapid proliferation and accumulation of potentially toxin-producing cyanobacteria characterize one type of harmful algal bloom (HAB). HABs have the potential to cause illness in humans and animals (2,3); however, the epidemiology of these illnesses has not been well characterized. Statewide in 2015, a total of 139 HABs were identified in New York, 97 (70%) of which were confirmed through laboratory analysis; 77 independent beach closures were ordered at 37 beaches on 20 different bodies of water. To better characterize HAB-associated illnesses, during June-September 2015, the New York State Department of Health (NYSDOH) implemented a pilot surveillance system in 16 New York counties. Activities included the collection of data from environmental HAB reports, illness reports, poison control centers, and syndromic surveillance, and increased outreach to the public, health care providers, and veterinarians. During June-September, 51 HAB-associated illnesses were reported, including 35 that met the CDC case definitions*; 32 of the cases occurred in humans and three in dogs. In previous years, New York never had more than 10 HAB-associated illnesses reported statewide. The pilot surveillance results from 16 counties during a 4-month period suggest that HAB-associated illnesses might be more common than previously reported.

Vaccination Coverage Among Children Aged 19-35 Months - United States, 2016.

Vaccination is the most effective intervention to reduce morbidity and mortality from vaccine-preventable diseases in young children (1). Data from the 2016 National Immunization Survey-Child (NIS-Child) were used to assess coverage with recommended vaccines (2) among children aged 19-35 months in the United States. Coverage remained ≥90% for ≥3 doses of poliovirus vaccine (91.9%), ≥1 dose of measles, mumps, and rubella vaccine (MMR) (91.1%), ≥1 dose of varicella vaccine (90.6%), and ≥3 doses of hepatitis B vaccine (HepB) (90.5%). Coverage in 2016 was approximately 1-2 percentage points lower than in 2015 for ≥3 doses of diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP), ≥3 doses of poliovirus vaccine, the primary Haemophilus influenzae type b (Hib) series, ≥3 HepB doses, and ≥3 and ≥4 doses of pneumococcal conjugate vaccine (PCV), with no changes for other vaccines. More direct evaluation of trends by month and year of birth (3) found no change in coverage by age 2 years among children included in combined data from the 2015 and 2016 NIS-Child (born January 2012 through January 2015). The observed decreases in annual estimates might result from random differences in vaccination coverage by age 19 months between children sampled in 2016 and those sampled in 2015, among those birth cohorts eligible to be sampled in both survey years. For most vaccines, 2016 coverage was lower among non-Hispanic black* (black) children than among non-Hispanic white (white) children, and for children living below the federal poverty level(†) compared with those living at or above the poverty level. Vaccination coverage was generally lower among children insured by Medicaid (2.5-12.0 percentage points), and was much lower among uninsured children (12.4-24.9 percentage points), than among children with private insurance. 
The Vaccines for Children(§) (VFC) program was designed to increase access to vaccines among children who might not otherwise be vaccinated because of inability to pay. Greater awareness and facilitating use of VFC might be helpful in reducing these disparities. Efforts should also be focused on minimizing breaks in continuity of health insurance and eliminating missed opportunities to vaccinate children during visits to health care providers. Despite the observed disparities and small changes in coverage from 2015, vaccination coverage among children aged 19-35 months remained high and stable in 2016.

Implementation of Rotavirus Surveillance and Vaccine Introduction - World Health Organization African Region, 2007-2016.

Rotavirus is a leading cause of severe pediatric diarrhea globally, estimated to have caused 120,000 deaths among children aged <5 years in sub-Saharan Africa in 2013 (1). In 2009, the World Health Organization (WHO) recommended rotavirus vaccination for all infants worldwide (2). Two rotavirus vaccines are currently licensed globally: the monovalent Rotarix vaccine (RV1, GlaxoSmithKline; 2-dose series) and the pentavalent RotaTeq vaccine (RV5, Merck; 3-dose series). This report describes progress of rotavirus vaccine introduction (3), coverage (using estimates from WHO and the United Nations Children's Fund [UNICEF]) (4), and impact on pediatric diarrhea hospitalizations in the WHO African Region. By December 2016, 31 (66%) of 47 countries in the WHO African Region had introduced rotavirus vaccine, including 26 that introduced RV1 and five that introduced RV5. Among these countries, rotavirus vaccination coverage (completed series) was 77%, according to WHO/UNICEF population-weighted estimates. In 12 countries with surveillance data available before and after vaccine introduction, the proportion of pediatric diarrhea hospitalizations that were rotavirus-positive declined 33%, from 39% preintroduction to 26% following rotavirus vaccine introduction. These results support introduction of rotavirus vaccine in the remaining countries in the region and continuation of rotavirus surveillance to monitor impact.

Update on Vaccine-Derived Polioviruses - Worldwide, January 2016-June 2017.

In 1988, the World Health Assembly launched the Global Polio Eradication Initiative (GPEI) (1). Among the three wild poliovirus (WPV) serotypes, only type 1 (WPV1) has been detected since 2012. Since 2014, detection of WPV1 has been limited to three countries, with 37 cases in 2016 and 11 cases in 2017 as of September 27. The >99.99% decline worldwide in polio cases since the launch of the GPEI is attributable to the extensive use of the live, attenuated oral poliovirus vaccine (OPV) in mass vaccination campaigns and comprehensive national routine immunization programs. Despite its well-established safety record, OPV use can be associated with rare emergence of genetically divergent vaccine-derived polioviruses (VDPVs) whose genetic drift from the parental OPV strains indicates prolonged replication or circulation (2). VDPVs can also emerge among persons with primary immunodeficiencies (PIDs). Immunodeficiency-associated VDPVs (iVDPVs) can replicate for years in some persons with PIDs. In addition, circulating vaccine-derived polioviruses (cVDPVs) can emerge very rarely among immunologically normal vaccine recipients and their contacts in areas with inadequate OPV coverage and can cause outbreaks of paralytic polio. This report updates previous summaries regarding VDPVs (3). During January 2016-June 2017, new cVDPV outbreaks were identified, including two in the Democratic Republic of the Congo (DRC) (eight cases), and another in Syria (35 cases), whereas the circulation of cVDPV type 2 (cVDPV2) in Nigeria resulted in cVDPV2 detection linked to a previous emergence. The last confirmed case from the 2015-2016 cVDPV type 1 (cVDPV1) outbreak in Laos occurred in January 2016. Fourteen newly identified persons in 10 countries were found to excrete iVDPVs, and three previously reported patients in the United Kingdom and Iran (3) were still excreting type 2 iVDPV (iVDPV2) during the reporting period. 
Ambiguous VDPVs (aVDPVs), isolates that cannot be classified definitively, were found among immunocompetent persons and environmental samples in 10 countries. Cessation of all OPV use after certification of polio eradication will eliminate the risk for new VDPV infections.

Occupational Exposure to Vapor-Gas, Dust, and Fumes in a Cohort of Rural Adults in Iowa Compared with a Cohort of Urban Adults.

Many rural residents work in the field of agriculture; however, employment in nonagricultural jobs also is common. Because previous studies in rural communities often have focused on agricultural workers, much less is known about the occupational exposures in other types of jobs in rural settings. Characterizing airborne occupational exposures that can contribute to respiratory diseases is important so that differences between rural and urban working populations can be assessed.

Factors associated with health-related quality of life among family caregivers of disabled older adults: a cross-sectional study from Beijing.

Because of the aging population and the shortage of standardized institutional solutions for long-term care (LTC) in China, family caregivers in Beijing are increasingly called upon to provide home care for disabled older adults. Caregivers face a heavy care burden and decreased physical and mental health (MH). This study aims to describe health-related quality of life (HRQoL) and to identify its predictors for Chinese family caregivers of disabled older adults. A total of 766 caregivers were recruited from 5 communities in the Dongcheng District of Beijing. Measures included the 36-item Short-Form Health Survey (SF-36), the Zarit Caregiver Burden Interview (ZBI) scale, and the Chinese Social Support Rating Scale (SSRS). Hierarchical multiple regression (HMR) analysis was used to identify the predictors. HMR analysis showed that each block of independent variables (demographic characteristics of disabled older adults, demographic characteristics of caregivers, caregiving context, and subjective caregiver burden) contributed significantly to caregivers' physical and mental quality of life. Subjective caregiver burden explained the greatest amount of total variance in all MH subscales and the 2nd greatest amount of variance in most physical subscales. Therefore, subjective caregiver burden was the strongest predictor of HRQoL. Our findings suggest that a decrease in caregiver burden can improve caregivers' HRQoL, and additional social support is important in decreasing the impact of caregiving on HRQoL. Importantly, an LTC system should be established in China as soon as possible.

Effect of crowding on length of stay for common chief complaints in the emergency department: A STROBE cohort study.

Crowding in emergency departments (EDs) is associated with long lengths of stay (LOS); however, it is not known whether the effect is equal across different chief complaints. The aim of the study was to compare the effect of crowding on LOS in the 10 most common medical or surgical chief complaints in the ED. All adult visits to a university hospital ED on weekdays between 8 AM and 9 PM in 2012 (n = 19,200) were stratified based on chief complaint and triage priority. The ED bed occupancy rate was measured, and crowding was defined as an occupancy rate over one. The impact of crowding on LOS was calculated for the different groups. During crowding, LOS was longer among all chief complaints (P ≤ .01), except for high-acuity patients with wounds, where the study group was very small. During crowding, LOS increased the most among patients with extremity pain/swelling (145% among high-acuity patients, 125% among low-acuity patients) and flank pain (87% among high-acuity patients, 117% among low-acuity patients), and the least among patients with chest pain (32% among high-acuity patients, 45% among low-acuity patients) or arrhythmia (37% among high-acuity patients, 52% among low-acuity patients). The effect of ED crowding on LOS is unequal across different chief complaints. These findings could be used to improve the processing of specific chief complaints in the ED.
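The crowding definition and the reported percentage increases reduce to simple arithmetic. A sketch under the abstract's definition (occupancy rate over one), with illustrative numbers:

```python
# Crowding as defined in the abstract: ED bed occupancy rate > 1,
# i.e. more patients present than staffed ED beds. Numbers are illustrative.

def occupancy_rate(patients_present, staffed_beds):
    """Patients currently in the ED divided by staffed ED beds."""
    return patients_present / staffed_beds

def is_crowded(patients_present, staffed_beds):
    """The ED counts as crowded when the occupancy rate exceeds 1."""
    return occupancy_rate(patients_present, staffed_beds) > 1.0

def los_during_crowding(baseline_los_h, pct_increase):
    """Apply a reported LOS increase, e.g. 145% (high-acuity
    extremity pain/swelling) turns a baseline LOS into 2.45x."""
    return baseline_los_h * (1 + pct_increase / 100)

print(is_crowded(26, 20))                 # True  (occupancy 1.3)
print(is_crowded(15, 20))                 # False (occupancy 0.75)
print(los_during_crowding(4.0, 145))      # 9.8 hours
```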

Usability verification of the Emergency Trauma Score (EMTRAS) and Rapid Emergency Medicine Score (REMS) in patients with trauma: A retrospective cohort study.

Early estimation of mortality risk in patients with trauma is essential. In this study, we evaluated the validity of the Emergency Trauma Score (EMTRAS) and Rapid Emergency Medicine Score (REMS) for predicting in-hospital mortality in patients with trauma. Furthermore, we compared the REMS and the EMTRAS with 2 other scoring systems: the Revised Trauma Score (RTS) and Injury Severity Score (ISS). We performed a retrospective chart review of 6905 patients with trauma reported between July 2011 and June 2016 at a large national university hospital in South Korea. We analyzed the associations of patient characteristics, treatment course, and injury severity scoring systems (ISS, RTS, EMTRAS, and REMS) with in-hospital mortality. Discriminating power was compared between scoring systems using the areas under the curve (AUC) of receiver operating characteristic (ROC) curves. The overall in-hospital mortality rate was 3.1%. Higher EMTRAS and REMS scores were associated with hospital mortality (P < .001). The ROC curves demonstrated adequate discrimination (AUC = 0.957 for EMTRAS and 0.9 for REMS). After AUC analysis followed by Bonferroni correction for multiple comparisons, EMTRAS was significantly superior to REMS and ISS in predicting in-hospital mortality (P < .001), but not significantly different from the RTS (P = .057). The other scoring systems were not significantly different from each other. The EMTRAS and the REMS are simple, accurate predictors of in-hospital mortality in patients with trauma.
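The AUC values compared above have a rank-based interpretation: the probability that a randomly chosen death received a higher risk score than a randomly chosen survivor. A minimal sketch of that computation on toy data (the scores below are made up, not the study's cohort):

```python
# Rank-based (Mann-Whitney) AUC: P(score of a death > score of a survivor),
# with ties counted as one half. Toy data only, not the study's cohort.

def auc(scores_positive, scores_negative):
    """AUC over all positive/negative pairs; ties contribute 0.5."""
    wins = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))

deaths    = [9, 8, 7, 7]      # higher score = higher predicted risk
survivors = [3, 5, 6, 7, 2]

print(auc(deaths, survivors))  # 0.95
```

An AUC of 0.5 means the score discriminates no better than chance; values near 1, like the EMTRAS figure reported above, indicate strong discrimination.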

Clinical predictors of outcomes in patients undergoing emergency air medical transport from Kinmen to Taiwan.

Emergency air medical transport (EAMT) is indispensable for acutely or critically ill patients in remote areas. We determined patient-level and transport-specific factors associated with all-cause mortality after EAMT. We conducted a population-based, retrospective cohort study using a prospective registry consisting of clinical/medical records. Study inclusion criteria consisted of all adults undergoing EAMT from Kinmen hospital to the ED of Taipei Veterans General Hospital (TVGH) between January 1, 2006 and December 31, 2012. The primary outcome assessments were 7-day and 30-day mortality. A total of 370 patients transported to TVGH were enrolled in the study, with a mean age of 54.5 ± 21.5 (SD) years and a male predominance (71.6%). The average in-transit time was 1.4 ± 0.4 hours. The 7-day, 30-day, and in-hospital mortality rates were 10.3%, 14.1%, and 14.9%, respectively. Among the patients, 33.5% (124/370) were categorized under neurological etiologies and 24.9% (90/370) under cardiovascular etiologies, followed by 16.2% (60/370) trauma patients. Independent predictors associated with 7-day all-cause mortality were age (odds ratio [OR] 1.043, 95% confidence interval [CI] 1.016-1.070), Glasgow Coma Scale (GCS) score (OR 0.730, 95% CI 0.650-0.821), and hematocrit level (OR 0.930, 95% CI 0.878-0.985). Independent predictors associated with 30-day all-cause mortality were age (OR 1.028, 95% CI 1.007-1.049), GCS score (OR 0.686, 95% CI 0.600-0.785), hematocrit (OR 0.940, 95% CI 0.895-0.988), hemodynamic instability (OR 5.088, 95% CI 1.769-14.635), and endotracheal intubation (OR 0.131, 95% CI 0.030-0.569). The 7-day and 30-day mortality were not significantly related to transport-specific factors, such as length of flight, type of paramedic crew on board, or day and season of transport. Clinical patient-level factors, as opposed to transport-level factors, were associated with 7- and 30-day all-cause mortality in patients undergoing interfacility EAMT from Kinmen to Taiwan.

Upper and lower gastrointestinal endoscopies in patients over 85 years of age: Risk-benefit evaluation of a longitudinal cohort.

After age 85, upper and lower gastrointestinal (GI) endoscopy may be indicated in 5% to 10% of inpatients, but the risk-benefit ratio is unknown. We studied patients older than 85 years undergoing upper and lower GI endoscopy. We analyzed a retrospective cohort of inpatients older than 85 years between 2004 and 2012, all explored by upper and complete lower GI endoscopy. Initial indications, including iron deficiency anemia (IDA), other anemias, GI bleeding, weight loss, and GI symptoms, were noted, as were endoscopy or anesthesia complications, immediate endoscopic diagnoses, and the ability to modify the patients' therapeutics. Deaths and the final diagnosis for the initial endoscopic indication were analyzed after at least 12 months. We included 55 patients, 78% women, with a median age, reticulocyte count, hemoglobin level, and ferritin level of 87 (85-99) years, 56 (24-214) G/L, 8.6 (4.8-12.9) g/dL, and 56 (3-799) μg/L, respectively. IDA was the most frequent indication for endoscopy (60%; n = 33). Immediate diagnoses were found in 64% of the patients (n = 35), including 25% with GI cancers (n = 14) and 22% with gastroduodenal ulcers or erosions (n = 12). A cancer diagnosis was associated with a lower reticulocyte count (45 vs. 60 G/L; P = .02). Among the 35 diagnoses, 94% (n = 33) led to modifications of the patients' therapeutics, with 29% of the patients deciding on palliative care (n = 10). No endoscopic complication led to death. Follow-up of >12 months was available in 82% (n = 45) of the patients; among these patients, 40% (n = 27) died after an average of 24 ± 18 months. A cancer diagnosis was significantly associated with fewer subsequent red cell transfusions (0% vs. 28%; P = .02) and fewer further investigations (6.7% vs. 40%; P = .02). Upper and complete lower GI endoscopy in patients older than 85 years appears to be safe and enables a high rate of immediate diagnosis, with significant modifications of therapeutics. GI cancers represented more than one-third of the endoscopic diagnoses.

The additional benefit of weighted subjective global assessment (SGA) for the predictability of mortality in incident peritoneal dialysis patients: A prospective study.

Although subjective global assessment (SGA) is a widely used tool for nutritional investigation, the scores are dependent on the inspectors' subjective opinions, and there are only few studies that directly assessed the usefulness of SGA and modified SGA in incident peritoneal dialysis (PD) patients. A total of 365 incident PD patients between 2009 and 2015 were enrolled and measured with SGA and calculated using serum albumin and total iron binding capacity (TIBC) levels for weighted SGA. Cox analyses were performed to delineate the association between SGA or weighted SGA and all-cause mortality, and a receiver-operating characteristic was conducted to reveal the additional benefit of weighted SGA on predicting adverse clinical outcomes. The Kaplan-Meier curve showed that the cumulative survival rate in patients with "Good nutrition" (G1) was significantly higher compared to those with "Mild to severe malnutrition" (G2). G2 was significantly associated with an increase in the mortality even after adjusting for several covariates compared with G1. Moreover, a 1-unit increase in weighted SGA was also significantly correlated with mortality after adjustment of the same covariates, while G2 was not significantly associated with an increase in the mortality among young-aged (under 65 years) groups. Meanwhile, a 1-unit increase in weighted SGA was significantly related to an increase in mortality in all the subgroup analyses. Furthermore, the AUCs of weighted SGAs in all groups were significantly increased compared with those of SGA alone. In conclusions, the evaluation of nutritional status based on SGA in incident PD patients might be useful for predicting mortality. However, weighted SGA with serum albumin and TIBC can provide additional predictive power for mortality compared with SGA alone in incident PD patients.
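The AUC comparison above rests on the fact that the area under an ROC curve equals the Mann-Whitney probability that a randomly chosen case outranks a randomly chosen control. A minimal sketch with hypothetical toy scores (not study data) shows how a continuous weighted score can achieve a higher AUC than a coarse categorical grade:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen case (e.g. a death) receives a higher risk
    score than a randomly chosen survivor, ties counted as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical toy scores, not study data: a coarse SGA grade (1 or 2)
# versus a continuous weighted-SGA score for the same patients.
sga_grade    = {"died": [2, 2, 1, 2], "survived": [1, 1, 2, 1, 1]}
weighted_sga = {"died": [7.1, 6.4, 5.9, 8.0],
                "survived": [4.2, 3.8, 6.0, 5.1, 4.9]}
print(auc(sga_grade["died"], sga_grade["survived"]))        # 0.775
print(auc(weighted_sga["died"], weighted_sga["survived"]))  # 0.95
```

A coarse two-level grade produces many ties between cases and controls, which caps its AUC; a continuous score can separate the same patients more finely, which is one way a weighted SGA can add discriminative power.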

Muscle ultrasound: A useful tool in newborn screening for infantile-onset Pompe disease.

Our study aimed to evaluate the utility of muscle ultrasound in newborn screening for infantile-onset Pompe disease (IOPD) and to establish a system of severity grading. We retrospectively selected 35 patients with initial low acid alpha-glucosidase (GAA) activity and collected data including muscle ultrasound features, GAA gene mutation, activity/performance, and pathological and laboratory findings. The echogenicity of 6 muscles (the bilateral vastus intermedius, rectus femoris, and sartorius muscles) was compared to that of the epimysium on ultrasound and rated 1 (normal), 2 (mildly increased), or 3 (markedly increased). These grades were used to divide patients into 3 groups. IOPD was present in none of the grade-1 patients, 5 of 9 grade-2 patients, and 5 of 5 grade-3 patients (P < .001). Comparing grade-2 plus grade-3 patients to grade-1 patients, muscle ultrasound detected IOPD with a sensitivity and specificity of 100.0% (95% confidence interval [CI]: 69.2%-100%) and 84.0% (95% CI: 63.9%-95.5%), respectively. The mean number of affected muscles was higher in grade-3 patients than in grade-2 patients (4.2 vs. 2.0, P = .005). Mean alanine transaminase (ALT), aspartate transaminase (AST), creatine kinase (CK), and lactate dehydrogenase (LDH) levels differed significantly between grade-3 and grade-1 patients (P < .001). Because it permits direct visualization of injured muscles, muscle ultrasound can be used to screen for IOPD. Our echogenicity grades of muscle injury also correlate well with serum levels of muscle-injury biochemical markers.
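The reported sensitivity and specificity follow directly from the counts in the abstract, treating a grade of 2 or 3 as a positive screen (the grade-1 count, 21, is derived as 35 − 9 − 5; everything else is stated). A minimal sketch:

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reconstructed from the abstract (35 infants screened):
# grade 1: 21 infants, 0 with IOPD; grade 2: 9 infants, 5 with IOPD;
# grade 3: 5 infants, 5 with IOPD.  "Positive" screen = grade 2 or 3.
tp = 5 + 5     # IOPD infants graded 2 or 3
fn = 0         # no IOPD infant was graded 1
fp = 9 - 5     # grade-2 infants without IOPD
tn = 21        # grade-1 infants, all without IOPD
sens, spec = sens_spec(tp, fn, fp, tn)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 100.0%, 84.0%
```

The four false positives are the grade-2 infants without IOPD, which is why specificity (84.0%) falls below sensitivity (100.0%) when grade 2 is counted as positive.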