Objective. We sought to determine whether information on hospital charges (prices) would affect test-ordering and quality of patient care in a pediatric emergency department (ED).
Design. Prospective, nonblind, controlled trial of price information.
Setting. Urban, university-affiliated pediatric ED.
Methods. We prospectively assessed patients 2 months to 10 years of age with a presenting temperature ≥38.5°C or complaint of vomiting, diarrhea, or decreased oral intake. The assessments were done during three periods: September 1997 through December 1997 (control), January 1998 through March 1998 (intervention), and April 1998 (washout). In the control and washout periods, physicians noted tests ordered on a list attached to each chart. In the intervention period, physicians noted tests ordered on a similar list that included standard hospital charges for each test. Records of each visit were reviewed to determine clinical and demographic information as well as patient disposition. In the control and intervention periods, families of nonadmitted patients were interviewed by telephone 7 days after the visit.
Results. When controlled for triage level, vital signs, and admission rates, in a multivariate model, charges for tests in the intervention period were 27% less than charges in the control period. The greatest decrease was seen among low-acuity, nonadmitted patients (43%). In telephone follow-up, patients in the intervention period were slightly more likely to have made an unscheduled follow-up visit to a health care provider (24.4% vs 17.8%), but did not differ on improved condition (86.7% vs 83.4%) or family satisfaction (93.8% vs 93.0%). Adjusted charges in the washout period were 15% lower than in the control period and 15% higher than in the intervention period.
Conclusion. Providing price information was associated with a significant reduction in charges for tests ordered on pediatric ED patients with acute illness not requiring admission. This decrease was associated with a slightly higher rate of unscheduled follow-up, but no difference in subjective outcomes or family satisfaction.
In academic institutions, inefficient use of diagnostic studies has long been considered one of the inherent costs of medical education.1 Implicit in such analyses is the assumption that more rational test-ordering behavior could lead to a decrease in health care costs without a decrease in quality (ie, an increase in efficiency).2–4 Yet an optimum educational strategy for helping physicians to recognize these inefficiencies remains elusive.
Providing information regarding the charges for diagnostic studies (and thus, indirectly, their relative costs) has shown some promise as an educational tool.5,6 However, most success has been demonstrated in inpatient, particularly intensive care unit, settings.2,7–9 Outpatient or emergency department (ED) settings have served less often as study sites. Because much of the inefficiency of inpatient care appears to relate to the unnecessary, daily repetition of certain tests, the dynamics of resource overutilization in the ED may differ from those in inpatient settings.
In the absence of direct financial incentives, we sought to determine whether the provision of charge information could cause ED physicians in an academic setting to exhibit “price sensitivity” in their diagnostic approaches. Furthermore, if such price sensitivity resulted in significantly lower resource utilization, we wished to determine the effect on patient outcomes.
Our investigation took place at an urban, university-affiliated pediatric ED with a total annual volume of ∼39 000 patients. In accordance with hospital policy, a qualified registered nurse triaged all patients presenting to the ED to one of four categories (“emergent,” “urgent high,” “urgent low,” or “nonurgent”). During weekday evenings (5 to 11 pm) and weekends (11 am to 11 pm), ED patients triaged as nonurgent were seen in an onsite urgent care unit.
Physician providers included institution-based pediatric housestaff and rotating residents in emergency medicine and family practice. Board-certified pediatric emergency medicine (PEM) attendings or fellows supervised all housestaff. Some nonurgent patients were seen only by board-certified pediatricians (either in the primary ED or the urgent care unit). These pediatricians were not supervised by PEM faculty. PEM faculty without housestaff saw a few patients in all triage categories. The same attendings and junior pediatric housestaff (postgraduate year [PGY]-1 and PGY-2) were present in both the control and the intervention periods; senior pediatric residents (PGY-3) and outside rotators were present for only 1 month in either period.
From September 1997 to March 1998, a data form was attached to every patient chart at triage. The form asked physician providers to identify patients who met the following criteria: 2 months to 10 years of age; absence of chronic illness (specifically, no history of immunosuppression or immunodeficiency, inborn error of metabolism, or ventriculoperitoneal shunt); and either a triage temperature ≥38.5°C or a complaint of vomiting, diarrhea, or decreased oral intake.
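To make the case definition concrete, the sketch below expresses these inclusion criteria as a simple filter. It is our illustration, not part of the study protocol; the function and field names are hypothetical, but the thresholds are those stated above.

```python
# Sketch of the stated inclusion criteria; names are illustrative only.
GI_COMPLAINTS = {"vomiting", "diarrhea", "decreased oral intake"}
EXCLUDED_HISTORY = {
    "immunosuppression or immunodeficiency",
    "inborn error of metabolism",
    "ventriculoperitoneal shunt",
}

def eligible(age_months: float, history: set, triage_temp_c: float,
             complaints: set) -> bool:
    """Apply the study's stated inclusion criteria to one ED visit."""
    if not 2 <= age_months <= 120:      # 2 months to 10 years of age
        return False
    if history & EXCLUDED_HISTORY:      # chronic illness exclusions
        return False
    # Fever at triage OR a qualifying gastrointestinal complaint
    return triage_temp_c >= 38.5 or bool(complaints & GI_COMPLAINTS)

assert eligible(18, set(), 39.0, set())                    # febrile toddler
assert not eligible(150, set(), 39.0, {"vomiting"})        # too old
```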
The Yale Observation Scale (YOS) is a clinical tool previously validated to predict the risk of serious illness in febrile children.10 It consists of six items: quality of cry, reaction to parent stimulation, state variation, color, hydration, and response to social overtures. For patients younger than 3 years, our study form listed these YOS elements, and physicians used them to score the child's initial appearance.
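For readers unfamiliar with the scale, a minimal sketch of YOS scoring follows, assuming the standard three-point scale per item (1 = normal, 3 = moderate impairment, 5 = severe impairment); the function and data names are ours, not part of the original instrument or study form.

```python
# Minimal sketch of Yale Observation Scale (YOS) scoring, assuming the
# standard per-item scale: 1 = normal, 3 = moderate, 5 = severe impairment.
YOS_ITEMS = (
    "quality of cry",
    "reaction to parent stimulation",
    "state variation",
    "color",
    "hydration",
    "response to social overtures",
)
VALID_SCORES = {1, 3, 5}

def yos_total(item_scores: dict) -> int:
    """Sum the six item scores; totals range from 6 (best) to 30 (worst)."""
    if set(item_scores) != set(YOS_ITEMS):
        raise ValueError("scores required for all six YOS items")
    if any(s not in VALID_SCORES for s in item_scores.values()):
        raise ValueError("each item is scored 1, 3, or 5")
    return sum(item_scores.values())

# A nontoxic, vigorous, alert, interactive, well-hydrated child scores
# the minimum of 6 (all items normal):
assert yos_total({item: 1 for item in YOS_ITEMS}) == 6
```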
The study had three phases: control, intervention, and “washout.” During the control period (September to December), physicians were asked to check tests ordered from a list of 22 common laboratory and radiographic investigations for each visit. During the intervention period (January to March), providers checked tests from a similar list that added the standard hospital charge for each test. Physicians also calculated the total charges for each diagnostic work-up. During the washout period (April), forms reverted to those used in the control period.
The primary investigators (LH and SC) reviewed the medical records of study patients after the visits. Information regarding patient race/ethnicity, insurance status, initial vital signs, provider training levels, patient care setting (ie, primary ED or urgent care unit), diagnostic testing, and disposition (ie, admission or discharge) was extracted.
In the control and intervention periods, 1 week after the visit, up to three attempts were made to interview patient families with working telephone numbers. Respondents who spoke only Spanish were contacted by an investigator fluent in that language (DG). Respondents were asked to describe the child's overall condition since the visit ("better," "same," "worse"). They also were asked whether the child had been seen again by a health care practitioner since the visit to our ED. If so, they were asked whether that visit was prearranged or unscheduled and in which setting it occurred (our ED, their primary care provider's office, or another ED or urgent care center). Finally, they were asked to describe their overall satisfaction with the initial visit (very satisfied, somewhat satisfied, somewhat unsatisfied, very unsatisfied). If the respondent had not been present with the child at the time of the visit, this final question was omitted.
The brief washout period was included to estimate the effect of stopping our intervention on charges for test-ordering. We did not conduct telephone interviews during this phase of the study.
Data were entered and analyzed in SPSS for Windows, version 6.1.4 (SPSS, Inc, Chicago, IL). For categorical data, χ2 tests were used to compare proportions between groups. Odds ratios (ORs) were calculated from 2 × 2 tables. Continuous variables were compared using a two-tailed Student's t test. Because charges for diagnostic testing were not normally distributed in either group, these comparisons were made using a Mann–Whitney U (MWU) test. To isolate the effect of price information, we constructed an analysis of covariance (ANCOVA) model incorporating triage category and admission rate as additional main effects, with patient clinical characteristics (age and vital signs) as covariates. Significance was set at P < .05.
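As an illustration of this analytic approach with modern open-source tools (the original analysis used SPSS 6.1.4), the sketch below runs the same family of tests on hypothetical data; the variable names, simulated data, and libraries are our assumptions, not part of the original analysis.

```python
# Illustrative re-creation of the described analyses on simulated data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "period": rng.choice(["control", "intervention"], n),
    "triage": rng.choice(
        ["emergent", "urgent_high", "urgent_low", "nonurgent"], n),
    "admitted": rng.choice([0, 1], n, p=[0.9, 0.1]),
    "age_months": rng.uniform(2, 120, n),
    "temp_c": rng.normal(38.8, 0.6, n),
    "charges": rng.gamma(2.0, 50.0, n),   # right-skewed, like charge data
})

# Chi-square test comparing a proportion (eg, admission) between periods
chi2, p, dof, expected = stats.chi2_contingency(
    pd.crosstab(df["period"], df["admitted"]))

# Mann-Whitney U test for the nonnormally distributed charges
ctrl = df.loc[df["period"] == "control", "charges"]
intv = df.loc[df["period"] == "intervention", "charges"]
u_stat, p_mwu = stats.mannwhitneyu(ctrl, intv, alternative="two-sided")

# ANCOVA-style model: period, triage, and admission as main effects,
# with age and temperature as covariates
model = smf.ols(
    "charges ~ C(period) + C(triage) + C(admitted) + age_months + temp_c",
    data=df,
).fit()
print(model.summary())
```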
The study protocol was approved by the hospital's Institutional Review Board.
Physician providers properly completed study forms for 5395 patient visits. Review of daily ED records revealed that ∼90% of eligible patient visits were appropriately included. The most common reasons for inappropriate exclusion were failure of the clerical staff to attach a study form to the ED record and failure of the physicians to complete the study form. Eligible patients not included did not differ from included patients on age, vital signs, insurance status, or race/ethnicity.
A total of 2467 and 2414 visits were included in the control and intervention periods, respectively. Table 1 compares the demographic and clinical characteristics of the two groups. The two groups did not differ significantly on race/ethnicity or insurance status.
Patients in the intervention period were slightly more likely to have been triaged into one of the lower acuity categories, “low urgent” or “nonurgent” (OR, 1.4; 95% CI: 1.2, 1.6). A similar proportion of patients in each period was seen in the urgent care unit (21% of controls vs 19% of intervention patients; P = .25). On average, the intervention group was slightly younger than the control group, slightly more febrile, and had slightly higher heart rates. There was no difference in average respiratory rate, and a similar proportion of patients younger than age 3 years in each group had the minimum YOS (ie, a nontoxic, vigorous, alert, interactive, well-hydrated appearance).
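For reference, an OR of this kind, with a 95% confidence interval by the standard Woolf (log) method, can be computed from a 2 × 2 table as sketched below; the counts shown are hypothetical, not the study's data.

```python
# Odds ratio with a 95% Woolf (log) confidence interval from a 2x2 table.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b = group 1 outcome yes/no; c/d = group 2 outcome yes/no."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: lower-acuity triage (yes/no) by study period
print(odds_ratio_ci(1400, 1014, 1200, 1267))
```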
Table 2 compares the test charges for patients in the two periods. Overall, mean charges were 37% (95% CI: 25, 48) lower when physicians had been provided with price information. When stratified by triage category, the largest decreases were seen among patients in the lower acuity triage categories. When stratified by disposition, there was no significant decrease in ED test charges for patients who were admitted to the hospital.
Fewer patients had at least one diagnostic test performed in the intervention period (32% vs 53%; P < .01). Of those who had at least one test, mean charges for those tests were 8.6% lower in the intervention period ($181 vs $198; MWU, P < .01).
In a multivariate analysis, adjusting for differences in patient age, temperature, triage category, and admission rate, test charges in the intervention period remained 27% lower than in the control period ($87 vs $64; P < .01).
A similar proportion of patients was seen by each of the physician training levels in each period. The adjusted charges for junior pediatric residents in both periods were similar to those of the attending physicians. Adjusted mean charges were higher for senior pediatric residents in the control period and showed a smaller decrease in the intervention period. The largest decrease was seen among nonpediatric housestaff rotating monthly from other institutions. When housestaff who were not present in both periods (ie, PGY-3s and rotators) were removed from the multivariate analysis, there was an overall 28% decrease in mean test charges.
Table 3 indicates differences in the frequency of specific test-ordering. An equal proportion of patients in both periods (11%) received a bolus of intravenous normal saline. Yet, despite requiring slightly more intravenous fluids (average 26 mL/kg vs 23 mL/kg; P < .01), patients in the intervention period had serum electrolyte concentrations ordered less often (42% vs 55%; P < .01).
Telephone follow-up was successful for 997 and 1052 families in the control and intervention periods, respectively. This represented 61% of the discharged families who had provided registration clerks with a working telephone number and 47% of all nonadmitted patients. Patients in the respective periods for whom follow-up was unsuccessful did not differ significantly on any clinical (age, vital signs, triage category) or demographic (race/ethnicity, insurance status) variables. Likewise, there was no difference in the mean test charges for those lost to follow-up.
Table 4 displays the results of the interviews. The mean time to follow-up was 6.8 ± 3.3 days. Families reported that the child was “better” slightly less often in the intervention period, but this difference fell short of statistical significance (83.4% vs 86.7%; P = .05). A nearly identical proportion in each period reported that they were “very satisfied” or “somewhat satisfied” with the ED visit (93.0% vs 93.8%; P = .48).
Patients in the intervention period were significantly more likely to have visited a physician or nurse practitioner since their ED visit (31.4% vs 40.3%; P < .01). Most of this difference was explained by an increased rate of unscheduled care (17.8% vs 24.4%; P < .01). Of the patients returning for unscheduled follow-up, nearly equal proportions in each period had been seen again in the ED versus their primary care provider.
A total of 515 visits were included in the brief washout period. The overall mean test charge for these patients was $63. However, inclusion of admission rates, triage categories, patient age, and temperature in the ANCOVA model resulted in an adjusted value of $74, representing a 15% decrease from the adjusted mean charges in the control period (P = .02) and a 15% increase over the intervention period (P = .02).
In this large sample, providing price information was associated with a significant change in physician test-ordering behavior. After adjusting for differences in overall acuity in the control and intervention periods (presumably attributable to seasonal factors), test charges dropped by 27%. This difference, although large, had little demonstrable effect on the final patient outcomes measured.
Unlike physicians in community practice, who often face direct financial disincentives, academic physicians have functioned in a slightly different environment.11,12 In addition to their wide variety of payer relationships, the salary structure and culture of university hospitals have partially insulated them from such pressures.13,14 As a result, many remain unaware of the relative costs of diagnostic testing, and such information is rarely emphasized in residency training.15,16
There is evidence that even in the absence of direct financial pressure, measures that increase the cost-awareness of academic physicians can increase the efficiency of their practice patterns.4,16,17 In separate studies, Martin and associates and Manheim and colleagues have demonstrated that fairly intensive interventions such as weekly chart reviews and seminars can reduce inpatient charges and lengths of stay.18,19 With varying success, the provision of simple price information also has shown promise as an educational tool in the inpatient setting.7,20,21 Intensive care unit practice seems especially sensitive to this information.8,9 Most investigators have posited that this effect is attributable to a reduction in the unconsidered, daily repetition of unnecessary tests.7,9,22
Cummings and co-workers and Long et al have shown that in hypothetical case studies, price information can affect a clinician's initial diagnostic work-up.5,6 Tierney and associates saw a practical effect from this intervention in an outpatient internal medicine clinic.23 However, there has been no formal study of this method in the ED setting. It was unclear how this information would be used by physicians managing unfamiliar patients presenting acutely for care.24,25
In our study, the effect of price information seemed greatest for the least emergent patients. The charges for testing of admitted patients and patients triaged as “emergent” or “high urgent” decreased little, if any, during the intervention period. Presuming that sicker patients often present physicians with firmer, less discretionary indications for testing, these results are not surprising. In addition, the work-up of patients eventually admitted to the hospital may have been influenced by the practice patterns and clinical pathways of inpatient pediatricians and specialists not included in the study.
Of course, the effects of price information on housestaff behavior cannot be separated completely from their attending supervision.16,25–29 All ED attendings became familiar with the intervention forms and were presumably influenced by the information they contained. This study was designed so that junior residents (PGY-1s and PGY-2s) were present in both periods to serve as their own controls. We acknowledge that the clinical learning curve of most pediatric housestaff during the busy winter months may have produced a trend toward less test-ordering, even in the absence of our intervention.24,30 However, this potentially confounding effect was not present among the rotators, whose pediatric ED experience was limited to their month at our institution. In addition, the attending supervision of all housestaff should have modulated the effect of their increasing fund of clinical knowledge.
Comparison of the effects of our intervention among different training levels could generate several hypotheses. As PGY-1 and PGY-2 residents are supervised most closely by the attending physicians, the concordance of their charges is expected. However, the wide disparity between the senior residents and the rotators raises some questions. Neither group was present in both periods to serve as its own control, so these differences may simply reflect preexisting differences in practice styles. It also is conceivable that by their third year of residency at our institution, the PGY-3s already possessed a degree of price awareness and were therefore less likely to be affected by knowledge of charges. The fact that their utilization rates were higher than those of attendings in both periods may reflect the combined effects of their incomplete training and their decreased degree of direct supervision.
Examination of the frequency of individual tests ordered provides insight into the decision-making process that led to the overall reduction in charges. The use of most "little-ticket" items (ie, urine dipstick, glucometer measurement, rapid streptococcal test and cultures) showed no significant change. However, large decreases were seen among commonly ordered but more expensive tests (ie, chest radiography, serum electrolyte studies, complete blood count, blood and urine culture studies). Providers appear to have selectively removed these more expensive items from many of their routine evaluations.
The fact that the large majority of discharged patients were reported to be feeling “better” an average of 7 days after their visit in both periods suggests that many of the initial complaints were of an uncomplicated, minor, and self-limited nature. In addition to seasonal disease patterns, the increased rate of unscheduled follow-up care in the intervention period may reflect a failure of ED providers to meet parental expectations for diagnostic evaluations. For example, during the telephone interviews, many parents expressed disappointment when chest radiography had not been performed.
Despite this increased tendency to return to our ED or elsewhere in the intervention period, final patient outcomes were similar for both groups. If this similarity were the result of measures taken during these unscheduled follow-ups, one would expect to have seen a lower rate of parental satisfaction with the initial visit. This was not the case. In addition, there was no difference in the rate of children admitted to the hospital at the time of follow-up.
Generally accepted outcome measures for nonadmitted pediatric ED patients do not yet exist. Given the complaints included in this study, we chose telephone interviews as the best reflection of the impact of a single ED encounter.31 It was important to conduct these interviews within a fairly narrow time window. Calling too soon may not have allowed enough time for the expected resolution of symptoms, whereas calling too late increased the chances that the patient may have contracted a second, unrelated illness.
These time constraints, combined with other difficulties contacting our largely indigent population, resulted in a generally low overall follow-up rate. Despite the clinical and demographic similarities between patients for whom follow-up was successful and those for whom it was not, a sampling bias may have occurred. However, there is no evidence that this bias varied from the control to the intervention period. Thus, our relatively low response rate may limit the generalizability of our findings, while preserving internal validity.
The results of the washout period suggest that the effect of our intervention diminished once the intervention was stopped. This has been reported commonly with other such educational interventions, emphasizing the need for continued reinforcement.32 Although our time frame was limited, the residual 15% decrease in charges is encouraging and may represent a more lasting change in practice.
This investigation had other important limitations. Although the study was conducted uninterrupted through a single winter season, patients in the control and intervention periods may have differed in unmeasured ways that influenced test-ordering behavior. For instance, the differences in utilization of the rapid tests for respiratory syncytial virus and rotavirus may simply reflect the seasonality of those infections. However, the cross-over date of January 1 was chosen to balance these differences, and the total contribution of such tests to the overall decrease was small.
The patient visits included in this study encompassed a rather narrow range of complaints (ie, fever, vomiting, diarrhea, or decreased oral intake) in generally healthy children. The intention of this design was to include cases in which ED providers were most likely to have a large degree of discretion and control over test-ordering. If children with more complicated or serious presentations were included, one might expect the effect of price information to be reduced.
However, the population studied represents a large portion of pediatric ED visits. In the diagnostic evaluation of these patients, providers decreased their resource utilization markedly when presented with price information. By including such educational interventions as part of medical training, academic institutions may better prepare housestaff to become efficient decision-makers.
This work was supported by a Special Project Grant from the Ambulatory Pediatric Association.
We wish to thank Nancy Ryan and our ED housestaff for their participation in this study. We are also grateful to Elizabeth Powell, MD, MPH, Genie Roosevelt, MD, MPH, and Karen Sheehan, MD, MPH, for their assistance in the preparation of this manuscript.
ED = emergency department • PEM = pediatric emergency medicine • PGY = postgraduate year • YOS = Yale Observation Scale • MWU = Mann–Whitney U test • ANCOVA = analysis of covariance • OR = odds ratio
- Copyright © 1999 American Academy of Pediatrics