More than a decade ago, the introduction of intravenous (IV) smart pumps with drug libraries and dose error reduction systems (DERSs) provided a means for decreasing IV medication administration errors. Before IV smart pumps were available, all pump programming required the user to manually calculate the rate of infusion, then input the desired infusion rate into the pump. Because many different units of measure are used in the administration of IV medications, required calculations often are complex and therefore increase the likelihood of user error.1
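The rate calculation described above can be sketched in a few lines. The function name and the dopamine example below are illustrative, not drawn from the study; they simply show the chain of unit conversions (mcg to mg, minutes to hours, dose to volume) that users had to perform by hand before smart pumps.

```python
def infusion_rate_ml_per_hr(dose_mcg_kg_min, weight_kg, drug_mg, diluent_ml):
    """Convert a weight-based dose (mcg/kg/min) to a pump rate (mL/hr).

    Illustrates the manual arithmetic: scale the dose by patient weight,
    convert mcg/min to mg/hr, then divide by the drug concentration.
    """
    concentration_mg_per_ml = drug_mg / diluent_ml
    dose_mg_per_hr = dose_mcg_kg_min * weight_kg * 60 / 1000
    return dose_mg_per_hr / concentration_mg_per_ml

# Example: dopamine at 5 mcg/kg/min for an 80 kg patient,
# mixed as 400 mg in 250 mL (1.6 mg/mL)
rate = infusion_rate_ml_per_hr(5, 80, 400, 250)  # -> 15.0 mL/hr
```

Even this simple case involves three conversions; a slip in any one of them (e.g., forgetting the mcg-to-mg step) produces a thousand-fold dosing error, which is precisely the class of mistake DERSs were designed to catch.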

In contrast, IV smart pumps have built-in drug libraries and a DERS, which allows the user to choose the desired medication from an approved list and input the required patient information, after which the IV smart pump calculates the infusion rate. Drug libraries contain the most commonly used IV medications, and the DERS alerts the user if the calculated infusion rate exceeds normally acceptable dosing limits. These limits can be expressed as either hard dose limits (i.e., cannot be bypassed by users at the pump, thereby preventing users from starting the programmed infusion) or soft dose limits (which provide a warning that the dose may be too high but will still allow users to start the infusion as programmed after the limits are acknowledged).
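The hard/soft limit behavior described above can be sketched as follows. The drug name, units, and limit values here are hypothetical, and real DERS implementations are considerably more involved; the sketch only shows the three-way decision the text describes.

```python
from dataclasses import dataclass

@dataclass
class DoseLimits:
    """One illustrative drug-library entry (values are hypothetical)."""
    soft_max: float  # warns, but the user may acknowledge and proceed
    hard_max: float  # cannot be bypassed at the pump

def check_dose(dose: float, limits: DoseLimits) -> str:
    """Return the DERS response to a programmed dose."""
    if dose > limits.hard_max:
        return "blocked"        # hard limit: infusion cannot be started
    if dose > limits.soft_max:
        return "warn-override"  # soft limit: warn, then allow if acknowledged
    return "ok"

# Hypothetical heparin entry, units/kg/hr
heparin = DoseLimits(soft_max=25.0, hard_max=40.0)
check_dose(30.0, heparin)  # "warn-override"
```

The key design distinction is that a soft limit keeps the clinician in the loop (useful when an unusual dose is intentional), while a hard limit removes that discretion entirely for doses outside any defensible range.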

IV smart pumps have become indispensable in the administration of medication, fluids, and nutrients. Although the use of IV smart pumps can reduce the incidence of IV adverse drug events and medication administration errors,2 IV infusion continues to be associated with 54% of all adverse drug events,3 56% of medication errors, and 61% of serious and life-threatening errors.4 A study from 2005 found a staggering 67% error rate with the administration of IV infusions in an intensive care unit (ICU).5 Notably, many of these errors involved labeling and other administrative omissions, and only a portion resulted in serious harm to patients.

Common sources of error include overriding dose error alerts and, even more concerning, manually bypassing drug libraries and the DERSs completely.6,7 The complexity of the device-user interface, the time required to complete IV smart pump programming, and libraries that lack drug entries that are properly harmonized with how medications are ordered or dispensed in that location are among the most frequently cited reasons for nurses bypassing drug libraries and DERSs.8 Research suggests that the majority of adverse drug events are related to incorrect or incomplete programming.9 Clinicians report that pump programming is frequently rushed and that they often feel forced to make hasty decisions about overriding alerts because of time constraints and competing work demands.6,9

Research has identified three specific IV medication infusion tasks as particularly susceptible to errors.10 The first is administration of multiple IV infusions, including secondary (also referred to as “piggyback”) medication administration. Other infusion tasks associated with a high rate of error include IV bolus medication administration and titrated administration of life-critical drugs or anesthetics responsive to various physiological signals.10 Errors associated with bolus and titrated doses can cause more serious harm to patients than infusions administered at slower rates. An observational study of IV medication preparation and administration in an ICU reported that injection of bolus doses at faster-than-recommended rates was the most frequent type of error.11 Another observational study of IV medication administration in six wards across two teaching hospitals demonstrated that administration by bolus was associated with a 312% increased risk of error, as compared with medications administered using other methods.12

In an era when people in the United States upgrade their smartphones every one to two years,13 it has been almost 20 years since most IV smart pump device manufacturers have substantially upgraded their basic device designs. These safety concerns are well recognized and have become a top priority for the U.S. Food and Drug Administration (FDA), which received 56,000 reports of infusion pump incidents, including 710 deaths, and issued 87 infusion pump recalls between 2005 and 2009.9

The combination of the ubiquitous nature of IV infusion pumps along with a sense of urgency to address IV medication safety has garnered the attention of several organizations tied to patient safety. The Association for the Advancement of Medical Instrumentation and FDA cosponsored a summit in 2010 to make the issue of patient safety and IV infusions a top priority.14 The National Quality Forum conducted an environmental analysis in 2012 that resulted in 13 recommendations for improving safety during IV infusion.9 The leading two hazards on the ECRI Institute’s top 10 list of health technology hazards for 2014 were related to IV infusion pumps, specifically alarm hazards and infusion pump medication errors.15

IV smart pump medication administration is a serious patient safety issue that needs to be addressed with tangible solutions that can be implemented as quickly as possible. Clearly, innovation is needed in the currently available devices, so that they can be made safer and easier to use.16,17 Although there is a groundswell of effort focused on this important issue, few practical approaches have been studied. A growing body of literature has linked the complexity of IV smart pump programming to medication errors. To our knowledge, the healthcare community lacks quality evidence regarding the safety implications associated with different IV smart pumps, high-risk programming tasks, and IV medication administration errors.

The primary purpose of this pilot study was to measure the differences in programming times and the frequency of programming use error among three IV smart pumps. The specific aims of the study were 1) to compare the differences in programming times among three IV smart pumps on five common programming tasks, 2) to compare the differences in the frequency of programming use error among three IV smart pumps, and 3) to measure the impact of user training on a) programming times and b) use errors.

In June 2014, we completed a pilot study using three IV smart pumps. This study used a within-subjects design, as each participant completed the IV medication tasks on two of the three IV smart pumps. This design allowed individual critical care nurse participants to compare their experience across the two IV smart pumps while controlling for the variable of individual nurse performance and past experience.

Fifteen critical care nurse participants completed five common programming tasks in a simulation laboratory. Critical care nurses were recruited from Boston-area hospitals using the following criteria: currently working at least 20 hours per week in direct critical care, minimum of two years of professional critical care nursing experience, and a minimum of two years of experience operating programmable large-volume IV smart pumps. Nurses who met the study inclusion criteria were provided information about the study both on the phone and by e-mail, along with a consent form. Data collection took approximately 1.5 to 2 hours per participant, and all participants received an honorarium of $175 as compensation for their participation.

Programming time was defined as the time, measured in seconds, required to complete each programming task; timing ended when the participant stated that the programming task was completed.

A use error is generally defined as either an inadvertent action or an omitted action that deviates from the most efficient way of doing something, regardless of whether the use error was detected and/or corrected. Use errors related to IV smart pump programming are important to understand because an unintentional wrong or missing action can result in an IV medication administration error. In this study, we counted only use errors that resulted in incorrect final pump programming.

User training consisted of a brief session conducted according to the manufacturer's instructions, covering only the IV medication tasks used in the study.

Institutional review board approval was obtained, and all data collection was done in a nursing simulation laboratory. Upon arrival at the simulation laboratory, nurses were given the chance to ask additional questions regarding the study, and after all questions were answered, the consent form was signed. Three different IV smart pumps were used in the study. Two of these pumps account for 65% of the pumps in current clinical use,18 and one is a prototype IV smart pump in development, designed to be both smarter and safer17 than currently available devices. Design features on the prototype pump were specifically developed to reduce the risk of programming errors: a simple and intuitive user interface that eliminates unnecessary steps, a large color touchscreen that allows infusion status to be easily viewed from multiple angles, and programming that is simple and easy to learn. All individual product branding was concealed, and the pumps were relabeled as pump A (in current clinical use), pump B (in current clinical use), and pump C (prototype).

Because our goal was to ask participants to perform the five programming tasks on an unfamiliar pump, we verified previous IV smart pump experience before initiating data collection. This allowed us to confirm that each participant was assigned to two unfamiliar IV smart pumps (one in current clinical use and the prototype pump).

To mitigate order bias, the order of IV smart pump use in the sequence of programming tasks was determined randomly by a coin toss. Of the 15 study participants, seven participants who were users of pump B completed the programming tasks on a combination of pump A/pump C (prototype) and eight participants who were users of pump A completed the programming tasks on a combination of pump B/pump C (prototype). Thus, each participant completed programming tasks on two of the three pumps (either pump A or pump B), with all 15 participants using pump C (prototype), since no participants were familiar with pump C. The five common programming tasks used in the study included 1) change the rate on a running infusion, 2) deliver an antibiotic as a secondary infusion, 3) deliver a weight-based infusion, 4) titrate a weight-based infusion, and 5) deliver a morphine infusion with a bolus.

Before collecting data with the study participants, we conducted a complete programming session with a pilot participant. This pilot session resulted in several modifications to the data collection forms but did not result in changes to the study procedures or programming tasks.

First, all study participants completed a brief demographic questionnaire. A detailed script then was used to ensure consistency of the research protocol administration. Each of the five IV smart pump programming tasks was completed four times: once on each of the two IV smart pumps before and after user training. Participants had a 5- to 10-minute break between each set of programming tasks. The details of the pump programming sequence are shown in Figure 1.

Programming times were recorded using the stopwatch feature on an iPhone. The timing began with the first button push and ended when either the task was complete or a three-minute timeout period was reached. After the completion of each programming task, the pump was reviewed to evaluate task completion and use errors. All data collection was video recorded for data verification and for additional quantitative and qualitative analysis, if needed.

First, descriptive statistics were generated for each variable of interest in the study. Means (±SD) were computed for all interval data, and frequency counts and percentages were determined for all categorical data. Because the data were collected in a simulation laboratory, there were no missing data. Analyses for testing of means were done using analysis of variance (ANOVA), including homogeneity of variance and post hoc tests (SPSS version 22.0).
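For readers unfamiliar with the test, the one-way ANOVA F statistic used here can be computed directly from the group data. The study itself used SPSS; the sketch below is a pure-Python equivalent, and the sample times are hypothetical values, not the study's raw data.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic for two or more groups.

    F is the ratio of between-group variance to within-group variance;
    a statistics package (e.g., scipy.stats.f_oneway, or SPSS as used
    in the study) would also report the associated p-value.
    """
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)           # number of groups
    n = len(all_vals)         # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical after-training times (seconds) on one task, three pumps
f = one_way_anova_f([36.6, 40.1, 33.2], [58.8, 61.3, 55.0], [26.5, 28.0, 24.9])
```

A large F (relative to the F distribution with k − 1 and n − k degrees of freedom) indicates that at least one pump's mean time differs from the others, which is why post hoc pairwise tests were then needed to identify which pumps differed.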

Descriptive statistics were used to describe the participant demographics. Critical care nurse participants (n = 15) were from 12 Boston-area hospitals, including both community and university hospitals. Most (73.3%) worked full time, participants were distributed across day and night shifts, and they averaged 12.3 years of critical care experience and 7.8 years of experience with IV smart pumps. Most (86.7%) held at least a bachelor's degree. All demographic data are shown in Table 1.

Table 1. Critical care nurse demographics, using descriptive statistics (n = 15)

Variable: No. (%)
Gender
  Men: 2 (13.3)
  Women: 13 (86.7)
Highest degree
  Associate's: 2 (13.3)
  Bachelor's: 10 (66.7)
  Master's: 3 (20.0)
Primary work shift
  7:00 am–7:00 pm: 6 (40.0)
  7:00 pm–7:00 am: 7 (46.7)
  Other: 2 (13.3)
Current work status
  Full time: 11 (73.3)
  Part time: 4 (26.7)
Type of critical care unit
  Critical care unit: 1 (6.7)
  Medical intensive care unit: 3 (20.0)
  Surgical intensive care unit: 2 (13.3)
  Trauma: 3 (20.0)
  Mixed: 6 (40.0)
Type of hospital
  University teaching: 10 (66.7)
  Community: 5 (33.3)
Nursing certification
  CCRN: 11 (73.3)
  Other: 1 (6.7)
  None: 3 (20.0)

Variable: Mean ± SD
Nursing experience (years): 17.3 ± 9.1
Critical care experience (years): 12.3 ± 7.1
No. of beds in hospital: 316 ± 129
No. of critical care beds: 34 ± 28
Length of time using any IV smart pump (years): 7.8 ± 3.4
Length of time using current IV smart pump (years): 7.3 ± 2.8

ANOVA was used to test the differences in task completion times before and after the user training for each of the three IV smart pumps. A summary of these findings is shown in Table 2. In each case, the programming time was significantly shorter after the user training. In addition, a review of the mean values for each individual task shows that large differences in programming times occurred for all three IV smart pumps. For example, the mean programming time for delivering an antibiotic as a secondary infusion (task 2) before the user training was 86.2 seconds for pump A, 101.2 seconds for pump B, and 52.6 seconds for pump C. Although the programming times decreased significantly for that same task after the user training, large differences among the three IV smart pumps remained: 36.6 seconds for pump A, 58.8 seconds for pump B, and 26.5 seconds for pump C.

Table 2. Task completion times (seconds; mean ± SD) before and after the user training, using analysis of variance

Task 1: Titrate a running infusion
  Pump A (n = 8): before 53.5 ± 46.1, after 8.1 ± 1.3, P = 0.007
  Pump B (n = 7): before 64.8 ± 62.3, after 6.1 ± 1.2, P = 0.005
  Pump C (n = 15): before 15.6 ± 11.1, after 3.3 ± 1.5, P < 0.001

Task 2: Deliver an antibiotic as a secondary infusion
  Pump A: before 86.2 ± 40.0, after 36.6 ± 14.6, P < 0.001
  Pump B: before 101.2 ± 16.0, after 58.8 ± 16.7, P < 0.001
  Pump C: before 52.6 ± 19.9, after 26.5 ± 8.8, P < 0.001

Task 3: Deliver a weight-based infusion
  Pump A: before 176.4 ± 10.2, after 85.7 ± 28.3, P < 0.001
  Pump B: before 97.5 ± 43.2, after 48.0 ± 23.3, P = 0.004
  Pump C: before 62.1 ± 30.6, after 27.5 ± 6.8, P < 0.001

Task 4: Titrate a weight-based infusion
  Pump A: before 32.5 ± 30.0, after 13.9 ± 7.4, P = 0.009
  Pump B: before 24.8 ± 11.7, after 17.8 ± 8.8, P = 0.004
  Pump C: before 12.6 ± 7.5, after 5.2 ± 2.3, P < 0.001

Task 5: Deliver a morphine infusion with a bolus
  Pump A: before 119.4 ± 55.7, after 59.6 ± 12.0, P = 0.001
  Pump B: before 127.0 ± 59.8, after 71.6 ± 49.6, P = 0.002
  Pump C: before 39.8 ± 11.9, after 27.0 ± 8.2, P < 0.001

All five tasks combined (mean, seconds)
  Pump A: before 93.6, after 40.78
  Pump B: before 83.06, after 40.46
  Pump C: before 36.54, after 17.9

As noted in methods, the maximum time limit to complete each task was three minutes. Of the five programming tasks used in this study, the weight-based infusion (task 3) and the morphine infusion with bolus (task 5) were the most complex. Before user training, the three-minute time limit was reached seven of eight times for task 3 on pump A, one of seven times with pump B, and zero times with the prototype pump C. For task 5, the time limit was reached three of eight times with pump A, one of seven times for pump B, and zero times with the prototype pump C. In each of these instances, 180 seconds was entered into the datasheet as the completion time. These were the only instances in which the time limit was reached.

Table 2 shows that differences existed among all three IV smart pumps in all programming tasks, with the overall task programming times being fastest with the IV smart pump C (prototype). Because all five IV smart pump programming tasks are frequently used in the acute care setting, we also calculated the overall mean time for all five programming tasks by IV pump type. The mean task programming times after the user training for pumps A (40.78 seconds) and B (40.46 seconds) were longer than for the pump C prototype (17.9 seconds).

Regarding significance, differences were found primarily between the two IV smart pumps in current clinical use (pump A and pump B) and the prototype pump C (Table 3). Three of the five nonsignificant differences were found between the two pumps in current clinical use (tasks 1, 4, and 5), one was between pump A and pump C (task 2), and the last one was between pump B and pump C (task 3). These comparisons were done using the after-user-training task programming times.

Table 3. Significant differences in programming times by pump type, using analysis of variance

Task 1: Titrate a running infusion
  P (A vs B) = NS; P (A vs C) < 0.001; P (B vs C) = 0.003
Task 2: Deliver an antibiotic as a secondary infusion
  P (A vs B) = 0.006; P (A vs C) = NS; P (B vs C) < 0.001
Task 3: Deliver a weight-based infusion
  P (A vs B) = 0.002; P (A vs C) < 0.001; P (B vs C) = NS
Task 4: Titrate a weight-based infusion
  P (A vs B) = NS; P (A vs C) = 0.007; P (B vs C) < 0.001
Task 5: Deliver a morphine infusion with a bolus
  P (A vs B) = NS; P (A vs C) = 0.018; P (B vs C) = 0.002

NS = not significant.

Effect size provides an important measure of the magnitude of differences, and Cohen defines a large magnitude of difference as any effect size greater than 0.5.19 The effect sizes for each of the individual programming tasks were calculated using means and SDs. Because programming times were fastest on pump C, effect sizes were computed by comparing the pump C prototype with each of the IV smart pumps in current clinical use (pump A and pump B). The mean effect size for all five programming tasks for pump A was 0.71 (range 0.39–0.86). The mean effect size for pump B was 0.65 (range 0.5–0.77). The effect sizes are shown in Table 4 and indicate large effects in all but one of the comparisons (task 2, pump A).

Table 4. Effect sizes compared with the prototype (IV smart pump C)

Task 1: Titrate a running infusion: pump A 0.86; pump B 0.72
Task 2: Deliver an antibiotic as a secondary infusion: pump A 0.39; pump B 0.77
Task 3: Deliver a weight-based infusion: pump A 0.82; pump B 0.51
Task 4: Titrate a weight-based infusion: pump A 0.62; pump B 0.70
Task 5: Deliver a morphine infusion with a bolus: pump A 0.85; pump B 0.50
Mean effect size: pump A 0.71; pump B 0.65
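One common way to obtain an effect size from reported means and SDs is Cohen's d with a pooled standard deviation. The paper does not state the exact formula it used, so the sketch below is an assumption about the general technique, not a reproduction of the values in Table 4.

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized difference between two group means.

    Uses the pooled standard deviation, weighting each group's
    variance by its degrees of freedom (n - 1).
    """
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Two hypothetical groups: equal SDs of 5, means differing by 5,
# so the means differ by exactly one pooled SD (d = 1.0)
d = cohens_d(10.0, 5.0, 10, 5.0, 5.0, 10)
```

Because d expresses the mean difference in units of the pooled SD, it allows comparisons across tasks whose raw programming times differ by an order of magnitude.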

The percentage of use errors for each of the three IV smart pumps is shown in Table 5. Before the user training, pump A had the highest percentage of use error (30%), followed by pump B (17%), then pump C (8%). The percentage of use error decreased markedly after the user training for all three IV smart pumps (pump A: 7%; pump B: 3%; and pump C: 1%), with the lowest percentage being associated with pump C.

Table 5. Percentage of use errors before and after the user training, using descriptive statistics

Pump A (n = 8): before 30%, after 7%
Pump B (n = 7): before 17%, after 3%
Pump C (n = 15): before 8%, after 1%

In this pilot study, significant differences were observed in the time required by critical care nurse participants to complete five common IV smart pump programming tasks before versus after user training. These differences have implications for practice. First, although differences were seen in the individual programming tasks between the two IV smart pumps that are in current clinical use (pump A and pump B), the mean time for all five tasks after user training was essentially the same (40–41 seconds). This finding suggests that in general use, there is likely no appreciable programming time difference between these two IV smart pumps over the course of a critical care nursing shift. However, notable differences were observed in use error frequency, with twice as many use errors on pump A compared with pump B, both before and after user training. In the setting of critical care, where patients receive numerous IV medications each day, anything that can be done to decrease the frequency of IV medication administration errors is likely to have a substantial positive impact on overall patient outcomes and cost of care.

When comparing the differences in programming times and use error frequency between the two IV smart pumps in current clinical use (pump A and pump B) with the prototype pump (pump C), our findings support that programming with the prototype pump was fastest for each of the programming tasks tested. In addition, even before receiving user training, the prototype pump C was the only pump for which the three-minute programming time limit was not reached. Finally, the effect sizes for all programming tasks between the prototype pump C and both pumps A and B support a large effect with regard to decreased programming time when using pump C.

The prototype pump also was associated with the lowest frequency of use errors. It seems reasonable to assume that the longer it takes to program an IV smart pump, the more frustrated the user will become, and the more likely it is that the end user will make an IV medication administration error. Thus, any technology improvements that can help simplify the use of these devices and decrease IV smart pump programming time have the potential to decrease IV medication errors in at least two ways. First, decreasing the programming time will minimize the opportunity for interruptions during IV smart pump programming. Second, decreasing the time it takes to program an IV smart pump will reduce the likelihood that a nurse will bypass the DERS due to frustration or time constraints. Speed and efficiency of IV smart pump programming have particular pragmatic relevance to clinical practice, as time constraints are repeatedly highlighted in the literature as a fundamental reason for bypassing the safety features of the DERS. If some of the design features inherent in the prototype tested in this pilot study could be incorporated into the currently available IV smart pumps and shorten the time required for programming, these features likely would have a positive impact on drug library use and compliance.


Our findings support the value of proper user training in helping clinicians learn to operate the IV smart pumps in a more time efficient manner and make fewer use errors. However, with the current economic environment in healthcare, it seems unlikely that future resources for user training will increase. Therefore, any technology innovations that can make IV smart pumps easier to use and reduce the time needed for training would undoubtedly help to address the learning needs of clinical end users and the safety needs of patients.


The goal of this initial programming study was not to highlight any particular commercially available IV smart pump as better than another. Instead, the goal was to highlight that significant differences do exist that have relevance to clinical practice, which is why we chose not to disclose individual brands. However, the overall findings support that the technology present in the prototype IV smart pump had a positive impact on both programming times and use errors for critical care nurses performing commonly used programming tasks. More simply put, current technology is available that can help make our IV smart pumps “smarter and safer.”17

IV smart pump manufacturers have a moral responsibility to fund ongoing hardware and software development efforts that use available technology to make their products as safe and user friendly as possible, in order to reduce the risk associated with their use. An essential need exists for increased collaboration between manufacturers of IV smart pumps and clinical end users to achieve meaningful improvements in this very important area of patient safety.

1. Giuliano KK, Richards N, Kaye W. A new strategy for calculating medication infusion rates. Crit Care Nurse. 1993;13(6):77–82.

2. Maddox RR, Danello S, Williams CK, Fields M. Intravenous Infusion Safety Initiative: collaboration, evidence-based best practices, and "smart" technology help avert high-risk adverse drug events and improve patient outcomes. In: Henriksen K, et al., eds. Advances in Patient Safety: New Directions and Alternative Approaches. Vol 4: Technology and Medication Safety. Rockville, MD: Agency for Healthcare Research and Quality; 2008.

3. Kaushal R, Bates DW, Landrigan C, et al. Medication errors and adverse drug events in pediatric inpatients. JAMA. 2001;285(16):2114–2120.

4. Vanderveen TM. Averting highest-risk errors is first priority. Available at: //psqh.com/mayjun05/averting.html. Accessed June 1, 2015.

5. Husch M, Sullivan C, Rooney D, et al. Insights from the sharp end of intravenous medication errors: implications for infusion pump technology. Qual Saf Health Care. 2005;14(2):80–86.

6. McAlearney AS, Chisolm DJ, Schweikhart S, et al. The story behind the story: physician skepticism about relying on clinical information technologies to reduce medical errors. Int J Med Inform. 2007;76(11–12):836–842.

7. Kirkbridge G, Vermace B. Smart pumps: implications for nurse leaders. Nurs Adm Q. 2011;35(2):110–118.

8. Carayon P, Hundt AS, Wetterneck TB. Nurses' acceptance of Smart IV pump technology. Int J Med Inform. 2010;79(6):401–411.

9. National Quality Forum. Critical paths for creating data platforms: patient safety: intravenous infusion pump devices. Washington, DC: National Quality Forum; 2012.

10. Cassano-Piché A, Fan M, Sabovitch S, et al. Multiple intravenous infusions phase 1b: practice and training scan. Ont Health Technol Assess Ser. 2012;12(16):1–132.

11. Fahimi F, Ariapanah P, Faizi M, et al. Errors in preparation and administration of intravenous medications in the intensive care unit of a teaching hospital: an observational study. Aust Crit Care. 2008;21(2):110–116.

12. Westbrook JI, Rob MI, Woods A, Parry D. Errors in the administration of intravenous medications in hospital and the role of correct procedures and nurse experience. BMJ Qual Saf. 2011;20(12):1027–1034.

13. Entner R. International comparisons: the handset replacement cycle. Recon Analytics; 2011:1–8.

14. Association for the Advancement of Medical Instrumentation. Infusing patients safely: priority issues from the AAMI/FDA Infusion Device Summit. Arlington, VA: Association for the Advancement of Medical Instrumentation; 2010.

15. ECRI Institute. Top 10 health technology hazards for 2014. Plymouth Meeting, PA: ECRI Institute; 2013.

16. Giuliano KK, Niemi C. The urgent need for innovation in I.V. smart pumps. Nurs Manage. 2015;46(3):17–19.

17. Weinger MB. Why are our infusion pumps not smarter or safer? Available at: //aamiblog.org/2015/01/28/matthew-b-weinger-why-are-our-infusion-pumps-not-smarter-or-safer. Accessed July 23, 2015.

18. IHS. Infusion pumps and accessories–world. Englewood, CO: IHS; 2013.

19. Cohen J. Statistical power analysis. Curr Dir Psychol Sci. 1992;1:98–101.
