Counting clicks


This month, EPMonthly ran an article about the cost that poorly-designed EHRs impose on ED operations. The EPMonthly authors - Augustine and Holstein - asked some good questions and made some good points. But the data they used to ground their piece came from a peer-reviewed article that unfortunately leaves a lot to be desired.

We ran an editor's note at the end of the EPMonthly piece, succinctly stating my objections to the original peer-reviewed research. But since this "4000 clicks" study has gotten traction elsewhere, I felt compelled to make my detailed criticisms of the article publicly available:


The authors of the "4000 clicks" study chose to evaluate ED physicians' performance with McKesson Horizon. In KLAS surveys of ED physicians, McKesson scored at or near the bottom in many categories, including provider satisfaction, perceived workflow integration, and speed of charting.
The authors assert that many other studies in which EHRs improved ED throughput metrics and reduced errors are biased - because those studies use data from "the most innovative institutions," with presumably customized systems. They also note that "software vendor" has been the leading factor in determining ED improvement and provider satisfaction. So I'd expect them to pick a system with average scores, implemented at a fairly typical ED. Yet the authors don't mention any ED characteristics, and they chose one of the worst-ranked ED information systems.
The authors justified their choice of McKesson's ED software on the grounds that many hospitals use McKesson products - though the same rationale could justify an evaluation of GE software because the hallways are lit with GE bulbs. McKesson's ED software has a small and declining market share, and given that the majority of EHR systems scored better - many much better - the choice undermines the authors' stated goal of a more accurate representation of the ED EHR experience.
Also, the authors looked at the behavior of 16 providers - a mix of attendings, residents, and PAs/NPs. Lumping different provider types together can be risky, depending on the environment and documentation expectations. Yet the authors don't describe the environment, and the variability in time spent charting (17.5% to 67.6%) suggests very different roles were combined haphazardly. In another section, they confusingly refer to their "30 subjects." Nor do the authors describe their subjects' experience in emergency medicine or with the system (were they newly-minted physicians? New hires? Was this day one of the system installation?).
These methodological deficiencies cast doubt on the authors' headlining "4000 clicks" sum - which seems to me more like sensationalism than a scholarly analysis of EHR's impact on ED productivity.
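To make the error-bar concern concrete, here's a minimal sketch with entirely hypothetical numbers - the paper reports only a range of charting time, not per-provider click counts, so none of these figures come from the study. It shows how extrapolating a headline per-shift click total from a small, heterogeneous sample of providers produces a wide confidence interval:

```python
# Hypothetical illustration only: the figures below are invented, not taken
# from the "4000 clicks" study. The point is that a headline per-shift total
# extrapolated from a small, mixed sample of providers carries wide error bars.
import random
import statistics

random.seed(42)

# Pretend we shadowed 16 providers and counted clicks per patient encounter.
# A mix of attendings, residents, and PAs/NPs plausibly spans a wide range.
clicks_per_patient = [34, 41, 55, 62, 48, 90, 72, 38, 66, 81, 44, 59, 70, 95, 52, 63]
patients_per_shift = 25  # another assumption, just for the arithmetic

def bootstrap_ci(sample, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the sample mean."""
    means = sorted(
        statistics.mean(random.choices(sample, k=len(sample)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean_clicks = statistics.mean(clicks_per_patient)
lo, hi = bootstrap_ci(clicks_per_patient)

print(f"Estimated clicks per shift: {mean_clicks * patients_per_shift:,.0f}")
print(f"95% CI: {lo * patients_per_shift:,.0f} to {hi * patients_per_shift:,.0f}")
```

Even with these made-up inputs, the per-shift estimate swings by hundreds of clicks; with an undescribed ED, undescribed providers, and mixed roles, the real study's headline number is at least that uncertain.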

You can talk about the deficiencies of peer review all you want, but if this paper had been assigned to me, I would have asked for a number of straightforward revisions that would have made it a better paper and better informed the debate.

There's no doubt that the poor usability of EHRs is affecting ED operations. EHR design has always been motivated by improving charge capture, not improving throughput. But we need data that quantifies what a lot of ED doctors suspect - that EHRs are chaining us to desktops, keeping us typing and clicking instead of spending time with patients. And while the "4000 clicks" study is certainly provocative, it's not the solid, well-conducted study we need to advance the dialog very far. First, the authors evaluated one of the worst-reviewed ED information systems, one with a small and declining market share. Second, they don't tell us much about the ED they studied or the providers they followed. The headlining "4000 clicks" is a big extrapolation with unwieldy error bars. This is a question that needs more scholarship and less sensationalism.