Clinical Safety and sharing data

Aug 28, 2013

An institution has a health record ecosystem that is distributed and poorly connected. Due to technical, procedural and policy issues, the data is divided into a series of silos, with little inter-connection between them. Though the systems presumably have connection points through the patient clinical process, different information is collected because of differences in perspective and purpose, and the data is a little out of sync because of various system design approaches and various lapses in system continuity (the fog of war). This is hardly unusual in healthcare.

Then, you create a new integration point between two of the silos, and present newly available information at the point of care. Mostly, this is good, because the clinical decision makers have more information on which to base their decision. It’s more likely that they’ll pick up some fact that might otherwise have been ignored.

Only there’s a problem: the new information is occasionally wrong. The clinical users aren’t fully aware of the various error vectors in the new data. Processing the new data takes extra time. And then the refrain starts - This is Clinically Unsafe!!! It Must Stop!!!

This is a regular problem in integration - it’s almost guaranteed to happen when you connect up a new silo. Here’s yet another example of exactly this: “PCEHR EXPOSES MORE EXAMPLES OF PBS ERRORS”, and, of course, most people will report this as a problem for the pcEHR.

Only, I’m not so sure. The problem is in the underlying data, and it really exists. The PCEHR just exposes the problems. It’s no longer hidden, and so now it will start being dealt with. Isn’t that the point?

Generically, exposing a new data silo will cause these issues to become visible. How do you decide whether this is less or more clinically safe?

  • To what degree is the information wrong?
  • How much would the clinical decision makers rely on it in principle?
  • How much would the clinical decision makers understand the systemic issues with it?
  • How is the underlying purpose of the data affected by the errors that were previously undetected?
  • What remediation is available when errors are detected - can anything be done to fix them, or to reduce their occurrence?

On the subject of less or more clinically safe, many people do not understand one aspect of the clinical safety systems approach. Here’s an illustration:

[Figure: a change eliminates some adverse events, leaves others unchanged, and introduces new ones]

You change a clinical/workflow practice. The change eliminates a set of adverse events, doesn’t eliminate others, and introduces some new ones (fewer than it solves, hopefully). Ideally, the numbers of the latter two approach zero, but in practice, if the number of the last (new events) is lower than the number of the first (eliminated events), then the change is worth making anyway - probably. There are two problems with that:

  • The new events are usually worse - more systemic, at the least - than the old ones. That makes for better stories, particularly in the news
  • Distribution of blame to the maker of the change is unequal (introducing new errors is more costly than not eliminating existing errors)
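The trade-off above can be sketched as a simple weighted comparison. This is a hypothetical model, not something from any safety standard: the function name, and the assumption that a severity multiplier captures “new events are usually worse”, are both mine.

```python
# Hypothetical sketch of the trade-off described above: a change is
# worth making when the (severity-weighted) harm of the adverse events
# it introduces is lower than the harm of the events it eliminates.

def net_safety_gain(eliminated, introduced, new_event_weight=2.0):
    """Positive result suggests the change improves safety overall.

    eliminated       -- count of adverse events the change removes
    introduced       -- count of new adverse events the change creates
    new_event_weight -- assumed multiplier: new events are usually
                        worse (more systemic) than the old ones
    """
    return eliminated - new_event_weight * introduced

# A change that eliminates 10 events but introduces 3 worse ones:
print(net_safety_gain(10, 3))  # → 4.0
```

Even on this toy model, the second bullet still bites: a positive net gain says nothing about who gets blamed for the 3 new events.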

The debate over data quality issues in the pcEHR - and in other contexts - will continue, but it most likely won’t be informed by any insight around how to think about clinical systems change and safety.

But for integration experts: the list I provided above is a useful place to start for issues to consider when doing a clinical safety check of a new integration (cause you do do that, don’t you!). However, many real data quality issues only become visible after the integration has been put in place - they don’t even arise during the testing phase. So you need, at least, to put systems in place that allow users to report them.