Clinical Safety: Trust between Humans and Machines
Sep 5, 2013

My point in telling this story is not to demonstrate how I beat the EEOB’s security – I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name – but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.
This is about security, not safety, but the two are closely related. Clinical software users hack the system all the time - usually the intent is not malicious from their point of view, but the consequences are often just as confusing, and they further erode the user’s trust in the system. In the worst case, users will maintain an entirely separate system for some parts of their workflow - and this is made more likely by the fact that in many cases a back-up non-IT solution must be maintained anyway for when the system is unavailable.
The same issue applies to machine-to-machine interfaces. Typically, when an interface is first implemented, both sides are carefully crafted and tested to ensure that they fully understand each other. But as changes are introduced to one side or the other as they are upgraded over the years, the synchronization tends to deteriorate, and system failures start to occur. These cases are very hard to troubleshoot, since they often stem from differing assumptions about how the workflow works, each of which is very nearly correct.
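To make that failure mode concrete, here is a minimal, purely hypothetical sketch (none of these names or status codes come from any real interface): at go-live both sides agree on what an admit message means, then the sending side is upgraded and starts emitting a value the receiver never anticipated - and the receiver handles it in a way that is very nearly correct.

```python
# Hypothetical sketch of interface drift. All names (send_admit, process_admit)
# and status codes ("A", "D", "T") are illustrative only.

# --- Sending system, version 2 (upgraded years after go-live) ---
def send_admit(patient_id: str, status: str) -> dict:
    # v2 added a "T" (transferred) status alongside the original "A"/"D".
    return {"patient": patient_id, "status": status}

# --- Receiving system, still built on the version 1 assumptions ---
def process_admit(message: dict) -> str:
    # v1 workflow assumption: status is either "A" (admitted) or "D" (discharged).
    # Anything else silently falls through to the discharge path - nearly correct,
    # and wrong in exactly the way that is hardest to notice.
    if message["status"] == "A":
        return "open encounter for " + message["patient"]
    return "close encounter for " + message["patient"]

if __name__ == "__main__":
    # A transfer is quietly treated as a discharge; neither side reports an error.
    print(process_admit(send_admit("12345", "T")))
```

Nothing here throws an exception or writes to an error log; the workflow just quietly does the wrong thing, which is why these mismatches surface as confusing downstream problems rather than obvious interface faults.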