HL7 can have a fresh look because v3 succeeded
Aug 16, 2011

Well, yesterday's post about HL7 v3 failing certainly struck a nerve, particularly in private comments. One person picked up on the fact that it wasn't the entire story. HL7 does have a Fresh Look Task Force, and it has particularly notable participation: the Health IT leads for 5 or 6 national programs, plus some outstanding, recognized individual contributors to the state of the art in health IT. It has met several times over the last few months (hopefully some progress reports will come out soon). This doesn't come from nowhere - HL7 matters, and people care, because v3 succeeded. And it has: HL7 v3 has succeeded.
Now a few readers are going to stop at this point and run off to say "I told you so" to any number of people, possibly including me as the author of the previous post ;-). But I'm actually being serious. In spite of everything I said in the previous post - all the reasons why HL7 v3 is not a vehicle to take HL7 forward - let's stop and think about what has been achieved.
v3 arose out of concerns about the lack of coherence, cohesion, and consistency in the v2 standard. Initially it was a cross-SDO project with other SDOs participating. But it pretty soon became evident that quality modeling, using a method with teeth, was going to be a mighty mountain to climb. The other SDOs looked at that and walked away from the process. They're still peddling the same old stuff as ever, so far as I can see.
HL7 - no. We're not like that. HL7 will see something through. The modeling was slowly honed down to a core reference model, and the RIM was born. It became abstract through the introduction of USAM, and then we built a bunch of home-grown tools that turned out to cost millions to replace.
And it gradually became possible to implement it - to actually exchange messages with it. About that time, several national programs were getting going; they needed more than could reasonably be done with v2, and the only real choice they had was v3. It wasn't easy, but they made it work.
And also, along the way, CDA emerged. Now, you can say, as I did in my previous post, that CDA might be successful in spite of the fact that it uses the v3 methodology, but there's no denying that CDA is very successful, and that there is a whole heap of clinical content being exchanged using v3 and benefiting from the core underlying ideas.
What made it successful?
- a well-understood and highly examined base information model
- an end-to-end demonstration of what a modeled, semantically interoperable standard for exchange would look like, and that such a thing was possible
- a broad range of clinical information being modeled to a very fine granularity
- all of this informed by actual clinical use in large scale programs
- an enormous requirements-gathering stack, and the ability to organize those requirements into a coherent semantic framework
These are major achievements - v3 succeeded at these things. And it's because of these successes that there are other projects out there using the same fundamental methodology (openEHR), and that there are people out there producing elegant analyses of how the underlying ontology isn't quite theoretically right.
If v3 hadn’t succeeded, there wouldn’t be a Fresh Look Task Force, because no one would care.
So how do you preserve the good bits? What are we going to do now?