Monday, August 3, 2009
Off topic: future of Windows
I have been testing Windows 7 for the last few weeks, and so far it seems to be as stable as Windows XP (only one BSOD) and undoes a lot of Vista's folly.
The biggest implication for most business users is that they can probably expect to make the transition with their next laptops (which always come with some OS pre-installed) rather than having to request that Vista be downgraded to Windows XP.
That said, it looks like Windows is nearing the end of its usefulness. Even Windows 7 lags behind Linux, which offers a range of graphical user interfaces (KDE 4.3 in particular is visually stunning, fast, and very easy to use), can run most Windows software (via WINE or in a virtual machine), and is much more stable. Three weeks with one BSOD is pretty good for Windows; most Linux desktop users only reboot for kernel updates, and unexpected, unrecoverable crashes happen less than once a year in my experience. Laptop support is still a bit of an issue, since hot switching of display settings and WiFi after suspend/resume aren't quite there, but for the most part things work well, easily, and predictably.
As far as the future of Windows: I predict they will take (another) page from Apple, pitch the whole OS, and start over (well, Apple started with a venerable open source Unix) with a microkernel design (maybe even doing to Minix* what Apple did to BSD), security by design, and legacy support via WINE. The big part of the OS will be the CLR (the .NET runtime standard), with a return to having POSIX as a Windows service (it was dropped from NT 4.0 along with OS/2 support when M$ released Win2K). This will be a change for Microsoft: it will give them a modern OS, but at the cost of making it easier for application developers (including game developers) to use the same code base to generate software for Linux, Mac OS X, and Windows. That runs contrary to their long-standing practice of making it as hard as possible for Windows developers to port their software to non-Windows platforms.
Beyond the fortune telling (which only time will bear out): KDE 4 is available for pretty much any OS which supports C++ and for which there are Qt 4 libraries. This includes Windows! You can replace the entire Windows desktop with KDE 4 and enjoy all the eye candy and fancy visual effects that Microsoft had to drop from Vista and Windows 7. I don't know if you can drop it in as easily on Mac OS X, but it runs fine on just about any other UNIX-like OS, with a few oddballs I can't vouch for (like QNX).
So, if you use XP and are happy, you are not missing any major functionality that I could find. If you are using Vista, you should probably upgrade unless you got lucky and everything works. Once KDE 4.3 (currently in release candidate status) is stable, Linux will give you a richer UI and a more stable OS. If you are buying a new laptop, it is worth waiting until the vendor offers Windows 7 and going with that, rather than downgrading to XP or getting stuck with Vista.
*Minix is a "UNIX-like" operating system that has some similarities to Linux but uses a very different kernel design. Linux lets you add modules to the kernel, and you can end up with a very functional, but large, runtime. For desktop and modern laptop PCs this isn't a big deal; for running Linux on your toaster, cell phone, wrist watch, or other minimal platform, it usually takes some work to get a conventional distribution to fit. Minix treats most of the things people usually think of as core kernel functions as services, which gives you some very interesting options for slimming things down to the bare minimum and, done right, theoretically makes it easier to design an ultra-stable platform. There are other microkernel OSs out there (QNX being one; it is stable enough that it is used for nuclear power plant controls, the controls on submarines, and the guidance and launch systems of ICBMs).
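If the microkernel idea sounds abstract, here is a toy sketch in Python (purely illustrative; this is not how Minix or QNX is actually built) of the core notion: the kernel does little more than route messages, and things like the filesystem are ordinary services that can be swapped or restarted without touching the kernel.

```python
# Toy sketch of the microkernel idea (purely illustrative; not how
# Minix or QNX is actually written). The "kernel" only routes
# messages; the filesystem runs as an ordinary, replaceable service.

class MicroKernel:
    def __init__(self):
        self.services = {}  # service name -> handler function

    def register(self, name, handler):
        self.services[name] = handler

    def send(self, service, message):
        """IPC: message routing is the kernel's only real job."""
        return self.services[service](message)

kernel = MicroKernel()

# A filesystem "server" living outside the kernel, as a plain service
# that could be restarted on a crash without taking the system down.
fake_disk = {"/etc/motd": "hello from user space"}
kernel.register("fs", lambda msg: fake_disk.get(msg["path"], ""))

print(kernel.send("fs", {"path": "/etc/motd"}))  # hello from user space
```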
Next topic:
Sunday, July 26, 2009
One small (overdue) example.
You can see some early prototypes of detailed clinical models in the Clinical Data Definition work I did with Tolven a few years ago. A reasonable example is the chest exam, which isn't a complete, fleshed-out, all-bells-and-whistles example, but it gives a basic idea of where we were two years ago.
One of the components, shown here, is the set of attributes of a heart auscultation finding. It has a main finding, or entry point (just called Murmur), which has a fairly small value set for Observation.value, drawn from a handful of SNOMED-CT concepts which describe the bulk of the murmurs we routinely encounter in practice.
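For the curious, here is a minimal sketch of what such a model element could look like in code. To be clear, this is not Tolven's actual implementation: the class, the intensity qualifier, and the SNOMED-CT identifiers below are placeholders for illustration only.

```python
# Minimal sketch of a detailed clinical model entry for a murmur
# finding. Not Tolven's actual API; names and codes are placeholders.

from dataclasses import dataclass

# Constrained value set for Observation.value: a handful of SNOMED-CT
# concepts covering the murmurs routinely encountered in practice.
# (The identifiers below are placeholders, not real SNOMED-CT codes.)
MURMUR_VALUE_SET = {
    "SCT-0001": "Systolic murmur (finding)",
    "SCT-0002": "Diastolic murmur (finding)",
    "SCT-0003": "Continuous murmur (finding)",
}

@dataclass
class MurmurFinding:
    """The 'Murmur' entry point with its Observation.value attribute."""
    value_code: str        # must be drawn from MURMUR_VALUE_SET
    intensity_grade: int   # hypothetical qualifier (e.g. a 1-6 grade)

    def __post_init__(self):
        if self.value_code not in MURMUR_VALUE_SET:
            raise ValueError(f"{self.value_code} not in murmur value set")

finding = MurmurFinding(value_code="SCT-0001", intensity_grade=3)
print(MURMUR_VALUE_SET[finding.value_code])  # Systolic murmur (finding)
```

The point of constraining Observation.value to a small, curated value set is that every instance of the model stays comparable and computable, rather than free text.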
Tuesday, June 23, 2009
Always behind our times: Isn't it time we started planning on moving to the 21st century?
We still have a dramatic gap in the US between what we could do (given some time and space to collaborate, financial support, and endorsement by HITSP) and what we are doing. This is a case where we are so far behind that we need to recalculate our trajectory and start warning people that we will be living in the past for a while longer.
One of the "big" changes, currently scheduled for October, 2013 (originally was slated for 2011) is CMS moving away from the use of ICD9 (International Classification of Disease, 9th edition) for claims submission to ICD10.
For those who are not tracking the fast-paced world of coded classification systems: ICD9 was put into use in 1975 and is the system used to provide information to payers (CMS being the agency behind Medicare) to justify a claim for healthcare services. It provides groups of codes (21,669 as of my last count of the US ICD9 Clinical Modification, ICD9CM) to classify diseases into discrete, mutually exclusive pigeonholes. For reporting cause of death to public health agencies it was replaced by ICD10 some time ago, but it is otherwise the most common code system found in outpatient clinics for conveying information about a diagnosis.
It has some well-known shortcomings which make its use in electronic health records, particularly problem lists (or health concern tracking, as we are now calling it), unwise. For example, there is no way to say that someone has a headache in ICD9. There are codes for various types of headache, but they all exclude each other in their definitions. In particular, the non-specific "visit code" for a headache specifically excludes migraine headaches and tension headaches. Which is why using ICD9 to convey health information for patient care (as opposed to billing, administrative, or public health reporting use) gives me a headache.
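To make the pigeonhole problem concrete, here is a toy sketch; the code numbers are from memory and should not be taken as verified ICD9-CM rubrics.

```python
# Toy illustration of ICD9's mutually exclusive pigeonholes (code
# numbers are illustrative, not verified ICD9-CM rubrics). Every
# rubric excludes its siblings, so there is no way to assert the
# general concept "headache" alongside a specific subtype.

icd9 = {
    "784.0":  {"label": "Headache",         "excludes": {"346", "307.81"}},
    "346":    {"label": "Migraine",         "excludes": {"784.0", "307.81"}},
    "307.81": {"label": "Tension headache", "excludes": {"784.0", "346"}},
}

def compatible(a, b):
    """Can rubrics a and b classify the same diagnosis? Under these
    exclusion rules, the answer is always no."""
    return b not in icd9[a]["excludes"]

print(compatible("784.0", "346"))  # False: "Headache" excludes "Migraine"
```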
That aside, it is what everyone seems to be using: we have a ton of data from our billing systems which bears some resemblance to what we actually diagnose people as having and, voilà, we have ourselves coded data! The problems are that it has been shown to have only a casual correlation with what people are really diagnosed with (e.g., by their physician), that the codes are chosen to help optimize reimbursement, and that most systems have a fixed number of codes that can be captured at any given encounter, so the choice of which codes get put into the slots is more often than not motivated by optimizing billing return rather than reflecting what is important to know about a patient.
Back to my original musing about this super huge big deal of switching to ICD10 in four years. ICD10 is marginally better than ICD9. It allows some diagnoses to be in more than one category (well, not really, but it tries, and by "more than one" I mean two, but only if the second category is an infectious disease or cancer), so that a diagnosis of bacterial pneumonia can now be considered both a bacterial disease -and- (drum roll please) a disease of the lung. It also now uses a letter in the code, rather than all numbers. It does have more diagnosis codes, but is otherwise pretty much the same old thing.
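For the curious, here is a hedged sketch of that limited dual categorization (what I understand to be ICD10's dagger/asterisk convention, pairing an etiology code with a manifestation code); the code strings are placeholders, not real ICD10 codes.

```python
# Sketch of ICD10's limited dual categorization (the dagger/asterisk
# convention): an etiology code can be paired with a manifestation
# code. All code strings here are placeholders, not real ICD10 codes.

from typing import NamedTuple, Optional

class Icd10Diagnosis(NamedTuple):
    etiology: str                 # "dagger" code: the underlying cause
    manifestation: Optional[str]  # "asterisk" code: organ-system disease

# Bacterial pneumonia finally lands in two categories at once:
bacterial_pneumonia = Icd10Diagnosis(
    etiology="X00.0",       # placeholder: infectious disease chapter
    manifestation="X99.9",  # placeholder: diseases of the lung
)

# Everything else still gets exactly one pigeonhole:
plain_headache = Icd10Diagnosis(etiology="X11.1", manifestation=None)
```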
ICD10 has been around since 1994, four years before Jim Cimino's seminal Desiderata for Controlled Medical Vocabularies in the Twenty-First Century (Methods of Information in Medicine 1998;37(4-5):394-403), which called for moving away from ICD-like vocabulary systems in favor of more expressive, more flexible, and more logical terminologies, such as SNOMED-CT, which was released in 2002 by merging SNOMED-RT with the UK NHS Clinical Terms (aka the Read Codes). SNOMED traces its roots back to the Systematized Nomenclature of Pathology, circa 1965, back when we were using ICD8! You can find much more information on SNOMED at the National Library of Medicine website (http://www.nlm.nih.gov/research/umls/Snomed/snomed_faq.html) and at the IHTSDO website (http://www.ihtsdo.org/snomed-ct/).
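To show what "more expressive" buys you, here is a minimal sketch of SNOMED-CT's polyhierarchy; the concept names stand in for real SNOMED-CT identifiers, and the hierarchy edges are simplified for illustration. Unlike an ICD pigeonhole, a concept can have several |is a| parents, and subsumption queries follow naturally.

```python
# Sketch of SNOMED-CT's polyhierarchy (concept names stand in for real
# concept identifiers; edges are simplified). A concept can have
# several |is a| parents, so "bacterial pneumonia" is both an
# infectious disease and a lung disease, with no special-case rules.

parents = {
    "Bacterial pneumonia": {"Infectious disease", "Pneumonia"},
    "Pneumonia": {"Lung disease"},
    "Lung disease": {"Disease"},
    "Infectious disease": {"Disease"},
}

def is_a(concept, ancestor):
    """Transitive subsumption over the |is a| hierarchy."""
    if concept == ancestor:
        return True
    return any(is_a(p, ancestor) for p in parents.get(concept, ()))

print(is_a("Bacterial pneumonia", "Lung disease"))        # True
print(is_a("Bacterial pneumonia", "Infectious disease"))  # True
```

This is exactly the dual classification the bacterial pneumonia example above begged for, handled by the hierarchy itself rather than by a two-code workaround.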
So the net result is that in four years we are moving from a 1970s-era code system to a 1990s-era code system, one we have known for at least the last ten years to be inadequate for use in health record systems.
At some point we need to start making provisions to actually use technology that is more contemporary, such as ICD11, which is slated for release the year before the US makes the leap to ICD10. A more realistic option is to educate clinicians (they are not as stupid as some IT types make them out to be) about the options and the consequences of doing things right, particularly when we want to start using data to support "pay for performance", where ICD9/10's shortcomings will be very evident, or for personalized medicine, where blunt instruments like ICD9/10 simply will not work when we start trying to customize medication (and other therapy) based upon knowledge of an individual's molecular makeup.
More later on the rationale, problems, and proposed use of SNOMED-CT for health care information in the United States, circa early 21st century.
Saturday, April 18, 2009
Welcome
Detailed Clinical Models is a growing effort to bring together the collective wisdom and requirements of the health care informatics community to create clinical content which reflects the clinical (biomedical) reality in a fashion which is semantically interoperable between domains, users, and applications.
Detailed Clinical Models (DCM) are explicitly intended to support multiple uses of the model and of the information.
- Clinical documentation. This is at the level and expressiveness needed for continuity of care, communication between (human) health care providers, consultation reports, billing, medical record keeping regulations (including those required by the US Drug Enforcement Administration, DEA, for prescribing controlled substances), medico-legal requirements, and, in general, those requirements which are currently met by narrative (written, dictated) paper health record systems.
- Decision support systems.
- Public health reporting.
- Quality of care metrics. This also includes meeting the need to automate capture of an accurate data set for pay-for-performance incentives and to provide sufficient detail to allow meaningful comparisons based on patient complexity and acuity.
- Clinical research. This includes requirements for regulated clinical trials and disease registries.
- Personalized medicine.
- Translational bioinformatics.
- Other secondary uses of information.