Monday, January 31, 2011

PQRI XML Submissions Required for Certification

One of the challenging aspects of Complete EHR Certification is the PQRI XML needed for submission of quality measures to CMS.

There are 15 required hospital quality measures but the 2 Emergency Department measures are stratified for reporting and must be presented in 3 different ways, so a total of 19 PQRI XML files need to be generated for Complete EHR Certification of hospital systems.

Each of the files uses identical XML. The only parameters that change are:

pqri-measure-number which is set to the NQF measure being submitted such as NQF 0435 (see the graphic above for the list of NQF hospital measure names)

eligible-instances which is the number of patients who meet eligibility requirements to be measured for the time period being submitted

meets-performance-instances which is the numerator of the measure i.e. those patients who had the appropriate treatment or outcome

performance-exclusion-instances which is the number of patients removed from eligible-instances for specific clinical reasons.  The denominator of the measures is always (eligible-instances minus performance-exclusion-instances)

performance-not-met-instances which is the number of eligible patients who did not have the appropriate treatment or outcome.  It can be calculated as (eligible-instances minus meets-performance-instances minus performance-exclusion-instances)

reporting-rate which is a multiplier i.e. for a percentage the reporting rate is 100

performance-rate which is the calculated performance level and is equal to meets-performance-instances/(eligible-instances minus performance-exclusion-instances)*reporting-rate
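The arithmetic relationships among these fields can be sketched in a few lines of code. A hypothetical Python helper (not part of any CMS tooling; field names mirror the PQRI XML elements):

```python
def pqri_fields(eligible, meets, exclusions, reporting_rate=100):
    """Derive the computed PQRI fields from the three counted values.

    eligible   -> eligible-instances
    meets      -> meets-performance-instances
    exclusions -> performance-exclusion-instances
    """
    denominator = eligible - exclusions          # measure denominator
    not_met = eligible - meets - exclusions      # performance-not-met-instances
    performance_rate = meets / denominator * reporting_rate
    return {
        "eligible-instances": eligible,
        "meets-performance-instances": meets,
        "performance-exclusion-instances": exclusions,
        "performance-not-met-instances": not_met,
        "reporting-rate": reporting_rate,
        "performance-rate": round(performance_rate, 2),
    }

# 110 eligible, 80 treated, 10 excluded ->
# performance-not-met-instances = 20, performance-rate = 80.0
result = pqri_fields(110, 80, 10)
```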

Let's do a real example so this becomes clear.   If we want to create PQRI XML for NQF Measure 0435, which is "Ischemic stroke patients prescribed antithrombotic therapy at hospital discharge", we need to go to the HITSP TN906 document and gather the definition from pages 48-52

In this case,

eligible-instances is defined as "Patients admitted to and discharged from the hospital for inpatient acute care with a diagnosis of ischemic stroke (ICD9 433.00-438.99)"

meets-performance-instances is defined as "eligible-instances patients prescribed anti-thrombotic therapy(page 355-358 of HITSP specification) at hospital discharge"

performance-exclusion-instances is defined as
"Patients with age < 18
Patients with length of stay >120 days
Patients with comfort measures only documented
Patients enrolled in clinical trial
Patients admitted for elective carotid intervention
Patients discharged/transferred to another hospital for inpatient care
Patients who left against medical advice or discontinued care
Patients who expired
Patients discharged/transferred to a federal healthcare facility
Patients discharged/transferred to hospice
Patients with a documented reason for not prescribing anti-thrombotic therapy at discharge"

Suppose that 110 patients are eligible, 80 received anti-thrombotic therapy, and 10 were excluded.

performance-not-met-instances would be (eligible-instances minus meets-performance-instances minus performance-exclusion-instances) or  110-80-10=20

performance-rate would be meets-performance-instances/(eligible-instances minus performance-exclusion-instances)*reporting-rate or 80/(110-10)*100 = 80%

The PQRI XML generated would be

<submission xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="Registry_Payment.xsd" type="PQRI-REGISTRY" option="TEST" version="1.0">
  <file-audit-data>
    <create-date>31-01-2011</create-date>
    <create-time>04:22</create-time>
    <create-by>Your Organization Name Goes Here</create-by>
    <version>1.0</version>
    <file-number>1</file-number>
    <number-of-files>1</number-of-files>
  </file-audit-data>
  <registry>
    <registry-name>Your Application Name Goes Here</registry-name>
    <registry-id>123456</registry-id>
    <submission-method>C</submission-method>
  </registry>
  <measure-group ID="X">
    <provider>
      <npi>1111111112</npi>
      <tin>123456</tin>
      <waiver-signed>Y</waiver-signed>
      <encounter-from-date>2010-01-01T00:00:00</encounter-from-date>
      <encounter-to-date>2010-12-31T00:00:00</encounter-to-date>
      <pqri-measure>
        <pqri-measure-number>NQF 0435</pqri-measure-number>
        <eligible-instances>110</eligible-instances>
        <meets-performance-instances>80</meets-performance-instances>
        <performance-exclusion-instances>10</performance-exclusion-instances>
        <performance-not-met-instances>20</performance-not-met-instances>
        <reporting-rate>100.00</reporting-rate>
        <performance-rate>80</performance-rate>
      </pqri-measure>
    </provider>
  </measure-group>
</submission>
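Since the 19 files differ only in the measure parameters, the measure block is a natural candidate for generation with a standard XML library rather than hand editing. A minimal Python sketch using `xml.etree` (element names are copied from the sample above; the builder itself is illustrative):

```python
import xml.etree.ElementTree as ET

def build_pqri_measure(measure_number, eligible, meets, exclusions):
    """Build the <pqri-measure> element for one quality measure."""
    not_met = eligible - meets - exclusions
    rate = round(meets / (eligible - exclusions) * 100)
    m = ET.Element("pqri-measure")
    for tag, value in [
        ("pqri-measure-number", measure_number),
        ("eligible-instances", str(eligible)),
        ("meets-performance-instances", str(meets)),
        ("performance-exclusion-instances", str(exclusions)),
        ("performance-not-met-instances", str(not_met)),
        ("reporting-rate", "100.00"),
        ("performance-rate", str(rate)),
    ]:
        ET.SubElement(m, tag).text = value
    return m

xml_text = ET.tostring(build_pqri_measure("NQF 0435", 110, 80, 10),
                       encoding="unicode")
```

The same builder, called once per measure, produces all 19 `<pqri-measure>` blocks for the submission files.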

For the ED measures, which capture time measurement rather than patient counts, you need to create 3 files for each measure, stratified by

1. All patients that were admitted via the ED, excluding the ICD-9 range for PSYCH (ICD9 290-319) and Observation Patients.

2. All observation patients that were admitted via the ED, excluding the ICD-9 range for PSYCH (ICD9 290-319)

3. All psychiatric patients that were admitted via the ED, including only the ICD-9 range for PSYCH (ICD9 290-319)

eligible-instances is recorded in the same way as other measures.

meets-performance-instances is used to record the median time data.

performance-exclusion-instances, performance-not-met-instances, reporting-rate, and performance-rate are set to zero.
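One way to implement the stratification is a small classifier over discharge diagnosis and patient status. A hedged Python sketch (treating the ICD-9 code as a numeric prefix and checking the psychiatric range first are my assumptions about how to read the three rules, not part of the specification):

```python
def ed_stratum(icd9_code, is_observation):
    """Assign an ED-admitted patient to one of the three reporting strata.

    icd9_code      -- primary diagnosis code, e.g. "295.30" (numeric comparison
                      is a simplification of real ICD-9-CM handling)
    is_observation -- True for observation-status patients
    """
    psych = 290 <= float(icd9_code) < 320   # PSYCH range: ICD-9 290-319
    if psych:
        return 3                            # stratum 3: psychiatric patients
    if is_observation:
        return 2                            # stratum 2: observation, non-psych
    return 1                                # stratum 1: all other ED admissions
```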

Hopefully this explanation makes it easier for hospitals and vendors to create the necessary PQRI XML for certification.

Friday, January 28, 2011

Cool Technology of the Week

The iPhone and iPad have provided a remarkable platform for mHealth, empowering consumers to manage their wellness, communicate with clinicians, and support treatment workflow via smartphones.

One of the coolest examples is Walgreens' new application, which supports automated medication management.

Each medication dispensed or sold at Walgreens has a bar code. After registering the application to your nearest/preferred Walgreens pharmacy, you simply scan the label using the iPhone/iPad camera and it automatically uploads to the Walgreens you’ve chosen. There are alerting functions for all parts of the prescription life cycle. You link your transactions to a credit card on file, drive up to the Walgreens pickup window, and they dispense your prescription. The pharmacist asks if you have any questions, just as if you went into the store.

Walgreens summarizes their application features as:

Pharmacy
•New Express Refills by Scan*: Just scan the barcode on your prescription bottle for instant refills
•Order refills from your account history using your device
•Secure access to your prescription history from the palm of your hand
•Enter a prescription number for refills

Photo
•Upload multiple images simultaneously and pick them up at a local Walgreens in about an hour or have them shipped directly to you

Shopping Features
•New Weekly Ad interface featuring cover flow
•Add items to your shopping cart directly from our local Weekly Ads
•Check product availability and pricing at any store
•Flu Shot and Take Care Clinics locator

Using a smartphone as a bar code reader to support medication management workflow and e-commerce.  That's cool!

Thursday, January 27, 2011

The Birds in my Backyard

This week, we've had bitterly cold weather in the Northeast.   As Wellesley's unofficial weatherman,  I've been tracking temperatures as low as -5F (without wind chill) and -20F with wind chill.

I've provided extra safflower, sunflower, and thistle to the winter birds that frequent our yard - Carolina Wrens, Juncos, Nuthatches, Flickers, Doves, Hairy Woodpeckers, House Finches, Cardinals, and Blue Jays.

We've provided a source of drinkable water by heating a bird bath.

An interesting side effect of becoming the food, water and shelter magnet for birds in our neighborhood is that we've also attracted 3 birds of prey - 2 Red-Shouldered hawks and 1 Cooper's hawk (above).

While writing about Meaningful Use this weekend, I looked up and saw a cardinal eating safflower.   I looked up again and saw a Red-Shouldered hawk eating the cardinal.

Although I'm a vegan I can appreciate that this is the food chain in action and respect it.

Winter is a remarkable time for mindfulness - to steep yourself in the events of the moment, watch the birds against the snow, and appreciate living in the present.

Wednesday, January 26, 2011

General Principles of a Universal Exchange Language

I've written several posts about the President's Council of Advisors on Science and Technology (PCAST) report on Health Information Technology.

Between January and April, the PCAST Workgroup will think about the high level concepts in the report, which I believe can be distilled into 3 major themes

1. Increase the priority of interoperability to facilitate coordination of care and to enhance research capabilities.
2. Establish a Universal Exchange Language based upon an XML-style approach for exchanging individual data elements.
3. Create a supporting exchange infrastructure that includes a data-element/record locator service along with appropriate privacy and security controls.

I've recently thought a great deal about #2 - the Universal Exchange Language.    I've posted Sean Nolan's examples and provided a briefing about existing approaches to data exchange used on the web.

Before jumping into XML formats and comparisons of existing healthcare standards, I believe we should first define the general characteristics of the ideal Universal Exchange Language.

We should be guided by the HIT Standards Committee's Implementation Workgroup principles for evaluating all future standards work:

Keep it simple.
Don’t let “perfect” be the enemy of “good enough.”
Keep the implementation cost as low as possible.
Design for the little guy.
Do not try to create a one-size-fits-all standard.
Separate content and transmission standards.
Create publicly available vocabularies & code sets.
Leverage the web for transport (“health internet”).
Position quality measures so they motivate standards adoption.
Support implementers.

If I were to invent a Universal Exchange Language using nothing but these guiding principles and my intuition as a doctor and CIO,  I would include the following:

1.  High signal to noise ratio  (some existing standards have significant overhead for every clinically relevant data element).

2.  Easily human readable (by a developer, not necessarily an end user) and computable.

3.  Easy to create, easy to parse.

4.  It should be trivial and clearly understandable how to implement basic exchange such as sending and receiving a medication list, a problem list and an allergy list.

5.  There could be multiple sources of ontology/concept information as long as the receiver of the information understands how the sender packaged and defined the data elements.

6.  To the extent that ontologies are used, they should be viewed as pure notation by developers.  Developers should not need to even understand what an ontology is to be able to understand the content.  As with #5, developers should only need to be able to write code that will consume the metadata that links the content with the context defined in the ontology.

7.  It should be trivial to add additional metadata or additional data without impeding the ability to send and receive the basics.

8.  Data elements should be separable from the document, but we have to be realistic about this, since Problem Start Date really needs to be associated with a Problem Name to be useful.   We could define "data atomic" as the smallest unit of data that makes sense within the context defined by the associated metadata.

9.  An implementation guide should completely define the required and optional data and metadata elements for a given purpose.  Implementers should not have to open a dozen different Standards Development Organization products and implementation guides to understand how to create a message.

10.  Innovators should be free to think about optimal ways to solve this problem without being overly constrained by existing implementations.  Since health information exchange is in its infancy, strict compatibility with previous approaches is not our highest priority.

The PCAST report is likely to be a catalyst for rich discussion.   I look forward to the work ahead to offer ONC options for including PCAST principles in its existing programs and strategies.

Tuesday, January 25, 2011

Reflections on the Certification Experience

On Friday, January 21,  2011, Beth Israel Deaconess Medical Center completed the certification of its enterprise EHR technologies via the CCHIT EHR Alternative Certification for Hospitals (EACH) program.   Here's the press release.  As I've written about previously, BIDMC (like many academic health centers) has a combination of built and bought technologies that collectively provide interoperability, clinical functionality, and security.   We demonstrated all our Intersystems Cache-based hospital systems and our Microsoft SQL Server-based business intelligence systems.

The process was rigorous, requiring us to follow over 500 pages of scripts and implementation guides in a single 8-hour demonstration.

The staff at CCHIT were remarkable, educating us about the NIST script requirements, emphasizing the need to prepare, and clarifying aspects of the NIST scripts that were ambiguous or seemed clinically unusual.

NIST did a great job creating test scripts rapidly enough to enable vendors and hospitals to certify systems in time for meaningful use attestation.  However, there are a few oddities in the scripts that are only discoverable during pilots in real hospital and eligible provider clinical settings.   I strongly recommend that for all future certification, NIST pilot their scripts before they are issued for general use.

Major lessons learned include:

*If hospitals apply for complete EHR certification, adding new functionality to fill functional gaps may take months.   Once gaps are filled, I recommend that hospitals devote at least 2 weeks and 5 FTEs to reviewing the scripts, analyzing the best way to show the necessary functionality, and practicing the demonstration.

*The most challenging aspects of certification are interoperability and security.    Interoperability requires  data entry of the necessary information to support the transactions specified in the Standards Final rule: immunization submission, syndromic surveillance, reportable lab, patient summary data import, and patient summary data export.   During certification, each of these data streams will be thoroughly analyzed for compliance with NIST validation tools and/or manually inspected.

*Several of the NIST scripts require data entry that seems clinically unusual.    For example, you must place a CPOE order for Darvocet for pain control, even though Darvocet has been removed from the market by the FDA.    Many of the medications included in the scripts are unusual brand name medications that may not be on a hospital's formulary.   One data set in the Reportable Lab script requires that you send a public health entity information about an infection the patient does NOT have  (Stool Culture with a negative result for Shigella).

*The NIST scripts require that you demonstrate data entry of information that is not normally entered by clinicians in a hospital.   The typical workflow for labs is that they are ordered from an external provider (Quest, Lab Corp) or processed by internal lab systems then inserted into the EHR via an HL7 transaction.   The NIST scripts should be revised to clarify that labs should only be shown, not entered.    Diagnosis and Procedure codes are typically created by Health Information Management after discharge and are sent from a Utilization Review system to the EHR via an HL7 transaction.  The NIST scripts should be revised to clarify that data elements not entered by the clinician should only be shown, not entered.

*The NIST scripts require demonstration of functions that may not be part of standard clinical workflow.   During certification, I wanted to demonstrate live transmission of transactions to the Massachusetts Department of Public Health and Boston Public Health Commission.   Neither of these real public health transmissions was acceptable because the NIST security script requires the demonstration of encryption and hashing in real time.   This is equivalent to not trusting the HTTPS in your browser and requiring browsers to display the actual encryption taking place for every web page retrieved.  The NIST scripts should be revised to enable attestation of the use of FIPS-compliant encryption for Stage 1.   Hopefully for Stage 2, there will be enough specificity in transport standards so that the test can be accomplished by submitting data to a NIST-specified website that illustrates adherence to the transport standard (NHIN Direct, NHIN Connect etc.)

Here are my detailed observations about the NIST scripts in the order of certification demonstration:

Interoperability
170.302 (k) Submission to immunization registries - This is a great script!  The clinical examples are accurate and are typical of the data elements captured in the real world.   The NIST validator tests real HL7 2.5.1 transactions and the implementation guide is very clear.

170.302 (l) Public health surveillance - This is a good script.   There are no specific examples to enter.   The only issue is that there is no NIST validator for the HL7 2.5.1 generated.   This is because of a mistake in the original Standards and Certification Final rule that specified an implementation guide for Disease Reporting, not Syndromic Surveillance.   A revision to the rule removed the original implementation guide but did not specify a new one; hence there is no implementation guide to certify against.   This is not a NIST problem, but a regulatory problem that should be fixed soon.

170.306 (g) Reportable lab results - This is an odd script.  The examples are all "send out" labs from outside laboratories, but the script requires demonstration of the data being entered.   How can you enter data provided by an outside lab?   This script should be revised to require only display of data received from an outside lab that was incorporated into an EHR.    The first sample data set for this script is reasonable - a lead level.   The other samples are more complex than is necessary for demonstration of public health reporting (three instances of the reason for reporting, a corrected result, and reporting of a disease the patient does NOT have - a negative result for Shigella).   This is a good example of a script that needed to be pilot-tested before requiring its use.

170.306 (f) Exchange clinical information and patient summary record - this is a good script, but it duplicates 170.306 (d)(1).   The first part of the script requires incorporation of a CCR and a CCD into the EHR in human readable form.   To do this, the appropriate Extensible Stylesheet Language (XSL) reference needs to be inserted into the files, and the CCD.xsl and CCR.xsl need to be downloaded and placed in the same directory.   How is a hospital IT department supposed to figure this out? The appropriate style sheets should be included in the script with instructions on how to use them.

For CCD, insert
<?xml-stylesheet type="text/xsl" href="CCD.xsl"?>
For CCR, insert
<?xml-stylesheet type="text/xsl" href="CCR.xsl"?>
as the second line of the files.
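Scripting the insertion avoids hand-editing each exported file. A hypothetical Python helper (the file-handling details are illustrative, not part of the NIST script):

```python
def add_stylesheet(path, xsl_name):
    """Insert an xml-stylesheet processing instruction as the second line
    of an exported CCD or CCR file, just after the XML declaration."""
    with open(path) as f:
        lines = f.readlines()
    pi = '<?xml-stylesheet type="text/xsl" href="%s"?>\n' % xsl_name
    lines.insert(1, pi)   # second line; line 0 is <?xml ... ?>
    with open(path, "w") as f:
        f.writelines(lines)

# add_stylesheet("summary.xml", "CCD.xsl")   # CCD export
# add_stylesheet("summary.xml", "CCR.xsl")   # CCR export
```

The matching `CCD.xsl` or `CCR.xsl` file must sit in the same directory as the exported document for the browser to render it.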

The second part of the script is to generate a CCD based on complex data sets which include multiple elements that clinicians do not normally enter (hospital generated diagnosis and procedure codes).  The script should be revised to require only display of data, rather than entry, followed by CCD or CCR generation.  The CCD validator created by NIST uses the HITSP C83 specification which was published before the Standards Final Rule was developed.   HITSP C83 required problem lists to be coded in SNOMED-CT, but the final rule allows ICD-9-CM and SNOMED-CT.  You'll need to read Keith Boone's blog for step by step instructions to create a CCD with ICD-9-CM codes that passes validation.

170.306 (d)(1) Electronic copy of health information - this script should be eliminated because it is a duplication of 170.306(f) with slightly different data sets for generation of the CCD and CCR.

170.306 (i) Calculate and Submit Clinical Quality Measures - the script is fine, but the HITSP document which underlies it contains a few mistakes.  I feel responsible for this because I was the chair of HITSP at the time.   The exclusion and inclusion criteria for VTE-6 are incorrect.   They should be

Inclusion
Patients who received no VTE prophylaxis (HITSP Specification pages 367-370) prior to the VTE diagnostic test order date

Exclusion
Patients with age<18
Patients with length of stay>120 days
Patients enrolled in clinical trial
Patients with comfort measures only documented
Patients with VTE Present on arrival
Patients with reasons for not administering mechanical and pharmacologic prophylaxis
Patients without VTE confirmed by diagnostic testing

You'll also find the term anti-thrombolytic used throughout the document.   There is no such thing as an anti-thrombolytic.   It should be anti-thrombotic.

The HITSP quality measures specification was created before the Standards Final Rule was developed, so although both ICD-9-CM and SNOMED-CT are allowed by the Final Rule, the HITSP specification  defines the quality measures using only SNOMED-CT terminology.   This means that every vendor and hospital has to create their own mappings to ICD-9-CM for all the quality computations.  Here's what we used

Ischemic stroke (ICD-9-CM 433.00-438.99)
Hemorrhagic stroke (ICD-9-CM 430-432.99)
Atrial Fibrillation/Flutter (ICD-9-CM 427.31-427.32)
VTE (ICD-9-CM 453.40-453.42, 453.50-453.52, 453.6, 453.71-453.79, 453.81-453.89, 415.19, 416.2)
Psychiatric diagnoses  (ICD-9-CM 290-319)
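In code, the mapping above can live in a small range table. An illustrative Python sketch (treating ICD-9-CM codes as decimal numbers is a simplification that ignores code-formatting edge cases; any real implementation should compare codes as structured strings):

```python
# ICD-9-CM ranges we mapped for the quality computations (illustrative)
ICD9_RANGES = {
    "ischemic_stroke":    [(433.00, 438.99)],
    "hemorrhagic_stroke": [(430.0, 432.99)],
    "afib_flutter":       [(427.31, 427.32)],
    "vte":                [(453.40, 453.42), (453.50, 453.52), (453.6, 453.6),
                           (453.71, 453.79), (453.81, 453.89),
                           (415.19, 415.19), (416.2, 416.2)],
    "psych":              [(290.0, 319.99)],
}

def in_population(icd9_code, population):
    """True if a numeric ICD-9-CM code falls in any range for the population."""
    code = float(icd9_code)
    return any(lo <= code <= hi for lo, hi in ICD9_RANGES[population])
```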

I'll also make a controversial statement that several of the quality measures are too complex.   Many of the exclusion criteria are likely to have such a small impact on the measure that they should be eliminated.    (Was the patient  discharged/transferred to a federal healthcare facility?  Better exclude them.  Huh?).  I hope that future quality measures eliminate esoteric exclusionary criteria.

An unintended side effect of the quality measures is that they require changes in software and workflow not specified by Meaningful Use.   For example, in order to report on discharge medications for stroke and VTE patients,  you must implement electronic script writing at discharge to record the discharge medications and associated RxNorm codes for inclusion/exclusion.

Finally, the PQRI XML specification is ambiguous.   The ED measures only require 2 data elements, yet some validators require 6 data elements, with the last 4 set to zero.

Clinical
170.306 (b) Record demographics - Very well written, no issues.

170.306 (h) Advance directives - Very well written, no issues.

170.302 (f)(1) Vital signs  - Very well written, no issues.

170.302 (f)(2) Body mass index - Very well written, no issues.

170.302 (f)(3) Plot and display growth charts  - Entering the data required for the demonstration is a lengthy process.  Best to revise the script to require only display of data, then graphing.

170.302 (g) Smoking status - The only challenge is that you must adhere to the CDC's smoking codes precisely (1=Current every day smoker, 2=Current some day smoker, etc.) despite the fact that the regulation lists only text values and doesn’t require the CDC numeric recodes.
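Since the codes must match exactly, the numeric recodes are worth keeping in a lookup table. The first two entries are quoted in the text; the remaining values follow the commonly cited CDC list and should be verified against the CDC's own value set:

```python
# CDC smoking status recodes (entries 3-9 are assumptions beyond the two
# the certification script quotes -- verify against the CDC list)
CDC_SMOKING = {
    "1": "Current every day smoker",
    "2": "Current some day smoker",
    "3": "Former smoker",
    "4": "Never smoker",
    "5": "Smoker, current status unknown",
    "9": "Unknown if ever smoked",
}
```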

170.302 (e) Maintain active medication allergy list - Very well written, no issues.

170.302 (j) Medication reconciliation - Very well written, no issues.

170.302 (d) Maintain active medication list  - Very well written, no issues.

170.302 (a) Drug/Drug and Drug/Allergy Interactions - The only challenge is that you must demonstrate how decision support can be disabled for drug/drug and drug/allergy interactions.  I can understand disabling selected drug/drug interactions to reduce alert fatigue, but I cannot think of a clinical reason to disable drug/allergy interactions.

170.302 (b) Drug formulary checks - Very well written, no issues.

170.302 (c) Maintain up-to-date problem list - Very well written, no issues.

170.306 (c) Clinical decision support - Very well written, no issues.

170.306 (a) Computerized provider order entry - The challenge is that the required medications are very odd.   Cefzil (cefprozil) suspension is likely not on most hospital formularies.  Darvocet has been removed from the marketplace by the FDA.   This script needs to be revised to reflect mainstream medications used in live healthcare settings.

170.302 (h) Incorporate lab results - Hospitals are likely to have their own internal laboratories.   The lab system and the EHR may share a common database.   If not, HL7 2.5.1 should be used to transmit data between external lab systems and the EHR.   This script does not support integrated databases or HL7 2.5.1 standards.   Instead, it requires that any structured format be sent from a lab system to the EHR. I'm not sure what policy or technology benefit such a demonstration creates.

170.302 (m) Patient specific education resources - Very well written, no issues.  Hospitals are free to use externally accessible resources such as UpToDate, Healthwise, or LabTestsOnline.org.

170.302 (n) Automate measure calculation  - Very well written, no issues.

170.302 (i) Generate patient lists - I believe that the spirit of the Policy and Standards Committee was to be able to demonstrate a single analytic query on EHR data.   The script goes much further than that, requiring filtering and sorting on problems, medications, and lab values.   I believe the script should be revised to allow a simpler demonstration of business intelligence using EHR data.

170.306 (d)(2) Electronic copy of health information - Very well written, no issues.  There is a great degree of flexibility to create and save discharge communications intended for providers.   There is no test data and no standards conformance testing.

170.306 (e) Electronic copy of discharge information  - Very well written, no issues.  There is a great degree of flexibility to create and save discharge communications intended for patients.   There is no test data and no standards conformance testing.

Security
170.302 (o) Access control - Very well written, no issues.

170.302 (t) Authentication - Very well written, no issues.

170.302 (q) Automatic log-off - Very well written, no issues.

170.302 (p) Emergency access - I'm truly confused by the intent of this test script.   I do not know of any Policy or Standards Committee intent to demonstrate the ability for users (not administrators) to override security controls and obtain access to clinical data.    We demonstrated it successfully, but I am unaware of this being a mainstream or desirable function in an EHR.

170.302 (w) Accounting of disclosures (optional) - We elected not to demonstrate this.  I'm not sure why optional requirements are included in certification.

170.302 (r) Audit log - The audit log script requires the ability to filter and sort the log by numerous criteria.   I am unaware of any Policy or Standards Committee intent to demonstrate advanced analytics on audit logs.

170.302 (s) Integrity - The final three security demonstrations are all very odd.   HIEs use data integrity protections and encryption to ensure data travels from point A to point B without modification.    The script requires demonstration of a test harness, not a live system, because encryption and hashing are invisible, just as HTTPS in your browser is invisible.   All three criteria should be revised to use attestation, not demonstration.
170.302 (u) General encryption - as above
170.302 (v) Encryption when exchanging electronic health information - as above

To restate my major points - CCHIT and the certification process are great.   NIST did a heroic job generating the scripts in a short time.   However, the scripts need to be piloted before they are placed in general use to avoid some of the challenges listed above.   I believe the scripts can be revised to substantially reduce the burden on vendors and hospitals without changing ONC/CMS rule compliance and HIT Policy/Standards Committee intent.

I welcome feedback on your own experiences with certification.   Meaningful Use and its associated processes will be the memories we tell our grandchildren about.

Monday, January 24, 2011

Obtaining Meaningful Use Stimulus Payments

Many clinicians and hospitals have asked me about the exact steps to obtain stimulus payments.

On January 3, 2011, CMS began registering clinicians for participation in meaningful use programs.    Every region of the United States has Regional Extension Centers which can help answer any questions. Here's an overview of the steps you need to take.

1.  Choose between Medicare and Medicaid programs.  If you qualify, Medicaid offers greater incentives and does not require you to achieve meaningful use before stimulus payments begin.
a.  To qualify for Medicaid, 30% of your patient encounters must be Medicaid patients. (20% for pediatricians)
b.  To qualify for Medicare, keep in mind that meaningful use payments are made at 75% of Medicare allowable charges for covered professional services in the calendar year of payment, per the payment maximums below:

Year 1  $18,000
Year 2  $12,000
Year 3  $8000
Year 4  $4000
Year 5  $2000

Thus, a total of $44,000 is available at maximum, but could be less if your allowable Medicare charges are less than

Year 1 $24,000
Year 2 $16,000
Year 3 $10,667
Year 4 $5333
Year 5 $2667
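The two tables are linked by a single rule: each year's payment is 75% of allowable Medicare charges, capped at the scheduled maximum. A small illustrative calculation:

```python
# Payment caps by program year, from the first table above
YEARLY_CAPS = {1: 18000, 2: 12000, 3: 8000, 4: 4000, 5: 2000}

def medicare_incentive(year, allowable_charges):
    """Medicare EHR incentive: 75% of allowable charges, up to the cap."""
    return min(YEARLY_CAPS[year], 0.75 * allowable_charges)

# A provider with $24,000 of allowable charges reaches the full year-1 cap
print(medicare_incentive(1, 24000))   # prints 18000 (the year-1 cap)
```

The second table is simply each cap divided by 0.75: the minimum allowable charges needed to collect the full incentive in that year.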

Also, if 90% of your Medicare charges take place in inpatient or emergency department locations, you cannot qualify for the meaningful use program.  This means that emergency physicians, anesthesiologists, radiologists, and pathologists generally cannot participate.  Some professionals may also find that they do not have enough Medicaid or Medicare charges to benefit from either program.

2.  Once you've chosen Medicare or Medicaid, you must register to participate
a.  You need a National Provider Identifier and password.   If you do not have one, go to the NPPES website.
b.  Once you have a password, go to the CMS EHR Incentives Website and register as an eligible professional
c.  Two valuable resources include the Registration User's Guide and the CMS overview of the EHR incentive programs.

3.  The Meaningful Use demonstration period is 90 days beginning January 1, 2011 so the first date that you can attest to meaningful use of Certified EHR technology is April 1, 2011.   Note that the EHR technology you use must be certified by the time you attest.  You can begin your meaningful use reporting period using uncertified EHR technology as long as it is certified by the end of your reporting period.

Medicare payments will begin in May.  Medicaid payments are administered by states and will begin when state governments are ready to administer the program.  Some states are ready now and others will not be ready until August.  Remember that Medicaid payments start before meaningful use is achieved so there is no need to wait for meaningful use measurement and attestation for the Medicaid program.

Hospital requirements are similar:
a.  First, you must locate the following, which your Revenue Cycle staff are likely to have:
CMS Identity and Access Management (I&A) User ID and Password.
CMS Certification Number (CCN).
National Provider Identifier (NPI).
Hospital Tax Identification Number.
b.  Go to the CMS EHR Incentives Website and register as an eligible  hospital
c.  The Hospital Registration User's Guide is a valuable resource

Here's a summary of the key dates for the program:

January 1, 2011 – Reporting year begins for eligible professionals.
January 3, 2011 – Registration for the Medicare EHR Incentive Program begins.
January 3, 2011 – For Medicaid providers, states may launch their programs if they so choose.
April 2011 – Attestation for the Medicare EHR Incentive Program begins.
May 2011 – EHR Incentive Payments expected to begin.
July 3, 2011 – Last day for eligible hospitals to begin their 90-day reporting period to demonstrate meaningful use for the Medicare EHR Incentive Program.
September 30, 2011 – Last day of the federal fiscal year. Reporting year ends for eligible hospitals and CAHs.
October 1, 2011 – Last day for eligible professionals to begin their 90-day reporting period for calendar year 2011 for the Medicare EHR Incentive Program.
November 30, 2011 – Last day for eligible hospitals and critical access hospitals to register and attest to receive an Incentive Payment for Federal fiscal year (FY) 2011.
December 31, 2011 – Reporting year ends for eligible professionals.
February 29, 2012 – Last day for eligible professionals to register and attest to receive an Incentive Payment for calendar year (CY) 2011.

I hope this clarifies your next steps.   May your stimulus funds flow quickly in 2011!

Friday, January 21, 2011

Cool Technology of the Week

Harvard Medical School is piloting digital whiteboards from Smart Technologies

It's not a giant iPad, it's more like a giant Wacom tablet paired with a computer and video projector.   The design works surprisingly well.

Here's the idea:

Mount a wall sized tablet with a reflective surface in the classroom.

Mount a special digital projector above the whiteboard

Connect the digital projector to a computer with HDMI audio/video

Provide digital pens - red/green/blue/black - in special holders, so that when a pen is removed from its holder, the whiteboard software knows its color; the red holder holds the red pen, and all strokes on the tablet are rendered in red.

Provide software that includes pen/erase/capture selections

The end result is that web pages, applications, and PowerPoint slides are projected, and teachers can annotate them with pens or their fingers.   They can capture and save their work for later display.

Early response from our evaluators has been very positive.   By combining a wall sized tablet, screen, and projector with full audio/video interface into one package, we empower faculty to use all these educational technologies.

One great aspect of this is that the screen is separate from the projector, so it can be replaced if it is worn or damaged.   Creating a wall sized iPad would require replacing the display and the digitizer simultaneously.

Digital whiteboards for teaching.   That's cool!

Thursday, January 20, 2011

Winter Boots and Traction

Winter weather in New England is highly variable. We've got freezing rain, snow, sleet, and ice. We have temperatures that vary from 50F to -20F.

Shoveling your driveway, walking in the woods, and mountaineering can be a very slippery experience.

What do I do?

At the extreme, for Winter mountaineering (climbs up Mt. Washington in -20F with 80mph winds), I use Scarpa Omega double plastic boots and Petzl Sarken Sidelock crampons. These boots are the lightest extreme winter mountaineering footwear available and after ten years (they were called the Scarpa Alpha back then) mine are still in great shape. The Petzl Sarken is an extremely durable 12 point crampon for use with boots that support "step-in" style crampons.

For walks on less intimidating terrain, I use Garmont Momentum Snow GTX boots,  which are waterproof, insulated, vegan footwear.   Although I "bareboot" most hiking trails, for significant verglas (thin ice on rock) and long relatively flat ice hikes, I use Kahtoola KTS Aluminium crampons.   Kahtoolas are lightweight, durable, and very effective on ice.

I've had mixed results with Stabilicers, which add a heavy "second sole" to winter boots that already have traction soles.   I've not had good results with Yaktrax which tend to break easily and provide only minor traction while hiking on ice.

Scarpa, Petzl, Garmont and Kahtoola products are great.    For the past decade, I've used them to scale every peak in New England during the winter and to walk hundreds of miles around the Boston suburbs on ice.

Wednesday, January 19, 2011

Standards Validation for Certification

On Friday, Beth Israel Deaconess is doing its certification inspection via the CCHIT EACH program.  The inspection is divided into three major areas - interoperability, clinical functionality, and security.

To prepare for interoperability, my engineers and I have used several resources which I'll share with you.

The NIST site to validate the reportable lab and immunization formats

The NIST site to validate the CCD/C32 Clinical Summary

The Bioportal site to validate the SNOMED codes

HITSP documents that are required to get the C32 correct include C80 and C83

The IHE Site which the NIST validator often refers to.

On Friday, we'll send our HL7 2.5.1 formats for reportable lab, syndromic surveillance, and immunizations to CCHIT.   We'll send our CCD/C32 for clinical summaries.  We'll also send 19 PQRI XML files for the 15 quality measures. All our files will incorporate the required LOINC, SNOMED-CT and RxNorm vocabularies.

I'll let you know how it goes.

Tuesday, January 18, 2011

The Proposed Stage 2 and 3 Meaningful Use Recommendations

On January 12, the Health Information Technology Policy Committee published its proposed Stage 2 and 3 Meaningful Use recommendations for public comment.

Robin Raiford from Allscripts created a Quick Guide to the recommendations, making it easy to compare Stage 1, 2 and 3 in a single PDF.

Here's my analysis of the proposed Stage 2 and 3 criteria.

1.  CPOE - Stage 1 requires that more than 30% of unique patients with at least one medication in their medication list have at least one medication order entered using CPOE.  Stage 2 expands this to 60% of patients for at least one medication, lab or radiology order.  Stage 3 expands this further to 80%.   CPOE orders do not need to be transmitted electronically to pharmacies/labs/radiology departments.   This is a very reasonable rate of CPOE adoption.   The hardest part of implementing CPOE is getting started, which happens in Stage 1.   Adding different types of transactions (without requiring electronic transmission to back end service providers) is more about workflow and behavioral change than technology change.

2.  Drug-drug/drug-allergy interaction checks - Stage 1 requires that interaction technology be enabled.   Stage 2 adds that it will be used for high yield alerts, with metrics for use to be defined.  The idea is that many drug databases contain too many false positive interaction rules, so adoption is slowed by alert fatigue.   If only high yield alerts are required (here's what we've done at BIDMC ), clinicians are more likely to trust drug interaction decision support. Stage 3 adds drug/age checking (such as geriatric and pediatric decision support), drug dose checking, chemotherapy dosing, drug/lab checking, and drug/condition checking.  These are all reasonable goals, but automating chemotherapy protocols is quite challenging.   BIDMC built an Oncology Management System and added a full time research nurse to ensure all chemotherapy protocols are updated and accurate.    It may be asking too much to require chemotherapy dosing decision support nationwide by 2015.

3.  e-Prescribing - Stage 1 requires e-prescribing of 40% of non-controlled substances.  Stage 2 expands this to 50%.  Stage 3 expands it further to 80%.  Electronic faxing is permitted if pharmacies cannot accept e-prescriptions.   These are very reasonable goals and easily achievable if e-prescribing systems are in place, which is required for Stage 1.   E-prescribing of controlled substances, which requires more effort  including two-factor authentication, is not specifically mentioned.

4.  Demographics - Stage 1 requires capture race/ethnicity, primary language, and other demographics for 50% of patients.  Stage 2 expands this to 80%.  Stage 3 expands this further to 90%. Most institutions are already near 100% since they capture this information as part of existing registration and billing processes.

5.  Report quality measures electronically - Stage 2 and 3 will be specified by the Quality Measures Workgroup and CMS.  No further detail is offered at this time.   The hardest part of quality measure reporting is computing the numerators and denominators as is required for Stage 1.  Generating the PQRI XML to send the data to CMS is quite easy.
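The arithmetic behind the PQRI XML is indeed simple. As a minimal sketch (element names follow the parameter list described earlier in this post; the authoritative schema is published by CMS and may differ in structure), a generator for one measure might look like:

```python
import xml.etree.ElementTree as ET

def pqri_measure_xml(measure_number, eligible, exclusions, met, reporting_rate=100):
    """Emit one PQRI measure element from its raw counts.

    Element names mirror the parameters described in the post; the
    real CMS schema should be consulted for production submissions.
    """
    # The denominator is always (eligible-instances minus performance-exclusion-instances)
    denominator = eligible - exclusions
    not_met = eligible - met - exclusions
    performance_rate = met / denominator * reporting_rate

    measure = ET.Element("pqri-measure")
    for tag, value in [
        ("pqri-measure-number", measure_number),
        ("eligible-instances", eligible),
        ("meets-performance-instances", met),
        ("performance-exclusion-instances", exclusions),
        ("performance-not-met-instances", not_met),
        ("reporting-rate", reporting_rate),
        ("performance-rate", round(performance_rate, 2)),
    ]:
        ET.SubElement(measure, tag).text = str(value)
    return ET.tostring(measure, encoding="unicode")

# Hypothetical counts for NQF 0435: 200 eligible stroke patients,
# 20 excluded for clinical reasons, 171 prescribed antithrombotics
print(pqri_measure_xml("NQF 0435", eligible=200, exclusions=20, met=171))
```

The hard part, as noted above, is computing the counts themselves; once you have them, emitting 19 such files is mechanical.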

6.  Maintain problem lists -  Stage 1 requires documentation of at least one problem for 80% of patients.  Stage 2 is unchanged. Stage 3 requires that problem lists be up to date.  This is reasonable.   Clinicians will be motivated to update them because the problem list will be included in clinical summaries sent to the patient after every visit.   I hope that EHR vendors improve the usability of problem list management functions by creating natural language translation into the required SNOMED-CT or ICD9/ICD10 vocabularies.

7.  Maintain active medication lists - Stage 1 requires documentation of at least one medication for 80% of patients.  Stage 2 is unchanged.  Stage 3 requires that medication lists be up to date as part of medication reconciliation, which per requirement 31 below will be a core requirement with 80% compliance in Stage 2 and 90% in stage 3.  In my view, the greatest strength of EHRs should be medication management.

8.  Maintain active medication allergy lists -  Stage 1 requires documentation of at least one allergy for 80% of patients.   Stage 2 is unchanged.  Stage 3 requires that allergy lists be up to date. Clinicians will be motivated to update them because the allergy list will be included in clinical summaries sent to the patient after every visit.  

9.  Vital signs - Stage 1 requires vital sign recording for 50% of patients.   Stage 2 expands this to 80%.  Stage 3 is the same as Stage 2.  This is reasonable.

10.  Smoking status -  Stage 1 requires smoking status documentation for 50% of patients.  Stage 2 expands this to 80%.   Stage 3 expands it to 90%.  This is reasonable.

11.  Use Clinical Decision Support to improve performance on high-priority health conditions -    Implemented properly, decision support can save time while enhancing safety.   Maintaining rules can be challenging and I hope companies evolve to provide cloud-based approaches to decision support.

12.  Formulary checks - In Stage 1, formulary checks are in the menu set with the requirement that clinicians and hospitals have access to at least one internal or external drug formulary for the entire EHR reporting period.  Stage 2 moves this to core.   Stage 3 requires the functionality to be applied to 80% of medication orders.   Formulary checking is generally a part of e-prescribing, so this is reasonable.   The only unknown is how formularies vary in different regions.   Since Massachusetts has only 3 major payers, all regional, formularies have been relatively easy to manage.

13.  Advance directives - In Stage 1, advance directives are in the menu set with the requirement that 50% of patients 65 and older have advance directive documentation.   Stage 2 moves this to core.  Stage 3 requires 90%.    I believe this is a laudable but challenging goal to achieve.   My experience is that most hospitals have advance directives recorded for less than 25% of their patients.

14.  Incorporate lab results as structured data - The Stage 1 menu set requirement is that lab results are incorporated into EHRs as structured data for more than 40% of all clinical lab tests ordered.   Stage 2 moves this to core.  Stage 3 expands this to 90% of tests ordered and requires they be reconciled with structured orders.   Lab workflow improvement is one of the best ways to save clinicians time and ensure followup of abnormal results.   My only reservation with this requirement is that the standards for content and vocabulary (lab compendiums) are still a work in progress, so reconciliation with electronic orders may be premature.

15.  Generate patient lists - The Stage 1 menu set requirement is to generate at least one report listing patients with a specific condition. Stage 2 moves this to core.  Stage 3 requires that such lists be used to manage patients for high priority health conditions.    EHRs must include business intelligence capabilities as part of Stage 1 certification, so these recommendations should be easily achievable.

16.  Send patient reminders - The Stage 1 menu set requirement is to send reminders to more than 20% of all unique patients 65 years/older or 5 years old/younger.   Stage 2 makes this core.   Stage 3 requires 20% of all active patients who prefer to receive reminders electronically receive them.   I'm supportive of this requirement if I can fulfill it by offering all our patients secure web-based reminders via our PHR.   Offering multiple electronic notification options (phone, fax, secure email, PHR, Facebook, Twitter, SMS texting) would be hard to manage.

17. Electronic outpatient notes - Stage 2 adds a new requirement for eligible professionals that 30% of visits have an electronic note.  Stage 3 requires 90%.  This note can be scanned, free text, structured etc.   Offering the option of scanning, dictating, structured and unstructured makes this requirement very reasonable.

18. Electronic inpatient notes - Stage 2 adds a new requirement that 30% of hospital days have at least one electronic note by a physician, NP or PA.   As with outpatient notes, this can be scanned, free text, structured etc.   Very few hospitals have electronic inpatient documentation.   Offering the option of scanning paper notes makes this requirement very reasonable.

19.  Electronic Medication Administration Records - Stage 2 adds a new requirement that 30% of medication orders are tracked via an EMAR.   Stage 3 makes this 80%.   The definition of EMAR will be key.   I think of an electronic medication administration record as positive patient identification of every patient, every drug and every staff member with mobile devices to record all medication events in real time.   Requiring this for the entire country by 2015 is aggressive.

20.  Provide an electronic copy of health information - Stage 1 requires this for 50% of patients who request it.  Stage 2 is unchanged.   Stage 3 requires 90%.   The challenge is implementing the workflow to support this requirement, which has to be done for Stage 1.   Thus Stage 2 and 3 are reasonable.

21.  Provide a copy of discharge instructions -  Stage 1 requires this for 50% of patients who request it. Stage 2 expands this to 80%.  Stage 3 expands it to 90%.  Since printed discharge instructions meet the criteria, this is reasonable.

22.  Patient specific educational resources - Stage 1 requires this for 10% of patients. Stage 2 is unchanged.   Stage 3 expands this to 20% in common specific languages.   The key question is what counts as a common language.  BIDMC supports 37 different languages.   Offering educational resources in all of them would be challenging.

23.  Web-based download of inpatient records -  Stage 2 adds the ability to view and download inpatient summaries for 80% of patients.  Stage 3 is the same.   Since BIDMC already has a PHR offered to all its patients, this requirement is actually an easier workflow than providing inpatient record summaries upon demand via a manual process.  This requirement will be challenging for organizations which have not yet widely implemented personal health records.

24.  Provide clinical summaries for each office visit - Stage 1 requires this for 50% of all patients (not just those who ask).  Stage 2 expands this to "view and download" within 24 hours.   Stage 3 is the same.   The challenge is that records may not be completed and signed within 24 hours, especially if dictation or scanning processes are used to generate the electronic record.

25.  Timely electronic access -  The Stage 1 menu set requirement is that more than 10% of all unique patients seen by the clinician are provided timely electronic access to their health information subject to the clinician's discretion to withhold certain information.  Stage 2 and 3 make this core and include the requirement that patients should be able to filter or organize information by date, encounter, etc.   More detail is needed about the requirement to organize the data to determine just how difficult this will be.   This requirement will be challenging for practices which have not widely implemented personal health records.

26.  Measures for clinical summaries and timely electronic access - Stage 2 requires that  20% of patients use a web-based portal and Stage 3 expands this to 30%.    It seems a bit odd to measure clinician performance based on patient behavior.  BIDMC has offered comprehensive secure email, timely access to all inpatient/outpatient data, and even full text notes.   After 10 years, no more than 20% of our patients use these functions.   I think a better approach is to require that such functions be offered to all patients; adoption is something beyond clinician control.

27.  Online secure messaging - Stage 2 and 3 add a new requirement that online secure messaging be in use.  Our experience with online secure messaging is that it reduces clinician time returning phone calls and enhances both clinician and patient satisfaction.   This is a reasonable requirement.

28.  Patient preference for communication medium - Stage 2 adds a new requirement that patient preference for communication be recorded for 20% of patients.  Stage 3 expands this to 80%.   Per my comment in Timely Electronic Access,  I would prefer offering all patients web-based communications rather than trying to manage multiple communication approaches.

29.  Patient Engagement - Stage 3 includes multiple new patient engagement requirements - electronic self management tools, EHR interfaces to PHRs, patient reporting of care experiences online, and patient generated data incorporation into EHRs.   These need to be defined in much greater detail before their implications can be assessed.

30.  Perform test of HIE - Stage 1 required one test.   Stage 2 expands this to at least three external providers in primary referral networks (but outside the delivery system that uses the same EHR) or a bidirectional connection to at least one health information exchange.   Stage 3 expands this to 30% of external providers or a connection to a health information exchange.   Stage 2 and 3 require the HIE to connect to an entity-level provider directory.   Although some regions with well developed HIE capabilities will find this easy to achieve, many will await the functionality promised by the Direct Project in order to exchange data.

31.  Perform Medication reconciliation - The Stage 1  menu set requirement is medication reconciliation in 50% of care transitions.  Stage 2  moves this to core and expands it to 80%.   Stage 3 expands this to 90%.   Since the Joint Commission required this over 3 years ago, it is very reasonable.  Hard to operationalize, but the right thing to do.

32.  Provide summary of care record - The Stage 1 menu set requirement is a summary of care record for more than 50% of transitions of care and referrals.   Stage 2 makes this core.   Stage 3 expands this to 80%.   The challenge will be defining how this content is transmitted from provider to provider.

33.  List Care members - Stage 2 adds a requirement to provide a list of care team members (including PCP) for 10% of patients.  Stage 3 moves this to 50% via electronic exchange.  Once more details about the electronic exchange are provided, I can better assess this requirement.

34.  Longitudinal care plan - Stage 2 adds a requirement to record a longitudinal care plan for 20% of patients with high-priority health conditions.   Stage 3 expands this to 50%.   Although I'm familiar with pilots in which clinicians and patients jointly develop care plans, I have not seen it widely implemented.  This may be a bit aggressive.

35.   Submit immunization data - The Stage 1 menu set requirement was a single transaction.   Stage 2 moves this to core and requires ongoing submission.  Stage 3 requires query and review of immunization registry data.   The challenge is that many state and local public health departments do not offer the ability to receive and query immunization data.

36.  Submit reportable lab data - The Stage 1 menu set requirement was a single transaction.  Stage 2 moves this to core.   Stage 3 includes the requirement for hospitals to include complete patient contact information in 30% of reports.  As with immunizations, state and local public health departments may find this challenging to support.

37.  Submit syndromic surveillance data - The Stage 1 menu set requirement was a single transaction.  Stage 2 moves this to core.   Stage 3 includes patient self report and something called the Public Health button, which is not defined.  Public Health Departments may find this challenging to support.

38.  Ensure privacy - Nothing new was specified but additional privacy and security objectives are under consideration by the HIT Policy Committee’s Privacy and Security Tiger Team.

Thus my areas of concern are chemotherapy automation, recording patient communication preferences, judging clinician performance based on patient adoption of PHRs, EMAR implementation, maturity of HIE capabilities,  widespread rollout of longitudinal care planning, and public health readiness.

The comment period ends Feb. 25 and the Health IT Policy Committee will consider all of the comments in making its final recommendations this Summer to the Office of the National Coordinator for Health Information Technology at HHS. Here's the work plan as I understand it

Jan 12, 2011: release draft Meaningful Use criteria and request for comment
Feb-March, 2011: analyze comment submissions and revise Meaningful Use draft criteria
March, 2011: present revised draft Meaningful Use criteria to the HIT Policy Committee
2Q11: CMS report on initial Stage 1 Meaningful Use submissions
3Q11: Final HIT Policy Committee recommendations on Stage 2 Meaningful Use
4Q11: CMS Meaningful Use NPRM

It's great to have a roadmap so we know where we're going between now and 2015.   At present BIDMC is completing its certification inspection tests and is working hard to achieve meaningful use by the end of March so we can attest in April.   Given the challenges of achieving certification and meaningful use for Stage 1,  we welcome a two year window to prepare for Stage 2.

Monday, January 17, 2011

The Bookmarked Final Rules

Robin Raiford of Allscripts has completed the bookmarking of all the healthcare IT final rules:

CMS final rule – EHR incentive program  (14 Megabytes)
ONC final rule – Standards Stage One (500K)
ONC final rule – Permanent Certification Program (500K)


Also, here's the detail of the numerators and denominators of the meaningful use metrics with thresholds.


I'm sure these will help many people inside and outside of government.

Thanks Robin!

Thursday, January 13, 2011

A Universal Exchange Language Example

In preparation for the PCAST Workgroup discussion, the Workgroup chairs asked Wes Rishel and me to find examples of the Universal Exchange Language proposed by the report. We asked Sean Nolan from Microsoft for his comments. His guest post is below:

"As requested, I’ve put together the below thoughts and samples from HealthVault and Amalga and offered my perspective on how they reinforce the core ideas in the PCAST recommendation. I hope they are useful and look forward to any follow up that would be useful.

To me, the most compelling aspect of the PCAST recommendation is the idea of data-atomicity … the idea that where appropriate we start encouraging exchange of the most granular data elements possible, rather than aggregates, snapshots and pre-normalized information. And further, that we defer concerns about harmonization and structure as late as possible in the information lifecycle, rather than requiring all data sources to conform to a fixed set of standards. This is something we embody both in our HealthVault and Amalga product lines.

Note that much of this is most immediately applicable in the “secondary use”/research/CER context --- which is where I know PCAST started out … concepts will certainly apply more broadly, but that’s at least how I’ve been thinking about it.

The challenge with the “traditional” approach to secondary use is that by definition we lose information as we pass the data through filters and “clean things up.” We make pre-judgments about which metadata is important and which is not. We lose granularity by aggregating along dimensions we think are the important ones --- and lose the ability to re-aggregate against others in the future. And so on.

Of course, there is a conservation of energy --- in order to compute on data, it has to be transformed at some point. But in the traditional approach, every source system bears the burden of the work and encodes a fixed set of transformations. This is incredibly brittle and lossy. Instead, with a data-atomic approach the work is delayed until the last moment, when the needs are best understood, the technology can be deployed efficiently, and technology for normalization may have advanced since the original data elements were captured.

The Extreme Case: EAV

This philosophy leads us to develop exchange mechanisms that are supportive of maximal completeness rather than maximal normalization. In its truly simplest form, in some cases in Amalga we reduce storage to an open-ended “Entity-Attribute-Value” structure, where each data element is stored as a bucket of arbitrary name/value pairs. This structure supports the “extrusion” of multiple transformed views of the same information. For example, in the extreme:

Entity ID   Attribute              Value
12345       Type                   Blood Pressure
12345       Timestamp              10/22/2007
12345       Given Name             John
12345       Family Name            Halamka
12345       Systolic               116
12345       Diastolic              72
12345       Source                 Personal Physicians HealthCare
12345       Device Name            Omron 7 Series
12345       Device Model           BP760
12345       Device Serial Number   0123456789
12345       Pulse                  66

The idea is to capture as much metadata as possible from the source system and make sure it survives alongside all of the other data with the item. And what is most important is that while there is a natural “grouping” for the item as the otherwise meaningless Entity ID --- any attribute can serve as the means to construct just-in-time entities for different purposes. For example, I may want a view of all of John’s readings, so I use the demographic or other patient ID attributes to create an extrusion pivoted on the patient. Or I may want to understand the penetration of different device models around the country, so I can use the “device model” attribute to create another extrusion for that. In the Amalga implementation, these “extrusions” are often created automatically as physical transformations under the covers in response to dynamic query patterns.

The “envelope” format in this case is almost trivial --- we typically use XML as a convenient format but virtually anything will work.
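The EAV storage and its just-in-time “extrusions” can be sketched in a few lines. This is an illustrative toy (the entity IDs and second patient are hypothetical), not Amalga's implementation, which materializes these views as physical transformations:

```python
from collections import defaultdict

# Each row is an (entity-id, attribute, value) triple, as in the table above.
rows = [
    ("12345", "Type", "Blood Pressure"),
    ("12345", "Given Name", "John"),
    ("12345", "Family Name", "Halamka"),
    ("12345", "Systolic", "116"),
    ("12345", "Diastolic", "72"),
    ("12345", "Device Model", "BP760"),
    ("67890", "Type", "Blood Pressure"),     # a second, hypothetical reading
    ("67890", "Given Name", "Jane"),
    ("67890", "Device Model", "BP760"),
]

def extrude(rows, pivot_attr):
    """Build a just-in-time view of the triples, pivoted on any attribute."""
    # First regroup the flat triples into per-entity buckets ...
    entities = defaultdict(dict)
    for entity_id, attr, value in rows:
        entities[entity_id][attr] = value
    # ... then index those buckets by the value of the chosen attribute.
    view = defaultdict(list)
    for attrs in entities.values():
        if pivot_attr in attrs:
            view[attrs[pivot_attr]].append(attrs)
    return dict(view)

# A patient-centric view and a device-penetration view, from the same triples:
by_name = extrude(rows, "Given Name")
by_device = extrude(rows, "Device Model")
print(len(by_device["BP760"]))  # both readings came from the same device model
```

The point is that no pivot is privileged at capture time: the same bucket of attributes can be re-aggregated by patient, by device, or by any dimension that matters later.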

Softening the Approach: codable values and common data

The EAV approach works really well for source systems (they just send what they can and forget about it) and it can be super-effective in many cases, primarily intra-institution. However, everything is a balance, and the burden EAV puts on the receiving system can be inordinately high. In order to encourage an easier onramp to interoperability, we have adopted a number of techniques that are evident in the HealthVault data model. For example:

We capture a common set of core metadata for every item, including:
* a codified item “type” such as “blood pressure reading”
* various meaningful timestamps (e.g., “created”, “updated”, “effective”)
* audit information about the entity that submitted or updated the item

In many cases, there is common data that “just is” as part of that data type. For example, a blood pressure is not a blood pressure without “systolic” and “diastolic”. So we create very simple schemas for the 80% case of data elements that just have to be there to make sense.

We provide “slots” for other common structured data that may or may not be available --- for example, “pulse” is often present with a bp reading but not always --- so we have a place to put it if available, but it is not required.

Wherever data can be coded, we create constructs that facilitate the capture of those codes without allowing data to be lost. We talk about this as the “codable value” --- an XML construct that allows an item to be identified both with “display text” and zero or more codes that self-describe their codeset. Note this model follows very closely the constructs originally created as part of the ASTM CCR.

We always provide the capability for other metadata to be associated and “travel with” the item --- to ensure the completeness principle.

Examples of these elements can be seen in the following HealthVault item XML fragments. The first is a complete blood pressure reading imported from John’s CCD and shows common metadata and core type information. The second and third are from my record and demonstrate optional items and codable values respectively.

<thing>
    <thing-id version-stamp="f4dd8faa-2ba6-410e-b367-b0b5f96f0aaa">15657ae9-7955-4a1d-9f23-bf56e76b640d</thing-id>
    <type-id name="Blood Pressure Measurement">ca3c57f4-f4c1-4e15-be67-0a3caf5414ed</type-id>
    <thing-state>Active</thing-state>
    <eff-date>2007-10-22T00:00:00Z</eff-date>
    <created>
        <timestamp>2011-01-13T05:31:08.79Z</timestamp>
        <app-id name="Microsoft HealthVault">9ca84d74-1473-471d-940f-2699cb7198df</app-id>
        <person-id name="Sean Nolan">11141dc8-eb3c-4923-99aa-0094bd4d0648</person-id>
        <access-avenue>Online</access-avenue>
        <audit-action>Created</audit-action>
    </created>
    <updated>
        <timestamp>2011-01-13T05:31:08.79Z</timestamp>
        <app-id name="Microsoft HealthVault">9ca84d74-1473-471d-940f-2699cb7198df</app-id>
        <person-id name="Sean Nolan">11141dc8-eb3c-4923-99aa-0094bd4d0648</person-id>
        <access-avenue>Online</access-avenue>
        <audit-action>Created</audit-action>
    </updated>
    <data-xml>
        <blood-pressure>
            <when>
                <date>
                    <y>2007</y>
                    <m>10</m>
                    <d>22</d>
                </date>
            </when>
            <systolic>116</systolic>
            <diastolic>72</diastolic>
        </blood-pressure>
        <common>
            Personal Physicians HealthCare
            <related-thing>
                <thing-id>b61900cf-15dc-4ad8-8a4c-88ccb20d9952</thing-id>
                <version-stamp>1c0cbf63-875d-4acb-832f-ac296e7334a7</version-stamp>
                <relationship-type>Extracted from CCD</relationship-type>
            </related-thing>
        </common>
    </data-xml>
</thing>
 
        <blood-pressure>
            <when>
                <date>
                    <y>2010</y>
                    <m>8</m>
                    <d>31</d>
                </date>
                <time>
                    <h>21</h>
                    <m>48</m>
                    <s>9</s>
                </time>
            </when>
            <systolic>114</systolic>
            <diastolic>90</diastolic>
            <pulse>97</pulse>
            <irregular-heartbeat>false</irregular-heartbeat>
        </blood-pressure>
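Note that the two `blood-pressure` samples differ: the first carries only a date, while the second adds a time, pulse, and irregular-heartbeat flag. A consumer has to treat those fields as optional. A minimal sketch of that, using only the element names visible in the samples:

```python
# Sketch: parsing a HealthVault blood-pressure fragment where time,
# pulse, and irregular-heartbeat are optional (the first sample omits them).
import xml.etree.ElementTree as ET

SAMPLE = """
<blood-pressure>
    <when>
        <date><y>2010</y><m>8</m><d>31</d></date>
        <time><h>21</h><m>48</m><s>9</s></time>
    </when>
    <systolic>114</systolic>
    <diastolic>90</diastolic>
    <pulse>97</pulse>
    <irregular-heartbeat>false</irregular-heartbeat>
</blood-pressure>
"""

def parse_bp(xml_text):
    root = ET.fromstring(xml_text)
    reading = {
        "systolic": int(root.findtext("systolic")),
        "diastolic": int(root.findtext("diastolic")),
    }
    pulse = root.findtext("pulse")      # optional
    if pulse is not None:
        reading["pulse"] = int(pulse)
    time = root.find("when/time")       # optional; the first sample has none
    if time is not None:
        reading["time"] = (int(time.findtext("h")), int(time.findtext("m")))
    return reading

print(parse_bp(SAMPLE))
# {'systolic': 114, 'diastolic': 90, 'pulse': 97, 'time': (21, 48)}
```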

    <name>Glucose</name>
    <clinical-code>
        <text>Glucose</text>
        <code>
            <value>001032</value>
            <family>labcorp</family>
            <type>result</type>
            <version>20090101</version>
        </code>
        <code>
            <value>2345-7</value>
            <family>regenstrief</family>
            <type>LOINC</type>
            <version>2.26</version>
        </code>
    </clinical-code>

Each of these techniques is designed to allow us to hold COMPLETE information first, enable flexible representation of STRUCTURE and CODING, and ease the burden on CONSUMERS of the data where possible. That last requirement is the one that will “give” when needed, because if we have all the data --- we can always improve and reinterpret it over time.
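To make the "ease the burden on consumers" point concrete: because the `clinical-code` sample carries both a local LabCorp code and a LOINC code, a consumer can simply prefer the vocabulary it understands and fall back otherwise. This is a hedged sketch against the fragment above, not a HealthVault library call:

```python
# Sketch: resolving a multi-code <clinical-code> element by preferring one
# code family (here Regenstrief/LOINC) and falling back to the first code.
import xml.etree.ElementTree as ET

SAMPLE = """
<clinical-code>
    <text>Glucose</text>
    <code><value>001032</value><family>labcorp</family><type>result</type></code>
    <code><value>2345-7</value><family>regenstrief</family><type>LOINC</type></code>
</clinical-code>
"""

def preferred_code(xml_text, family="regenstrief"):
    root = ET.fromstring(xml_text)
    codes = root.findall("code")
    for code in codes:
        if code.findtext("family") == family:
            return code.findtext("value")
    # No code in the preferred family: fall back to the first one, if any.
    return codes[0].findtext("value") if codes else None

print(preferred_code(SAMPLE))                      # 2345-7
print(preferred_code(SAMPLE, family="snomed"))     # 001032 (fallback)
```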

Item Provenance

Especially in a consumer-controlled environment, the ability to track the provenance of data is very important. Internally, HealthVault maintains a full audit log of changes to information (see some of the common metadata highlighted in yellow above) --- but as a more permanent provenance mechanism, the platform allows digital signatures to be applied to any data atom. The fragment below shows a sample signature block from a real HealthVault item.

There is no reason that this mechanism could not be applied within the PCAST context. As certificates become more prevalent through systems such as Direct and Federal identity initiatives, the ability to trace the integrity of information back to its source will become increasingly important.

<signature-info>
    <sig-data>
        <hv-signature-method>HVSignatureMethod1</hv-signature-method>
        <algorithm-tag>rsa-sha1</algorithm-tag>
    </sig-data>
    <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
        <SignedInfo>
            <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
            <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
            <Reference URI="">
                <Transforms>
                    <Transform Algorithm="http://www.w3.org/TR/1999/REC-xslt-19991116">
                        <xs:transform version="1.0" xmlns:xs="http://www.w3.org/1999/XSL/Transform">
                            <xs:template match="thing">
                                <xs:copy-of select="data-xml"/>
                                <xs:value-of select="data-other"/>
                            </xs:template>
                        </xs:transform>
                    </Transform>
                </Transforms>
                <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                <DigestValue>/6KMZjPtWyF4wE2nq0yvrLm4p5c=</DigestValue>
            </Reference>
        </SignedInfo>
        <SignatureValue>z/aYWJKGywqpjb+WNfqORgVJNr72rNb6jT9BRzNNtBMXEZKQtL9NvqQCdkpQbFyO9w61FXgCPqWEIG41dmrZkNFgMBTzEQFniNCUQYZXrap8Rpu7gCEhWucbIbjL0aXuaVXnLxd6Iz0I3JwyTBoRF037Q1DXduamJQQdBc204VI=</SignatureValue>
        <KeyInfo>
            <X509Data>
                <X509Certificate>MIIE5zCCA8+gAwIBAgIQeukUhvXaUhvkc+GQ+lPNxTANBgkqhkiG9w0BAQUFADCB3TELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2UgYXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykwNTEeMBwGA1UECxMVUGVyc29uYSBOb3QgVmFsaWRhdGVkMTcwNQYDVQQDEy5WZXJpU2lnbiBDbGFzcyAxIEluZGl2aWR1YWwgU3Vic2NyaWJlciBDQSAtIEcyMB4XDTA4MDEyNDAwMDAwMFoXDTA5MDEyMDIzNTk1OVowggEeMRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjEfMB0GA1UECxMWVmVyaVNpZ24gVHJ1c3QgTmV0d29yazFGMEQGA1UECxM9d3d3LnZlcmlzaWduLmNvbS9yZXBvc2l0b3J5L1JQQSBJbmNvcnAuIGJ5IFJlZi4sTElBQi5MVEQoYyk5ODEeMBwGA1UECxMVUGVyc29uYSBOb3QgVmFsaWRhdGVkMTQwMgYDVQQLEytEaWdpdGFsIElEIENsYXNzIDEgLSBNaWNyb3NvZnQgRnVsbCBTZXJ2aWNlMR4wHAYDVQQDFBV2YWxpZHRlc3QgY2VydGlmaWNhdGUxJDAiBgkqhkiG9w0BCQEWFWpvbGVhcnlAbWljcm9zb2Z0LmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA1DRlI13wp313MIeaqb/VN+QfZgpXujJ99Qfm2pnPYjQnEw/PF5zQSf48A/SQnVaJrVuzyCv+y26N6HI6jGopfPkVseKMzmD+zoTMqzvLB1zs8B+nE9ZvDQoID5ZEZd+5NR8WGlIaj0KGch8SrV4FsnxKtbnd0UONscb8yu6hTNMCAwEAAaOB4jCB3zAJBgNVHRMEAjAAMEQGA1UdIAQ9MDswOQYLYIZIAYb4RQEHFwMwKjAoBggrBgEFBQcCARYcaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYTALBgNVHQ8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwQGCCsGAQUFBwMCMBQGCmCGSAGG+EUBBgcEBhYETm9uZTBKBgNVHR8EQzBBMD+gPaA7hjlodHRwOi8vSW5kQzFEaWdpdGFsSUQtY3JsLnZlcmlzaWduLmNvbS9JbmRDMURpZ2l0YWxJRC5jcmwwDQYJKoZIhvcNAQEFBQADggEBAFJPND6M7LXHNtNKUibt9rdOGS4QjVyttyEHVEasb0NCH+48mZCbdZa4KHEoo5wmhwusJwZ5M6B1rqgG7WVlb6v0qBIUYrnxAf7B4YSXQVHHhDV6Firqbv2PYtxHm0rJ+I7blVOhlaftGrcJ8a9gmJyxK/jwjMekrp7jsVI8iYR/XyYNphToe+HIGFzXjOkDWWrFTEXGhOVKi/+qRk5jOHWW9RP6MsoizJxGRt/t4CZn0W0S+CvMH9JE55pJLx3B22ItY5X0WotYVLr+h7wB4BVn7NJAORWSAEzHq1vzXBcNklFuX7G4PNrIkHU7ixlv1zSgQYOZugjmMJhKR+PEovo=</X509Certificate>
            </X509Data>
        </KeyInfo>
    </Signature>
</signature-info>
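The digest check implied by the block above is straightforward: the XSLT transform selects `data-xml` and `data-other`, the result is hashed with SHA-1, and the hash is compared to the base64-encoded DigestValue. A minimal sketch of that comparison follows; the transform step itself is elided, so `transformed_bytes` stands in for its output:

```python
# Sketch: checking a base64 DigestValue against a SHA-1 hash of the
# transformed content. The XSLT transform is assumed to have run already.
import base64
import hashlib

DIGEST_VALUE = "/6KMZjPtWyF4wE2nq0yvrLm4p5c="  # from the signature block above

def digest_matches(transformed_bytes, digest_value=DIGEST_VALUE):
    # Hash the transform output and compare to the decoded digest.
    actual = hashlib.sha1(transformed_bytes).digest()
    return actual == base64.b64decode(digest_value)

# A SHA-1 digest is always 20 bytes, so the sample value decodes to 20.
assert len(base64.b64decode(DIGEST_VALUE)) == 20
```

Full signature verification would additionally canonicalize SignedInfo and check SignatureValue against the public key in the X509Certificate; only the digest step is shown here.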

Privacy Intent vs. Anonymization

The PCAST document includes a recommendation to apply “patient privacy wishes” as part of the common metadata that would travel with each data atom. A number of organizations have worked to define “languages” for representing those wishes, and there is no reason to think that information could not easily be transmitted along with any other metadata. Enforcing those wishes across the diverse healthcare IT environment, however, is a very daunting challenge. Digital rights management technology has advanced significantly over the past few years, and it tends to fare better when attacks are not widely distributed --- a community of thousands may attack the DRM on a newly-released movie, while far fewer are likely to target any single data atom. Still, this seems like a recommendation that might be considered more directional than immediate.

An alternative --- especially in the case of secondary use --- may be to start with anonymization. If the identifying elements in each data atom are masked and/or skewed before they are submitted to the DEAS environment, there may be an option here that would kickstart the ecosystem without having to solve all of the technical problems at once.
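The mask-and-skew idea can be sketched very simply. The field names below are hypothetical, not from any HealthVault or DEAS schema: direct identifiers are replaced with a keyed hash (a stable pseudonym that is not reversible without the site's key), and dates are shifted by a per-patient offset so intervals within one record stay consistent.

```python
# Sketch of mask-and-skew anonymization before submitting atoms to an index.
# Field names and the key-handling scheme are illustrative assumptions only.
import hashlib
import hmac
import random
from datetime import date, timedelta

SECRET = b"site-local-key"  # never leaves the submitting institution

def mask_id(patient_id):
    # Keyed hash: same input always yields the same pseudonym.
    return hmac.new(SECRET, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def skew_date(d, patient_id, max_days=14):
    # Deterministic per-patient offset keeps one patient's dates consistent.
    rng = random.Random(mask_id(patient_id))
    return d + timedelta(days=rng.randint(-max_days, max_days))

atom = {"patient-id": "11141dc8", "eff-date": date(2007, 10, 22)}
anon = {
    "patient-id": mask_id(atom["patient-id"]),
    "eff-date": skew_date(atom["eff-date"], atom["patient-id"]),
}
print(anon)
```

Real de-identification (e.g. HIPAA Safe Harbor) covers far more fields than this; the sketch only illustrates the shape of the transformation.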

Both Amalga and HealthVault represent privacy and security within their own environments --- that is, there is full control over who sees what information, but once the information is disclosed it is not tracked further. So these comments are speculative only and not the result of our direct experience.

I hope these samples and thoughts are helpful to the committee as it does its work. I would be more than happy to continue the discussion, answer questions, and offer clarification of anything in the above. Note that you can also read more about the HealthVault data model and see samples at the following links:

* http://msdn.com/healthvault
* http://developer.healthvault.com/types/types.aspx

I’ve also posted a few different blog entries about Amalga and its internal data structures. This post is a good start.