How Can Health IT Help Reform the CDC?

COVID-19 forced a spike in policy-makers’ interest in and willingness to invest in public health, a spike that is unfortunately retreating to the old business as usual. (President Biden, in his State of the Union speech, warned that “we remain vigilant” while calling for an end to the emergency, but quickly switched the subject from the urgency of public health to prosecuting fraud.) Luckily, along with the increased attention came a raft of intelligent suggestions for changing how public health institutions carry out their mission, starting with the Centers for Disease Control and Prevention (CDC).

In January, a group of about 40 public health experts released comprehensive recommendations for rebuilding the CDC under the auspices of the Center for Strategic & International Studies. Although most of the group’s recommendations concerned funding, culture, relationships with other agencies, and similar organizational topics, a few focused on data collection and sharing (pages 23-24).

I consulted several experts in health IT to ask how such IT could improve data collection and sharing in public health. This article summarizes their responses.

Paths to Interoperability

A recent article exposes the woeful siloing of public health: data often has to be faxed and re-entered manually into new systems. I wonder whether the path to complete integration requires adopting a single, worldwide, FHIR-based standard (which is time-consuming and probably requires jettisoning old database systems) or programming these systems to translate data from one format to another.

I heard details about problems with COVID-19 lab reporting at a state level from Dr. Paulo Pinho, vice president & medical director of innovation at Availity Clinical Solutions (formerly Diameter Health). He reported poor standardization of data coming out of test centers, despite the standards for interoperability delineated in the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. Many testing sites still use their own local codes, or even no codes at all, instead of conforming to standards created by organizations such as Health Level Seven (HL7), the accredited global standards development organization for health data, and specifications such as the United States Core Data for Interoperability (USCDI).

The result was up to 150 different ways to represent the COVID-19 PCR test and 40 different ways to represent a negative test result. Furthermore, key demographic information was missing 40% of the time. Pinho said that labs and providers often fail to ask these questions because of time pressure in busy workflows, or because of the challenges of addressing race, ethnicity, and gender during a clinical encounter.

One of the largest states in the country created partnerships with Diameter Health to deploy a new software system that could manage large amounts of data, collecting, cleansing, grading, and normalizing it to meet standards.

Pinho said there were different fixes to each problem. For data reporting, Diameter Health improved lab data to adhere to standard and interoperable formats. The company automated this data translation to avoid manual labor. For missing demographic information, the state, through a partnership, contacted the labs and made sure they captured the desired data.
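
As a concrete illustration of that kind of automated translation, here is a minimal Python sketch that maps free-form local test names and result strings to standard codes and flags anything unmapped for review. The mappings are invented stand-ins for a real terminology service (94500-6 is a LOINC code for a SARS-CoV-2 PCR test; 260385009 and 10828004 are SNOMED CT codes for negative and positive results); Diameter Health’s actual pipeline is of course far more sophisticated.

```python
# Illustrative mappings only -- a real system uses a full terminology service.
LOCAL_TEST_TO_LOINC = {
    "covid pcr": "94500-6",         # SARS-CoV-2 RNA, NAA with probe detection
    "sars-cov-2 rt-pcr": "94500-6",
    "covid-19 nucleic acid": "94500-6",
}

LOCAL_RESULT_TO_SNOMED = {
    "neg": "260385009",             # Negative (qualifier value)
    "negative": "260385009",
    "not detected": "260385009",
    "pos": "10828004",              # Positive (qualifier value)
    "detected": "10828004",
}

def normalize(test_name: str, result: str) -> dict:
    """Map free-form lab fields to standard codes, flagging anything unmapped."""
    key = test_name.strip().lower()
    res = result.strip().lower()
    return {
        "loinc": LOCAL_TEST_TO_LOINC.get(key),
        "result_snomed": LOCAL_RESULT_TO_SNOMED.get(res),
        "needs_review": key not in LOCAL_TEST_TO_LOINC
                        or res not in LOCAL_RESULT_TO_SNOMED,
    }

print(normalize("COVID PCR", "Not Detected"))
```

The point of the `needs_review` flag is that normalization can be automated for the common cases while routing the long tail of local idiosyncrasies to human curation.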

Diameter Health can also perform targeted free-form text matching of clinical documents to map text to missing interoperable codes. Their data quality improvement tool (Availity Fusion) achieved nearly 100% standards conformity on test names and results, while their quality assessment tool (Data Dashboard) led to overall improvement in source data capture of 10-12% in one calendar year.

Pinho hopes that vaccine registries will standardize in the same manner. And when people test at home, the tests should provide an easy way to upload results to public health sites, to provide a more complete view of the pandemic in real time at the state level, along with better data reports at the national level.

Steven Lane, chief medical officer of Health Gorilla, believes in unified standards. Other strong proponents of harmonizing around FHIR include Kel Pults, chief clinical officer and vice president of MediQuant, as well as Paulo Pinho.

Not only should everyone adopt FHIR, but Lane wants to harmonize regulations that vary from one jurisdiction to another concerning what data to share. When I asked whether the information needed might vary from one region to another, he asserted that 99% of needs are the same across the country. In a radically different place such as Sudan, the differences would be greater.

Standards to specify for what data to capture and share are also a leading interest of Sandeep Shah, founder and CEO of Skyscape. This company offers a communications platform called Buzz that he compares to Slack, but with a focus on health care. Shah would like public health agencies to create a consortium to discuss what health-related data they need. These discussions, which could use technology such as Buzz, would enable daily communication about what’s happening at different agencies. Shah says that agencies have to learn to talk freely with each other, and overcome silos.

Lane worries that public health doesn’t look far enough into the future when planning technology acquisitions, because the institutions have so little to spend and feel that they’re always playing catch-up.

And Lane’s own vision for the relationship between public health and the clinical world is remarkably audacious.

He points out that the FHIR-based technologies Health Gorilla uses for data exchange (including public health agencies they support) are bidirectional. In a trivial use, a lab can send test results to the public health agency and the agency can acknowledge receipt. But the agency could also provide information itself in real time.
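
To make that exchange concrete, here is a sketch of the kind of minimal FHIR R4 Observation resource a lab might send over such an API. The structure follows the FHIR Observation model, but the patient reference and display strings are invented for illustration.

```python
import json

# Minimal FHIR R4 Observation for a lab result -- the kind of resource a lab
# could push to a public health agency over a FHIR API.
# "Patient/example-123" is a hypothetical reference, not a real identifier.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "94500-6",        # a LOINC code for a SARS-CoV-2 PCR test
            "display": "SARS-CoV-2 RNA Resp Ql NAA+probe",
        }]
    },
    "subject": {"reference": "Patient/example-123"},
    "valueCodeableConcept": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "260385009",      # SNOMED CT: Negative
            "display": "Negative",
        }]
    },
}

print(json.dumps(observation, indent=2))
```

Because the same resource format works in both directions, the agency’s acknowledgment or follow-up guidance can travel back over the identical channel.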

The technology is available, Lane says, for a clinician to send patient data to the CDC while the patient is sitting in the office, and for the CDC to send back a message warning that the patient might have a communicable disease, along with treatment suggestions. A simple rules engine could combine information from the EHR on the patient’s conditions, social determinants of health, location, etc. to make a recommendation. AI could probably do even better.
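
A toy version of such a rules engine might look like the following Python sketch. The fields, rules, and messages are all invented to illustrate the idea; they are not clinical guidance.

```python
# Toy rules engine combining EHR fields into a public-health recommendation.
# Every rule and threshold here is invented for illustration.

RULES = [
    # (predicate over the patient record, message returned when it matches)
    (lambda p: "fever" in p["symptoms"] and "cough" in p["symptoms"]
               and p["local_flu_activity"] == "high",
     "Possible influenza: consider rapid testing."),
    (lambda p: "rash" in p["symptoms"] and p["recent_travel"],
     "Rash with recent travel: check measles exposure and vaccination history."),
]

def recommend(patient: dict) -> list[str]:
    """Return the message of every rule that matches this patient record."""
    return [msg for rule, msg in RULES if rule(patient)]

patient = {
    "symptoms": {"fever", "cough"},
    "local_flu_activity": "high",   # could come from a public health feed
    "recent_travel": False,
}
print(recommend(patient))
```

The interesting part is the `local_flu_activity` field: it stands in for real-time situational data that only the public health agency holds, which is exactly what the bidirectional exchange would contribute.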

When I asked whether doctors would accept such advice, Lane pointed out that it’s just an extension of clinical decision support, which modern EHRs offer and is widely approved by physicians. Pinho called for real-time decision making as well.

The FDA has left a lot of clinical decision support free from regulation, but I have a feeling that Lane’s bold system would require FDA approval. Recent guidelines call for regulation if the software “Provides a specific preventative, diagnostic, or treatment course,” or “Provides a specific follow-up directive,” or “identifies a risk probability or risk score for a specific disease or condition” (pages 12-13). If public health enters the field of clinical decision support, I for one would feel better seeing some regulation.

Old Data Can Still Be Good Data

The useful life of data varies from industry to industry and application to application. Location data from mobile devices is useful for only a few minutes (because mobile devices are just that—mobile). Information for determining consumer interest may be useful for a few weeks, until a consumer buys what they want. Most health care institutions keep a few months’ worth of data in recent storage and archive the rest.

But these archives are gold mines for public health agencies. Thus, I talked to Kel Pults at MediQuant about their health care service, which they call an “active archive.” Not only does it manage the different tiers at which old data is stored; it presents a graphical interface to make it easy to search and view archived data.

Pults said that the 21st Century Cures Act requires the timely release of patient data, and providers need point-of-care access to better care for patients. To analyze the spread and control of COVID-19, the CDC is interested in data from 2020 and 2021.

Amazon’s S3 service, the classic cloud offering, lets you choose among seven levels of storage ranging from standard (for data you access daily or more often) to Glacier Deep Archive (for data you hardly ever need).
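
A sketch of how an archiving system might choose among those tiers follows. The storage class names are real S3 classes; the access-frequency cutoffs are illustrative assumptions, not AWS guidance.

```python
# Pick an S3 storage class from how often the data is expected to be read.
# Class names are real S3 storage classes; the thresholds are illustrative.

def storage_class(days_between_accesses: float) -> str:
    if days_between_accesses < 1:
        return "STANDARD"           # accessed daily or more often
    if days_between_accesses < 30:
        return "STANDARD_IA"        # infrequent access
    if days_between_accesses < 180:
        return "GLACIER"            # rarely read, retrieval takes minutes-hours
    return "DEEP_ARCHIVE"           # hardly ever needed, cheapest to keep

print(storage_class(365))  # → DEEP_ARCHIVE
```

An "active archive" in MediQuant's sense would layer search and retrieval on top of a tiering policy like this, so that moving data to a cheap class doesn't make it invisible.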

But S3 represents data as opaque “objects,” suitably known as BLOBs in the computer industry. MediQuant helps manage issues crucial to healthcare, such as separating personally identifiable information that should be kept out of most public cloud repositories, and retrieving specific fields in archived data.

Knowing Whom You’re Tracking: Identity is Central

Matching a record to a patient is hard enough in clinical institutions. They use more than a dozen types of data (name, address, gender, etc.) to distinguish one Abdul Muhamed from another, and to determine whether a certain Muhamed and Muhammad are the same person. Public health has to face the same problem without the advantage of the pre-existing relationship that clinicians have with the patient. When instances of an infection are found, the public health agency has to connect the context: health histories, symptoms, etc.
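
A minimal sketch of that kind of probabilistic matching, using Python’s standard difflib for fuzzy name comparison, looks like this. The weights and threshold are invented for illustration; production matching engines use many more fields and carefully tuned models.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity in [0, 1], tolerant of spelling variants."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def likely_same_patient(rec1: dict, rec2: dict, threshold: float = 0.75) -> bool:
    """Weighted score over fuzzy name match plus exact demographic matches.
    Weights and threshold are illustrative, not a validated model."""
    score = 0.5 * name_similarity(rec1["name"], rec2["name"])
    score += 0.3 if rec1["birth_date"] == rec2["birth_date"] else 0.0
    score += 0.2 if rec1["zip"] == rec2["zip"] else 0.0
    return score >= threshold

a = {"name": "Abdul Muhamed", "birth_date": "1980-05-02", "zip": "02139"}
b = {"name": "Abdul Muhammad", "birth_date": "1980-05-02", "zip": "02139"}
print(likely_same_patient(a, b))
```

Even this toy version shows why public health has it harder than a hospital: without a pre-existing relationship, fields like birth date and address may simply be absent, and the score degrades accordingly.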

Gus Malezis, CEO of Imprivata, told me that in mid-2020, Israel collected, distributed, and drew conclusions from large amounts of COVID-19 data, as described in a recent study. The government could do that because it had a centralized patient data system and could reliably identify patients.

Many European countries also identify patients reliably, but sharing data is harder in the EU because of privacy regulations.

Digital identity is Imprivata’s business. They can help a hospital reliably identify and distinguish patients through biometric palm vein scanning technology. Using this unique patient identifier, hospitals can achieve a much higher patient record matching rate, but only within that health system’s network.

Malezis says, “Breaking down the barriers to sharing data beyond the enterprise level, between states or even nations, requires more than just technology standardization and interoperability, much of which exists already. Rather, it demands a willingness from state and global leaders to share and trust the accuracy and security of the data. Data governance models must evolve to keep pace with technology to support sharing across borders. Something like the Israeli or European solution is required to let identities cross boundaries as public health requires.”

Malezis also said that verified identities increase the trust that different countries and jurisdictions need to use data outside their borders.

If we gave data back to patients, instead of hoarding it for competitive advantage, they would have more trust in the systems that treat them and guard public health. Each person could dictate whether their full context is shared or just a narrow swath of data.

Healthcare professionals recognize the value of transparency in sharing data. Malezis mentioned that the OpenNotes initiative advocates for better standards for sharing health information and clinical notes with patients, and that trust and transparency improve the quality and safety of care as well.

Finally, Malezis reminded us that technology cannot solve all aspects of trust and data sharing. Being transparent and engaging the patient in this process is crucial. He said that during the early days of the COVID pandemic, China appeared to delay or ignore global scientific requests for access to information, fueling further questions and uninformed interpretations.

The Context for Reforming the CDC

The paper from the Center for Strategic & International Studies, which I pointed to at the start of this article, also said that the CDC has to become better at communicating recommendations (pages 20-22). During the pandemic, of course, scientific knowledge was spotty and the recommendations were confusing, changeable, and sometimes misleading (e.g., don’t wear a mask). But when there’s a clear and critical message to convey—such as the value of vaccinations—the CDC needs to disseminate the information widely and quickly. I sense that IT solutions can help meet that goal, but none of my respondents discussed that angle.

Finally, one can’t review the CDC without mentioning the bestselling book The Premonition, by Michael Lewis. The CDC he presents is academic, fussy, and cliquish; sensitive to political pressure and thus overly cautious (“malignant obedience”); obsessed with credentials and therefore resistant to ideas from new places; and sometimes frankly incompetent (such as with COVID-19 tests and masking recommendations). But I must point out that Lewis is a talented popularizer with an agenda: He loves to tell stories that pit brilliant mavericks against stifling bureaucracies. So The Premonition may downplay the CDC’s effectiveness.

A body of researchers has suggested a range of non-traditional datasets that can help predict public health problems. Wastewater testing is a well-known example.

Many historians have credited public health with the most important improvements in human health throughout history. Although some trace the field back to John Snow’s identification of a source of cholera in 1854 London, city governments and other institutions have resorted to public health measures such as quarantines and levees since the beginning of civilization. Public health agencies in this country have been given impressive powers to intervene in everyday life to protect health; some observers think that an anti-regulatory tradition threatens the effectiveness of these powers.

Modern IT can help address the scandal of our underfunded public health institutions, and in particular their systems for collecting and sharing data. Coordinating thousands of agencies—let alone the clinical sources of data—will be hard, but would pay off by saving thousands of lives.

About the author

Andy Oram

Andy is a writer and editor in the computer field. His editorial projects have ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. A correspondent for Healthcare IT Today, Andy also writes often on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, named USTPC, and is on the editorial board of the Linux Professional Institute.
