AI in Digital Health: Autonomy, Governance, and Privacy

The following post is adapted from the edited volume AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare.

By Marcelo Corrales Compagnucci and Mark Fenwick

The emergence of digital platforms and related technologies is transforming healthcare and creating new opportunities and challenges for all stakeholders in the medical space. Many of these developments rely on data and AI algorithms to prevent, diagnose, treat, and monitor sources of epidemic disease, such as the ongoing pandemic and other pathogenic outbreaks. However, these opportunities and challenges often have a complex, multi-dimensional character, and any mapping of this emerging ecosystem requires greater inter-disciplinary dialogue and a more nuanced appreciation of the normative and cognitive complexity of these issues.

As such, these developments raise several important and difficult questions. What impact will AI systems have on biomedical research, especially in the context of large-scale data sharing and confidentiality? What kind of control over personal data should be delegated to patients? How can we ensure that AI-based methods and solutions are consistent with established legal and ethical principles, and to what extent do these principles need to change to reflect the new realities of healthcare in a digital age? How will technological advancements in the MedTech industry be influenced by different legal frameworks? And which regulatory, ethical, and legal principles can best orient the construction of precision public health interventions and the implementation of technology-powered medicine?

To answer these questions, it is important to review new opportunities and challenges of digital healthcare from diverse disciplinary perspectives (particularly legal, ethical, technical, and business). In our recent work, we have identified the following areas where more research is needed:

  • Platforms, Apps and Digital Health: Exploring the impact of software applications – often developed by companies with software and platform expertise, rather than medical expertise.
  • Trust and Design: Looking closely at data protection issues with a particular emphasis on questions of consent and trust in the context of the new data that is created by medical technologies.
  • Knowledge, Risk and Control: Exploring the various risks that arise as a result of the emergence of new forms of knowledge produced by AI-related analysis of medical data.
  • Balancing Regulation, Innovation and Ethics: Examining the challenges of balancing the competing concerns and trade-offs that arise in real-world settings, most obviously in hospitals and in physician-patient relationships.

Based on this inter-disciplinary dialogue, we hope that effective strategies can be developed to ensure that the benefits of this ongoing technological revolution are deployed in a responsible and sustainable way. To learn more about these and related issues, you may read our new volume, AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare, published by Cambridge University Press.

The Petrie-Flom Center Staff

The Petrie-Flom Center staff often posts updates, announcements, and guest posts on behalf of others.
