Patient privacy laws





Introduction


A core principle upon which medicine is practiced is trust, specifically between patient and physician. Key to that trust is not betraying a patient's fundamental human rights, including the right to privacy. It is worth acknowledging that contemporary surveys of patients suggest they are not worried about the confidentiality of private information exchanged through telemedicine ( ). Physicians share a similar impression, but empirical evidence suggests that all involved parties are overconfident in the security of such systems and devices.


Many would consider this a clear-cut topic: there are federal laws in the United States, European Union (EU)-wide laws, and other jurisdictions with defined privacy laws governing data handling and/or patient privacy. However, this is not the case. Healthcare providers handling any patient details, whether protected health information (PHI) or otherwise, should know that some of these laws do not currently apply to every patient seeking care, and that some practices may, knowingly or unknowingly, be the source of data used to exploit those same patients. Many people, even when informed, are not conscious that credit card and other payment providers hold granular data on who they are, which doctors they pay with a credit card or other digital payment solution, and how much they spend at the pharmacy counter versus on confectionery, cosmetics, or other product categories at the pharmacy. Because this data repository is longitudinal and, for many, long term, the exact medical condition, its status, and other details of a person's health can be inferred with high fidelity. These data points are legally available for purchase from payment providers by anyone willing to pay, and many do purchase this information.


A general approach in which all patient information is treated as PHI, and handled the same way for telemedicine consults as it would be for in-person visits, is reasonable. However, what might not be so obvious is that practicing telehealth likely requires additional measures and protocols to protect patient information. The famed Health Insurance Portability and Accountability Act (HIPAA), which passed Congress in 1996, is not all-encompassing. It gives patients control over their information, such as the right to examine and obtain records, and it is the US national standard for protecting medical records and other personal health information. It applies to health plans, healthcare clearinghouses, and healthcare providers, but only those providers who transmit health information electronically in connection with standard transactions such as electronic billing. To be explicit, a private practice that does not conduct such transactions is not a covered entity under HIPAA and is not prevented by this law from sharing, selling, or otherwise disclosing patient health information.


HIPAA acts as the floor on health information. There are also state health information and privacy laws, and some states have more stringent laws in place regarding health information. States cannot have laws that are weaker than or run counter to HIPAA; if they do, the federal law prevails. However, states can have much stricter and more robust laws than HIPAA. For example, the State of California has the California Confidentiality of Medical Information Act (CMIA). On its face, CMIA is a state law that requires healthcare providers to maintain the confidentiality of their patients' medical information and prohibits them from disclosing medical information without the patient's written consent. It is worth noting that it does not prohibit third parties such as payment providers from disclosing information. On the other hand, CMIA's definitions, including those of "provider of healthcare" and "all recipients of healthcare," are much broader than HIPAA's.


The issue of the location of the provider and the location of the patient is again critical. The applicability of these laws, both federal and regional, does not change because you are using telehealth; location does not shield you from HIPAA or other privacy laws that would normally apply. While the general guideline is to treat a telehealth consult no differently than an in-person visit, additional safeguards also need to be taken. A simple example: in providing telehealth, providers may use services from vendors that they would not ordinarily use when seeing patients in person, such as a short message service/text messaging service or a video consult service. In those cases, one would need to meet legal requirements, such as specific contracts or a business associate agreement with the vendor, in order to be compliant with HIPAA or other relevant legislation.


Potential privacy risks


Lack of control over, or lack of limits on, the collection, use, and disclosure of sensitive personal information is a potential privacy risk of telehealth, and indeed of the connected home in general. Sensors or devices located in a patient's home, whether or not they interface with the patient's body, may also have the capacity to collect sensitive information about household activities. This offers a potential opportunity to improve healthcare monitoring but also risks invading personal privacy.


Devices made explicitly for health purposes may also transmit information that discloses, or allows to be deduced, other personal and private information. Consider sensors used to detect falls: these can transmit information that reveals religious practices or intimate interactions, or even indicates when nobody is at home. Regardless of how the health provider uses the information, the device may be in the home only because the provider is aiming to provide telehealth. Much as a provider will discuss the risks and benefits of any other potential investigation, intervention, or treatment with a patient, such devices also carry their own risks and benefits.


Routine transmissions from devices used in the investigation, intervention, or treatment of a medical condition may also be collected and stored by the device or app, or by its manufacturer, or shared with third parties. Consider, for example, that many mobile applications (e.g., the Facebook app) currently collect information on which other apps are installed on your device and when they run. Such collection, use, and disclosure of information may be beyond what patients reasonably expect given the anticipated uses of the technology. This happened in the case of a popular fitness device that inadvertently exposed users' self-reported sexual activity.


This brings to the fore an ethical responsibility of physicians: nonmaleficence. Patients may give consent to have a device implanted, to wear sensors and be tracked, or to use a health app and share their data. However, consent should not abrogate the universal human right to privacy. All of us frequently fail to read or fully understand privacy policies; how many people have actually read the privacy policies of the companies whose services we use online every day? Patients place their trust in physicians to care for them and to bring no harm to them. Some vendors use consent as a tool to shift the burden of privacy protection to the patient, who may not be able to make meaningful privacy choices.


Privacy controls in the USA


Privacy is typically protected by laws or operating policies that implement Fair Information Practice Principles (FIPPs). FIPPs are widely accepted practices, including the ability to access one’s own health information and request corrections; limitations on information collection, use, and disclosure; and reasonable opportunities to make choices about one’s own health information. Providing people with choices for information sharing is only one of the FIPPs, bolstered by others that require data holders to establish and abide by contextually appropriate limits on data access, use, and disclosure.




HIPAA


HIPAA, passed in 1996, is one of several sectoral federal laws designed to implement these principles. Current laws, however, do not adequately cover the telehealth environment. Thus, there is no guaranteed right (and often little capability) for individuals to request copies of information collected by apps or home monitoring devices. Information use and disclosure are largely determined by technology companies, with few (if any) legal limits or meaningful opportunities for individuals to control information flow.


HIPAA privacy and security regulations provide protections for identifiable health information, but only when it is collected and shared by “covered entities”—healthcare providers who bill electronically using HIPAA standards, health plans, and healthcare clearinghouses. When it applies, HIPAA’s Privacy Rule establishes limits on the use and disclosure of identifiable health information, and its Security Rule establishes technical, physical, and administrative safeguards to protect electronic identifiable health information. For example, encryption of data at rest and in transit is an “addressable implementation specification” under the Security Rule, meaning that HIPAA-covered entities are expected to implement it unless it is not “reasonable and appropriate” to do so. In addition, the regulation requires providers to adopt identity management protocols and access controls.
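To make the "encryption at rest" concept concrete, the sketch below is a minimal illustration rather than a compliance recipe. It assumes Python with the third-party cryptography package; the record fields and file name are hypothetical, and a real HIPAA security program would also address key management, access controls, and audit logging.

```python
# Minimal sketch of encrypting a patient record "at rest" (illustrative only).
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# In practice the key comes from a key management service, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"patient_id": "hypothetical-123", "note": "follow-up via video consult"}
ciphertext = cipher.encrypt(json.dumps(record).encode("utf-8"))

with open("record.enc", "wb") as fh:  # only ciphertext ever touches the disk
    fh.write(ciphertext)

# Reading the record back requires the key, which is the point of encryption at rest.
with open("record.enc", "rb") as fh:
    restored = json.loads(cipher.decrypt(fh.read()).decode("utf-8"))
print(restored["patient_id"])
```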


HITECH Act 2009


The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 saw the US Congress extend HIPAA to “business associates,” entities that “create, receive, maintain, or transmit” identifiable health information to perform a function or service “on behalf of” a covered entity. Whether a vendor of a patient-facing telehealth technology is a HIPAA business associate depends on whose interests are being served by the technology. Relevant questions include the following: Who provides the technology to the patient (for example, is it a direct-to-patient transaction, or is the technology provided by the doctor)? Who benefits from the technology being offered? Who is responsible for the day-to-day operation of the technology (an indication of who is ultimately responsible)? And who controls the information generated by the technology? Mere connectivity between a device and a healthcare provider does not render the device manufacturer a business associate of that provider.


Federal Trade Commission regulations


HIPAA has limited applicability to patient-facing telehealth systems. The HITECH Act established breach notification requirements for personal health records, and these requirements are overseen by the Federal Trade Commission (FTC). This is probably one of the least well-appreciated, and possibly one of the greatest, privacy protections offered to patients and users of digital tools. The HITECH Act defines a “personal health record” as an electronic record of identifiable information “drawn from multiple sources and … managed, shared, and controlled” by the patient. While many telehealth tools will not meet this definition, as they do not draw information from multiple sources and most are not typically controlled by the patient, one can see how application data from social media applications could be defined as “personal health records,” where the information is identifiable, drawn from multiple sources, and managed, shared, and controlled (at least in part) by the patient.


The FTC Act allows the FTC to seek redress for unfair or deceptive acts or practices. The FTC has used this authority frequently to penalize companies for failing to abide by commitments regarding data use made in privacy policies, and less frequently to stop unfair practices involving data use and collection. Users of digital apps must rely on the company’s policy regarding use of data, which is almost always offered to users unilaterally. In other words: if you don’t accept the terms, you don’t use the product or service. Unfortunately, in the case of medical devices, patients often do not have a choice.


Oversight by the FDA


If a telehealth technology qualifies as a medical device, the Food and Drug Administration (FDA) may also regulate it. The FDA does not directly address privacy issues but focuses on security to the extent that it affects medical device safety. (The FDA regulation of mobile medical apps is discussed in greater detail elsewhere in this publication.) In June 2013, the FDA issued draft guidance on the “management of cybersecurity in medical devices,” which urges manufacturers to develop security controls to maintain information “confidentiality, integrity, and availability.” In August 2013, the FDA finalized guidance regarding radio frequency wireless technology in medical devices. And in September 2013, the FDA issued broad guidance on the regulation of mobile medical apps, clarifying that some types of mobile medical apps will be considered medical devices and regulated by the FDA as such.


Through these guidance documents, the FDA is establishing a federal baseline for security in telehealth, but the FDA’s authority has limits. The FDA oversees only technologies it considers to be medical devices and focuses only on security protections designed to ensure safety. It does not focus on privacy safeguards that enforce rules or policies regarding collection, use, and disclosure of potentially sensitive health information.


There will ultimately be further privacy laws implemented, so watch this space


A comprehensive federal policy framework protecting the privacy and security of information collected by telehealth technologies is needed to safeguard patients and bolster public trust. Such protections should be consistent with the basic tenets of HIPAA to ensure a rational and predictable policy environment, but they also should respond to threats to privacy and security that are more characteristic of patient- and consumer-facing technologies. Specifically, policy should address issues such as deficiencies in security safeguards, reliance by app companies on advertising within the apps, and consumers’ lack of access to their information. It should account for current and potential future sources of data leaks, which are de facto sources of identifiable health information, and help close loopholes (or the vast chasms) in existing legislation, which allow for patient information to be shared. Such policies should be tailored to address the unique telehealth risks we have identified here. The policies should cover data collection, use, and disclosure, for both the intended purpose of the technology and any secondary data uses, such as for analytics. They should also be flexible enough to support innovation.


There are a number of challenges to crafting such a policy framework. Privacy and security concerns can sometimes conflict with practicality for patients and industry, and privacy and security controls that do not anticipate the needs and preferences of the intended users are less likely to be deployed. For example, Apple (now the world’s largest company by market capitalization) discovered that only half of iPhone users locked their devices with a passcode. In essence acknowledging the risk to an individual of not locking their phone, Apple made a costly decision to integrate a fingerprint reader into newer models of the iPhone, and subsequently facial recognition, to make it easier to lock and unlock the device; arguably a defensive move, whereby Apple took every practical step to make it easy for users to secure their information rather than opt out of security that was readily available.


An uneasy balance between operational practicality, privacy, and security is not unique to healthcare. It exists in banking and telecommunications too, and it is damning that telecommunications, a far younger industry than millennia-old banking, has done more to address it than healthcare. Consider the Cable TV Privacy Act of 1984 and the Telecommunications Act of 1996, which prevent the disclosure of personal information without consent and provide protections akin to FIPPs. These balance the business and operational needs of cable and telecommunications providers by allowing the sharing of personal information if the customer fails to opt out of such sharing; in effect a regulatory loophole with social implications, allowing for-profit enterprises to exploit individuals’ information for further profit ( ). In banking, the Fair Credit Reporting Act of 1970 and the Gramm–Leach–Bliley Act of 1999 heavily regulate what credit reporting agencies and financial services companies can do with personal information, providing for conspicuous and regular notice of privacy practices and rights of correction and transparency for consumers. However, these laws also favor an opt-out approach for sharing personal information, allowing data to flow by default to other companies unless the customer specifically opts out and opening the door to selling individual-level user information ( ; ).


No federal agency currently has the authority to establish privacy and security requirements for the telehealth industry. However, the US Congress could pass legislation vesting this authority in a single federal agency. The Department of Health and Human Services (HHS) is a likely candidate for this role, as it already has experience implementing the HIPAA privacy rules and overseeing US health programs; however, HHS does not have experience with the privacy and security risks posed by consumer-facing commercial technologies. Another possible agency is the FDA, which is responsible for ensuring the safety of telehealth devices, but the FDA does not have experience with privacy issues. On the other hand, the FTC, an independent federal agency, has technical expertise and long experience in evaluating the privacy risks of consumer-facing technologies. Some would argue it is the agency within the federal government best equipped to regulate information privacy, including within networked telehealth systems.


With respect to telehealth, the US Congress could give the FTC authority and build on the Department of Commerce’s 2010 outline for “voluntary enforceable codes of conduct” with respect to consumer privacy ( ). The development of voluntary codes of conduct by telehealth manufacturers and other stakeholders, including those that represent the interests of individual innovators and of healthcare professionals, could be facilitated by the FTC or whichever agency is ultimately granted oversight. These codes of conduct should include foundational privacy and security protections at a minimum, and as telehealth continues to evolve rapidly, continuous and equitable involvement of stakeholders in developing them further is critical. The FTC, of course, already has authority to regulate deceptive and unfair practices, and so is well placed to enforce such protections using carrot-and-stick strategies: on the one hand, compelling manufacturers to adopt the codes through financial penalties and other means; on the other, inducing industry to develop and adopt them by granting a safe harbor from enforcement action for activities governed by the codes. To provide meaningful protection, safe harbor should be granted only to codes that the FTC deems adequate to protect consumers ( ).


While the absence of a perfect candidate agency may stall agreement and implementation, it should not. It is in patients’ best interests that such oversight is in place regardless of who is bestowed this responsibility, and it is imperative that the chosen agency be agile, innovative, and sufficiently informed to respond to privacy and technology concerns, protecting patients first but doing so without stifling innovation or implementation in this space. Ultimately, policy makers need to act to put meaningful patient protections in place, because once this information is in the hands of public corporations or private entities, clawing it back will be near impossible.


European regulations and laws


Patient privacy in the EU is governed by a number of laws and regulations, which establish strict rules on how personal data must be collected, used, and protected. Individuals in the EU have a legal right to know what personal data are being collected about them, the right to have those data erased or corrected, and the right to object to their use, including a “right to be forgotten.” Many of these laws and regulations are general privacy laws that apply to healthcare as they do to all sectors, and so in many respects they are less prone to loopholes and strategies to sidestep regulation than if they applied only to the provision of healthcare that is in some way reimbursed by the government.


GDPR—general data protection regulation


Through legislation and other efforts, the EU aims to address current and impending challenges in the practice and adoption of telehealth, as there are numerous legal, regulatory, and interoperability challenges. A common eHealth framework has been developed (the eHealth EU Interoperability Framework). The EU is also aiming to prevent incidents of data misuse by commercial entities, including those based outside the EU, and to strike a balance between protecting citizens and facilitating medical innovation.


GDPR was introduced in 2016 and became applicable across all member states of the EU on May 25, 2018. GDPR is widely considered to have a broader scope of effect than HIPAA does in the United States of America. The objectives of GDPR are:



  • 1. to facilitate the free movement of personal data, including cross-border exchange, and

  • 2. to protect the fundamental rights and freedoms of natural persons with regard to privacy and the protection of personal data (Article 1 of GDPR).



There is, though, a fundamental challenge to the protection of personal privacy in eHealth when digital devices are used: it is often possible to reidentify a person even from “anonymized” data by combining metadata that together form a unique digital signature of that user.
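A toy illustration of this reidentification risk, using entirely hypothetical data and the pandas library: an "anonymized" health extract that retains only postcode, birth date, and sex can be joined against a public or purchasable register carrying the same three fields, re-attaching names to diagnoses.

```python
# Hypothetical illustration of reidentification via quasi-identifiers (not real data).
import pandas as pd

# "Anonymized" telehealth extract: direct identifiers removed, metadata retained.
health = pd.DataFrame([
    {"postcode": "D04", "birth_date": "1980-03-14", "sex": "F", "diagnosis": "type 2 diabetes"},
    {"postcode": "T12", "birth_date": "1975-11-02", "sex": "M", "diagnosis": "depression"},
])

# Public register (e.g., marketing or electoral-style data) with the same metadata.
register = pd.DataFrame([
    {"name": "A. Example", "postcode": "D04", "birth_date": "1980-03-14", "sex": "F"},
    {"name": "B. Sample",  "postcode": "T12", "birth_date": "1975-11-02", "sex": "M"},
])

# Joining on the shared metadata re-attaches identities to the "anonymized" records.
reidentified = health.merge(register, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```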


Data protection directive—passed October 24, 1995


The EU Data Protection Directive is a set of rules that member states of the EU had to implement in national law in order to protect the privacy of personal data. The directive was passed in 1995 and was superseded by the GDPR in 2018. It requires member states to provide data subjects with the right to access their personal data and the right to rectify inaccurate data. It also establishes rules for data processors, such as requiring them to implement security measures to protect data and to process data only for specified purposes. Finally, the directive creates a mechanism for subjects to file complaints with national data protection authorities. The EU Data Protection Directive sets out strict rules about how personal data must be collected, used, and protected.


Personal data must be:




  • Legitimate and necessary for the purposes for which it is being used.



  • Accurately and carefully collected.



  • Used only for the purposes for which it was collected and not used for any other purpose.



  • Erased or destroyed if it is no longer needed and is not being used for any other purpose.



  • Protected against unauthorized or accidental access, destruction, alteration, or unauthorized use.



However, this directive allows for the processing of health data without explicit consent. For example, article 8(3) of the Data Protection Directive permits processing by a health professional subject to confidentiality rules for the purposes of preventive medicine, medical diagnosis, the provision of care or treatment, or the management of healthcare services. The other main weaknesses of the EU Data Protection Directive are that it does not provide for a private right of action, it does not apply to non-EU data controllers, and it does not have a mechanism for enforcing data protection rights.


EU e-privacy directive


The EU e-privacy directive (Directive 2002/58/EC, the directive on privacy and electronic communications) was adopted in 2002 and amended in 2009. It requires websites to obtain consent from users before storing or accessing information, such as cookies, on their devices and before collecting or using their personal data. The directive applies to all electronic communications, including email, instant messaging, telephone calls, and internet use. It requires electronic communications service providers to take measures to protect the confidentiality of communications and gives users the right to access their communications data. It also requires websites to provide users with clear and concise information about their data collection and use practices.
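As a rough sketch of how such a consent requirement surfaces in practice, assuming a Python/Flask web service (the cookie names and consent flow here are hypothetical, not prescribed by the directive): non-essential identifiers are only set after the user has recorded consent.

```python
# Hypothetical sketch: set a non-essential (analytics) cookie only after consent.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("telehealth portal")
    # Strictly necessary cookies (e.g., session) may be set without consent;
    # anything used for analytics or tracking waits for an explicit opt-in.
    if request.cookies.get("consent") == "granted" and not request.cookies.get("analytics_id"):
        resp.set_cookie("analytics_id", uuid.uuid4().hex, max_age=60 * 60 * 24 * 30)
    return resp

@app.route("/consent", methods=["POST"])
def consent():
    resp = make_response("consent recorded")
    resp.set_cookie("consent", "granted", max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```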


The directive covers a wide range of topics, including the use of cookies, spam, and unsolicited communications. The directive is a strong piece of legislation that provides robust protections for the privacy of electronic communications. It is also flexible, allowing member states to tailor the implementation of the directive to their own national laws and regulations.


For all of its positives, the directive also has a number of limitations:




  • It is not well-defined, and there is significant confusion over what it actually covers.



  • It is not enforced consistently, and there are significant loopholes that allow companies to avoid compliance.



  • It does not cover all types of data, including some types that are particularly sensitive (e.g., financial data).



  • It does not provide for adequate penalties for companies that violate the directive.



  • It does not allow individuals to effectively opt out of having their data collected and used.



The directive has also been criticized for its vague and imprecise language and for its potential to conflict with other EU directives, such as the e-commerce directive.


Conclusions


Trust is at the center of the patient–doctor relationship. Lapses in patient privacy erode this trust and irreversibly damage the relationship. To establish greater trust in telehealth and facilitate more meaningful adoption of the promises of telemedicine, health information security that ensures privacy must be prioritized. Stakeholders should seek to implement a comprehensive framework that accommodates the detail and flexibility needed for the evolving complexities of providing telehealth and e-health services. Securing and ensuring the privacy of health data requires as much patient education as physician education. While statutory direction already exists, it is all too quickly made redundant by the successes of e-health innovators and visionaries. It is difficult to differentiate between well-intentioned innovators and predatory profiteers, but patient privacy is too precious to risk, even for the potential rewards of a fully realized e-health system.


