Where In The Cloud Is Your PHI?

Storing Protected Health Information (“PHI”) in the cloud can be very useful for covered entities and business associates.  As we know, HIPAA permits storing PHI in the cloud if the cloud storage provider executes a Business Associate Agreement.  However, do you know exactly where the cloud provider stores that PHI?  In some instances the cloud storage vendor might store, back up, or process the PHI in an overseas location.  How do you protect the PHI, and yourself, in such a situation?

HIPAA does not specifically forbid storing PHI in an offshore location (though some states do forbid storing Medicaid data offshore), but doing so creates challenges.  First, you must determine where your cloud vendors will be storing the information, and whether that location is offshore.  If it is, you need to determine the specific location and what local rules might apply to the PHI.  Local laws in the international jurisdiction where the PHI is stored might allow access to the data that would violate HIPAA.  The duty is on you, as you contract with the cloud provider, to determine whether the security efforts are sufficient and whether the location of the data poses any risks.  Furthermore, offshore cloud providers might not be bound by HIPAA, but you – presumably operating in the United States – are.  If your international cloud provider is at fault for a breach but cannot be held accountable, you might be found liable even if the only action you took was selecting the wrong vendor.

Without question, storing PHI offshore brings unique challenges.  Whether they are worth it can only be answered by you.  However, if you are considering a vendor that will store PHI internationally, be sure to conduct a risk assessment to ensure you are not putting PHI at increased or unnecessary risk.

Emergency Preparedness Best Practices

In the wake of two damaging hurricanes, the topic of emergency preparedness is top of mind for many Covered Entities and Business Associates.  The goal of emergency preparedness is to ensure electronic protected health information (ePHI) is secure, and that the confidentiality, integrity, and availability of ePHI are not jeopardized both during and after an emergency.

Effective emergency preparedness consists of having a contingency plan that includes a data backup plan, a disaster recovery plan, and an emergency mode operation plan.  The data backup plan ensures that you have accurate backups of the ePHI, while the disaster recovery plan describes how you recover from those backups.  The emergency mode operation plan outlines how ePHI will remain secured during the course of the emergency.  While not specifically required, your organization should consider testing your contingency plan and revising it as necessary.

When putting your plan together, you can follow a seven-step process:

  1. Assess your situation;

  2. Identify risks;

  3. Formulate an action plan;

  4. Decide if and when to activate your plan;

  5. Communicate the plan;

  6. Test the plan; and

  7. Treat the plan as an evolving process.

While this process is linear, these steps can take considerable time to finalize.  If you don’t have a contingency plan in place now, you should begin the process to develop and implement one as soon as possible.

Can I Send Patient Information To…?

One of the most common questions I hear is, “Can I send patient information to…” with a plethora of situations and organizations completing that sentence.  Not only is this one of the most common questions, but it is also one of the most fundamental from a patient privacy perspective. I encourage everyone to analyze their unique environment and create a reference guide that captures typical disclosures for your organization.  Include when disclosure is appropriate, inappropriate, and when the Privacy Officer should be consulted.

The reference guide should be developed by analyzing the three types of disclosures of Protected Health Information (“PHI”):

  • Required Disclosures:  The instances in which the PHI must be disclosed include:

    • To individuals when requested for access or an accounting of disclosures; and

    • To the Secretary of U.S. Department of Health and Human Services when conducting a compliance investigation, review, or enforcement action.

  • Permitted Disclosures:  These are situations in which the PHI may be disclosed without the patient’s consent, but you are under no obligation to disclose at all.  Permitted disclosures include:

    • For treatment, payment, and healthcare operations to another covered entity or a business associate with whom you have an executed business associate agreement;

    • With the opportunity to agree or object:  Examples include inclusion in a facility directory, and to family, friends, or others involved in the patient’s care or payment for care;

    • Use or disclosure incidental to a disclosure that is otherwise permitted;

    • Public interest and benefit activities, including when required by statute, regulation, or court order, for public health activities, for victims of abuse, neglect, or domestic violence, for health oversight activities, for law enforcement purposes, and several others; and

    • In a limited data set, which is a data set that has specified direct identifiers removed, for research, operations, or public health purposes.

  • Authorized Disclosures:  Authorized disclosures include any disclosure that is not required or permitted.  These disclosures can only be made pursuant to a patient’s authorization.  Patients have wide latitude in deciding which disclosures to authorize, and duly authorized disclosures must be made unless doing so would bring harm to the patient.  The authorization must meet specific requirements; it must:

    • Be in plain language;

    • Be specific about the information to be disclosed;

    • Identify who is disclosing and receiving;

    • Include a time or event for expiration; and

    • Permit the authorization to be revoked in writing.

While the healthcare industry becomes more complex by the day, all disclosures will still fit into one of these three categories.  If a disclosure is not permitted or required, it must be authorized by the patient.  By placing typical disclosures within your organization into one of these three categories, you will be able to answer the question of whether you may send the patient information or not.  For any atypical disclosures that do not fit neatly into one of these groups, consult your Privacy Officer for the final determination.
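A reference guide like the one described above can be sketched as a simple lookup table.  The scenario names and the `classify()` helper below are hypothetical illustrations for building such a guide, not an official taxonomy:

```python
# Hypothetical sketch of a disclosure reference guide: typical disclosure
# scenarios mapped to the three HIPAA categories.  Scenario names are
# illustrative examples, not an official list.

REQUIRED = "required"
PERMITTED = "permitted"
AUTHORIZED = "authorized"

REFERENCE_GUIDE = {
    "patient_access_request": REQUIRED,          # individual requests access
    "hhs_compliance_investigation": REQUIRED,    # disclosure to the Secretary
    "treatment_payment_operations": PERMITTED,   # TPO with a covered entity/BA
    "facility_directory": PERMITTED,             # opportunity to agree/object
    "limited_data_set_research": PERMITTED,      # direct identifiers removed
    "marketing_to_third_party": AUTHORIZED,      # neither required nor permitted
}

def classify(scenario: str) -> str:
    """Return the disclosure category, or flag it for the Privacy Officer."""
    return REFERENCE_GUIDE.get(scenario, "consult_privacy_officer")
```

Anything not captured in the guide falls through to the Privacy Officer, mirroring the advice above for atypical disclosures.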

What To Do About Insecure Business Associates

As a Covered Entity or a Business Associate, you know you need Business Associate Agreements with entities that perform a service or function for you that requires access to Protected Health Information (“PHI”) to carry out (these are Business Associates or subcontractors).  A required element of a Business Associate Agreement is that you will not transfer PHI to entities you know are not properly securing it.  So what should be done when you discover a Business Associate or subcontractor that is not adequately securing PHI?

The first step is to see whether the issue can be resolved, or ‘cured.’  Send the Business Associate written communication putting them on notice that they have a specific amount of time (e.g., 30 days) to correct the issue and secure the PHI; otherwise, the contract will terminate and the exchange will end.  The best-case scenario is that they cure the issue within the specified time.  If the issue is not corrected in time, then the contract can terminate and the exchange of PHI should end.  The only exception is if termination is not feasible, for instance because there are no other viable options for the service, in which case you must notify the HHS Office for Civil Rights of the problem.

As the exchange of PHI becomes more prevalent and complex, the chain of trust on which the PHI is exchanged becomes increasingly important.  If one link within that chain is weak, it must be strengthened or removed.

Breach Notification Requirements

Most people in the industry believe HIPAA requires notification of a breach to the federal government and affected individuals within 60 days of discovery (unless preempted by state requirements). However, HIPAA’s breach notification timeline is actually “without unreasonable delay,” but not longer than 60 days after the breach was discovered.

Therefore, 60 days is the absolute maximum amount of time permitted, but a shorter timeframe might be reasonable, and thus, ‘required.’

This can be a challenging requirement to comply with, as what is actually required is highly fact-specific.  There is little – if any – formal guidance on what type of delay is reasonable and what is unreasonable.  The best tactic is not to focus on the 60-day limit, but to conduct a swift and efficient incident investigation and breach determination.  Doing so within the 60-day window, and notifying the respective regulators and affected individuals within that timeframe, eliminates any question of whether the notification was reasonable.  The worst case is having been able to effectuate notice sooner, but delaying it until closer to the 60-day ceiling.  That would seemingly be an unreasonable delay, and could result in increased penalties.
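The outer deadline itself is simple to compute.  A minimal sketch, assuming calendar days from the date of discovery (state law or the facts of the incident may demand a much shorter window):

```python
from datetime import date, timedelta

# Sketch of the HIPAA breach-notification outer deadline: notice is due
# "without unreasonable delay" and no later than 60 days after discovery.
# Assumes calendar days; a shorter timeframe may still be 'required' when
# notice could reasonably have been given sooner.

MAX_DELAY = timedelta(days=60)

def notification_deadline(discovered: date) -> date:
    """Latest permissible notification date; sooner is usually reasonable."""
    return discovered + MAX_DELAY

def is_timely(discovered: date, notified: date) -> bool:
    """True if notice landed on or before the 60-day ceiling."""
    return notified <= notification_deadline(discovered)
```

For example, a breach discovered on January 1, 2024 carries an outer deadline of March 1, 2024, but notifying well before that date is what removes any question of unreasonable delay.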

Breach notification is never a pleasant situation, especially not for those potentially affected.  HIPAA is drafted to provide timely notification to those affected, while still allowing flexibility to conduct a thorough and proper investigation.  While HIPAA may allow up to 60 days for notification, a shorter timeframe is often reasonable and most appropriate.

Take The Gray Out Of HIPAA – Risk Analysis Will Help

Most people with even a casual understanding of HIPAA realize there is a great deal of gray area involved in the implementation of the Rule. This is another way of saying lawmakers intended to provide the regulators with flexibility in HIPAA enforcement. After all, this is a Rule that applies to everything from single doctor practices to multiple-site hospital systems. It is this flexibility – specifically regarding “addressable specifications” of the Security Rule – that makes HIPAA such an implementation nightmare. However, navigating the gray areas, and determining what is “reasonable and appropriate” for your organization is not as challenging as it may seem.

First, you must establish the criteria for determining whether a safeguard is “reasonable and appropriate.”  HIPAA provides the following factors:

  • The size, complexity and capabilities of the organization;

  • The technical infrastructure, hardware, and software capabilities;

  • The costs of the safeguards being considered; and

  • The probability and criticality of potential risks to PHI.

Once the criteria are established, the method of analysis must be determined.  The Rule provides the answer to that as well: a security risk analysis.  This is a systematic approach to identifying organizational risks and vulnerabilities and determining their likelihood.  Many risk analysis tools are available on the market; HHS even provides one free of charge.  The two most important things to consider when completing a risk analysis are 1) ensure it covers your entire organization, and 2) ensure it is well documented.

Once you are equipped with the information from the risk analysis, you will understand the scope of your risks.  Based on your organization’s size, complexity, technical capabilities, and associated costs, you will then be able to clearly determine which safeguards are required.
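The last factor – the probability and criticality of potential risks – is often handled with a simple scoring exercise.  The sketch below scores each risk as likelihood times impact on a 1–5 scale, a common convention rather than anything HIPAA prescribes; the risk names are hypothetical examples:

```python
# Illustrative risk-scoring sketch for a security risk analysis.
# Each identified risk is rated for likelihood and impact (each 1-5)
# and scored as likelihood * impact -- a common convention, not a
# HIPAA requirement.  Risk names are hypothetical.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact (each 1-5) into a single score."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def prioritize(risks: dict) -> list:
    """Sort (name, score) pairs from highest to lowest risk score."""
    scored = [(name, risk_score(l, i)) for name, (l, i) in risks.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example inventory: (likelihood, impact) per identified risk.
risks = {
    "lost_unencrypted_laptop": (3, 5),
    "phishing_credential_theft": (4, 4),
    "server_room_flood": (1, 4),
}
```

Documenting the scores and the reasoning behind them satisfies the two considerations above: the analysis covers the whole organization, and it leaves a written record.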