The Australian Government plans to revamp the troubled PCEHR e-health records scheme to change patient participation from the current opt-in model to opt-out.
In line with the 2013 Royle Review, patient data from hospitals, general practice, pathology and pharmacy will be added by default to a central longitudinal health record, unless patients take steps (yet to be specified) to disable sharing.
The main reason for switching the consent model is simply to increase the take-up rate. But it's a much bigger change than many seem to realise.
The Government is asking the community to trust it to hold essentially all medical records. Are the PCEHR's security and privacy safeguards up to scratch to take on this grave responsibility?
I argue the answer is no, on two grounds.
Firstly there is the practical matter of PCEHR's security performance to date. It's not good, based on publicly available information.
On multiple occasions, prescription details have been uploaded from community pharmacy to the wrong patient's records.
Several excuses have been made for these errors, with blame sheeted home to the pharmacies. But from a systems perspective -- and health care is all about systems -- you cannot pass the buck like that.
Pharmacists are using a PCEHR system that was purportedly designed for them. And it was subject to system-wide threat and risk assessments that informed the architecture and design of not just the electronic records system but also the patient and healthcare provider identification modules. How can it be that the PCEHR allows such basic errors to occur?
Secondly and really fundamentally, you simply cannot invert the consent model as if it's a switch in the software.
The privacy approach is deep in the DNA of the system. Not only must PCEHR security be demonstrably better than experience suggests, but it must be properly built in, not retrofitted.
During analysis and design, threat and risk assessments (TRAs) and privacy impact assessments (PIAs) are undertaken, to identify things that can go wrong, and to specify security and privacy controls. These controls generally comprise a mix of technology, policy and process mechanisms.
For example, if there is a risk of patient data being sent to the wrong person or system, that risk can be mitigated a number of ways, including authentication, user interface design, encryption, contracts (that obligate receivers to act responsibly), and provider and patient information.
The latter are important because, as we all should know, there is no such thing as perfect security. Mistakes are bound to happen.
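To make the idea of layered controls concrete, here is a minimal sketch of how a receiving system might guard against the "wrong patient" error described above. It is purely illustrative: the function names, fields and checksum are my own assumptions, not the actual PCEHR interfaces. It combines a checksum on the patient identifier (the Luhn algorithm, widely used to catch mistyped identifiers) with a second demographic match, so that a single transcription error cannot silently misfile a document.

```python
# Hypothetical sketch of defence-in-depth checks before accepting an upload.
# All names and fields here are illustrative, not real PCEHR interfaces.

def luhn_valid(number: str) -> bool:
    """Luhn checksum: catches most single-digit and transposition errors."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def accept_upload(record_id: str, record_dob: str,
                  upload_id: str, upload_dob: str) -> bool:
    # First reject malformed or mistyped identifiers outright.
    if not (upload_id.isdigit() and luhn_valid(upload_id)):
        return False
    # Then require the identifier AND a second demographic field to match,
    # so one slip in a single field cannot route data to the wrong record.
    return upload_id == record_id and upload_dob == record_dob
```

The point is not this particular code but the principle: no single check is perfect, so the design stacks independent safeguards, and the consent model is one more layer in that stack.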
One of the most fundamental privacy controls is participation. Individuals usually have the ultimate option of staying away from an information system if they (or their advocates) are not satisfied with the security and privacy arrangements.
Now, these are complex matters to evaluate, and it's always best to assume that patients do not in fact have a complete understanding of the intricacies, the pros and cons, and the net risks. People need time and resources to come to grips with e-health records, so a default opt-in affords them that breathing space.
And it errs on the side of caution, by requiring a conscious decision to participate. In stark contrast, a default opt-out policy embodies a position that the scheme operator believes it knows best, and is prepared to make the decision to participate on behalf of all individuals.
Such a position strikes many as beyond the pale, just on principle. But if opt-out is the adopted policy position, then clearly it has to be based on a risk assessment where the pros indisputably outweigh the cons.
And this is where making a late switch to opt-out is unconscionable.
You see, in an opt-in system, during analysis and design, whenever a risk is identified that cannot be managed down to negligible levels by way of technology and process, the ultimate safety net is that people don't need to use the PCEHR.
Falling back on opt-in participation is a formal risk management measure (a part of the risk manager's toolkit). In an opt-in system, patients sign an agreement in which they accept some risk. And the whole security design is predicated on that.
Look at the most recent PIA done on the PCEHR; section 9.1.6 "Proposed solutions - legislation" makes it clear that opt-in participation is core to the existing architecture.
The PIA makes a "critical legislative recommendation" including "a number of measures to confirm and support the 'opt in' nature of the PCEHR for consumers [and] preventing any extension of the scope of the system, or any change to the 'opt in' nature of the PCEHR".
The fact is that if the Government changes the PCEHR from opt-in to opt-out, it will invalidate the security and privacy assessments done to date. The PIAs and TRAs will have to be repeated, and the project must be prepared for major redesign.
The Royle Review did in fact recommend "a technical assessment and change management plan for an opt-out model ..." but I am not aware that such a review has taken place.
To look at the seriousness of this another way, think about "privacy by design", the philosophy that's being steadily adopted across government. In 2014 NEHTA wrote in a submission to the Australian Privacy Commissioner:
"The principle that entities should employ “privacy by design” by building privacy into their processes, systems, products and initiatives at the design stage is strongly supported by NEHTA. The early consideration of privacy in any endeavour ensures that the end product is not only compliant but meets the expectations of stakeholders."
One of the tenets of privacy by design is that you cannot bolt on privacy after a design is done. Privacy must be designed into the fabric of any system from the outset.
If the Government were to ignore this core element of its own privacy-by-design credo, and not revisit the architecture of the PCEHR -- which was never designed for opt-out -- it would be an egregious breach of the public's trust in the healthcare system.