Reminds me of a legitimate security concern someone was talking about in a medical system a while back. The gist was: when a patient was given medication, if the system thought it looked fishy for any reason (the dosage seemed wrong, the patient was allergic to that kind of medicine, the medication wasn’t normally used to treat the specified condition), the system would pop up a notification that required you to manually click the approve button. Sounds good, right? Nurses and doctors should have to see the potential risk, read it, and agree that yes, I do want to give this dosage and type of med?
Except the system did this for everything. Dosage slightly higher than what the system thought was correct, even if the patient needed a higher than average dose for any reason? Flagged. Any form of off-label prescribing? Flagged. Two meds have a tiny percent chance of maybe interacting in a bad way? Flagged. It got to the point where you almost couldn’t put in any medicine at all without the system flagging an error. So people started ignoring the flags. Long story short, a patient almost died because someone accidentally logged too high a dose of meds, and ignored the popup telling them they’d done so, because they were so used to having to ignore it.
A similar issue happened with passwords. The system required people to change passwords every month or so, except a large percentage of the staff needed to know the existing password to access the computer system to do their job, so every monitor in the building had a post-it note on it with the system password. Which completely defeated the point of a password! A system that is too inconvenient for people to maintain security on is an insecure system.
Remember fellas. Your security protocols need to be convenient enough that your staff don’t circumvent ’em.