There’s an awkward truth at the heart of information technology that has been ignored for more than 50 years. Like it or not, GDPR is going to drag it out into the sunlight.
Whenever an organisation puts up the evergreen request for a username and password to authenticate someone, it simply cannot tell from the information that comes back whether it was entered by the true user or by an impostor. In short, a password is the same irrespective of who entered it.
Sure, when a password is wrong, the sign-in attempt can be safely rejected. But when it is correct, what is the probability that it’s the true user? Systems designers assume that it’s 100%. But let’s say the password was written on a sticky note and stuck under the keyboard. What’s the probability now that it was the true user? Is it still 100%, or is it 0%? Perhaps it’s somewhere in between, but what if the password is used to “protect” other accounts on other systems? Or is up for sale on the dark web? Or has been shared with colleagues?
A password is supposed to be known only to the real user, but this glib assumption is a busted flush. The reality is that wherever access to any online resource is controlled with a password, if you can know it, then so can an impostor. All such knowledge is open to being shoulder-surfed, discovered, leaked, hacked, intercepted and (ahem!) guessed. And, sometimes, it is wilfully shared.
To say that today’s identity methods suffer from a lack of precision is an understatement. Why do organisations persist in asking their users to do something that an impostor can do also? If organisations continue using passwords instead of gathering evidence that can distinguish between users and impostors, then any hope for data protection (in a GDPR context or otherwise) is a mirage.
Jeremy Newman is Founder and Executive Director of ShowUp (www.showup.global)
Jeremy @ Newmodel Identity
This is why an American company I’m an associate of is using physical tokens and cryptographic measures to uniquely define the user who is actually requesting access.
And this happens even before the portal of the organisation that knows this user is opened for connection, legitimately or otherwise.
Okay, we still need to make sure the token itself can be used only by its rightful owner, and only that person, but that’s a less difficult problem.
Another side-effect of this token approach is that the token and its authentication mechanism are tested, and therefore trusted, throughout the entire session: token removed, session gone.
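The “token removed, session gone” behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not the company’s actual implementation: `DummyToken` and `TokenBoundSession` are invented names, and a real hardware token would sign challenges in tamper-resistant silicon rather than with a software HMAC.

```python
import hashlib
import hmac


class DummyToken:
    """Stand-in for a hardware token: signs challenges while 'inserted'."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self.inserted = True  # simulates physical presence

    def is_present(self) -> bool:
        return self.inserted

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()


class TokenBoundSession:
    """A session that re-tests the token on every request, so removing
    the token kills the session immediately and permanently."""

    def __init__(self, token: DummyToken):
        self.token = token
        self.active = True

    def request(self, challenge: bytes):
        # Re-authenticate on each request; absence ends the session for good.
        if not self.active or not self.token.is_present():
            self.active = False
            return None
        return self.token.sign(challenge)
```

The key design point is that authentication is continuous rather than a one-off event at login: once the token disappears, even re-inserting it does not revive the old session.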
The technology is already 11 years old, but only now does it seem we may be waking up to its potential and the need for such a solution.
If you are interested, and certainly if you are able to host a pilot project for at least 250,000 users, then by all means contact me directly.
There are three fundamental tenets of security – be somewhere, know something, and have something. Any system that uses only one of these is always weak; you need to use at least two. This is security 101, so I guess they aren’t teaching this anymore?
Two-factor authentication is secure(ish) and commonplace, so there is an answer.