Designing a User-Experience-First, Privacy-Respectful, High-Security Mutual-Multifactor Authentication Solution
Conference paper. First Online: 24 January 2019. Part of the Communications in Computer and Information Science book series (CCIS, volume 969).

Abstract
The rush for improved security, particularly in banking, presents a frightening erosion of privacy. As fraud and theft rise, anti-fraud techniques subject user privacy, identity, and activity to ever-increasing risks. Techniques like behavioral analytics, biometric data exchange, persistent device identifiers, GPS/geo-fencing, knowledge-based authentication, online user-activity tracking, social mapping, and browser fingerprinting secretly share, profile, and feed sensitive user data into backend anti-fraud systems. This is usually invisible, and usually without user consent or awareness. It is also, unfortunately, necessary: partly because contemporary authentication is increasingly ineffective against modern attacks, but mostly because “usable” is too often confused with “invisible”. In the mind of a CISO, “stronger authentication” means a slower, less convenient, and more complicated experience for the user. Security and privacy tend to lose most battles against usability, particularly when friction impacts customer adoption or increases support costs.
Keywords: Multifactor authentication · Usability · Security & privacy · MitM attack

Acknowledgments
We thank the reviewers and the numerous industry security experts who freely and eagerly gave up their time to review our solution and to hunt for possible oversights in it.
A Problems/Issues with Current 2FA Tech
This appendix supplements Sect.
Most 2FA technology is based on one-time passwords (OTP). 2FA has many shortcomings, and it is important to consider them when designing or evaluating improved authentication.
A-1 Categories and Vulnerabilities of 2FA
This appendix groups the different kinds of 2FA available into ten categories, and outlines the drawbacks and vulnerabilities of each. To avoid repetition, Subsect. A-1.11 afterwards addresses general failures that all ten 2FA categories suffer.

A-1.1 OTP Hardware
Hardware-based or keyring-style OTP tokens are the most well-known 2FA category. They generate new random codes every minute or so based on a per-token ID, the time, and seed or key material programmed by the vendor. Codes are typically valid for double or more the length of time they’re displayed (to accommodate clock skew and slow typists). When invented in 1984 (8 years before the invention of the world-wide web), time-limited OTP passcodes had a better chance of improving security, because networked machines and real-time attacks were rare (Fig.
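The generation scheme described above matches the standard HOTP/TOTP construction (RFC 4226/RFC 6238) that most code-generating tokens follow; a minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # low nibble selects a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6, at=None) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from wall-clock time."""
    t = int((time.time() if at is None else at) // step)
    return hotp(key, t, digits)

# RFC test vectors for the shared secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))       # 755224
print(totp(b"12345678901234567890", at=59))   # 287082 (time-step 1)
```

Because both sides derive the code purely from the shared seed and the time, anyone holding the seed (vendor, server database, or interceptor) can emulate the token indefinitely.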
Security vulnerabilities of hardware OTP include:
Man-in-the-Middle (MitM) attacks; intermediary can steal OTP.
No channel security; there is no association between the OTP code and a secure channel, leaving the protection of codes against theft out-of-scope: it’s the website’s job to use TLS with HSTS and HPKP etc., and the user’s job not to get tricked or downgraded.
Spoofing; there is no binding of tokens to resources. Imposters can capture codes, and have several minutes to use them.
Single channel transport; techniques which steal passwords, like keyloggers, phishing, malware, and social engineering of the user, equally succeed in stealing OTP codes.
No local protection; codes are typically displayed on a screen which has no protection against unauthorized viewing.
No utility for signing transactions; OTP codes bear no relation to user activity so are inappropriate to confirm user instructions.
No malware protection; because OTP cannot sign transactions, malware can inject/modify instructions, which get innocently permitted by users unaware that the OTP code is being hijacked.
Very low resistance to misuse by friends, family, or peers.
Intentional fraud: Sometimes it’s not the bad guys defrauding a user, but bad users defrauding (for example) their bank. Fraud-free guarantees are often abused by unscrupulous customers.
No non-repudiation; OTP does not prove user intent.
No PIN protection; most OTP tokens have no keypad.
Lacking mutual authentication; OTP code-use is one-way only; no mechanism to verify authenticity of the website exists.
Low Entropy; only short numeric codes are supported.
Serverside OTP support typically requires installation of hardware and drivers, which carry their own risks of compromise. The hack against the US Office of Personnel Management was ironically facilitated through a privilege-escalation attack against their OTP driver software.
Seeds and keys protection; OTP tokens are based on a master secret which, when stolen, compromises all user OTP tokens at once. This infamously occurred in 2011, when a phishing email stole keys from an OTP vendor; those keys were subsequently used to facilitate break-ins at military contractor organizations. Up to 40 million compromised tokens were subsequently replaced.
Aging cryptography; most OTP is based on legacy symmetric primitives (e.g. HMAC-SHA1), and may be weakened by future advances in cryptanalysis and quantum computing.
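The low-entropy and validity-window items above can be quantified with a back-of-envelope sketch, assuming uniformly random codes and a server that accepts several adjacent time-steps to tolerate clock skew:

```python
def blind_guess_probability(digits: int, attempts: int, window: int = 1) -> float:
    """Upper bound on a remote attacker's chance of hitting any currently
    acceptable code, when `window` distinct codes are valid at once
    (the current time-step plus a skew allowance)."""
    return min(1.0, attempts * window / 10 ** digits)

# 6-digit codes, 3 accepted time-steps, 10 guesses before lockout:
p = blind_guess_probability(6, 10, 3)   # 3-in-100,000 per guessing burst
```

Without per-account rate limiting, `attempts` grows without bound, which is why short numeric codes depend entirely on serverside lockout policy.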
Drawbacks of OTP hardware include:
Multiple usability issues: they interrupt and dramatically slow down user authentications. They have no backlight, making them sometimes difficult to read. They are bulky and require physical carriage. Usability is so poor that banking customers have switched banks to avoid being forced to use OTP hardware [
They do not scale: users require a new physical OTP token for every website login requiring protection. At the time of writing, this author (a long-time internet user) has 2838 unique accounts across 2277 websites; if all were protected by OTP token, that would cost $100,000 in tokens, weigh 93 lbs (42 kg), take half an hour to locate the correct one for each login, prevent logins when away from the token-room, and require 56 replacement tokens each week as batteries go flat, taking 40 h to reenroll and costing $20,000 p.a. to buy the replacements.
They fail, expire, and go flat: OTP tokens typically last 5 years. Some policies expire them sooner (prior to battery exhaustion); some fail through clock-sync, battery, or environmental issues.
Prevent Fast and Automatic logins; OTPs require manual code reading and typing. They cannot support automatic/rapid use.
Slow setup; OTPs require shipping, and once received, usually require ~30 min setup and enrollment procedures.
3rd party trust; OTP keys are typically made at and kept with the token vendor. Any theft or misuse of these keys allows an OTP token to be emulated by an adversary; see (o) above.
Limited offline utility; OTP tokens are rarely used to authenticate customers over the phone or in person.
Single token only; Most OTP client implementations allow for just one user token; there is no provision for users needing more (e.g. one token at home and a second at work).
No self-service; OTP tokens are hardware devices, which require costly deployment/handling that users cannot do themselves.
High costs; OTP devices themselves are expensive, the serverside hardware and licenses are likewise expensive, and the support costs and periodic replacements are also expensive.
A-1.2 OTP with Transaction-Signing (OTP+TV)
Some OTP hardware includes a keypad, usable for Transaction Verification (TV). These are typically PIN-protected and also capable of providing plain OTP codes for authentication. Signing consists of entering numbers (e.g. PIN, source, destination, and $ amount of financial transfers) to produce a verification code based on all the information keyed in, which the user then types back into the website (Fig.
Security vulnerabilities of hardware OTP+TV include:
When used in OTP-only mode (as opposed to TV mode), these suffer all the same problems as plain OTP, except those mitigated by the PIN-pad protection.
Rogue transactions via MitM, spoofing, and malware: in a banking context, the limited, prompt-less OTP+TV display makes it hard for users to understand the meaning of the numbers they key in, and to check that they are consistent across three different places: (1) the original transaction they submitted (e.g. through their PC); (2) the on-PC-screen prompts telling them what to type on their OTP+TV keypad; and (3) the numbers they manually enter on it.
An adversary with privilege to modify user screens can substitute the intended receiving account destination with their own, and can adjust the transaction amount almost imperceptibly. For example: to transfer $100, a user keys in 00010000. If malware told them 00100000 instead, it’s unlikely they’d notice. Similarly, recipient partial-account numbers might be subtly or completely adjusted, and/or the bank to which the payment is directed (not being part of the signature at all) is free to be modified by the attacker.
Partial signatures only: no facility exists to sign the actual submitted transaction (which would include recipient names, routing numbers, banks, dates, other instructions, and notes); signatures are limited only to the least-significant digits of recipient account identifiers; the rest is at risk to malware.
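The partial-signature weakness is easy to see in a toy model (the names and MAC construction here are illustrative, not any vendor’s actual scheme): because only the account-number tail and amount are keyed in, two very different transactions yield the same verification code:

```python
import hashlib
import hmac

SEED = b"per-token-seed"   # hypothetical per-token secret

def tv_code(keyed_digits: str) -> str:
    """Toy verification code computed over ONLY the digits the user keys in."""
    mac = hmac.new(SEED, keyed_digits.encode(), hashlib.sha256).hexdigest()
    return str(int(mac, 16) % 10 ** 6).zfill(6)

def keyed_subset(tx: dict) -> str:
    # Only the last 4 digits of the payee account and the amount reach the
    # token; the bank, payee name, and full account number are never signed.
    return tx["account"][-4:] + tx["amount"]

tx_user    = {"bank": "First National", "account": "11116789", "amount": "00010000"}
tx_malware = {"bank": "Offshore Ltd",   "account": "99996789", "amount": "00010000"}

# The keyed digits -- and hence the signature -- are identical for both.
assert tv_code(keyed_subset(tx_user)) == tv_code(keyed_subset(tx_malware))
```

An attacker who controls an account ending in the same four digits (or who can modify the unsigned bank field) thus obtains a perfectly valid user-generated code for their own transaction.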
Drawbacks of OTP+TV hardware include:
These tokens also suffer all the drawbacks of OTP tokens discussed in Sect. A-1.1.
Usability; entering every transaction twice on the small and low-quality keypad becomes a major chore for users. Many users, including this author, dread using these exhausting devices so fiercely that avoiding transactions as much as possible becomes common practice.
A-1.3 Mobile App OTP
Some mobile apps replicate OTP hardware; thus they suffer most of the vulnerabilities and drawbacks discussed in Sect. A-1.1, in addition to more discussed here (Fig.
Security vulnerabilities include:
Cloning; Mobile-OTP keys usually live without protection on the user’s mobile device.
No key encryption; most Mobile-OTP has no PIN or password protecting OTP codes. While phones themselves are usually locked, 31% of us still suffer a “snoop attack” against our phones every year anyhow [
Enrollment attacks; enrolling a Mobile-OTP requires sending the key material to the device; this is usually done via QR code or a typeable text string. Intercepting these codes allows adversaries to generate future OTP codes at will.
Serverside break-in; The webserver must store the per-user OTP key in their database; this is usually kept in the same table that usernames and passwords are in. Any webserver flaw resulting in a password breach will also result in the loss of all OTP keys as well. Such break-ins and thefts are common.
Mobile malware; In-device malware might have access to steal user keys. On “rooted” or “jailbroken” devices, and unpatched/older devices with escalation flaws, nothing protects the keys.
Cloud backup; Most mobile devices backup their storage to cloud servers, putting OTP keys at risk of serverside theft.
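The enrollment-interception risk above follows from the de-facto “otpauth” key-URI format encoded into most provisioning QR codes: the raw seed sits in the URI in plain base32, so anyone who photographs or intercepts the code can clone the token. A sketch (the account name and secret are hypothetical):

```python
import base64
from urllib.parse import parse_qs, urlparse

# Hypothetical provisioning URI of the kind encoded into an enrollment QR code.
uri = "otpauth://totp/ExampleBank:alice?secret=GEZDGNBVGY3TQOJQ&issuer=ExampleBank"

query = parse_qs(urlparse(uri).query)
seed = base64.b32decode(query["secret"][0])   # the raw key -- all a cloner needs

print(seed)   # b'1234567890'
```

Nothing in the format binds the seed to one device; whoever decodes it first (user or eavesdropper) holds an equally valid token.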
Drawbacks of Mobile-OTP include:
Usability; while Mobile-OTP enjoys the benefit of being always available to most users most of the time, it still requires the user to unlock their phone, locate the requisite app and open it, hunt through their list of OTP codes for the one relevant to their account and username, and then type their OTP code back in.
Scalability; finding the right code to use at each login is an N-squared complexity problem. Each extra login makes it slower and harder for all other logins across all accounts every time.
Compatibility; many OTP apps refuse to run on older devices “for security reasons”. Ironically, this misguided protection effort guarantees those users get no protection at all.
Mobile authentication; using Mobile-OTP to access a mobile account on the same device requires a competent user who can quickly switch between apps and remember random 8-digit codes. Millions of users, especially the elderly, young children, and others most vulnerable, will be unable to do this.
A-1.4 Modern Multifactor Mobile Apps with Signing
Newer mobile apps are significantly more advanced than the Mobile-OTP category, carrying vastly improved usability, good transaction verification (TV) and signing, and sensible protections like password or biometric key protection, thus can guard against some of the more obvious attack scenarios. Since many incorporate GPS, biometrics, device-ids and more, they are more accurately described as multifactor (MFA) than just second-factor.
Mobile phones travel almost everywhere with nearly every person who would want to have 2FA. They’re a central feature in the lives of many, who take great care to protect them. They do still get lost or stolen, but we think it’s fair to say that there is no single thing that humans put more collective effort into not losing than their phones.
With their ubiquity, sensors, power, and network connections, mobile phones are ideal authenticators.
Security vulnerabilities include:
No MitM, spoofing, or malware protection; an imposter can cause a legitimate Mobile-MFA user to authenticate the wrong person (the imposter). Some apps use a phone camera to scan onscreen codes in a partial attempt to prevent simplistic MitM, but these too fail to prevent authenticating the attacker (since the attacker is free to simply present the scannable challenge to the legitimate user).
No channel protection; no Mobile-MFA implements working mutual authentication; absent a skilled and attentive user, no protection exists to ensure the user’s connection to their webserver is uncompromised.
Cloud backup; modern Mobile-MFA is less susceptible to the insecurities of backed-up data on cloud servers, since it is expected to make use of PINs, biometrics, device-ids, and protected (non-backed-up) storage features of the modern mobile OS; however, implementations vary between vendors, and not all of them take these precautions.
Downgrade vulnerabilities; most Mobile-MFA supports insecure fallback methods, such as resorting to code-entry Mobile-OTP when the app has connectivity issues, subjecting it to the vulnerabilities and drawbacks discussed in Sect. A-1.3.
Drawbacks of Mobile-MFA include:
Usability drawbacks vary widely across Mobile-MFA vendors. Some apps auto-open using PUSH and auto-communicate codes and signatures so users don’t need to type things in. Others require users to manually open apps and find tokens.
Banned-camera policies; Mobile-MFA requiring cameras will not function in workplaces (e.g. military or secure facilities) prohibiting them or their use (especially recording screens with phones).
In-device switching; using an app or browser on the same mobile device as the Mobile-MFA requires users adept at using their mobile OS to switch back and forth between apps.
Offline usage; Mobile-MFA requires a working data (wifi or cellular) connection to function. International travelers and low-credit mobile users will find this expensive and frustrating.
SIM change; Many Mobile-MFA apps cease to function when SIM cards are changed, purportedly for “security reasons” (we assume stolen phones or hijacked apps). Since most international travelers change SIMs when abroad to keep their roaming costs low, this causes cost and usability problems.
Developer mode; again for “security reasons”, many Mobile-MFA apps refuse to open if the phone is in “development mode”. People who have “rooted” or “jailbroken” their devices are permanently blocked from using these Mobile-MFA apps.
A-1.5 SMS OTP
Mobile phone text-messages are the most widespread OTP in use, and also the least secure and the least reliable (Fig.
Security vulnerabilities are:
Number porting; Many ways exist to hijack a user’s phone number and SMS messages; this is a common and successful attack.
SS7 redirection; Cell-network protocols permit unscrupulous operators anywhere in the world to inject commands rerouting (thus intercepting) SMS, voice, and cellular data traffic for any subscriber. Public, with-permission (but without-assistance) attacks against high-profile victims have been demonstrated.
Malicious micro-cells, and radio sniffing; Software-Defined Radios (SDR) sell for under $10 on eBay, and free opensource software turns them into local (and remote) SMS sniffers.
Weak, or no, encryption; mobile network encryption is weak, taking (depending on generation) from 2 h down to less than 1 s to crack on a single PC [
]. Modified cell-traffic attacks which disable encryption entirely are relatively easy to mount, are commonly found active in cities, and proceed undetected on all but purpose-designed secure-cell handsets.
iMessage sharing; SMS-OTP messages often distribute across different accountholder devices and show up on multiple user screens at once. This further subjects SMS to theft, since intruders with access to a user’s cloud account can register their own devices on that account to receive them (Fig.
A government’s advice to citizens, urging them to disable SMS-OTP before travelling abroad.
Downgrade situations; many organizations recommend users disable their SMS-OTP when travelling; a risky decision for most users, since this is the time they will most need 2FA!
Low local protection; many handsets display messages on lock-screens, with no protection against being observed by malicious 3rd parties.
Social-engineering against 3rd parties; many customer-service workers in the communications industry can be successfully convinced, by deception or bribery, to effect a SIM porting or other adjustment delivering SMS-OTP to attackers.
Malicious replacement of SMS-OTP number at the website; Software or operators running the website can be tricked into changing the phone number to which codes get sent. Attacks involving combinations of social engineering against multiple third parties exist which provide an adversary direct access to change the SMS-OTP phone number themselves online.
Mobile malware; the iOS and Android operating systems both include a “permissions” setting which permits mobile apps to read and interfere with SMS. Malicious apps exist which forward SMS to attackers and hide them from the user’s display.
Third party trust; The SMS-OTP itself travels through many different networks before reaching the user; any breakdown of trust along the way affords malicious opportunity.
Most OTP hardware vulnerabilities also apply to SMS-OTP, including: MitM; no channel security; spoofing; single channel transport; keyloggers, phishing, malware, and social engineering; no utility for signing transactions; no malware protection (distinct from mobile malware); low resistance to misuse by friends, family, or peers; intentional fraud; no non-repudiation; and no mutual auth (full descriptions in Subsect. A-1.1).
Drawbacks of SMS OTP include:
SIM Change; SMS-OTP stops working when users change phone numbers. This is common for international travelers.
Unreliable delivery; SMS message delivery is often delayed or fails (a significant problem since OTP codes expire quickly).
No offline usage; SMS will never arrive unless a user has a valid connected and paid-up cellular account.
Poor coverage; Many places exist with no cellular coverage.
Usability: SMS-OTP dramatically slows all logins; delays can be minutes or more on poor cellular networks.
SMS-OTP does not scale well and suffers poor portability. Imagine changing your phone number on 1000 accounts.
Prevents Fast/Automatic logins; Waiting for and typing-in an SMS-OTP makes fast and/or automated logins impossible.
No secure self-service replacement; Lost phones (or non-working SMS delivery of any kind) require operator-assisted bypass. Phones often get lost, so help-desks become used to allowing users to bypass SMS-OTP. Spotting malicious users in the flood of legitimate bypasses is difficult.
Expensive support and losses; help desks are needed to handle customer SMS-OTP bypass. Fraud teams and products are needed to mitigate attacks overcoming SMS-OTP protection.
High costs; sending SMS with reliable delivery costs more.
Banned; NIST SP 800-63B restricts the use of SMS for OTP delivery and signals it may be disallowed in future. Many telcos have said this for years.
A-1.6 In-Device Biometrics
Broadly speaking, there are two types of biometrics (Fig.
In-Device, which typically make use of secure hardware within a device to record and later compare user biometric features, but never send biometric features or scans over networks, and
Remote biometrics, where the user’s biometric (e.g. their voice) is sent to a remote machine for processing. In-Device biometrics are considered “secure”, since considerable effort is typically applied by the manufacturer to prevent theft and feature extraction. Remote biometrics are considered extremely dangerous, since raw biometric data is subject to theft both in transit and at rest. Because biometrics can never be changed once compromised, many jurisdictions completely ban the transmission and/or storage of biometric data through networks for all or part (e.g. just children) of their population.
Security vulnerabilities of In-Device biometrics include:
Not all phone manufacturers implement biometrics technologies well. Some create purpose-built secure enclaves for biometric processing & offer well designed API interfaces, others do none of that. One popular platform SDK includes a key-enumeration API; any app can extract every fingerprint key from the phone. It also has no biometric cryptography API at all; developers have no option but to write insecure code.
All biometrics reduce overall user security, because they all offer PIN or password bypass for situations where user biometrics fail (e.g. fingerprints after swimming or rough manual labor). An adversary now has 2 different ways to compromise protection; steal a fingerprint or guess a password.
Some argue that passwords become stronger since they’re used less, and thus are harder to observe; however, adversaries with that level of access can engineer password-theft scenarios (e.g. failing a fingerprint several times to force the user to enter their code).
False vendor claims; the world’s strongest and most advanced (for those who recall vendor advertising at the time) fingerprint biometrics, with subdermal imaging and a secure enclave, was hacked less than 48 h after release using a laser printer and wood glue. Marketing messages were subsequently amended, the vendor claiming they meant “more secure because more people will use it instead of leaving their phones unlocked” (which is true), despite the fact that it reduced security for customers who already used passcodes and opted in.
Most biometrics use extracted features and approximation to calculate probabilities of match, making them unsuitable for hashing-technique protection, yet many vendors make clearly untrue “completely safe against theft” claims on these grounds.
Low entropy (depending on the type of biometric and sensors); biometric efficacy is a tradeoff between false negatives and positives; mimicry can defeat voiceprints 33% of the time.
Easily stolen keys; A fingerprint protected mobile phone will spend almost all its life covered in legitimate user fingerprints.
Easily copied; Custom silicone finger-caps (e.g. to defeat shift-work timeclocks) made to copy any prints you supply cost $20.
Unchangeable keys; there is no recovery after theft.
Widely collected keys; Travelers, criminals, and voters routinely provide fingerprints. Many of these collections are shared or have been hacked and stolen (or will be in future).
Vulnerable to failures in unrelated systems; Biometrics stolen online may be useable to defeat those used in-device.
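The “two different ways to compromise protection” observation above is just the union of two attack paths; assuming the paths are independent (an illustrative simplification), adding a biometric alongside a still-usable passcode can only increase the attacker’s overall odds:

```python
def bypass_probability(p_biometric: float, p_password: float) -> float:
    """P(at least one attack path succeeds), assuming independent paths."""
    return 1 - (1 - p_biometric) * (1 - p_password)

# Illustrative numbers only: a 2%-spoofable fingerprint added on top of a
# 1%-guessable passcode raises the attacker's overall chance to 2.98%.
p = bypass_probability(0.02, 0.01)
```

However small the individual probabilities, the combined figure is strictly larger than either alone, which is the formal version of the claim that biometric fallbacks weaken overall security.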
Drawbacks of In-Device biometrics include:
False negatives; biometrics often don’t work. (refer Fig.
Why adding extra security makes things weaker.
Environmental reliance; some biometrics rely on the conditions of collection. Face-recognition often fails at night.
Backups; In-Device biometrics are not useful for protecting remote resources (e.g. cloud storage).
Portability. Complete re-enrollment is needed on new devices.
A-1.7 Biometrics Collected Remotely
These are the worst and most reckless form of security: refer to the explanation at Subsect. A-1.6 (2). They are already widely banned.
Security vulnerabilities of remote biometrics include:
In-Device biometric vulnerabilities also apply to these.
Trivially vulnerable to theft during use, outside of use, from public archives and directly from stored feature databases.
Often transmitted in-the-clear; (e.g. most voice remote-biometrics take place over unsecured telephone networks).
Drawbacks of remote biometrics include:
Illegal to use in many places and on certain people (e.g. kids).
Easy to steal. No way to change once stolen.
Dictionary attackable; not all remote-biometrics have rate-limits on guessing, and combined with the low entropy of many remote-biometrics, brute-force access is feasible.
Imprecise; most remote-biometrics must suffer the inadequacies of the “weakest acceptable collection device” (e.g. poor voice connections for voice).
Enormous negative privacy implications; biometrics facilitate automated non-consensual surveillance and tracking of subjects in a wide and increasing range of circumstances.
A-1.8 USB Gadgets and Smartcards
These are screenless devices which attach to your computer (e.g. pluggable USB keys), or to a reader which is itself attached to your computer (e.g. a keyboard with card-reader) (Fig.
Security vulnerabilities of connectable gadgets include:
Malware; all connectable gadgets are at full mercy of whatever infections might be present on their host machine.
MitM; USB-OTP has 2 options: (1) defend against MitM attacks (e.g. certificate substitution), making them unusable in workplaces with DPI firewalls, or (2) accept intermediaries (and attackers).
Injected transactions; with no on-device screen, the signing user has no means to verify what they’re signing.
Piggyback risks; USB memory sticks can be disguised as USB tokens, facilitating unauthorized carriage and use at work.
Infection vector; USB-OTP tokens are computing devices; programmable to infect host computers. USB attacks like hardware keyloggers, PC wifi bugs, and DMA-memory-theft bootloaders can also be disguised to look like USB-OTP.
Increased social-engineering risks; plausible bypass excuses exist (e.g. tokens left at home, not carried on vacation, etc.), making it hard for help-desks to recognize intruders.
Drawbacks of connectable gadgets include:
Limited compatibility; there are many different kinds of plugs used across phones and PCs, like USB-A, USB-B, Micro-USB, Mini-USB, USB-C, iPhone 30-pin, Lightning, and whatever comes next. No USB-OTP supports all of these. Users with multiple devices, or who change devices, or don’t have slots on their device, may find their USB-OTP will no longer connect.
Workplace bans; security conscious organizations do not allow the use or connection of USB devices.
Storage security; Workplaces that do allow USB often prohibit the transport of USB devices into or out of the workplace, forcing employees to leave them unattended after hours.
Difficult to scale; different devices, vendors, and standards are incompatible. Multiple different USB-OTPs will be needed to protect many accounts, each one suitable for only a small subset, leaving it to the user to remember which-is-for-what.
Single-device only; USB-OTP usually works with only one device at a time; there is no way to have a spare for emergencies.
Inconvenience; carrying devices everywhere so you can log in when you need to also raises the risk of USB-OTP loss or theft.
A-1.9 Client TLS Certificates
Most browsers natively support X.509 client certificates, as does other software, and custom applications also exist making use of similar Public Key Infrastructure (PKI) (Fig.
X.509 PKI diagram
Security vulnerabilities of PKI include:
Certificate compromise; client certificates are stealable computer files. They have passwords, but these can be brute-forced and dictionary-attacked offline, or the passwords stolen.
Malware; PKI offers no protection against malware.
CA compromise; Certificate Authorities issuing client certificates can be, and have been, compromised.
Checking certificate legitimacy is difficult, (impossible on some devices). Users rarely verify certificates or legitimacy.
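The offline brute-force risk above arises because a stolen key file can be tested without any server involvement; a toy model (real formats such as PKCS#8/PKCS#12 derive keys from the password similarly, though with different algorithms and parameters):

```python
import hashlib

# Toy model of a stolen client-certificate key file whose protection key is
# derived from the password. Offline, the attacker tests candidates as fast
# as hardware allows -- no lockout, no rate limit, no audit trail.
SALT = b"\x00" * 8                    # hypothetical salt read from the file
stored = hashlib.pbkdf2_hmac("sha256", b"tr0ub4dor", SALT, 2048)  # real password

def crack(candidates):
    """Dictionary attack: return the first candidate matching the stored key."""
    for pw in candidates:
        if hashlib.pbkdf2_hmac("sha256", pw, SALT, 2048) == stored:
            return pw
    return None

print(crack([b"letmein", b"password1", b"tr0ub4dor"]))   # b'tr0ub4dor'
```

With GPU-accelerated derivation, billions of candidates per day are feasible against weakly chosen passwords, so the file password adds little once the file itself is stolen.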
Drawbacks of PKI include:
Usability; PKI is one of the least usable 2FA methods. It requires highly competent users. Enrollment, use, and renewal are challenging. Implementation is radically different across devices and vendors, and frequently changes with upgrades.
Compatibility; there are many PKI compatibility differences: file types, encoding formats, ciphers, and digests. Only a fraction of these work in any particular O/S and software.
Expiry; certificate lifetime is usually short (typically one year, or much less for trial certificates). Users must re-endure the challenging reissuance process often. Old certificates must still be kept for future signature checking, and these make ongoing usage even worse (users need to select their current login certificate, which is named identically to all their expired ones).
Cost; Most client PKI requires payment, often high, to a Certificate Authority (CA), usually annually.
CA Revocation; this invalidates all user certificates at once.
Portability; Certificate re-use is possible across many devices, but the steps needed to make this work are extremely complex.
A-1.10 Paper Lists (TAN)
Transaction Authentication Numbers (TAN) are codes, typically printed in a grid, requiring users to locate the OTP code to use via some index number or a row-and-column id. Some are single-use only (Fig.
Security vulnerabilities of TAN include:
TANs suffer the same vulnerabilities as OTP hardware listed in Sect. A-1.1, (a) through (m).
TAN pharming is an attack technique which tricks users into revealing TAN codes to an attacker, who is then free to use them in subsequent attacks. It is facilitated by the predictability of the TAN index (e.g. a TAN card with rows and columns will always have a TAN code at location A1).
A server needing to verify TAN correctness necessarily holds sufficient information to do so, and that information is susceptible to theft (and offline dictionary attack if necessary); one server-side break-in can invalidate all issued TANs at once.
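A TAN grid and its predictable index space can be sketched as follows (grid dimensions and code length are illustrative); pharming works because the attacker knows in advance exactly which cell labels exist to ask the victim for:

```python
import secrets
import string

def make_tan_card(rows: int = 5, cols: int = 5, digits: int = 6) -> dict:
    """Generate a TAN grid whose cells are addressed by row letter + column."""
    return {f"{r}{c}": "".join(secrets.choice(string.digits) for _ in range(digits))
            for r in string.ascii_uppercase[:rows]
            for c in range(1, cols + 1)}

card = make_tan_card()
# Every card has the same 25 predictable labels (A1..E5), so a phisher can
# simply ask the victim to "confirm" a handful of cells ahead of any challenge.
print(sorted(card)[:3])   # ['A1', 'A2', 'A3']
```

The server must store (or be able to recompute) this same grid to verify responses, which is why one serverside breach exposes every issued card.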
Drawbacks of TAN include:
Do not scale; If activated on thousands of accounts, a user would need thousands of individual TAN lists or cards.
Physical replacement issues; If used regularly, expiring TANs would require frequent replacement, and reusable TANs would become reconstructable to eavesdroppers.
A-1.11 Scope Failures Across All 2FA (and Non-2FA)
Within every category, many vendors and products exist, each with their own differing shortcomings (not covered in this paper). The broadest shortcoming across all 2FA categories (and indeed, most non-2FA alternatives as well) is “scope”: most vendors push responsibility for “difficult” security problems onto their customers.
A-1.11.1 Reliable Initial User Identification
The intersection between identity and authentication is hard to secure; so much so that all 2FA technologies choose not to address this problem. This leaves a gap between the identification of the new user and their enrollment in 2FA. All 2FA categories leave opportunity for intermediaries to hijack or subvert the deployment process. Many providers mix deployment with verification, such as by physically shipping devices, keys, unlock codes, and TANs in postal mail, or by using SMS, phone, or email to deliver PINs or enrollment keys. All those shipping measures are unreliable, open to interception and substitution, and facilitate a range of social-engineering opportunities against users and staff alike. They also require soliciting personal address information from users. Google, during the 2011 AISA National Conference, revealed that the single biggest issue preventing uptake of their SMS-OTP product was user reluctance to provide their phone number.
A-1.11.2 Enrolment Across Compromised Channels
2FA is deployed because risk is identified among users, so it’s clearly an oversight to ignore this risk during the 2FA enrollment.
Assuming a user took delivery of their 2FA solution without incident, none offers satisfactory protection to prevent an attacker from (1) stealing the 2FA for themselves, (2) tricking the 2FA into enrolling the attacker instead of the user, or (3) downgrading the protection, or preventing and/or spoofing enrollment entirely.
A-1.11.3 Loss Handling
All 2FA is subject to loss or destruction, or dependent on secrets that users might forget, particularly the elderly, and especially when 2FA is used infrequently. Some 2FA is version-dependent and fails when updates take place (for example, Java), or when machines change (e.g. pluggable USB devices when users switch to an iPad), or after certain intervals of time, or when batteries go flat.
2FA bypass is an often-exploited shortcoming across all 2FA categories. This problem is caused by the 2FA leaving loss-handling outside the scope of protection. Each deployment requires its own re-enrolment procedure, and most make use of fallback/recovery mechanisms that do not use 2FA.
A-1.11.4 Social Engineering of Staff
For all users who cannot log in with their 2FA for any reason (e.g. Sect. A-1.11.3), some method of bypass is introduced. Support staff with access to change or remove 2FA is one common method. Since these staff are so accustomed to dealing with average legitimate users and everyday problems, it becomes very difficult for them to detect an account-takeover attack being performed by a social engineer. Many headline news stories of high-profile account takeovers and online banking thefts facilitated through 2FA-bypass have been published.

References
Avast Forum: List of online banking sites in your country. Accessed 28 June 2018
Bursztein, E., Aigrain, J., Moscicki, A., Mitchell, J.C.: The end is nigh: generic solving of text-based captchas. In: 8th USENIX Workshop on Offensive Technologies (WOOT 2014). USENIX Association (2014)
Castelluccia, C., Narayanan, A.: Privacy considerations of online behavioural tracking. In: The European Network and Information Security Agency (ENISA) (2012)
Clifton, B.: Understanding Web Analytics Accuracy (2010). Accessed 28 June 2018
Dunkelman, O., Keller, N., Shamir, A.: A practical-time attack on the A5/3 cryptosystem used in third generation GSM telephony. Cryptology ePrint Archive: Report 2010/013 (2010).
Krol, K., Philippou, E., De Cristofaro, E., Sasse, M.A.: “They brought in the horrible key ring thing!” Analysing the usability of two-factor authentication in UK online banking. In: NDSS Workshop on Usable Security, USEC 2015 (2015)
Panjwani, S., Prakash, A.: Crowdsourcing attacks on biometric systems. In: The Tenth Symposium on Usable Privacy and Security (SOUPS). USENIX Association (2014)
Schechter, S.E., Dhamija, R., Ozment, A., Fischer, I.: The emperor’s new security indicators: an evaluation of website authentication and the effect of role playing on usability studies. In: The 2007 IEEE Symposium on Security and Privacy (2007)
Verizon: 2016 Data Breach Investigations Report (DBIR). Accessed 28 June 2018
© Springer Nature Singapore Pte Ltd. 2019