• barsquid@lemmy.world · 8 months ago

    I disagree that it would be the same as a password. They do use only the hash to validate the entry; that part is the same. But then they send recovery to the email instead of proceeding in place, so an attacker would have to both know the email and be able to access its inbox. (Or, less likely, generate a hash collision with an address they do control.)

    I think they could do verification if they kept the plaintext address just long enough to send something out.
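
    A minimal sketch of that flow, assuming a hypothetical send_verification() helper and SHA-256 as the digest (the thread doesn’t say which hash function is actually used): the plaintext address is used once to send the confirmation mail, and only the digest is persisted.

    ```python
    import hashlib

    def send_verification(address: str) -> None:
        # Placeholder for a real mail call; hypothetical, just for the sketch.
        print(f"(would send a verification message to {address})")

    def store_recovery_email(account: dict, plaintext_email: str) -> None:
        normalized = plaintext_email.strip().lower()
        # Persist only the digest; the plaintext is used once for the
        # verification mail and then goes out of scope.
        account["recovery_hash"] = hashlib.sha256(normalized.encode()).hexdigest()
        send_verification(normalized)

    def matches_recovery_email(account: dict, candidate: str) -> bool:
        digest = hashlib.sha256(candidate.strip().lower().encode()).hexdigest()
        return digest == account["recovery_hash"]

    account = {}
    store_recovery_email(account, "Someone@Example.com")
    print(matches_recovery_email(account, "someone@example.com"))  # True
    ```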

    The UX of only being able to show hashes would be pretty unfortunate, sure. Maybe a potential compromise is keeping just the first letter, like x***@example.com? The same number of stars in the interface regardless of the email’s real length, to leak less info.
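
    A sketch of that masking idea, assuming the first character and the domain were kept alongside the hash; the fixed star count hides the length of the local part. The format is an illustration, not any provider’s actual UI.

    ```python
    def mask_email(address: str, stars: int = 3) -> str:
        # Same number of stars regardless of the local part's real length.
        local, _, domain = address.partition("@")
        return f"{local[:1]}{'*' * stars}@{domain}"

    print(mask_email("xavier@example.com"))  # x***@example.com
    print(mask_email("jo@example.com"))      # j***@example.com
    ```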

    • sudneo@lemm.ee · 8 months ago

      But the question is “why”? Email addresses are personal but not secret, so there is no reason to add complexity and worsen the UX for such a feature, imo. If anybody is not comfortable with this particular piece of data being associated with their account, they can just use a recovery phrase instead. It is by no means a necessary feature. What would be the advantage of having the recovery email “obscured”? The advantage of the functionality as-is is that it’s trivial to see what you have configured, trivial to change the address, etc.

      All of this to add an ineffective amount of privacy. If someone is under investigation, having the hash of the recovery email is in many cases sufficient. Asking Apple/Gmail/Microsoft whether the hash matches any of their customers probably covers 98% of the population. Billions of email addresses are also available through breaches, so there is a very high chance that if someone used their personal email, it is either with one of the big providers or has been leaked before. If it’s not, and you used a private provider with no data on record, then there is no problem even if the address is obtained, as it cannot be further used to de-anonymize you.
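
      To make that concrete, a sketch of the comparison an investigator could run, assuming an unsalted (or known-parameter) hash and a list of already-known addresses from a customer database or breach dump; all names here are made up.

      ```python
      import hashlib

      def sha256_hex(address: str) -> str:
          return hashlib.sha256(address.strip().lower().encode()).hexdigest()

      def find_match(target_hash: str, known_addresses: list[str]) -> str | None:
          # Hash each known address and compare against the target hash.
          for address in known_addresses:
              if sha256_hex(address) == target_hash:
                  return address
          return None

      leaked = ["alice@gmail.com", "bob@outlook.com", "carol@example.org"]
      print(find_match(sha256_hex("bob@outlook.com"), leaked))  # bob@outlook.com
      ```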

      • barsquid@lemmy.world · 8 months ago

        You’re incorrect. If they salt the hash and use bcrypt, it is computationally infeasible for Microsoft to match it against a customer. Or at least expensive enough that Microsoft would insist on warrants and subpoenas.
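
        A sketch of that scheme using the third-party bcrypt package (an assumption; the thread doesn’t say which library would be used): the per-entry salt and work factor mean every candidate address has to be re-hashed against each stored record, which is what makes bulk matching expensive.

        ```python
        import bcrypt  # pip install bcrypt

        def hash_recovery_email(address: str) -> bytes:
            normalized = address.strip().lower().encode()
            # gensalt() embeds a random per-entry salt; rounds sets the cost.
            return bcrypt.hashpw(normalized, bcrypt.gensalt(rounds=12))

        def check_recovery_email(address: str, stored: bytes) -> bool:
            return bcrypt.checkpw(address.strip().lower().encode(), stored)

        stored = hash_recovery_email("someone@example.com")
        print(check_recovery_email("someone@example.com", stored))  # True
        ```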

        • sudneo@lemm.ee · 8 months ago

          Computationally infeasible? It’s about as expensive as every user making a single login (assuming they use bcrypt for passwords too).

          They don’t need to do it for every user, they need to do it for only one. Salting is fairly irrelevant in this context. And we are talking about the resources of Microsoft, or Google, or Apple. This also assumes they can’t further segment the customers by other metadata, such as location (in this case, for example, Spanish users), which would drastically reduce the number of candidates to try. If every Spanish person had an account, you would need 47 million hashes. Years ago, single rigs already pushed more than 10k bcrypt hashes per second; that would be about an hour of computation, give or take. Even assuming a fraction of that, rather than the immense computing power of big tech, it is completely achievable for an investigation.
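
          A back-of-envelope version of that estimate, using the figures from the comment rather than measured numbers:

          ```python
          candidates = 47_000_000     # roughly one address per person in Spain
          hashes_per_second = 10_000  # older single-rig bcrypt figure cited above

          seconds = candidates / hashes_per_second
          print(f"~{seconds / 3600:.1f} hours to try every candidate")  # ~1.3 hours
          ```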