Hacker News

For me this feels spot-on. But I wanted to comment on one thing:

> Fourth, we'll see a resurgence and even fetishization of explicitly "offline" culture, where the "Great Logging Off" becomes literal

I actually think the author undersold this a bit, because even once you've siloed into social buckets with people you already know- as long as it's digital communication, you can't quite be sure it's really them. Even if you're voice-chatting while playing a game together, even if you're video-chatting, we'll reach a point where that could all be faked.

You could rely on cryptography to make sure someone is who they say they are, but that requires extra hoops and a basic understanding of how to use it and what is and isn't trustworthy. Most people probably won't bother.

So at some point only physical contact will be fully reliable.

(Until, I guess, Musk's brain-computer interface takes off. Then nothing is real)



With cryptographic signatures I could at least verify that something stems from the same account that I previously agreed with, but this would require people to be able to manage their own private keys.

Actually, nothing stops people from signing their own comments today, and sometimes it is done.
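A minimal sketch of what signing a comment could look like, using Ed25519 via the third-party `cryptography` package (the key names and comment text here are just placeholders):

```python
# Hypothetical sketch: an author signs a comment so readers who hold
# the author's public key can verify it. Requires `pip install cryptography`.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # kept secret by the author
public_key = private_key.public_key()       # published alongside their comments

comment = b"I wrote this comment."
signature = private_key.sign(comment)

# A reader who already trusts the public key checks authorship:
try:
    public_key.verify(signature, comment)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

Note that, as the thread points out, this only proves the message was signed by someone holding the private key, and only helps if the reader obtained the public key through a channel they trust.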

With e2ee messengers, I feel pretty confident that a message comes from the right/real person once I have verified their public key.


It can't provide confidence that the message comes from the right/real person. Even without any breach of secrets (which happens even to competent people and organizations), all that gives you is confidence that the message comes from someone authorized by that right/real person. That could be another person, or data generated by an automated system to which they handed the credentials. Historically, rich people used personal secretaries to write all kinds of responses, including personal ones ("I'm so grateful for your invitation to visit, let's ..." didn't necessarily mean the actual person even read your invitation), and if an "artificial secretary" becomes good enough, people will use it in the future.


> all that gives you is confidence that the message comes from someone authorized by that right/real person.

Which is good enough for many applications, I think. With friends and family, I am pretty confident that none of them deploy a personal assistant to answer my encrypted messages. That's quite different from a messenger where the service provider can inject ads into the messages.

How do I know that a person I speak to IRL is really saying what they think, or even really is who they claim to be, if I don't know them well? Some residual uncertainty always remains.


> but this would require people to be able to manage their own private keys

A few generations ago, social networking would have seemed infeasible because it would require widespread literacy (along with many other reasons, of course). Widespread private key management doesn't seem that infeasible to me.


I agree. Key management can be learned (but still has to be learned). I think cryptocurrency has made the biggest impact in this area so far.


How does one really verify someone's public key in this situation? E.g., a fake account could presumably generate a fake website of themselves that looks at least semi-legit, and use it to post their pubkey. What would give us confidence that the key comes from a legit person?
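One common approach in e2ee messengers is to derive a short fingerprint (Signal calls it a "safety number") from the public key and compare it out-of-band, e.g. in person or over a phone call, which sidesteps a possibly fake website entirely. A minimal sketch of the idea, with a made-up fingerprint format:

```python
# Hypothetical fingerprint scheme: hash the public key bytes and
# show a short, human-comparable digest. Real apps define their own format.
import hashlib

def fingerprint(public_key: bytes) -> str:
    digest = hashlib.sha256(public_key).hexdigest()
    # Group the first 32 hex chars into 4-char chunks for reading aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

alice_key = b"\x01" * 32  # placeholder key bytes
print(fingerprint(alice_key))
```

If the fingerprint Alice reads out over a trusted channel matches the one your client computed from the key it received, you have good evidence the key is really hers; a key posted on a fake website would produce a different fingerprint.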


> So at some point only physical contact will be fully reliable

But then there are twins, so... The "problem" pre-existed.


It doesn't need to be perfect, it just needs to be good enough. In real life, the likelihood of twins perfectly emulating each other, and then using that to deceive others for anything but playful intent is low enough to be acceptable. If a solution is good enough that exceptional corner cases are rare and typically harmless, then it's good enough for adoption.



