- Check the signature of the incoming JavaScript against known-good versions.
- Check the signature of the HTML page against known-good versions.
- Check that the information posted back to the server 'looks encrypted' rather than plaintext[1].
- Check the external resources that the page is requesting. Is it grabbing unexpected JavaScript files (e.g. serving up a known-good version of crypto.js, then overwriting its methods with another JavaScript file)?
I'm not sure if many of these things would be possible in Chrome/Chromium, but probably in Firefox.
[1] Obviously 'looking encrypted' isn't a binary decision, but I'm guessing there is some amount of checking you could do to see how closely the data resembles random noise. If you sent random noise that wasn't encrypted, it would probably pass this check, but most people trying to protect something are sending data that won't trip this 'alarm.' It's not foolproof, but it adds a layer of protection when combined with other checks.
| "Looks encrypted" isn't useful; encryption
| under a known key or with unsafe parameters
| "looked encrypted" too.
Those go without saying. It would have to be part of a layered approach, and would catch stuff like plaintext going out.
| Are you sure you've captured every case that
| could influence what functions are bound to
| what symbols in the JS runtime?
I'm not. I wouldn't trust myself to implement such a thing (at least not without a lot of peer review from people I trust as knowledgeable), and even with such a 'detection' plugin, I would be wary of using in-browser crypto.
I'm curious what other inputs into the system you think there could be, though. If you verify the HTML and the external resources against 'known-good' versions, what else is there?
- Maybe there's malware already installed on the client system that's a threat, but that's a threat to everything, not something specific to in-browser crypto.
- A man-in-the-middle attack is mostly mitigated by using SSL (though not 100%).
- A compromised/malicious server will end up changing the JS and/or HTML, which (hopefully, if you've done a good job) would not pass your verification checks.
- The other possibility is a browser exploit that somehow triggers before the plugin can raise a red flag about unverified JS/HTML.
--
The entire point of my posts in this thread was to say that crypto in the browser and crypto on the server may share the same threat model (trusting the server + SSL), but they are not exactly the same. With in-browser JS crypto, you as the client have full access to the environment where the crypto is running; if it happens on the server, it's a black box to you. That opens up the possibility of running software on the client side to verify that things are kosher. Of course, by the time you're writing client-side software to verify things, you may as well just do the crypto in a browser plugin rather than in JS. I realize it's mostly an academic argument.