This is useless to me because I have a website, not a web application. I got a 46/100 on PWA which is astonishing because the site being tested is not a progressive web application at all. Imagine vim getting a 40/100 score in the category of web browsers. Weirdly good score.
The accessibility audit is garbage as well. Apparently I should add an image to the site just so I can put an alt attribute in it. I should include audio so I could have a transcription to go with it. Not sure whether Lighthouse decided that 16px or 20px is less than 12px but apparently one of them is and makes up over 60% of the page.
I understand this is not made for people who serve static HTML files and handmade CSS from ~/sites/ but I'm pretty sure I hate the kinds of sites this is designed for. Should have a custom splash screen? Respond with 200 when offline? What's next, I get points for breaking the back button too? Why is using HTTPS and redirecting HTTP to it a PWA thing?
The skew toward web applications rather than websites in tooling has been very disappointing, and reflects a myopic view of what a 'website' is.
It would make sense if I could give it a root URL and then a robot progressively ran tests over a month and then reported back, but one URL is a drop in a vast ocean for most of us.
Properly made "web application" doesn't have to break the web. It can be curled, links work, back button works, etc. You just don't get any interactivity (besides links) without javascript, but it still works as browseable site (rendered server-side). You can have the cake & eat it.
(I'm not saying that every application should behave like that. Often the extra work is not worth it. But a public, content-heavy site should behave like that, whether it's a single-page app or traditionally implemented.)
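A minimal sketch of that kind of enhancement, assuming a server-rendered form that already submits with a plain POST (the data-enhance attribute and the fragment-returning endpoint are hypothetical conventions, not anything from the thread):

    // The form works with a normal POST and full page reload when JS is
    // unavailable; this script only upgrades it when JS is present.
    document.querySelectorAll('form[data-enhance]').forEach((form) => {
      form.addEventListener('submit', async (event) => {
        event.preventDefault();
        const response = await fetch(form.action, {
          method: 'POST',
          body: new FormData(form),
        });
        // Hypothetical contract: the server returns the updated HTML fragment.
        form.outerHTML = await response.text();
      });
    });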
Progressive enhancement is different than what the parent comment is suggesting. They are describing how to correctly write SPAs and other webapps.
The reason progressive enhancement has fallen away is because Javascript support is now ubiquitous. Your browser has it. Your screen reader has it. Even web crawlers have it.
> The reason progressive enhancement has fallen away is because Javascript support is now ubiquitous.
WP describes it as
> Progressive enhancement is a strategy for web design that emphasizes core webpage content first. This strategy then progressively adds more nuanced and technically rigorous layers of presentation and features on top of the content as the end-user's browser/internet connection allow. The proposed benefits of this strategy are that it allows everyone to access the basic content and functionality of a web page, using any browser or Internet connection, while also providing an enhanced version of the page to those with more advanced browser software or greater bandwidth.
It's way, way more than JS.
> They are describing how to correctly write SPAs and other webapps.
In the context of "I have a website, not a web app", and web apps that "don't break the web", i.e. also behave well as web pages. If you are suggesting anyone is building backwards to that from a web app, instead of progressive enhancement, do you know an example?
If I understand your question, you're asking about adding functionality to a webapp to make it feel like a webpage rather than enhancing a page to add new features. The best two examples are actually mentioned above.
1. Using history.pushState to intelligently add to page history for meaningful changes to the page. This ensures pressing "back" in your browser is still reliable.
2. Using server-side rendering on the first render. This keeps SPAs fast while the payload is being transferred.
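A rough sketch of point 1 (renderRoute is a stand-in for whatever client-side view rendering the app already does, not a real API):

    function renderRoute(url) {
      // app-specific: fetch/render the view for `url`
    }

    // Push a real history entry for meaningful view changes so Back works.
    function navigate(url) {
      history.pushState({ url }, '', url);
      renderRoute(url);
    }

    // Back/Forward fire popstate; re-render the view the entry points at.
    window.addEventListener('popstate', (event) => {
      renderRoute((event.state && event.state.url) || location.pathname);
    });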
Regarding Wikipedia's definition, that's a broader definition than I'm used to seeing (speaking as a web developer). I've always heard it in reference to falling back gracefully from JavaScript - usually with a <noscript> tag.
Supporting mobile, weaker networks, accessibility, etc. fall into much larger categories. Many of these topics require their own discussion and best practices.
Those topics are of course still important today (if not even more so).
> The reason progressive enhancement has fallen away is because Javascript support is now ubiquitous. Your browser has it. Your screen reader has it. Even web crawlers have it.
That's only part of the problem: every day I encounter sites which fail because the developers assumed not just that everyone has JavaScript but that they can load tons of assets reliably and instantaneously. The key part of progressive enhancement is thinking about how to degrade gracefully when everything doesn't work perfectly, which also tends to offer a better experience for anyone who doesn't have a very high-speed near-perfect network connection.
A couple of weeks back, I was using a family member's Spectrum “high-speed” cable modem service at a whopping 5Mbps with latency measured in the hundreds of milliseconds. It really highlighted who was doing progressive enhancement and who was doing “works on my machine” when you saw one page load 90 seconds faster than the other.
>A couple of weeks back, I was using a family member's Spectrum “high-speed” cable modem service at a whopping 5Mbps with latency measured in the hundreds of milliseconds.
And that's still great internet compared to some places. I have a house out in the middle of nowhere that is only served by a single satellite internet provider (surrounded by trees that block the view to other providers' sats). I get 20 Mbps at ~500-1000 ms latency for a few days before I hit the 20 GB cap, then I get 0.5-1 Mbps for the rest of the month. Hacker News is one of the few sites on the web that I can browse relatively painlessly when I'm up here.
A couple of years back I was at a conference in Rome. Literally in the heart of the city (the windows overlooked the Forum) and that meant that they had only satellite access because nobody had run cables through the historic buildings. I've never been more glad to have spent time optimizing our site for 2.5-3G performance than when we were demoing it during presentations and it seemed slow but almost everything else was unusable.
I'd put network use in a different category. It is an important issue though.
Thankfully the tools are getting better for this. The recently supported font-display property is a great one. It allows devs to choose how to handle web font rendering over slower internet connections.
Now I just wish more devs would start to take advantage of all the great performance tools available. Those best practices are unfortunately rarely taught.
> I'd put network use in a different category. It is an important issue though.
My rationale for considering it to be included is that as the concept was developed I took the spirit of progressive enhancement to be doing the best with what your users have rather than only catering to people with the same setup you have.
> Now I just wish more devs would start to take advantage of all the great performance tools available. Those best practices are unfortunately rarely taught.
Agreed. I think one of the challenges has been showing the business value of performance: once you're putting things into a cost/benefit comparison, it's a lot easier to get people to routinely consider the performance impact of their decisions.
It might depend on main body text versus title text. It's jarring when body text changes so I'd prefer fallback in that case. For a title which might have more branding concerns, I'd prefer swap.
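A sketch of that split using the CSS Font Loading API, where the font names and URLs are placeholders and the display descriptor (mirroring CSS font-display) is assumed to be supported by the browser:

    // 'fallback' for body text to avoid a jarring late swap; 'swap' for
    // headings where branding matters more.
    const body = new FontFace('BodyFont', 'url(/fonts/body.woff2)', { display: 'fallback' });
    const heading = new FontFace('HeadingFont', 'url(/fonts/heading.woff2)', { display: 'swap' });

    Promise.all([body.load(), heading.load()]).then((faces) => {
      faces.forEach((face) => document.fonts.add(face));
    });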
> It would make sense if I could give it a root URL and then a robot progressively ran tests over a month and then reported back, but one URL is a drop in a vast ocean for most of us.
I think this is the end goal. Building the infrastructure to audit a single page is the first step towards that bigger outcome.
Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.
If you run Lighthouse (the tool that powers web.dev's auditing feature) from a CLI or as a Node module, you can tell it to only run the audits that are relevant to your needs.
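A rough sketch of the programmatic route, using the lighthouse and chrome-launcher npm packages; the URL and the category list are whatever subset actually applies to your site:

    const chromeLauncher = require('chrome-launcher');
    const lighthouse = require('lighthouse');

    (async () => {
      const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
      const result = await lighthouse('https://example.com', {
        port: chrome.port,
        onlyCategories: ['performance', 'accessibility'], // skip PWA/SEO if irrelevant
      });
      console.log('Performance score:', result.lhr.categories.performance.score);
      await chrome.kill();
    })();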
I get the general frustration that non-technical teammates look at these reports and say, "we're doing terrible, you need to fix this" when in reality you know that the audits aren't relevant to your business. But it's tough to create an auditing tool for the web at large. There are a lot of businesses that would benefit from PWA features. The general idea was to raise awareness of how PWA features can often improve the UX of many sites. Not all, but many.
My takeaway from this discussion is that we need to improve our messaging around the fact that these audits aren't commandments. Some of them may not be relevant to your top priorities. Maybe we could improve the report UI in DevTools and web.dev so that you can flag individual audits as irrelevant. On subsequent runs, those audits would be omitted from your reports.
Or maybe we can somehow get more clever about how to present certain audits. E.g. based on Chrome User Experience Report data we identify that service worker usage in your industry is low, and we flag the service worker audit as potentially irrelevant to your needs. That would help solve the problem of non-technical people seeing a low score and assuming that it's a fault with your site, when in reality it's just an irrelevant audit.
Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.
I hate how Google's tools keep pushing deferred CSS. It kind of breaks how CSS is supposed to work. They want you to manually pick out the CSS that is relevant to the top of the page and put it directly into your HTML. How on earth is that maintainable or scalable? Or secure?[0] You easily run the risk of sending redundant bytes if the same styles are still in your external CSS. I tried that little script they suggested, and not only got FOUC'ed up the ass, the page took longer to load. (bbbbbut it's asynchronous, that's what makes it SO FAST!) Nope, that didn't last long. Not going to do it, Goog.
I agree that splitting up your CSS to only send the critical stuff first is tough to scale and it's tough to find a reliable solution.
> It kind of breaks how CSS is supposed to work.
Can you elaborate on this?
> You easily run the risk of sending redundant bytes if the same styles are still in your external CSS.
I think we're up against 2 less-than-optimal situations. Suppose you have 50KB of CSS.
* Ship it the traditional way. User waits on all 50KB before first paint.
* Ship it the code splitting way. User gets 10KB upfront, and leaves before the rest loads. But if they interact with the site extensively, then they trigger the redundant bytes that you're mentioning, so that the total download size comes out to be 75KB.
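For context, the code-splitting approach being debated usually looks something like this sketch: the critical ~10KB is inlined in the <head>, and the remaining stylesheet (path here is hypothetical) is attached only after the page has painted so it never blocks render:

    // Attach the non-critical stylesheet once the page has loaded.
    window.addEventListener('load', () => {
      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/non-critical.css';
      document.head.appendChild(link);
    });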
Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.
It breaks CSS because having styles on the page couples presentation with content. The selling point of CSS is to change a style in one place and have it affect multiple pages. If you break it up, you end up having to maintain multiple versions of your CSS. To do this in the name of performance strikes me as one of the very last things to do, given its unfavorable maintenance cost.
> I think we're up against 2 less-than-optimal situations. Suppose you have 50KB of CSS.
> Ship it the traditional way. User waits on all 50KB before first paint.
Best practice for CSS is to link it in the first kilobyte or so of HTML[2][3]. Browsers have optimized for it: it makes the CSS request happen immediately, before the browser parses the rest of the HTML. Unless the user has a slow connection (<10 Mbps), bad round trip time (>200 ms), or the server is slow to serve a 50K static file (average CSS size[0]), that CSS will load within half a second, with first paint soon after. If you need to cut that down, you should consider a CDN before deferred CSS.
> Ship it the code splitting way. User gets 10KB upfront, and leaves before the rest loads. But if they interact with the site extensively, then they trigger the redundant bytes that you're mentioning, so that the total download size comes out to be 75KB.
If CSS was split, and the user leaves[1] before the CSS completely loads, CSS isn't the source of slow page loads.
Google's tools should check whether styles load within a second or two. If it's any more than 3 or 4 seconds, deferred CSS starts to make sense. If styles (or the entire page) load in less than that, don't bother.
Specifically, there were audits flagged as "not applicable" and the alpha version of Lighthouse was instead flagging them as failures. That's why it looked like it was telling you to add audio or images—it was actually saying those audits are not applicable because your site doesn't use audio or images. I think that bug has been fixed in Lighthouse but feel free to reply to this comment if you're still seeing it.
We've also temporarily turned off the PWA audits—they were having some bugs of their own based on the infrastructure they were running on. Based on the feedback in this thread we'll look into making them configurable so folks can choose if they want to run them.
We'll also be opening up the repo shortly so folks can file bugs there directly.
My website scored a 46 as well. It's just a couple HTML pages with a single stylesheet but Google really wants me to configure it so you can read my internet webpage outside of the internet.
I think I'm gonna add an explanation for what CTRL+S does instead.
You might want to brush up your knowledge on what a PWA actually is [1]. For example, that list contains 'Site works cross-browser'. I hope your website works cross-browser too ;-)
The only thing that sets apart a normal modern website (https, responsive, cross-browser, Each page has a URL) and a PWA is the ServiceWorker (and a few meta-tags). All the other aspects are more or less soft or minor aspects like "Page transitions don't feel like they block on the network".
This might sound pretty preachy, but in fact, I just want to give a better perspective on what PWAs are, so that they are not getting confused with the average single page JS bloat. Instead, PWA is more like best-practice (e.g. to avoid broken back buttons) paired with some mandatory tools.
Hell yea my site works cross-browser. I've tested it with Firefox, Chromium, Midori, w3m, lynx, surf, and edbrowse and they all look fine.
I freely admit I have no idea what a PWA actually is, but that page is not helping me understand. All I find are vague descriptions about how they're reliable, fast and engaging, but nothing even approaching a definition. For all I know, a 16-ounce claw hammer is a PWA. It's certainly very reliable, fast and supremely engaging.
If PWA is all about the service worker, why are things like HTTPS, splash pages, 200 offline, or address bar matching brand colors(?!) included in the category? Those things don't make my site any faster or more responsive, and I doubt a service worker would either. Why on earth would I want my site to return 200 offline anyway? I don't wanna lie to a user.
Ok, lemme take a deep breath. I'm sure a PWA does not equal a single-page broken piece of JS monstrosity. I just don't think it's a good idea to give me bright red warning triangles about not having a service worker unless Google believes every site should have a service worker. In that case I disagree with them, because I don't think I need a 200 OK if my network interface has caught fire in the middle of browsing.
It's basically a set of techniques and technologies developed in an attempt to produce a user experience similar to that of a native application when using a web application.
In the case of not having a service worker, well, a PWA without services workers doesn't make much sense. This is because service workers are used to cache content to create that 'native' feeling of your application not 404ing when you go through a tunnel.
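A minimal sketch of that caching role (the cache name and URL list are placeholders, and this is a bare cache-first strategy, not the only option):

    // sw.js: serve cached responses when available, fall back to the network.
    const CACHE = 'site-v1';

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/styles.css']))
      );
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });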
I don't have a computer running an operating system that supports either of those browsers so I can't test with them. However, I assume they can render basic HTML enough to make my site readable and I don't even think there's anything in the CSS to change that.
I just ran their audit on https://mail.google.com. You guys (Googlers) need to speed up your own websites first; Gmail has been terribly slow lately, just saying...
Google's PageSpeed service is hot garbage. You can do things that make the page load significantly slower and still get a perfect score, versus a much faster page load that gets a lower PageSpeed score.
I also find it strange how I can get an A on every other page speed service, including GTmetrix, but get an F on Google's test(s). Totally useless.
Most performance profilers will penalize you for embedding Google Analytics and Google Fonts because they have poor caching settings. What's so important about those resources that they can't be cached for longer periods?
There are two main reasons why short caching is helpful on tag scripts:
* Faster iteration time for developers: the more often you release updates the faster you can move. If your script has a one week cache lifetime then there's not much point in daily releases and any experiments you run will be really skewed.
* Quicker response to problems: if we push out a bad update that gets past our testing, the TTL we serve it with determines how long that will stick around in browser caches.
(Disclosure: I work at Google, on ads JS, and I previously worked on mod_pagespeed. Not speaking for Google, just myself.)
Easier bug fixes, presumably. Meaning that if there's a bad release, the URL is only cached briefly. If the URLs were versioned and changeable by Google, it would have been done.
Google can't bust the cache in case of a bug. They can mitigate this by having a loader with built-in functionality that always checks "is there a newer version," but that has its limitations.
I see no explanation given why an asset that never changes would need to be requested once a day, and they clearly state they do record it.
> We only [sic] see 1 CSS request per font family, per day, per browser. Google Fonts logs records of the CSS and the font file requests, and access to this data is kept secure.
I read that as what's written there. It's for tracking. Tracking popularity is still tracking.
They take and keep the tracking data. What they say they use it for doesn't change that. If they simply incremented a counter the bit about "keeping access to the data secure" would make no sense.
This is a complete guess, but I suppose one request every day would be enough to get at least some idea of how popular any given font is. The page says it's to keep them updated, but I highly doubt that's the reason.
While I don't know anything about fonts stuff, successful execution of the ad tag leads to an ad request, and this happens on every page view which has the script and not just the ones where the file has fallen out of cache. So there's no additional useful information Google would get from a low cache lifetime for the script.
The script request goes to a cookieless domain for performance, unlike the ad request, so there's not even much useful information in that request.
(Disclosure: I work at Google, on ads JS, and I previously worked on mod_pagespeed. Not speaking for Google, just myself.)
I agree it's not perfect! But my view is that if they (Google) made the effort to do PageSpeed, then Lighthouse, then integrate Lighthouse into Chrome... and now into a separate website... some of the metrics must be important for your rankings/site, even if only indirectly (slow site etc).
I never pay attention to the "grades", only the data, especially the waterfall graph/data. The grades are ridiculous, and mostly meant to cause alarm. You can score 100/100 on almost every test, but get a "D" overall because you scored poorly on 1 or 2 tests. It's a great tool for techs, but in the hands of a client almost every result/score appears terrible.
Meh, these tests are just good ways to find obvious problems with your site.
Fixating on the overall score seems like an indication of a different problem.
For example, you can't change the caching behavior of google-analytics.js, and it's usually intractable to split apart your CSS to optimize for above-the-fold content. Oh well, you didn't get a perfect score. Stay practical.
Try selling that to a client that keeps using the test against a site that is A rated everywhere else because Google's brand has fooled them into thinking a perfect score means better SEO.
Right, that would be an example of a "different problem."
I'd be hard pressed to let stupid things client/managers can do direct how this sort of tool should work. For example, it'd be silly to demand "score inflation" just because your client is unreasonable. Or rather, if a client is making your life hard, I don't think it's the world that needs to change.
What's your solution if your client measures your success by punching their website into https://www.worthofweb.com/calculator/? And doesn't take you seriously enough to listen to you in general?
This is a problem we run into, clients will run their site through a speed tester then give us a list of things to fix...without having a clue of what the ratings and results even mean.
When Gmail first came out I was amazed at how fast it was, so much faster than the native outlook client I used at the time, wow! How is that even possible in javascript!
Now gmail is slower than most any other website I use regularly. Can't even refresh the inbox in less than three seconds.
I've noticed the same slower by a bit every year pattern with google maps too. I'm afraid to click anything when I have a map open because I know it might trigger a repaint which will cause me to sigh and switch to another tab while it chugs for a few seconds to accomplish this.
Not web dev related, but related to Google not focusing on performance: the latest Android release of Google Maps is actually non-functional on my Galaxy S7 because of how slow it is. It completely overloads RAM and crashes. Every time. Hilarious.
I'm kind of disappointed that this isn't Go the programming language. I thought "that'd be really cool, porting android apps from Java to Go." But no, turns out there are only so many creative words you can come up with when "google" is your starting point.
Gosh, thanks for this note! I thought it was just me and my 2.5-year-old HTC that couldn't keep up. It has been totally unusable, as you say, for a few weeks now (crashes, slowness, markers/directions not showing at all while the app thinks they are)...
The difference in performance between Fastmail and the new Gmail interface right now is freaking stunning. I still load Gmail every few days to see if I have email there, and it's like waiting for Windows 98 to boot.
This was actually an intentional choice not to do that. Setting up forwarding means you may have email continually passing through Google servers you didn't realize was still doing so. By having a hard cut between the two, I know for a fact that any email received at FastMail is not going through my Gmail account, and any email received at Gmail is, and needs to have the contact information updated.
I have a vacation auto-responder permanently on on Gmail as well to notify any individuals who hit my old address where my new address is.
Recent AdWords user here. It is quite possibly the worst application I've used in years. I get that they offer a LOT of products and that the nature of the business is complicated. But, man, was everything about that process awful. And the UI is unbearably slow.
I think it is in poor taste for Google to publish https://web.dev/ while rolling out the new desktop Gmail.
>Google's web platform team has spent over a decade learning about user needs. Now we want to make it as easy as possible for you to master the defining standards of web development today.
>Fast load times
>Guarantee your site loads quickly to avoid user drop off.
"Loading Gmail"
>Network resilience
>See consistent, reliable performance regardless of network quality.
"Loading..." (in yellow at the top, after clicking a search result. Indefinitely.)
A Gmail account has 10+ GB of data. Users for the most part don't want 10 GB in browser local storage. Gmail's speed problems are in the backend, not the web UI.
That's not true. Pop open your console and watch the network tab for yourself. Slow websites are almost always due to frontend issues. Besides, Google has the ability to index the entire internet and return results in milliseconds. I'm guessing it's not my 10GB of email that's tripping them up.
What you've written is totally false. It worked fine in the old UI, and it still works fine in simple HTML mode.
You can see this yourself here: http://mail.google.com/mail/h/ (direct link to html view. you have to already be signed in for this link to work.)
Side note: I'm sure you would have mentioned it, but you don't happen to work at Google on the new Gmail front-end, do you? (Since I could see someone who is, trying to deflect blame for their current bugs by "blaming the back-end".) Just want to make sure...
That's kinda what it feels like to me. Does g suite already have so much technical debt that they can't make it better? It's been this way since I started using it, more than a year ago.
Seriously! Every click feels like a chore, it's easily 3+ seconds before the page is interactive again. I've started using the mobile app for everyday tasks since it's so much more responsive, I just wish it supported better password resets.
Oh man, I used to be on the team that built that. It's just enormous. If you're on Chrome, the optimizations are sometimes enough to make it smooth. If you're on any other browser, god help you. I used to see 30s(!) reflows in Firefox.
Not really. Would you take webdev advice from a company where their own apps are memory hog/leaking garbage (Gmail)? At some point you need to call a square a square.
If you disagree with the advice on web.dev then that's fine. Call that out. Saying that the advice is wrong because another team in another building, maybe in another city or another country, built an app you don't like, is just wrong. This sort of ad hominem is beneath HN.
Sorry, I'm not letting you get away with an illogical argument here. There's nothing hypocritical about the authors of web.dev making an auditing tool because some other people who work for the same company work on site that (may) do badly.
In fact, it wouldn't be hypocritical even if the authors of web.dev themselves worked on a site that did badly.
It is, after all, a tool. It's not a declaration of superior intellect. It's not an article of condescension. It's a tool.
The hypocritical statement is on the part of the company, not the specific authors. The efforts represent the company, as do their efforts in standards organizations, browser implementations, and email user interfaces. It is completely logical to question their inconsistencies especially as one of the primary drivers of standards. If the company cannot present a consistent front, we can ask ourselves whether their attempts to help others do so are accurate much less well intentioned. This should be obviously clear and not about individuals or their feelings.
Releasing a tool cannot be hypocritical. The existence of a tool to audit performance, accessibility, etc. is not a declaration of Google's moral or virtuous superiority. It's a tool.
Their advice isn't perfect but that doesn't mean I won't pay attention to it. Memory use and leaking isn't really covered in their page speed advice anyway, which is mostly about server side caching, image optimisation, css / js delivery.
If there's any other decent page speed assistance I'd love to see it.
> Not really. Would you take webdev advice from a company where their own apps are memory hog/leaking garbage (Gmail)? At some point you need to call a square a square.
Wait, was web.dev built by the Gmail team? I always assumed Google was bigger and more complex than that?
Yup. It's a company that routinely breaks the web, pursues its own standards to the detriment of others, and will prioritize things like AMP over any of its own tools.
This is powered by Google Lighthouse, with the benefit of it being done via a web UI instead of a Dev Tools Audit. Which is both good and bad.
Good because Lighthouse has some reasonable best practices to follow, and a few good performance timings, so lowering the barriers of entry is nice.
Bad because many of Lighthouse's best practices aren't always applicable (our major media customers constantly say "stop telling me I need a #$%ing Service Worker!"). And while Speed Index and Start Render are great, Time to Interactive, First CPU Idle, and Estimated Input Latency are still fairly fluid/poorly defined, and of differing value.
This all also overlooks the value that something like the Browser's User Timings provides (Stop trying to figure out what's a "contentful" or "meaningful" paint, and let me just use performance.mark to tell you "my hero image finished and the CTA click handler registered at X"), which Lighthouse doesn't surface up.
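A sketch of that User Timing approach, where the element IDs and the click handler are hypothetical stand-ins for the page's own:

    const heroImage = document.querySelector('#hero-image');
    const ctaButton = document.querySelector('#cta');
    const handleCtaClick = () => { /* app-specific */ };

    ctaButton.addEventListener('click', handleCtaClick);
    performance.mark('cta-handler-registered');

    heroImage.addEventListener('load', () => {
      performance.mark('hero-image-loaded');
      performance.measure('hero-ready', 'cta-handler-registered', 'hero-image-loaded');
      const [entry] = performance.getEntriesByName('hero-ready');
      console.log(`hero ready ${Math.round(entry.duration)}ms after handler registration`);
    });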
What is interesting is the monitoring side. WebPageTest, Lighthouse, PageSpeed Insights, YSlow, etc. are just point-in-time assessments, and that is largely commoditized. Tracking this stuff over time and extracting meaningful data is valuable, so that's pretty cool.
Disclaimer: I work in the web performance space. People replace homegrown Lighthouse, puppeteer, or WPT instances with our commercial software, so I'm biased. However I like a lot of the raising awareness and trail blazing about what Performance/UX means that Google is doing.
But I'm getting the impression that you want Lighthouse to surface up this information in a different way. Please feel free to elaborate.
Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.
PLEASE PLEASE PLEASE WEBDEVS, STOP LOADING YOUR WEBSITE WITH CRAP TO MAKE THEM BETTER
I fed a large news site that is not terrible to use into this, and it gave it 14/100 rating. This news site is perfectly fine, has a good design, good typography and loads without any scripts if you need it to. It loads quite fast.
Among other things, Google recommends
- Lazy loading:
NOOOO, just load my document. I hate this bullcrap. It never works correctly and then you just get a laggy and slow scrolling site where you have to wait all the time you use it.
It's a news site. Just load the whole thing, it takes a couple of ms but then the site is actually usable! It's an actual document you can scroll through.
- Ask the user to install as an app, add offline usage etc? Why even?
- Dynamically compress everything?
YES GOOGLE, when saving 8kb per picture, but in the end we need to pull 10mb of javascript libraries from sixty different sources all over the web to even display a text with one small picture ITS ALL WORTH IT
I don't make websites except my personal stuff. I understand you want to present your knowledge and skills.
But please, websites are getting worse and worse. The best way to present almost any information is a classic HTML webpage. It wouldn't need to be like this, but almost every modern web design approach seemingly leads to slow, laggy, and partly unusable websites that do end up loading something, but often not the thing I actually want to read.
I am actually trained now to feel a sense of relief if I come across a straight html website, just because the user experience is so terrible now thanks to javascript.
As a consumer, I will continue to beg for simple websites that stay true to the idea of displaying a scrollable document with data and text.
Please, only use all these animations, loadings, dynamics, off-site frameworks, custom browser controls and single page documents if you are making
a) a portfolio
b) an actual application that is mostly simple buttons and does not present significant amounts of text or data
I agree with you about lazy loading. It's just dumb. How is it better to give me a scaffold of a page before the content? The page may look nice but is unusable until the content has loaded. Why not just give me everything when it's ready?
I've gotten into trouble for saying this before but things like recommending lazy loading is an example of Google imposing their wishes on the web and making it worse in the process.
There are two kinds of lazy loading--that which blocks the content, and that which doesn't. If you have e.g. a bunch of JS libraries that aren't necessary to display the page, only for certain interactions, it makes sense to lazily load those. This is what "lazy loading" meant in my front-end team at Google anyway. (Whether you should even have all of this JS to begin with is another question, however.)
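A sketch of that non-blocking flavor using a dynamic import(), where the element IDs and module path are hypothetical:

    // The page renders and works without the chart library; it is only
    // fetched when the user actually asks for the chart.
    document.querySelector('#show-chart').addEventListener('click', async () => {
      const { renderChart } = await import('./charts.js');
      renderChart(document.querySelector('#chart-container'));
    });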
Personally, I specifically can't stand the lazy loading that happens on youtube/facebook. The kind where there is no text, just shimmering placeholders until the actual content has loaded.
I just don't understand why they think I will think their site loads faster if they don't actually have any content right when I click on it, rather than the server actually delivering the page with some modicum of content...
Talk about breaking the back button! Do a measurement on a website. Then click one of the Guide links like "Links do not have a discernible name". Page navigates away from the report to some documentation. Click back, report is gone.
I get it, it's a single-page webapp. But if you do that you need to make all the simple navigations open new tabs. There is an "open link in new tab" icon next to each link, but that's really not good enough.
Ah yeah that's something we want to fix. If you're signed in it keeps the report around but if you're signed-out it's stateless. It's definitely on our to-do list to fix.
This is sort of representative of web.dev vs the rest of Google.
Read the rest of the comments, seriously. I would be very discouraged to release this web site when the rest of the company is vehemently opposed to any of the practices listed there.
This site didn't load properly in Firefox on a Samsung Galaxy S8 just now, and the accessibility section is "coming soon". I'm sorry Google, but if you're letting key basics like this slip... Accessibility is not a bolt-on for afters, and chrome isn't the only web browser. Deep down you know this too. Those who preach are held to higher standards, and you've let yourselves down badly here.
I can't replicate it in private browsing. I'm not running any mobile extensions and my internet is as good as it comes. First time I loaded it the blue line at the top was there for ages. I was scrolling the site for a good 20 secs marvelling at the irony of a site teaching performance not performing. It felt like something to do with the service worker not working. The cookie notice only appeared after I refreshed. Now it works fine, and loads fine each time. So it was something to do with the initial caching of assets.
And yes, the accessibility section coming soon sends the wrong message very subtly but powerfully. It's something we all need to be better at, and when you've got the resources of Google there just isn't any excuse. Those two small words on that missing section quietly absolve us all. Because if Google can't do it right, why should we? It just isn't good enough, so yes, they have let themselves and our community down. I know they're strong words, but someone needed to say it.
> The cookie notice only appeared after I refreshed
Maybe it's a geo thing. Are you in Europe? I see no cookie notice (US).
> I know they're strong words, but someone needed to say it.
Eh, it needs to be there but don't throw the baby out with the bathwater. It's apparently in "beta" and according to this hn thread robdodson is working on it, so odds are the accessibility section will be handled just fine.
Regardless
> Those who preach are held to higher standards
is a bad take. You can criticize a tutorial site without standing on such a flimsy soapbox.
The funny/sad thing about PWAs, and how broken they are on iOS right now, is that the first iPhone was supposed to work with web apps. I'm not sure if it was Jobs' vision or if they just didn't have anything good to offer native devs. Safari has been a joke for a long time now.
This is what made me take another look at PWAs, start experimenting with them, and start to see that, for plain CRUD applications, PWAs are probably the way to go.
FWIW, the official Twitter desktop app for Windows 10 is a PWA and it works great (well, as well as any official Twitter app has worked in my assessment; not as well as third-party apps from their hey-day, but that's a different story).
I also use the Starbucks PWA (app.starbucks.com) and it mostly works well. Again, it seems to work as well as their native apps.
> chrome right now is the only one which supports PWA as "native" app..
What do you mean by "'native' app" exactly? When I use my PWA in Firefox on Android, I can't tell that it is a PWA. It just looks like any other Android app.
Granted that is only one more browser and on the desktop side Firefox still has a lot to do, but at least there is one more player in the race ;-)
I have a hard time believing this is "for web developers" when most web developers used `.dev` for local development, which was broken because Google decided they wanted to own `.dev` for themselves.
I think .test is the TLD most likely to work as expected for all use cases, as .localhost might be expected to only ever point to the loopback IP address. If anyone does something more complicated (like using a Vagrant private network IP address), you might find some tools break unexpectedly.
Edit: the newer version of the docs has more details (in section 6) on the differences to be expected between the test, localhost, example and invalid domains. See: https://tools.ietf.org/html/rfc6761
How's this any different from devs using 123.123.123.123 for test purposes (instead of private network addresses), then getting upset when they found out it had been allocated to China Unicom?
I can't wait for one of the executives in my company to run into this, understand nothing but the numbers, and then complain about our numbers not being higher despite the fact that half of these metrics aren't applicable to our web app.
I already have a boilerplate response to "why isn't our google pagespeed score higher" that I copy and paste.
I know Google's happy about performance nagging, but I wish they were better at knowing what is/isn't applicable.
I do agree that some sort of "not all these metrics may be applicable to your architecture, please consult an engineer" would go a long way.
I left a toxic work environment at one point where I was literally yelled at because I was claiming to know our company's situation better than Google's pagespeed tools.
Needless to say, that wasn't the only problem with that job... but it's frustrating.
Heyo! One of the web.dev devs here. The new web.dev site is an experiment from our team to see if we can improve the interactivity of our docs. We link to developers.google.com/web in a number of places. Over time, if folks seem to enjoy the web.dev model, we may explore moving more of our docs over there. But for now it's just a fun experiment
I tried my rather large work website and it was unable to fetch it after a couple tries.
Also, this is the same company that brought us AMP, didn't use closing tags on their landing page to "save bandwidth", and tried to push a Java-to-JavaScript nightmare of a GUI toolkit. I don't get why they are pushing for this unless it's to strong-arm devs into making more shitty AMP pages.
The Hacker News homepage gets a 23 on accessibility (I don't know, there are probably some issues I hadn't thought of, but the fact that it's primarily no-nonsense text makes it quite accessible by default).
It does however get 100 on performance (quite rightly)
The PWA score is around 50 but most of the complaints are really silly. It almost makes me wonder if the entire category of PWA is silly.
Finally, I checked GOV.UK, the website so lauded here on Hacker News as of late. It also got around 58 on PWA - what is the point? If those complaints for PWA actually meant anything, surely they could fit into one of accessibility, best practices, or performance.
"Imagine if your favorite game took forever to load because you were on a slow network connection, it wouldn't be your favorite game for very long. "
I can inform you that games I play take forever to load on slow networks. (Overwatch, Darksouls). Although I guess it's not downloading the game. It's just trying to login to the servers.
I'm amazed by how far off-base they have managed to make this. The recommendations are bizarrely inaccurate.
1) It says that my (black) text doesn't have sufficient contrast against my (white) background.
2) It says that my page could benefit from having more of its resources served over HTTP/2 (more than 100%, presumably).
3) It says that links should have a name (this is not supported by HTML5).
4) It says my (100% valid) robots.txt file is not valid.
5) It says my 100% no-JavaScript static webpage, which isn't a PWA, should return a 200 when offline. Not sure how they think I should do that.
I've switched all my stuff to .test, since that domain doesn't cause as many weird issues with some routers, proxies, OS networking bugs, dumb regex, etc.
Also, .test is reserved for testing/local forever, so it won't suffer the same fate as .dev.
The website itself doesn't seem to take that too seriously.
For example, it loses scroll position when navigating:
* go there from HN
* scroll down a bit
* press the Back button in the browser to go back to HN
* press the Forward button in the browser to go to the site again
* you are now scrolled the very top again, instead of where you scrolled to
Tested in current Chrome and Firefox on Linux.
Also note how HN doesn't have this problem, it will remember your scroll position so you can continue reading where you left off.
Browsers are smart, they have accessibility built-in. Too clever JavaScriptery destroys it.
It has been a common practice for devs to setup a local DNS server that points any *.dev domain to localhost. Maybe you did it (or installed some tool that did it) and forgot.
Getting Chrome SSL warnings on both the OP link and https://get.dev/
Doesn't really evoke confidence, if this is a Google initiative. Can someone post a short precis as to what this is about for those of us who cannot visit the URLs?
UPDATE: Apologies - My bad, not Google's fault. I had a local Valet/NGINX redirect for local .dev domains setup for Laravel development projects that was causing the issue.
Ah, OK, I think I worked out what is happening. I have a valet service running for local Laravel development which is redirecting local .dev subdomains to Valet instances on my iMac!
Apologies for that - I will shut down the Valet service and try again.
Although that might only change the default for new installs; you might still need to change it for existing installs. I'm not sure; I've never used Laravel.
Perhaps your browser is being directed to a page not owned by google. I don't get an SSL warning in Chrome or Firefox (Windows 10).
From the site: "With actionable guidance and analysis, web.dev helps developers like you learn and apply the web's modern capabilities to your own sites and apps."
It includes a web version of the Lighthouse website measurement tool. It also has guides for various web dev topics.
The topics can be browsed normally, but I think most users will discover topics by the measurement tool showing the user guides to improve their website.
No, it will be available across a wide range of registrars, similar to .app. You can see the full list of participating registrars for our existing TLDs here: https://www.registry.google/register-a-domain/
Correct me if I'm wrong, but registries can't let single registrars have exclusive access to TLDs. If Alphabet/Google were to do this it would be antitrust, and they could lose their dual registry/registrar status.
Hey folks, I wanted to share a quick status update to let y'all know which issues we're seeing and working on. Apologies for the hiccups and thank you all for trying the beta!
Web.dev looks incredible for educating web developers, exposing Lighthouse to devs who aren't already familiar with it, and for automating basic website testing over time.
But it will be a nightmare for support teams working at any kind of web service.
As Google doesn't provide support for this tooling and site owners invariably fixate on the scores it provides, product support teams for everything from WordPress themes to CDNs end up fielding support questions that Google should be helping with via resources pitched at the non-technical folks who inevitably use these tools (as well as, you know, help from an actual human support team).
As it stands, support teams will now be inundated with questions unrelated to their product from customers who have no interest or technical background to read the current educational sections of web.dev, and whose time would be better spent crafting landing pages, great content, or reducing the 38 social plugins they're using instead of making all the dials turn green.
I can already foresee the support requests from the web.dev scores:
“Google says my WordPress site isn't installable. Where's the option for that in your theme?”
“Your website says Cloudflare improves load time. But my first meaningful paint time went up by 1.5 seconds after setting it up! I'm going to write bad reviews about you.”
“Google says I need to theme my browser's address bar to match my branding. I added that tag they mention but don't see any change in my browser.”
It's great to build awareness of ways to make the web faster and better, but it needs to be backed with guidance that's pitched at the ability of the people who will be using these automated testing tools.
For example, why not detect the technology behind the site and — for stuff like WordPress — recommend plugins or other tech-specific resources that could help fix problems like lack of image lazy-loading? There are lots of ways education could be enhanced for non-developers.
But it's a resource for web devs, mostly used by web devs. Reinforced by the web.dev domain name, which surprised me. I've never seen .dev domain before.
All audit tools make recommendations that many of us will ignore. I have zero intention of using webp images for example.
I don't think your claim that "nightmares" will happen is warranted. Shiny new site audit tools are fun. Enjoy it, there's no nightmare!
There's so much wrong with this report, it can be confusing to many people.
For example, some of what they're calling "SEO" really has nothing to do with SEO. It should be checking:
- if the page is crawlable
- if there is a valid title tag
- if there is a valid meta description tag
- if there is a valid canonical tag
But instead, it checks for a valid viewport meta tag? And if the font sizes are legible? I could see that it might be an issue if the site is hiding text on the page, but viewport and font sizes really have nothing to do with SEO.
If you have an old version of Pow installed, make sure to either upgrade to puma-dev (which moves .dev to .test, the correct TLD for local machine testing) or uninstall it first.
I have a question. How can I really figure out whether my (static) website uses HTTP/2? The Google audit tells me that I should use HTTP/2 for all of my resources, but loading my site in Safari or Chrome shows me that it uses the h2 protocol. Or are these unrelated?
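One way to check from the page itself, sketched below, is the Resource Timing API's nextHopProtocol field (paste into the DevTools console); "h2" means HTTP/2, "http/1.1" means that resource wasn't served over it:

    // Log the negotiated protocol for every subresource on the page.
    performance.getEntriesByType('resource').forEach((entry) => {
      console.log(entry.nextHopProtocol, entry.name);
    });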
Weirdly, it also points out that my elements have non-unique IDs, which is false, and the list of failing elements shows that; it looks like they are stopping their search at the colon character, but they should not.
Regarding accessibility: Are skip links still a thing, or has this been superseded by landmark roles? Is there still an advantage in using a link as compared to selecting a region?
Edit: Meaning, if I have already a main region, does adding a skip link to main actually complicate things?
The results are significantly different from the lighthouse audit in chrome's web developer tools, the latter shows my site as PWA compatible with great performance whereas this tool shows that my site is not PWA ready and has medium performance.
Yep, that's our bad. We accidentally started prefixing all of the URLs with https. We've fixed the bug but haven't shipped it yet because we're in a code freeze.
Agreed! I love the text only sites. I would add text.npr.org to the list also. I am very tired of sites that spill media and JavaScript all over the place and add no real value, just to slow the site down or to make it completely unusable. A typical site of mine breaks all the rules of current UI design. [1]
My question is, why do we need a React/Redux for PWAs all the time? It is slow on cellular data, freezes phones without the latest Chrome, all in all feels laggy compared to a native app.
The thing is you can easily add "Progressiveness" to your website with Service workers and using open source libraries to add instant on-page navigation without repaint.
The one place I don't find an SPA appropriate is e-commerce websites, which are one of the main targets of Google's PWA initiative, an effort to build collective resistance to Amazon, which has one of the highest-converting pages.
Your conversion rates will naturally go up when people can check out quickly with minimal cognitive load. That does not mean a React app is appropriate or feasible, especially for e-commerce businesses whose customers rely on older phones, now that new smartphones are approaching 1000 USD.
Please don't make this place even worse by posting low-information rants. Instead, provide correct information, so we can all learn something. If you don't want to do that or don't have time, it's simple to just not post.
Wow. From the people that have been botching the web for 20 years...
I mean it's only recently that they've started getting their things together. And arguably their flat design isn't always usable. Now let's talk speed....
In case you're wondering what's in it for google with PWAs/App manifests/"Installable apps"[0], it's answered in another recent HN post[1] and comment[2].
I wrote and deleted a pretty vitriolic comment about not trusting Google as stewards of anything, or as people who care about the user (outside of selling data/access to the user millions of times a second), but I couldn't figure out why they would push app manifests (the only interesting part of the actual comment).
I'd hate to see the site that aces this audit.