bitofhope 5 years ago

This is useless to me because I have a website, not a web application. I got a 46/100 on PWA which is astonishing because the site being tested is not a progressive web application at all. Imagine vim getting a 40/100 score in the category of web browsers. Weirdly good score.

The accessibility audit is garbage as well. Apparently I should add an image to the site just so I can put an alt attribute in it. I should include audio so I could have a transcription to go with it. Not sure whether Lighthouse decided that 16px or 20px is less than 12px but apparently one of them is and makes up over 60% of the page.

I understand this is not made for people who serve static HTML files and handmade CSS from ~/sites/ but I'm pretty sure I hate the kinds of sites this is designed for. Should have a custom splash screen? Respond with 200 when offline? What's next, I get points for breaking the back button too? Why is using HTTPS and redirecting HTTP to it a PWA thing?

I'd hate to see the site that aces this audit.

  • KevanM 5 years ago

    This, so much this.

    The skew to web applications rather than websites for tooling has been a very disappointing and myopic view of what a 'website' is.

    It would make sense if I could give it a root URL and then a robot progressively ran tests over a month and then reported back, but one URL is a drop in a vast ocean for most of us.

    • Yaggo 5 years ago

      A properly made "web application" doesn't have to break the web. It can be curled, links work, the back button works, etc. You just don't get any interactivity (besides links) without JavaScript, but it still works as a browsable site (rendered server-side). You can have your cake and eat it too.

      (I'm not saying that every application should behave like that. Often the extra work is not worth it. But a public, content-heavy site should behave like that, whether it's a single-page app or traditionally implemented.)

      • pmlnr 5 years ago

        This is called progressive enhancement and, sadly, nobody seems to be doing it any more.

        • SquareWheel 5 years ago

          Progressive enhancement is different from what the parent comment is suggesting. They are describing how to correctly write SPAs and other webapps.

          The reason progressive enhancement has fallen away is because Javascript support is now ubiquitous. Your browser has it. Your screen reader has it. Even web crawlers have it.

          • PavlovsCat 5 years ago

            > The reason progressive enhancement has fallen away is because Javascript support is now ubiquitous.

            WP describes it as

            > Progressive enhancement is a strategy for web design that emphasizes core webpage content first. This strategy then progressively adds more nuanced and technically rigorous layers of presentation and features on top of the content as the end-user's browser/internet connection allow. The proposed benefits of this strategy are that it allows everyone to access the basic content and functionality of a web page, using any browser or Internet connection, while also providing an enhanced version of the page to those with more advanced browser software or greater bandwidth.

            It's way, way more than JS.

            > They are describing how to correctly write SPAs and other webapps.

            In the context of "I have a website, not a web app", and web apps that "don't break the web", i.e. also behave well as web pages. If you are suggesting anyone is building backwards to that from a web app, instead of progressive enhancement, do you know an example?

            • SquareWheel 5 years ago

              If I understand your question, you're asking about adding functionality to a webapp to make it feel like a webpage rather than enhancing a page to add new features. The best two examples are actually mentioned above.

              1. Using history.pushState to intelligently add to page history for meaningful changes to the page. This ensures pressing "back" in your browser is still reliable (see the sketch after this list).

              2. Using server-side rendering on the first render. This keeps SPAs fast while the payload is being transferred.
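
              A minimal sketch of technique 1 (the render() function and the data-internal attribute are assumptions for illustration, not part of any particular framework):

                  // Intercept internal link clicks, add a real history entry,
                  // and let the app redraw the view in place.
                  document.addEventListener('click', (event) => {
                    const link = event.target.closest('a[data-internal]');
                    if (!link) return;
                    event.preventDefault();
                    history.pushState({}, '', link.href);
                    render(link.href);
                  });
                  // Back/forward buttons fire popstate; re-render the restored URL.
                  window.addEventListener('popstate', () => render(location.href));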

              Regarding Wikipedia's definition, that's a broader definition than I'm used to seeing (speaking as a web developer). I've always heard it in reference to falling back gracefully from JavaScript - usually with a <noscript> tag.

              Supporting mobile, weaker networks, accessibility, etc. falls into much larger categories. Many of these topics require their own discussion and best practices.

              Those topics are of course still important today (if not even more so).

          • acdha 5 years ago

            > The reason progressive enhancement has fallen away is because Javascript support is now ubiquitous. Your browser has it. Your screen reader has it. Even web crawlers have it.

            That's only part of the problem: every day I encounter sites which fail because the developers assumed not just that everyone has JavaScript but that they can load tons of assets reliably and instantaneously. The key part of progressive enhancement is thinking about how to degrade gracefully when everything doesn't work perfectly, which also tends to offer a better experience for anyone who doesn't have a very high-speed near-perfect network connection.

            A couple of weeks back, I was using a family member's Spectrum “high-speed” cable modem service at a whopping 5Mbps with latency measured in the hundreds of milliseconds. It really highlighted who was doing progressive enhancement and who was doing “works on my machine” when you saw one page load 90 seconds faster than the other.

            • 0xffff2 5 years ago

              >A couple of weeks back, I was using a family member's Spectrum “high-speed” cable modem service at a whopping 5Mbps with latency measured in the hundreds of milliseconds.

              And that's still great internet compared to some places. I have a house out in the middle of nowhere that is only served by a single satellite internet provider (surrounded by trees that block the view to other providers' sats). I get 20 Mbps at ~500-1000 ms latency for a few days before I hit the 20 GB cap, then I get 0.5-1 Mbps for the rest of the month. Hacker News is one of the few sites on the web that I can browse relatively painlessly when I'm up here.

              • acdha 5 years ago

                A couple of years back I was at a conference in Rome. Literally in the heart of the city (the windows overlooked the Forum) and that meant that they had only satellite access because nobody had run cables through the historic buildings. I've never been more glad to have spent time optimizing our site for 2.5-3G performance than when we were demoing it during presentations and it seemed slow but almost everything else was unusable.

            • SquareWheel 5 years ago

              I'd put network use in a different category. It is an important issue though.

              Thankfully the tools are getting better for this. The recently supported font-display property is a great one. It allows devs to choose how to handle web font rendering over slower internet connections.

              Now I just wish more devs would start to take advantage of all the great performance tools available. Those best practices are unfortunately rarely taught.

              • acdha 5 years ago

                > I'd put network use in a different category. It is an important issue though.

                My rationale for considering it included: as the concept was developed, I took the spirit of progressive enhancement to be doing the best with what your users have, rather than only catering to people with the same setup you have.

                > Now I just wish more devs would start to take advantage of all the great performance tools available. Those best practices are unfortunately rarely taught.

                Agreed. I think one of the challenges has been showing business value from performance — once you're putting things into a cost/benefit comparison, it's a lot easier to get people to routinely consider the performance impact of their decisions.

              • collinmanderson 5 years ago

                For font-display, do you prefer swap or fallback?

                • SquareWheel 5 years ago

                  It might depend on main body text versus title text. It's jarring when body text changes so I'd prefer fallback in that case. For a title which might have more branding concerns, I'd prefer swap.
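
                  In @font-face terms, that preference looks something like this (font names and file paths are made up):

                      /* Title font: show the fallback immediately, swap in
                         the web font whenever it arrives. */
                      @font-face {
                        font-family: "BrandTitle";
                        src: url("/fonts/brand-title.woff2") format("woff2");
                        font-display: swap;
                      }
                      /* Body font: brief block, short swap window; if the font
                         misses the window, the fallback stays, avoiding a
                         jarring mid-read reflow. */
                      @font-face {
                        font-family: "BodyText";
                        src: url("/fonts/body-text.woff2") format("woff2");
                        font-display: fallback;
                      }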

    • kaycebasques 5 years ago

      > It would make sense if I could give it a root URL and then a robot progressively ran tests over a month and then reported back, but one URL is a drop in a vast ocean for most of us.

      I think this is the end goal. Building the infrastructure to audit a single page is the first step towards that bigger outcome.

      Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.

  • kaycebasques 5 years ago

    If you run Lighthouse (the tool that powers web.dev's auditing feature) from a CLI or as a Node module, you can tell it to only run the audits that are relevant to your needs.

    https://github.com/GoogleChrome/lighthouse/blob/master/docs/...

    I get the general frustration that non-technical teammates look at these reports and say, "we're doing terrible, you need to fix this" when in reality you know that the audits aren't relevant to your business. But it's tough to create an auditing tool for the web at large. There are a lot of businesses that would benefit from PWA features. The general idea was to raise awareness of how PWA features can often improve the UX of many sites. Not all, but many.

    My takeaway from this discussion is that we need to improve our messaging around the fact that these audits aren't commandments. Some of them may not be relevant to your top priorities. Maybe we could improve the report UI in DevTools and web.dev so that you can flag individual audits as irrelevant. On subsequent runs, those audits would be omitted from your reports.

    Or maybe we can somehow get more clever about how to present certain audits. E.g. based on Chrome User Experience Report data we identify that service worker usage in your industry is low, and we flag the service worker audit as potentially irrelevant to your needs. That would help solve the problem of non-technical people seeing a low score and assuming that it's a fault with your site, when in reality it's just an irrelevant audit.

    Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.

    • billyhoffman 5 years ago

      +1M to the idea of leveraging CUER data to filter recommendations. That would be awesome.

  • theandrewbailey 5 years ago

    I hate how Google's tools keep pushing deferred CSS. It kind of breaks how CSS is supposed to work. They want you to manually pick out the CSS that is relevant to the top of the page, and put it directly into your HTML. How on earth is that maintainable or scalable? Or secure?[0] You easily run the risk of sending redundant bytes if the same styles are still in your external CSS. I tried that little script they suggested, and not only got FOUC'ed up the ass, the page took longer to load. (bbbbbut it's asynchronous, that's what makes it SO FAST!) Nope, that didn't last long. Not going to do it, Goog.

    [0] I have a strict Content Security Policy that forbids inline JS and CSS, and am considering using a require-sri-for declaration. https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
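
    (For readers who haven't seen it: the pattern being criticized looks roughly like the sketch below, with critical rules inlined and the rest deferred. This is a generic version of the commonly recommended preload/onload approach, not necessarily the exact script Google suggests, and the inline style and onload attribute are exactly what a strict CSP forbids.)

        <!-- Critical above-the-fold rules, inlined in the head: -->
        <style>/* header, hero, first-viewport styles */</style>
        <!-- Full stylesheet deferred; applied once it finishes loading: -->
        <link rel="preload" href="/css/main.css" as="style"
              onload="this.onload=null;this.rel='stylesheet'">
        <noscript><link rel="stylesheet" href="/css/main.css"></noscript>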

    • kaycebasques 5 years ago

      I agree that splitting up your CSS to only send the critical stuff first is tough to scale and it's tough to find a reliable solution.

      > It kind of breaks how CSS is supposed to work.

      Can you elaborate on this?

      > You easily run the risk of sending redundant bytes if the same styles are still in your external CSS.

      I think we're up against 2 less-than-optimal situations. Suppose you have 50KB of CSS.

      * Ship it the traditional way. User waits on all 50KB before first paint.

      * Ship it the code splitting way. User gets 10KB upfront, and leaves before the rest loads. But if they interact with the site extensively, then they trigger the redundant bytes that you're mentioning, so that the total download size comes out to be 75KB.

      Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.

      • theandrewbailey 5 years ago

        Thank you for your reply.

        It breaks CSS, because having styles on the page couples presentation with content. The selling point of CSS is to change a style in one place, and have it affect multiple pages. If you break it up, you end up having to maintain multiple versions of your CSS. To do this in the name of performance strikes me as one of the very last things to do, given its unfavorable maintenance cost.

        > I think we're up against 2 less-than-optimal situations. Suppose you have 50KB of CSS.

        > Ship it the traditional way. User waits on all 50KB before first paint.

        Best practice for CSS is to link it in the first kilobyte or so of HTML[2][3]. Browsers have optimized for it: it makes the CSS request happen immediately, before the browser parses the rest of the HTML. Unless the user has a slow connection (<10 Mbps), bad round trip time (>200 ms), or the server is slow to serve a 50 KB static file (the average CSS size[0]), that CSS will load within half a second, with first paint soon after. If you need to cut that down, you should consider a CDN before deferred CSS.

        > Ship it the code splitting way. User gets 10KB upfront, and leaves before the rest loads. But if they interact with the site extensively, then they trigger the redundant bytes that you're mentioning, so that the total download size comes out to be 75KB.

        If CSS was split, and the user leaves[1] before the CSS completely loads, CSS isn't the source of slow page loads.

        Google's tools need to check whether styles load within a second or two. If it's any more than 3 or 4 seconds, deferred CSS starts to make sense. If styles (or the entire page) load in less than that, don't bother.

        [0] https://httparchive.org/reports/page-weight

        [1] https://www.nngroup.com/articles/how-long-do-users-stay-on-w...

        [2] https://developer.mozilla.org/en-US/docs/Learn/HTML/Introduc...

        [3] https://www.w3.org/Style/Examples/011/firstcss#external

  • robdodson 5 years ago

    I think my comment may get buried but I'll try anyway :)

    I'm one of the engineers working on web.dev and recently we had some issues with the way the report was being generated (detail here: https://medium.com/dev-channel/web-dev-status-update-14th-no...)

    Specifically, there were audits flagged as "not applicable" and the alpha version of Lighthouse was instead flagging them as failures. That's why it looked like it was telling you to add audio or images—it was actually saying those audits are not applicable because your site doesn't use audio or images. I think that bug has been fixed in Lighthouse but feel free to reply to this comment if you're still seeing it.

    We've also temporarily turned off the PWA audits—they were having some bugs of their own based on the infrastructure they were running on. Based on the feedback in this thread we'll look into making them configurable so folks can choose if they want to run them.

    We'll also be opening up the repo shortly so folks can file bugs there directly.

  • TomK32 5 years ago

    It even recommends the brand-new WebP format over PNG.

    The only thing I changed after this audit was to use HTTP/2 on my server.

    • richbradshaw 5 years ago

      I'm assuming you are being sarcastic about WebP being brand new - the format's been around for 8 years now.

  • RandomGuyDTB 5 years ago

    My website scored a 46 as well. It's just a couple HTML pages with a single stylesheet but Google really wants me to configure it so you can read my internet webpage outside of the internet.

    I think I'm gonna add an explanation for what CTRL+S does instead.

    • semi-extrinsic 5 years ago

      I tested one of my websites which is a simple Flask app, and where the front page is just text, images and links.

      My score was 500: Lighthouse Internal Server Error.

  • arendtio 5 years ago

    You might want to brush up your knowledge on what a PWA actually is [1]. For example, that list contains 'Site works cross-browser'. I hope your website works cross-browser too ;-)

    The only thing that sets apart a normal modern website (HTTPS, responsive, cross-browser, each page has a URL) from a PWA is the service worker (and a few meta tags). All the other aspects are more or less soft or minor aspects like "Page transitions don't feel like they block on the network".

    This might sound pretty preachy, but in fact, I just want to give a better perspective on what PWAs are, so that they are not getting confused with the average single page JS bloat. Instead, PWA is more like best-practice (e.g. to avoid broken back buttons) paired with some mandatory tools.
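
    To make the "few meta-tags" concrete, here's a minimal sketch (names, colors, and icon paths are placeholders). In the page head:

        <link rel="manifest" href="/manifest.webmanifest">
        <meta name="theme-color" content="#336699">

    And the manifest file itself:

        {
          "name": "Example Site",
          "short_name": "Example",
          "start_url": "/",
          "display": "standalone",
          "background_color": "#ffffff",
          "theme_color": "#336699",
          "icons": [{ "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }]
        }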

    [1]: https://developers.google.com/web/progressive-web-apps/check...

    • bitofhope 5 years ago

      Hell yea my site works cross-browser. I've tested it with Firefox, Chromium, Midori, w3m, lynx, surf, and edbrowse and they all look fine.

      I freely admit I have no idea what a PWA actually is, but that page is not helping me understand. All I find are vague descriptions about how they're reliable, fast and engaging, but nothing even approaching a definition. For all I know, a 16-ounce claw hammer is a PWA. It's certainly very reliable, fast and supremely engaging.

      If PWA is all about the service worker, why are things like HTTPS, splash pages, 200 offline, or address bar matching brand colors(?!) included in the category? Those things don't make my site any faster or more responsive, and I doubt a service worker would either. Why on earth would I want my site to return 200 offline anyway? I don't wanna lie to a user.

      Ok, lemme take a deep breath. I'm sure a PWA does not equal a single-page broken piece of JS monstrosity. I just don't think it's a good idea to give me bright red warning triangles about not having a service worker unless Google believe every site should have a service worker. In that case I disagree with them because I don't think I need a 200 OK if my network interface has caught fire in the middle of browsing.

      • gigaftp 5 years ago

        PWA = Progressive Web Application

        It's basically a set of techniques and technologies developed in an attempt to produce a user experience similar to that of a native application when using a web application.

        In the case of not having a service worker, well, a PWA without service workers doesn't make much sense. This is because service workers are used to cache content to create that 'native' feeling of your application not 404ing when you go through a tunnel.
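
        A minimal sketch of that caching (file names are placeholders, and a real PWA would want a more deliberate cache strategy). In sw.js:

            const CACHE = 'site-v1';
            // Precache a few files at install time:
            self.addEventListener('install', (event) => {
              event.waitUntil(
                caches.open(CACHE).then((cache) =>
                  cache.addAll(['/', '/styles.css', '/offline.html'])
                )
              );
            });
            // Serve from cache when possible, hit the network otherwise:
            self.addEventListener('fetch', (event) => {
              event.respondWith(
                caches.match(event.request).then((hit) => hit || fetch(event.request))
              );
            });

        And in the page:

            if ('serviceWorker' in navigator) {
              navigator.serviceWorker.register('/sw.js');
            }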

      • Calib3r 5 years ago

        Explorer & Edge make up more than 30% of internet traffic, yet you ignore testing them?

        • bitofhope 5 years ago

          I don't have a computer running an operating system that supports either of those browsers so I can't test with them. However, I assume they can render basic HTML enough to make my site readable and I don't even think there's anything in the CSS to change that.

yandrypozo 5 years ago

I just ran their audit on https://mail.google.com. You guys (Googlers) need to speed up your websites first. Gmail is terribly slow lately, just saying...

  • partiallypro 5 years ago

    Google's pagespeed service is hot garbage. You can do things that make the page load significantly slower and still get a perfect score, while a much faster page can end up with a lower pagespeed score.

    I also find it strange how I can get an A on every other page speed service, including GTmetrix, but get an F on Google's tests. Totally useless.

    • rocky1138 5 years ago

      Google Pagespeed once recommended that I enable gzip compression on an embedded Google map.

      https://johnrockefeller.net/google-pagespeed-_/

      • arkitaip 5 years ago

        Most performance profilers will penalize you for embedding Google Analytics and Google Fonts because they have poor caching settings. What's so important about those resources that they can't be cached for longer periods?

        • hrrsn 5 years ago

          I’ve always loved how Google’s own performance profiler marks you down for Analytics/Doubleclick caching.

        • jefftk 5 years ago

          There are two main reasons why short caching is helpful on tag scripts:

          * Faster iteration time for developers: the more often you release updates the faster you can move. If your script has a one week cache lifetime then there's not much point in daily releases and any experiments you run will be really skewed.

          * Quicker response to problems: if we push out a bad update that gets past our testing, the TTL we serve it with determines how long that will stick around in browser caches.
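
          In Cache-Control terms, the trade-off looks like this (a hypothetical Node sketch; the paths and lifetimes are illustrative, not Google's actual values):

              const http = require('http');
              http.createServer((req, res) => {
                if (req.url === '/tag.js') {
                  // Unversioned tag script: short TTL so fixes and
                  // experiments propagate quickly.
                  res.setHeader('Cache-Control', 'public, max-age=1800');
                } else if (req.url.startsWith('/static/v42/')) {
                  // Versioned asset: cache "forever"; bust by changing the URL.
                  res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
                }
                res.end('// asset body');
              }).listen(8080);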

          (Disclosure: I work at Google, on ads JS, and I previously worked on mod_pagespeed. Not speaking for Google, just myself.)

        • tehlike 5 years ago

          Easier bug fixes, presumably. Meaning that if there is a bad release, the URL is only cached briefly. If the URLs were versioned and changeable by Google, that would have been done.

          https://www.google-analytics.com/analytics.js

          Google cannot cache-bust this in case of a bug. They can mitigate it by having a loader with built-in functionality that always checks "is there a newer version?", but that has its limitations.

        • gondo 5 years ago

          tracking

          • jopsen 5 years ago

            https://developers.google.com/fonts/faq#what_does_using_the_...

            I'm no lawyer, but I read that as it is not used for tracking users.

            (full disclosure: I work at Google, on something unrelated)

            • PavlovsCat 5 years ago

              I see no explanation given why an asset that never changes would need to be requested once a day, and they clearly state they do record it.

              > We only [sic] see 1 CSS request per font family, per day, per browser. Google Fonts logs records of the CSS and the font file requests, and access to this data is kept secure.

              I read that as what's written there. It's for tracking. Tracking popularity is still tracking.

              • jopsen 5 years ago

                > Tracking popularity is still tracking.

                Tracking usually refers to tracking users, which as I read the FAQ isn't what it's for.

                • PavlovsCat 5 years ago

                  They take and keep the tracking data. What they say they use it for doesn't change that. If they simply incremented a counter the bit about "keeping access to the data secure" would make no sense.

            • philtar 5 years ago

              Do you have a guess why those assets are not cached more aggressively?

              • jopsen 5 years ago

                As I read the FAQ, the actual font files are aggressively cached. It's only the CSS files that links to the font files that aren't cached forever.

                Probably to allow for updates and estimate popularity, with low overhead assuming CSS files are small.

              • fantyoon 5 years ago

                This is a complete guess, but I suppose one request every day would be enough to get at least some idea of how popular any given font is. The page says it's to keep them updated, but I highly doubt that would be the reason.

          • jefftk 5 years ago

            While I don't know anything about the fonts stuff, successful execution of the ad tag leads to an ad request, and this happens on every page view that has the script, not just the ones where the file has fallen out of cache. So there's no additional useful information Google would get from a low cache lifetime for the script.

            The script request goes to a cookieless domain for performance, unlike the ad request, so there's not even much useful information in that request.

            (Disclosure: I work at Google, on ads JS, and I previously worked on mod_pagespeed. Not speaking for Google, just myself.)

    • m3nu 5 years ago

      Doesn't work against my S3-hosted static site?

      "Error: 500 from Lighthouse API: Internal Server Error"

      • foxylad 5 years ago

        Same for my Cloudflare-fronted Appengine site.

        • vinniejames 5 years ago

          Same on multiple domains, I think their site is broken

          • zwaps 5 years ago

            Same here, custom hosted website.

    • jansan 5 years ago

      On our website Google's pagespeed mainly complained about the Google Analytics snippets.

    • rawoke083600 5 years ago

      I agree it's not perfect! But my view is that if they (Google) put in the effort to build PageSpeed, then Lighthouse, then integrate Lighthouse into Chrome... and now into a separate website... some of the metrics must be important for your rankings/site - even if only indirectly (slow site etc.)

    • billyhoffman 5 years ago

      Isn't GTmetrix just a combined PageSpeed Insights and YSlow audit?

      • josefresco 5 years ago

        I never pay attention to the "grades", only the data, especially the waterfall graph/data. The grades are ridiculous, and mostly meant to cause alarm. You can score 100/100 on almost every test, but get a "D" overall because you scored poorly on 1 or 2 tests. It's a great tool for techs, but in the hands of a client almost every result/score appears terrible.

      • partiallypro 5 years ago

        It uses an older version of Pagespeed, yes.

    • wild_preference 5 years ago

      Meh, these tests are just good ways to find obvious problems with your site.

      Fixating on the overall score seems like an indication of a different problem.

      For example, you can't change the caching behavior of google-analytics.js, and it's usually intractable to split apart your CSS to optimize for above-the-fold content. Oh well, you didn't get a perfect score. Stay practical.

      • partiallypro 5 years ago

        Try selling that to a client that keeps using the test against a site that is A rated everywhere else because Google's brand has fooled them into thinking a perfect score means better SEO.

        • wild_preference 5 years ago

          Right, that would be an example of a "different problem."

          I'd be hard pressed to let the stupid things clients/managers can do dictate how this sort of tool should work. For example, it'd be silly to demand "score inflation" just because your client is unreasonable. Or rather, if a client is making your life hard, I don't think it's the world that needs to change.

          What's your solution if your client measures your success by punching their website into https://www.worthofweb.com/calculator/? And doesn't take you seriously enough to listen to you in general?

        • savethefuture 5 years ago

          This is a problem we run into, clients will run their site through a speed tester then give us a list of things to fix...without having a clue of what the ratings and results even mean.

        • arkitaip 5 years ago

          True. This is like when movie studios start obsessing about the Metacritic ranking on their movies instead of focusing on the quality of the movies.

    • amygdyl 5 years ago

      I said this in reply to QUIC apparently ignoring hardware TLS offload...

      Is this for real?

      Even Quick Assist?

      I am watching the internet be developed by the giants, for the giants. In my memory, giants didn't act like giants. "At scale" wasn't a preprocessor. Browser developer tools were written to help you participate in the conversation, not play unending catch-up in translation, the meaning lost in acoustics of auditoria that, for the life of me, I would think giants could make intelligible.

  • crunchlibrarian 5 years ago

    When Gmail first came out I was amazed at how fast it was, so much faster than the native Outlook client I used at the time. Wow! How is that even possible in JavaScript?

    Now Gmail is slower than most any other website I use regularly. Can't even refresh the inbox in less than three seconds.

    I've noticed the same slower by a bit every year pattern with google maps too. I'm afraid to click anything when I have a map open because I know it might trigger a repaint which will cause me to sigh and switch to another tab while it chugs for a few seconds to accomplish this.

    • komali2 5 years ago

      Not web dev related, but related to Google not focusing on performance: the latest Android release of Google Maps is actually non-functional on my Galaxy S7 because of how slow it is. It completely overloads RAM and crashes. Every time. Hilarious.

      • 05 5 years ago

        You should probably try their third world app version: Google Maps Go Edition. (See e.g. https://www.androidinfotech.com/2018/02/google-maps-go-editi...)

        • jniedrauer 5 years ago

          I'm kind of disappointed that this isn't Go the programming language. I thought "that'd be really cool, porting android apps from Java to Go." But no, turns out there are only so many creative words you can come up with when "google" is your starting point.

      • imrehg 5 years ago

        Gosh, thanks for this note! I thought it was just me and my 2.5-year-old HTC that couldn't keep up. It has been totally unusable, as you say, for a few weeks now (crashes, slowness, markers/directions not showing at all while the app thinks they are)...

    • ocdtrekkie 5 years ago

      The difference in performance between Fastmail and the new Gmail interface right now is freaking stunning. I still load Gmail every few days to see if I have email there, and it's like waiting for Windows 98 to boot.

      • mortehu 5 years ago

        The main Gmail desktop website is sort of like a desktop application. Most active users keep it permanently open. Try https://mail.google.com/mail/u/0/x/ , https://mail.google.com/mail/u/0/h/ or https://mail.google.com/mail/u/0/mu/ (on mobile these will redirect unless you enable "request desktop site")

        • spurgu 5 years ago

          This is brilliant. Using the last one. And now https://m.facebook.com/messages/ for FB Messenger as well. Thank you for this, kind sir!

          edit: The last one, because it supports keyboard shortcuts.

      • joobus 5 years ago

        Setup forwarding in Gmail to send to a fastmail address. Problem solved.

        • ocdtrekkie 5 years ago

          This was actually an intentional choice not to do that. Setting up forwarding means you may have email continually passing through Google's servers without realizing it. By having a hard cut between the two, I know for a fact that any email received at FastMail is not going through my Gmail account, and any email received at Gmail is, and needs to have the contact information updated.

          I also have a vacation auto-responder permanently enabled on Gmail to notify any individuals who hit my old address where my new address is.

  • onlyrealcuzzo 5 years ago

    Recent AdWords user here. It is quite possibly the worst application I've used in years. I get that they offer a LOT of products and that the nature of the business is complicated. But, man, was everything about that process awful. And the UI is unbearably slow.

    • hrrsn 5 years ago

      I agree, it’s painful, but the new AdWords UI is still far ahead of the old one.

      • amygdyl 5 years ago

        They don't want users. They need the layer of apologists to obscure the grand systematic internalisation game. SI is subject to financial services law in the UK, at least...

  • logicallee 5 years ago

    I think it is in poor taste for Google to publish https://web.dev/ during the rollout of the new desktop Gmail.

    >Google's web platform team has spent over a decade learning about user needs. Now we want to make it as easy as possible for you to master the defining standards of web development today.

    >Fast load times

    >Guarantee your site loads quickly to avoid user drop off.

    "Loading Gmail"

    >Network resilience

    >See consistent, reliable performance regardless of network quality.

    "Loading..." (in yellow at the top, after clicking a search result. Indefinitely.)

    (some other non-quoted parts are OK)

    >Accessible to all

    >[Build] a site that works for all of your users.

    • davidjnelson 5 years ago

      > “Loading..." (in yellow at the top, after clicking a search result. Indefinitely.)

      I dig google and its products but oh wow is this bug frustrating. Hope it’s fixed soon.

    • gowld 5 years ago

      A Gmail account has 10+ GB of data. Users for the most part don't want 10 GB in browser local storage. Gmail's speed problems are in the backend, not the web UI.

      • joshfraser 5 years ago

        That's not true. Pop open your console and watch the network tab for yourself. Slow websites are almost always due to frontend issues. Besides, Google has the ability to index the entire internet and return results in milliseconds. I'm guessing it's not my 10GB of email that's tripping them up.

      • logicallee 5 years ago

        What you've written is totally false. It worked fine in the old UI, and it still works fine in simple HTML mode.

        You can see this yourself here: http://mail.google.com/mail/h/ (direct link to html view. you have to already be signed in for this link to work.)

        Side note: I'm sure you would have mentioned it, but you don't happen to work at Google on the new Gmail front-end, do you? (Since I could see someone who is, trying to deflect blame for their current bugs by "blaming the back-end".) Just want to make sure...

  • sumoboy 5 years ago

    You think Gmail is slow? Try the new AdWords interface.

    • e40 5 years ago

      Or the g suite admin console. Cripes, it's insanely frustrating to use.

      • hanoz 5 years ago

        Google seems to be increasingly giving the impression of collapsing under its own weight.

        • e40 5 years ago

          That's kinda what it feels like to me. Does g suite already have so much technical debt that they can't make it better? It's been this way since I started using it, more than a year ago.

      • r1ch 5 years ago

        Seriously! Every click feels like a chore, it's easily 3+ seconds before the page is interactive again. I've started using the mobile app for everyday tasks since it's so much more responsive, I just wish it supported better password resets.

    • alexbecker 5 years ago

      Oh man, I used to be on the team that built that. It's just enormous. If you're on Chrome, the optimizations are sometimes enough to make it smooth. If you're on any other browser, god help you. I used to see 30s(!) reflows in Firefox.

    • Semaphor 5 years ago

      I assume it's the same as the DFP/AdManager interface? Because every click means I can make a new pot of coffee.

  • pmarin 5 years ago

    >Gmail is terribly slow lately, just saying...

    Recently I started using the HTML-only version of Gmail because for my use case it's faster than the web app.

  • Touche 5 years ago

    This is a bad take. By releasing tools like this it helps all websites, including Gmail, have the data on how to speed up their website.

    • svnpenn 5 years ago

      Not really. Would you take webdev advice from a company where their own apps are memory hog/leaking garbage (Gmail)? At some point you need to call a square a square.

      • Touche 5 years ago

        If you disagree with the advice on web.dev then that's fine. Call that out. Saying that the advice is wrong because another team in another building, maybe in another city or another country, built an app you don't like, is just wrong. This sort of ad hominem is beneath HN.

        • kodablah 5 years ago

          Yet it seems excusing corporate hypocrisy and inconsistency based on size and employee location is par for the course.

          • Touche 5 years ago

            Sorry, I'm not letting you get away with an illogical argument here. There's nothing hypocritical about the authors of web.dev making an auditing tool just because some other people who work for the same company work on a site that (may) do badly.

            In fact, it wouldn't be hypocritical even if the authors of web.dev themselves worked on a site that did badly.

            It is, after all, a tool. It's not a declaration of superior intellect. It's not an article of condescension. It's a tool.

            • kodablah 5 years ago

              The hypocritical statement is on the part of the company, not the specific authors. The efforts represent the company, as do their efforts in standards organizations, browser implementations, and email user interfaces. It is completely logical to question their inconsistencies especially as one of the primary drivers of standards. If the company cannot present a consistent front, we can ask ourselves whether their attempts to help others do so are accurate much less well intentioned. This should be obviously clear and not about individuals or their feelings.

              • Touche 5 years ago

                Releasing a tool cannot be hypocritical. The existence of a tool to audit performance, accessibility, etc. is not a declaration of Google's moral or virtuous superiority. It's a tool.

              • collinmanderson 5 years ago

              Maybe the web.dev team is trying to convince other teams at Google to improve their products.

      • stef25 5 years ago

        Their advice isn't perfect but that doesn't mean I won't pay attention to it. Memory use and leaking isn't really covered in their page speed advice anyway, which is mostly about server side caching, image optimisation, css / js delivery.

        If there's any other decent page speed assistance I'd love to see it.

      • fold_left 5 years ago

        > Not really. Would you take webdev advice from a company where their own apps are memory hog/leaking garbage (Gmail)? At some point you need to call a square a square.

        Wait, was web.dev built by the Gmail team? I always assumed Google was bigger and more complex than that?

        • dmitriid 5 years ago

          Yup. It’s a company that routinely breaks the web, pursues its own standards at the detroment of others and and will prioritize things like AMP over any of its own tools.

  • IloveHN84 5 years ago

    Google websites and services are terribly slow

billyhoffman 5 years ago

This is powered by Google Lighthouse, with the benefit of it being done via a web UI instead of a Dev Tools Audit. Which is both good and bad.

Good because Lighthouse has some reasonable best practices to follow, and a few good performance timings, so lowering the barriers of entry is nice.

Bad because many of Lighthouse's best practices aren't always applicable (our major media customers constantly say "stop telling me I need a #$%ing Service Worker!"). And while Speed Index and Start Render are great, Time to Interactive, First CPU Idle, and estimated input latency are still fairly fluid/poorly defined, and of different value.

This all also overlooks the value that something like the Browser's User Timings provides (Stop trying to figure out what's a "contentful" or "meaningful" paint, and let me just use performance.mark to tell you "my hero image finished and the CTA click handler registered at X"), which Lighthouse doesn't surface up.
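
For the unfamiliar, the User Timing calls in question look like this (the element IDs and handler are hypothetical):

    // Mark the moments that actually matter to this page:
    const hero = document.querySelector('#hero-image');
    hero.addEventListener('load', () => performance.mark('hero-image-loaded'));

    function handleCtaClick() { /* ... */ }
    document.querySelector('#cta').addEventListener('click', handleCtaClick);
    performance.mark('cta-handler-registered');

    // A named span from navigation start to the mark; monitoring tools
    // that read the User Timing API can report these names directly.
    performance.measure('time-to-cta', undefined, 'cta-handler-registered');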

What is interesting is the monitoring side. WebPageTest, Lighthouse, PageSpeed Insights, YSlow, etc. are just point-in-time assessments, which are largely commoditized. Tracking this stuff over time and extracting meaningful data is valuable, so that's pretty cool.

Disclaimer: I work in the web performance space. People replace homegrown Lighthouse, puppeteer, or WPT instances with our commercial software, so I'm biased. However I like a lot of the raising awareness and trail blazing about what Performance/UX means that Google is doing.

  • vntok 5 years ago

    Why don't you want to implement a basic service worker? A couple of cache strategies among these ones seem like a net positive? https://developers.google.com/web/fundamentals/instant-and-o...

    (ex: "Cache, falling back to network" for CDN'd libraries, "Cache then network" for content like news articles)

    • pmlnr 5 years ago

      Because for a normal website, it should be taken care of by the browser, not by my custom service worker.

      This is XMLHttpRequest all over again.

    • dbbk 5 years ago

      Standard caching headers should be enough?

  • kaycebasques 5 years ago

    > Bad because many of Lighthouses best practices aren't always applicable

    It's tough to create an auditing tool that caters to the web at large. See my other comment in this thread: https://news.ycombinator.com/item?id=18442686

    > This all also overlooks the value that something like the Browser's User Timings provides... which Lighthouse doesn't surface up.

    Lighthouse does surface up this info in the "User Timing Marks and Measures" audit: https://developers.google.com/web/tools/lighthouse/audits/us...

    But I'm getting the impression that you want Lighthouse to surface up this information in a different way. Please feel free to elaborate.

    Disclaimer: I write the docs for Lighthouse. I'm speaking from my general knowledge of the project but haven't vetted these comments with my team. So consider all comments my own.

  • arkitaip 5 years ago

    Oh god that Service Worker bs. It's like how Yahoo's performance profiler penalized you for not using a CDN on a single page site.

zwaps 5 years ago

PLEASE PLEASE PLEASE WEBDEVS, STOP LOADING YOUR WEBSITE WITH CRAP TO MAKE THEM BETTER

I fed a large news site that is not terrible to use into this, and it gave it a 14/100 rating. This news site is perfectly fine, has a good design and good typography, and loads without any scripts if you need it to. It loads quite fast.

Among other things, Google recommends:

- Lazy loading: NOOOO, just load my document. I hate this bullcrap. It never works correctly, and then you just get a laggy, slow-scrolling site where you have to wait all the time you use it. It's a news site. Just load the whole thing, it takes a couple of ms but then the site is actually usable! It's an actual document you can scroll through.

- Ask the user to install as an app, add offline usage etc? Why even?

- Dynamically compress everything? YES GOOGLE, when saving 8 KB per picture, but in the end we need to pull 10 MB of JavaScript libraries from sixty different sources all over the web just to display text with one small picture. IT'S ALL WORTH IT

I don't make websites except my personal stuff. I understand you want to present your knowledge and skills. But please, the websites are getting worse and worse. The best way to present almost any information is a classic HTML webpage. It wouldn't need to be like this, but almost any modern web design approach seemingly leads to slow, laggy and partly unusable websites that do end up loading something, but often not the thing I actually want to read.

I am actually trained now to feel a sense of relief if I come across a straight HTML website, just because the user experience is so terrible now thanks to JavaScript.

As a consumer, I will continue to beg for simple websites that stay true to the idea of displaying a scrollable document with data and text.

Please, only use all these animations, loadings, dynamics, off-site frameworks, custom browser controls and single page documents if you are making a) a portfolio b) an actual application that is mostly simple buttons and does not present significant amounts of text or data

thank you

it's bad, guys

  • Zelphyr 5 years ago

    I agree with you about lazy loading. It's just dumb. How is it better to give me a scaffold of a page before the content? The page may look nice but is unusable until the content has loaded. Why not just give me everything when it's ready?

    I've gotten into trouble for saying this before but things like recommending lazy loading is an example of Google imposing their wishes on the web and making it worse in the process.

    • alexbecker 5 years ago

      There are two kinds of lazy loading--that which blocks the content, and that which doesn't. If you have e.g. a bunch of JS libraries that aren't necessary to display the page, only for certain interactions, it makes sense to lazily load those. This is what "lazy loading" meant in my front-end team at Google anyway. (Whether you should even have all of this JS to begin with is another question, however.)
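
      A sketch of the non-blocking kind (the file and element names are hypothetical):

          // Defer a heavy library until the interaction that needs it:
          document.querySelector('#chart-button').addEventListener('click', async () => {
            const { renderChart } = await import('./charting.js'); // fetched on demand
            renderChart(document.querySelector('#chart'));
          });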

    • hyperdimension 5 years ago

      Personally, I specifically can't stand the lazy loading that happens on youtube/facebook. The kind where there is no text, just shimmering placeholders until the actual content has loaded.

      I just don't understand why they think I will think their site loads faster if they don't actually have any content right when I click on it, rather than the server actually delivering the page with some modicum of content...

    • Calib3r 5 years ago

      I personally like lazy loading as far as UI experiences go. Shoot me.

NelsonMinar 5 years ago

Talk about breaking the back button! Do a measurement on a website. Then click one of the Guide links like "Links do not have a discernible name". Page navigates away from the report to some documentation. Click back, report is gone.

I get it, it's a single-page webapp. But if you do that you need to make all the simple navigations open new tabs. There is an "open link in new tab" icon next to each link, but that's really not good enough.

  • robdodson 5 years ago

    Ah yeah, that's something we want to fix. If you're signed in it keeps the report around, but if you're signed out it's stateless. It's definitely on our to-do list to fix.

    • koboll 5 years ago

      Maybe just throw on a target='_blank', no need to do anything fancy
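
      E.g. (URL illustrative; rel="noopener" keeps the opened page from getting a window.opener handle back to the report):

          <a href="https://web.dev/some-guide" target="_blank" rel="noopener">
            Links do not have a discernible name
          </a>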

      • sniuff 5 years ago

        it's stateless

    • softawre 5 years ago

      It's made worse by the fact that you can't easily read the recommendation without clicking on it.

    • dmitriid 5 years ago

      > on our to-do list to fix

      This is sort of representative of web.dev vs the rest of Google.

      Read the rest of the comments, seriously. I would be very discouraged releasing this website when the rest of the company is vehemently opposed to any of the practices listed there.

    • NelsonMinar 5 years ago

      Fair enough! Thanks for the reply.

  • exprA 5 years ago

    Also broken everywhere else on the site for me: you never get back to where you were*, even on a "static" page full of links.

    A horrible excuse for a web site – but perhaps that still makes for a good web app. I couldn't tell and don't know if I'd want to.

    * edit for clarification: this includes your position on the previous page

  • exodust 5 years ago

    Good to see someone else mention this, and also good to see a Google web.dev dev reply.

alib 5 years ago

This site didn't load properly in Firefox on a Samsung Galaxy S8 just now, and the accessibility section is "coming soon". I'm sorry Google, but if you're letting key basics like this slip... Accessibility is not a bolt-on for afters, and Chrome isn't the only web browser. Deep down you know this too. Those who preach are held to higher standards, and you've let yourselves down badly here.

  • magicalist 5 years ago

    > This site didn't load properly in Firefox on a Samsung Galaxy S8 just now

    Check your extensions. It loads fine for me.

    > you've let yourselves down badly here.

    ...because an article on accessibility is "coming soon"?

    • alib 5 years ago

      I can't replicate it in private browsing. I'm not running any mobile extensions and my internet is as good as it comes. First time I loaded it the blue line at the top was there for ages. I was scrolling the site for a good 20 secs marvelling at the irony of a site teaching performance not performing. It felt like something to do with the service worker not working. The cookie notice only appeared after I refreshed. Now it works fine, and loads fine each time. So it was something to do with the initial caching of assets.

      And yes, the accessibility section coming soon sends the wrong message very subtly but powerfully. It's something we all need to be better at, and when you've got the resources of Google there just isn't any excuse. Those two small words on that missing section quietly absolve us all. Because if Google can't do it right, why should we? It just isn't good enough, so yes, they have let themselves and our community down. I know they're strong words, but someone needed to say it.

      • magicalist 5 years ago

        > The cookie notice only appeared after I refreshed

        Maybe it's a geo thing. Are you in Europe? I see no cookie notice (US).

        > I know they're strong words, but someone needed to say it.

        Eh, it needs to be there but don't throw the baby out with the bathwater. It's apparently in "beta" and according to this hn thread robdodson is working on it, so odds are the accessibility section will be handled just fine.

        Regardless

        > Those who preach are held to higher standards

        is a bad take. You can criticize a tutorial site without standing on such a flimsy soapbox.

tinyvm 5 years ago

Google is really pushing hard on PWAs.

My biggest concern is the support outside of Chrome.

While other browsers tend to reach feature parity with Chrome over time, Chrome right now is the only one which supports PWAs as "native" apps.

Windows announced support for PWA natively a while ago [0] but there has been no news since then.

On Apple's side it's radio silence... iOS has some support, but for Mac it seems unlikely to happen...

[0]https://blogs.windows.com/msedgedev/2018/02/06/welcoming-pro...

  • ubersoldat2k7 5 years ago

    The funny/sad thing about PWAs, and how broken they are on iOS right now, is that the first iPhone was supposed to work with web apps. I'm not sure if it was Jobs' vision or if they just didn't have something good to offer native devs. Safari has been a joke for a long time now.

    • mr_toad 5 years ago

      That was before they started printing money from the App Store.

  • pjmlp 5 years ago

    > Windows announced support for PWA natively a while ago [0] but there has been no news since then.

    There has been news since February.

    UWP hosted apps got replaced with PWAs, including access to UWP APIs if signed; documentation and tooling support is now available.

    https://developer.microsoft.com/en-us/windows/pwa

    https://docs.microsoft.com/en-us/microsoft-edge/progressive-...

    This is what made me have another look at PWAs, start experimenting with them, and start to see that for plain CRUD applications, PWAs are probably the way to go.

  • bhauer 5 years ago

    FWIW, the official Twitter desktop app for Windows 10 is a PWA and it works great (well, as well as any official Twitter app has worked in my assessment; not as well as third-party apps from their heyday, but that's a different story).

    I also use the Starbucks PWA (app.starbucks.com) and it mostly works well. Again, it seems to work as well as their native apps.

  • arendtio 5 years ago

    > chrome right now is the only one which supports PWA as "native" app..

    What do you mean by "'native' app" exactly? When I use my PWA with Firefox on Android, I can't see that it is a PWA. It just looks like any other Android app.

    Granted, that is only one more browser, and on the desktop side Firefox still has a lot to do, but at least there is one more player in the race ;-)

crazygringo 5 years ago

Why am I getting:

Error: 403 from Lighthouse API: Forbidden

whenever I try to run the audit on any of my sites hosted on Digital Ocean? They all load normally in my browser...

Anyone else having the same issue?

  • nacs 5 years ago

    I have a feeling they've hit some kind of API limit for their own Lighthouse service ironically.

  • guessmyname 5 years ago

    Even attempting to scan “www.google.com” returns a 403:

        curl -XPOST \
        -H "Accept: */*" \
        -H "Connection: keep-alive" \
        -H "Accept-Language: en-us" \
        -H "Origin: https://web.dev" \
        -H "Content-Type: application/json" \
        -H "Referer: https://web.dev/measure" \
        -H "Host: lighthouse-dot-webdotdevsite.appspot.com" \
        -H "User-Agent: Mozilla/5.0 (KHTML, like Gecko) Safari/537.36" \
        --data-binary '{"url":"https://www.google.com","replace":true,"save":false}' \
        "https://lighthouse-dot-webdotdevsite.appspot.com/lh/newaudit"
        
        {"errors":"Error: 403 from Lighthouse API: Forbidden"}
  • judge2020 5 years ago

    Probably an issue with the actual scanning service. web.dev itself gets the same error.

  • nodesocket 5 years ago

    Ironically, my static site hosted on Google Cloud Storage is returning 403 Forbidden.

  • ganeshkrishnan 5 years ago

    I got a "no such site" first and then this 403 error.

  • prophesi 5 years ago

    Yep, all of my DO hosted sites share the same fate.

iambateman 5 years ago

This looks kinda cool.

I have a hard time believing this is "for web developers" when most web developers used `.dev` for local development, which was broken because Google decided they wanted to own `.dev` for themselves.

Bad taste in my mouth.

  • crooked-v 5 years ago

    > when most web developers used `.dev` for local development

    Most web developers were wrong. The reserved TLD for local development has been .localhost since 1999.

    • CydeWeys 5 years ago

      Or .test. See RFC 2606: https://tools.ietf.org/html/rfc2606

      • shkkmo 5 years ago

        I think .test is the TLD most likely to work as expected for all use cases, as .localhost might be expected to only ever point to the loopback IP address. If anyone does something more complicated (like using a Vagrant private network IP address), you might find some tools break unexpectedly.

        Edit: the newer version of the docs has more details (in section 6) on the differences to be expected between the test, localhost, example and invalid domains. See: https://tools.ietf.org/html/rfc6761

  • gruez 5 years ago

    >Bad taste in my mouth.

    how's this any different than devs using 123.123.123.123 for test purposes (instead of private network addresses), then getting upset when they found out it has been allocated to China Unicom?

  • NelsonMinar 5 years ago

    It might be time to let this one go.

    • omeid2 5 years ago

      Slowly boiling the whole web.

    • Sir_Substance 5 years ago

      Some of us are still getting tickets once a month or so over this. Kinda hard to let it go.

stupidbird 5 years ago

I can't wait for one of the executives in my company to run into this, understand nothing but the numbers, and then complain about our numbers not being higher despite the fact that half of these metrics aren't applicable to our web app.

I already have a boilerplate response to "why isn't our google pagespeed score higher" that I copy and paste.

I know Google's happy about performance nagging, but I wish they were better at knowing what is/isn't applicable.

  • kaycebasques 5 years ago

    I agree that this is a problem. See my other comment in this thread: https://news.ycombinator.com/item?id=18442686

    • stupidbird 5 years ago

      Thanks for reading through the comments.

      I do agree that some sort of "not all these metrics may be applicable to your architecture, please consult an engineer" would go a long way.

      I left a toxic work environment at one point where I was literally yelled at because I was claiming to know our company's situation better than Google's pagespeed tools.

      Needless to say, that wasn't the only problem with that job... but it's frustrating.

amaccuish 5 years ago

ICANN were foolish to go down the new TLD route. Why should Google have ownership over web.dev, and not Mozilla (MDN) or the W3C?

  • acheron 5 years ago

    Foolish how? Sure, it kind of ruins DNS, but ICANN made a lot of money, and isn’t that the important thing?

  • Jaruzel 5 years ago

    And let's not talk about .Amazon being given to Amazon (the company) and not Amazon (the rainforest).

  • voycey 5 years ago

    Totally agree, Money talks!

paulddraper 5 years ago
  • robdodson 5 years ago

    Heyo! One of the web.dev devs here. The new web.dev site is an experiment from our team to see if we can improve the interactivity of our docs. We link to developers.google.com/web in a number of places. Over time, if folks seem to enjoy the web.dev model, we may explore moving more of our docs over there. But for now it's just a fun experiment.

    • chimen 5 years ago

      I like that material design. Can you share the tech stack it was built with?

xg15 5 years ago

> Easily discoverable

> Ensure users can find your site easily through search.

> Installable

> Be on users’ home screens with no need for an app store.

It's nice that they educate devs on the rules they themselves came up with. I just wouldn't call it "how to build a better web" then.

arendtio 5 years ago

> Accessible to all - Coming Soon

Probably the frontend joke of the last two decades... And a bad one too.

fhdhehfhzhe 5 years ago

Looks a lot like https://webhint.io. Why not collaborate with the people who’ve made this first, Google?

  • feb 5 years ago

    The similarities with webhint are striking. For example, they have mostly the same categories:

    * Performance (exactly the same as webhint)

    * PWA (exactly the same name as webhint)

    * Accessibility (exactly the same as webhint)

    * Best Practices

    * SEO

    Webhint also has categories Interoperability and Security. Not sure what web.dev uses for those.

  • magicalist 5 years ago

    As others have noted, looks like this is just running Lighthouse? That's been around for a long time.

    • softawre 5 years ago

      From webhint's about/faq:

      > webhint’s development started inside the Microsoft Edge team

      • magicalist 5 years ago

        > webhint’s development started inside the Microsoft Edge team

        ...was that before Lighthouse started? And was it open source and seemingly viable at the time?

scrame 5 years ago

I tried my rather large work website and it was unable to fetch it after a couple tries.

Also, this is the same company that brought us AMP, didn't use closing tags on their landing page to "save bandwidth", and tried to push a Java -> JavaScript nightmare of a GUI toolkit. I don't get why they're pushing this unless they're trying to strongarm devs into making more shitty AMP pages.

RugnirViking 5 years ago

The Hacker News homepage gets a 23 on accessibility (Idk, I mean there are probably some issues I hadn't thought of, but the fact that it's primarily no-nonsense text makes it quite accessible by default).

It does however get 100 on performance (quite rightly). The PWA score is around 50, but most of the complaints are really silly. It almost makes me wonder if the entire category of PWA is silly.

Finally, I checked GOV.UK, the website so lauded here on Hacker News as of late. It also got around 58 on PWA - what is the point? If those complaints for PWA actually meant anything, surely they could fit into one of accessibility, best practices, or performance.

hornetblack 5 years ago

"Imagine if your favorite game took forever to load because you were on a slow network connection, it wouldn't be your favorite game for very long. "

I can inform you that games I play take forever to load on slow networks. (Overwatch, Darksouls). Although I guess it's not downloading the game. It's just trying to login to the servers.

Whitestrake 5 years ago

I find myself disappointed that they took away `.dev` as local development TLD, and this is what they start using it for.

TonnyGaric 5 years ago

I get the message "Error: 500 from Lighthouse API: Internal Server Error" when I try to run audit for my personal website https://tonnygaric.com

Anyone have any clue why? Other websites I enter seem to run without a problem.

pestkranker 5 years ago

It's actually Lighthouse under the hood.

apatheticonion 5 years ago

Tried to run it on my site and got "Error: 403 from Lighthouse API: Forbidden"

seanhunter 5 years ago

I'm amazed by how far off-base they have managed to make this. The recommendations are bizarrely inaccurate.

1) It says that my (black) text doesn't have sufficient contrast against my (white) background.

2) It says that my page could benefit from having more of its resources served over HTTP/2 (more than 100%, presumably).

3) It says that links should have a name (this is not supported by HTML5).

4) It says my (100% valid) robots.txt file is not valid.

5) It says my 100% JavaScript-free static webpage, which isn't a PWA, should return a 200 when offline. Not sure how they think I should do that.
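
(Best I can tell, the only way to satisfy point 5 is a service worker that replays a cached copy. A rough sketch of what the audit seems to want, which is absurd to demand of a JavaScript-free page:)

    // sw.js: roughly what the "responds with 200 when offline" audit expects
    self.addEventListener('install', (event) => {
      // pre-cache the front page at install time
      event.waitUntil(caches.open('offline-v1').then((cache) => cache.add('/')));
    });

    self.addEventListener('fetch', (event) => {
      // fall back to the cache (or a stub 200) when the network is down
      event.respondWith(
        fetch(event.request).catch(() =>
          caches.match(event.request).then((hit) => hit || new Response('offline'))
        )
      );
    });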

crooked-v 5 years ago

I wonder how many things will break when .dev domains become broadly available.

  • ripexz 5 years ago

    They're already "broken" on Chrome for a few versions now, we switched all `.dev`s to `.localhost` in dev environments at my company.

    • geerlingguy 5 years ago

      I've switched all my stuff to .test, since that domain doesn't cause as many weird issues with some routers, proxies, OS networking bugs, dumb regex, etc.

      Also, .test is reserved for testing/local forever, so it won't suffer the same fate as .dev.

d--b 5 years ago

Interesting that they’re not selling AMP in there. They most likely didn’t dare.

nh2 5 years ago

> Accessible to all

The website itself doesn't seem to take that too seriously.

For example, it loses scroll position when navigating:

* go there from HN

* scroll down a bit

* press the Back button in the browser to go back to HN

* press the Forward button in the browser to go to the site again

* you are now scrolled to the very top again, instead of where you scrolled to

Tested in current Chrome and Firefox on Linux.

Also note how HN doesn't have this problem, it will remember your scroll position so you can continue reading where you left off.

Browsers are smart; they have accessibility built in. Too-clever JavaScriptery destroys it.
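
If a site insists on JS-driven navigation, it has to rebuild this behavior by hand. A rough sketch (renderRoute is a placeholder for whatever the site's router does, assumed to return a promise):

    // what JS-routed sites must re-implement themselves
    history.scrollRestoration = 'manual';

    function navigate(url) {
      // stash the scroll offset in the history entry we're leaving
      history.replaceState({ scrollY: window.scrollY }, '', location.href);
      history.pushState({ scrollY: 0 }, '', url);
      renderRoute(url); // placeholder: the router's render step
    }

    window.addEventListener('popstate', (event) => {
      // on Back/Forward, re-render and then restore the saved offset
      const y = (event.state && event.state.scrollY) || 0;
      renderRoute(location.pathname).then(() => window.scrollTo(0, y));
    });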

shinryuu 5 years ago

All I get is that the site can't be reached... Is it a valid site?

  • danillonunes 5 years ago

    It has been a common practice for devs to set up a local DNS server that points any *.dev domain to localhost. Maybe you did it (or installed some tool that did it) and forgot.
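
    Usually it's a single dnsmasq line, something like:

        # dnsmasq.conf: resolve every *.dev name to the local machine
        address=/dev/127.0.0.1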

    • sandov 5 years ago

      >or installed some tool that did it

      I would love to see an example of this. Software-gore.

  • CydeWeys 5 years ago

    Yes, it is a valid site. Looks like you've got something wrong with your local or network config that needs to be fixed.

cyberferret 5 years ago

Getting Chrome SSL warnings on both the OP link and https://get.dev/

Doesn't really evoke confidence, if this is a Google initiative. Can someone post a short precis as to what this is about for those of us who cannot visit the URLs?

UPDATE: Apologies, my bad, not Google's fault. I had a local Valet/NGINX redirect set up for local .dev domains for Laravel development projects that was causing the issue.

  • CydeWeys 5 years ago

    Can you post a screenshot of the certificate you're seeing? It sounds like you're being MITMed. It works fine in Chrome for me.

    Oh, and this is definitely a valid Google initiative. Source: Work for Google, am involved in this.

    • cyberferret 5 years ago

      Ah, OK, I think I worked out what is happening. I have a valet service running for local Laravel development which is redirecting local .dev subdomains to Valet instances on my iMac!

      Apologies for that - I will shut down the Valet service and try again.

      • CydeWeys 5 years ago

        Valet fixed this last year. Recommend that you update: https://github.com/laravel/valet/pull/436

        Although that might only change the default for new installs; you might still need to change it for existing installs. I'm not sure; I've never used Laravel.

  • akovaski 5 years ago

    Perhaps your browser is being directed to a page not owned by Google. I don't get an SSL warning in Chrome or Firefox (Windows 10).

    From the site: "With actionable guidance and analysis, web.dev helps developers like you learn and apply the web's modern capabilities to your own sites and apps."

    It includes a web version of the Lighthouse website measurement tool. It also has guides for various web dev topics.

    The topics can be browsed normally, but I think most users will discover topics by the measurement tool showing the user guides to improve their website.

  • aiCeivi9 5 years ago

    I get a DNS error here. *.dev is probably reserved for local domains by admins. Not the greatest choice of TLD for real sites.

  • bibinou 5 years ago

    I got an ERR_SSL_VERSION_INTERFERENCE error briefly before the page successfully reloaded. I have uBlock Origin installed, so it may be that.

    • davkap92 5 years ago

      I had an SSL error... and then realised it was because I have dnsmasq set on *.dev; at the time there was no .dev TLD :P

  • traek 5 years ago

    I'm seeing a valid cert issued to Google.

  • jay_kyburz 5 years ago

    I'm getting a Service Not Found on Firefox.

iamunr 5 years ago

So sad to see .dev go. :(

  • CydeWeys 5 years ago

    Precisely the opposite -- it's going to be available for open registration soon. https://get.dev/

    • partiallypro 5 years ago

      Do you have to use Google's registrar service? That page isn't clear, and if you have to use Google's service, I don't view that as a good precedent.

      • bduerst 5 years ago

        Correct me if I'm wrong, but registries can't let single registrars have exclusive access to TLDs. If Alphabet/Google were to do this it would be antitrust, and they could lose their dual registry/registrar status.

weinzierl 5 years ago

It only accepts https URLs. Lighthouse in the DevTools can do both http and https.

  • robdodson 5 years ago

    Yep, that's our mistake (one of the web.dev devs here). We auto-added https to every URL. Oops! Gonna fix that as soon as we are out of our code freeze.

    • onion2k 5 years ago

      I assumed that was intentional to push everyone to use https.

    • microcolonel 5 years ago

      Confused about the purpose of a "code freeze", I guess you're focused on monitoring for abuse, so don't want to go back to working on features/functionality yet?

    • danillonunes 5 years ago

      > one of the web.dev devs

      web.dev^2

      • Jaruzel 5 years ago

        What dev does a web.dev dev when a web.dev dev devs?

modernerd 5 years ago

Web.dev looks incredible for educating web developers, exposing Lighthouse to devs who aren't already familiar with it, and for automating basic website testing over time.

But it will be a nightmare for support teams working at any kind of web service.

Google doesn't provide support for this tooling, and site owners invariably fixate on the scores it provides. So product support teams for everything from WordPress themes to CDNs end up fielding questions that Google should be handling itself, via resources pitched at the non-technical folks who inevitably use these tools (as well as, you know, an actual human support team).

As it stands, support teams will now be inundated with questions unrelated to their product from customers who have no interest or technical background to read the current educational sections of web.dev, and whose time would be better spent crafting landing pages, great content, or reducing the 38 social plugins they're using instead of making all the dials turn green.

I can already foresee the support requests from the web.dev scores:

“Google says my WordPress site isn't installable. Where's the option for that in your theme?”

“Your website says Cloudflare improves load time. But my first meaningful paint time went up by 1.5 seconds after setting it up! I'm going to write bad reviews about you.”

“Google says I need to theme my browser's address bar to match my branding. I added that tag they mention but don't see any change in my browser.”

It's great to build awareness of ways to make the web faster and better, but it needs to be backed with guidance that's pitched at the ability of the people who will be using these automated testing tools.

For example, why not detect the technology behind the site and — for stuff like WordPress — recommend plugins or other tech-specific resources that could help fix problems like lack of image lazy-loading? There are lots of ways education could be enhanced for non-developers.

  • exodust 5 years ago

    But it's a resource for web devs, mostly used by web devs. Reinforced by the web.dev domain name, which surprised me; I've never seen a .dev domain before.

    All audit tools make recommendations that many of us will ignore. I have zero intention of using WebP images, for example.

    I don't think your claim that "nightmares" will happen is warranted. Shiny new site audit tools are fun. Enjoy it, there's no nightmare!

LolNoGenerics 5 years ago

Over a decade of experience... with service workers! Nice trick; leave time travel to Spock et al.

bhartzer 5 years ago

There's so much wrong with this report, it can be confusing to many people.

For example, some of what they're calling "SEO" really has nothing to do with SEO. It should be checking:

* if the page is crawlable

* if there is a valid title tag

* if there is a valid meta description tag

* if there is a valid canonical tag

But instead, it checks for a valid viewport meta tag? And whether the font sizes are legible? I could see that it might be an issue if the site is hiding text on the page, but viewport and font sizes really have nothing to do with SEO.

  • hajile 5 years ago

    I could be wrong, but I thought these were things Google used when ranking sites (especially for mobile).

ramshanker 5 years ago

Error: 403 from Lighthouse API: Forbidden

skunkworker 5 years ago

If you have an old version of Pow installed, make sure to either upgrade to Puma-dev (which moves .dev to .test, the correct TLD for local machine testing) or uninstall it first.

yoz-y 5 years ago

I have a question. How can I really figure out whether my (static) website uses HTTP/2? The Google audit tells me that I should use HTTP/2 for all of my resources, but loading my site in Safari or Chrome shows me that it uses the h2 protocol. Or are these unrelated?

Weirdly, it also points out that my elements have non-unique IDs, which is false, and the list of failing elements shows it: it looks like they are stopping their parsing at the colon character, but they should not.
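
The closest I've found to an answer is asking the page itself, since the audit judges every resource and not just the document (quick console sketch):

    // log the negotiated protocol ('h2', 'http/1.1', ...) per loaded resource
    for (const entry of performance.getEntriesByType('resource')) {
      console.log(entry.nextHopProtocol, entry.name);
    }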

pasta 5 years ago

Has anyone viewed the report? It's full of elements never used on the sites I checked.

While it is still in beta (like all Google products are), don't take this too seriously.

masswerk 5 years ago

Regarding accessibility: Are skip links still a thing, or has this been superseded by landmark roles? Is there still an advantage in using a link as compared to selecting a region?

Edit: Meaning, if I have already a main region, does adding a skip link to main actually complicate things?
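
For reference, the belt-and-braces version I'm asking about is a skip link layered on top of a main landmark, something like:

    <!-- a conventional skip link pointing at the main landmark; the two coexist -->
    <a class="skip-link" href="#main">Skip to main content</a>

    <main id="main">
      <!-- page content -->
    </main>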

thinkloop 5 years ago

The results are significantly different from the Lighthouse audit in Chrome's developer tools; the latter shows my site as PWA-compatible with great performance, whereas this tool shows that my site is not PWA-ready and has medium performance.

k__ 5 years ago

Only works for HTTPS.

  • robdodson 5 years ago

    Yep, that's our bad. We accidentally started prefixing all of the URLs with https. We've fixed the bug but haven't shipped it yet cuz we're in code freeze.

zimpenfish 5 years ago

Bizarrely refuses to use HTTP/2 to my nginx (curl, Firefox, Safari all use HTTP/2 happily) and then dings me for it. Seems a bit unfair.

stabbles 5 years ago

Try and compare cnn.com with lite.cnn.com. Or (if you're Dutch) compare nos.nl with noslite.nl. All hail text only news websites.

  • mangoleaf 5 years ago

    Agreed! I love the text only sites. I would add text.npr.org to the list also. I am very tired of sites that spill media and JavaScript all over the place and add no real value, just to slow the site down or to make it completely unusable. A typical site of mine breaks all the rules of current UI design. [1]

    [1] http://vqRN.com

voycey 5 years ago

Still pissed that they registered .dev as a TLD

koib 5 years ago

What if there's a login/auth wall before getting to the actual page you want to test?

bananatron 5 years ago

This is just Lighthouse? You can run these tests in Chrome under DevTools > Audits.

codesternews 5 years ago

Wow, amazon.com's results are quite impressive. I would not have thought it had a 95 performance score.

vini 5 years ago

Error: 500 from Lighthouse API: Internal Server Error

Is all I get when I try to test my website.

technion 5 years ago

Given this says it's in beta, can anyone see where I can report a bug?

NKCSS 5 years ago

I think I broke Google

> Error: 500 from Lighthouse API: Internal Server Error

  • chrismorgan 5 years ago

    I tried a couple of other sites first, successfully, then I tried asking it about https://web.dev, and that’s when it fell over.

zwaps 5 years ago

PWA = Poorly made Web Application

for those who don't know

intea 5 years ago

Google only scores 80 in the SEO category...

jjordan 5 years ago

Must be nice to have all the money in the world to purchase any domain you want.

rblion 5 years ago

won't load...

progressiveweb 5 years ago

My question is, why do we need React/Redux for PWAs all the time? It is slow on cellular data, freezes phones without the latest Chrome, and all in all feels laggy compared to a native app.

The thing is, you can easily add "progressiveness" to your website with service workers, and use open source libraries to add instant on-page navigation without a repaint.
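
The entry cost is tiny. Registering a worker is a couple of lines (sw.js standing in for whatever caching strategy you pick; a sketch, not a prescription):

    // progressive enhancement: only register where it's supported
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js'); // sw.js: your caching logic
    }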

The only place I find an SPA inappropriate is e-commerce websites, which are one of the main targets of Google's PWA initiative, meant to build a collective resistance to Amazon, which has some of the highest-converting pages.

Your conversion rates will naturally go up when people are able to check out quickly with minimal cognitive load. This does not mean that a React app is appropriate or feasible, especially for e-commerce businesses whose customers rely on older phones, now that new smartphones are approaching 1000 USD.

modzu 5 years ago

is this a joke?

throwaway487548 5 years ago

Which framework is this? Material-what? What is Google's Bootstrap alternative?

It does not look like Angular (which is J2EE all over again), at least.

Update: I did some research

    cd material-components-web-codelabs/mdc-101/complete
    grep version package-lock.json | wc -l
    943

No. Just no.

Sargos 5 years ago

These comments really highlight how Hacker News is slowly devolving into Slashdot, with a stuck-in-the-past mentality and no constructive criticism, just groupthink attacks on "stuff we don't like or understand".

  • dang 5 years ago

    Please don't make this place even worse by posting low-information rants. Instead, provide correct information, so we can all learn something. If you don't want to do that or don't have time, it's simple to just not post.

    https://news.ycombinator.com/newsguidelines.html

keyle 5 years ago

Wow. From the people that have been botching the web for 20 years...

I mean, it's only recently that they've started getting their act together. And arguably their flat design isn't always usable. Now let's talk speed....

hardwaresofton 5 years ago

In case you're wondering what's in it for google with PWAs/App manifests/"Installable apps"[0], it's answered in another recent HN post[1] and comment[2].

I wrote and deleted a pretty vitriolic comment about not trusting Google as stewards of anything, or as people who care about the user (outside of selling data/access to the user millions of times a second), but I couldn't figure out why they would push app manifests (the only interesting part of the actual comment).

[0]: https://web.dev/installable

[1]: https://news.ycombinator.com/item?id=18434639

[2]: https://news.ycombinator.com/item?id=18435412