painful 5 years ago

I recommend against using Zeit. I tried their offering this year and it still felt too much like a beta: I honestly couldn't even get a basic service deployed because my declared dependencies wouldn't install. My web service deployed fine using the serverless offering of another cloud that I won't name, although it was not AWS. As a disclaimer, I have no conflict of interest.

  • JMTQp8lwXL 5 years ago

    They killed their Docker offering not long after they launched it, mostly because it's too difficult/expensive to scale containers, so they forced their entire user base onto the serverless paradigm, because it was convenient for them. Their PR story is that it's "better for everyone", except there are use cases (e.g. websocket support) that have remained unsolved for months on their new platform. To give them credit, they didn't fully deprecate Docker: old customers can keep using it, for an indeterminate amount of time.

  • Untit1ed 5 years ago

    Now is hit-and-miss for me. It works great for the most part, but they seem more focused on moving fast than on stability.

    At one point they (without any warning that I could see) turned on a CDN for all my deployments that didn't take the Host header into account... so suddenly all the various URLs leading to my landing page generator returned a single customer's page. Annoying.

  • bdickason 5 years ago

    I’ve been using zeit / now for the past two months to host static sites and it’s been excellent. Haven’t tried a node app yet but their core service seems really easy to use and I haven’t had any issues.

yodon 5 years ago

Conceptually, one lambda per page (route) sounds super cool, but in practice I suspect it would lead to a ton of cold-start wait times for less common routes.

  • rozenmd 5 years ago

    Sounds like a trade-off for extreme scalability.

    • yodon 5 years ago

      Yes, where needed, but I suspect for most people it's just a trade-off: buzzword compliance in exchange for slow cold-start responses.

    • bpicolo 5 years ago

      Shouldn't change your ability to scale. AWS is happy to clone your single lambda an arbitrary number of times.

      • barbecue_sauce 5 years ago

        The tradeoff is the cold start wait time vs. easy scaling.

  • painful 5 years ago

    Have you actually found cold-start delays to be an issue with serverless services, or are you just speculating? To my knowledge, the cloud provider keeps enough instances hot based on a forecasted demand. I have not found cold-start delays to be a real issue.

    • scarface74 5 years ago

      AWS doesn’t keep instances warm based on forecasted demand. It uses one instance per concurrent request, and instances stay warm once started for a predetermined amount of time based on the size of the Lambda environment.

    • reilly3000 5 years ago

      It depends on the stack. You have to be more mindful of cold starts when running Java/Clojure apps, less so with JS and .NET. There are some good benchmarks out there. There are also some easy ways to keep routes warm, although for certain setups I imagine that could get quite expansive, though still probably not expensive.
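      A rough sketch of that warming trick, assuming a scheduled CloudWatch/EventBridge rule pings the function with a marker payload every few minutes (the `warmer` field and the names here are made up, not from any particular framework):

        // Scheduled pings short-circuit before any real work, keeping the instance resident.
        interface WarmerEvent {
          warmer?: boolean;       // set via the scheduled rule's constant JSON input
          [key: string]: unknown; // normal API Gateway fields otherwise
        }

        export async function handler(event: WarmerEvent) {
          if (event.warmer) {
            return { statusCode: 200, body: "warm" }; // keep-alive ping, no real work
          }
          // ... real request handling goes here ...
          return { statusCode: 200, body: JSON.stringify({ ok: true }) };
        }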

    • zdragnar 5 years ago

      I don't know if anything has changed since, but we definitely hit cold-start perf issues on lesser-used routes at a company I was with 18 months ago.

  • matrix0978 5 years ago

    These individual pages get compiled into a single file with zero dependencies, leading to super fast cold start times. I'm a bot, beep boop.

eoinmurray92 5 years ago

I love now.sh - it's the best developer experience out there, and we really, really tried to use it at my company - but we kept having latency issues. It was consistently 5-10x the TTFB of Heroku or an EC2 instance (all in the same locations, or super close).

fulafel 5 years ago

Is the headline benefit really paying only for use? Trading increased complexity & dev effort for that seems like a step backwards when VM/container/Heroku-style hosting is so cheap & getting cheaper.

  • tirumaraiselvan 5 years ago

    The complexity and dev effort are higher initially because the paradigm is so different from "localhost" development. But once the initial boilerplate and yak-shaving work is done, serverless can actually be easier to manage because you deal at a much smaller functional level.

    • collyw 5 years ago

      I have never used "serverless", but how does that differ from, say, a Django app?

      You have the initial boilerplate setting up the project, then you map URLs to functions. Once the initial setup is done it's trivial to map new URLs to new functions.

      I am genuinely curious as I can't really work out what all the hype is about regarding serverless. Am I missing something?

      • fulafel 5 years ago

        I think this list can give a helpful picture if you remember that it's written by marketing people at a serverless company: https://serverless.com/blog/when-why-not-use-serverless/

        It downplays/skips over the complexity costs of learning, configuring, debugging, deploying, and testing the big zoo of woven-together cloud services. Ideally you'd also want some dev time and cognitive capacity left over to think about your domain problems...

      • tirumaraiselvan 5 years ago

        Yes. In serverless there is no concept of a long-running service, so you can't listen on port 3000 and have any routing within the app. Instead you write one function, add a route to it via API Gateway, and then use it in your app. You can probably use the Django library to help you write code in Django style, but you can't use it fully: e.g. 1) you can't do migrations in serverless the same way, 2) every time your function is invoked Django will map the database to models, etc., so it could be very slow.
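        Roughly, instead of a Django view wired up in urls.py, each route becomes its own deployed function behind API Gateway. A minimal sketch (TypeScript rather than Python; the route and names are made up):

          // One function per route: API Gateway maps e.g. GET /users/{id} to this handler.
          import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

          export async function getUser(
            event: APIGatewayProxyEvent
          ): Promise<APIGatewayProxyResult> {
            const id = event.pathParameters?.id;
            // ... look the user up in your data store here ...
            return { statusCode: 200, body: JSON.stringify({ id }) };
          }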

        • collyw 5 years ago

          So your original comment said

          "But once the initial boilerplate and yak-shaving work is done, serverless can actually be easier to manage because you deal at a much smaller functional level."

          Everything you just described in your second comment sounds like it would make things more difficult.

          • tirumaraiselvan 5 years ago

            Suppose you have set up everything to get you going with a single route. Now, adding another route/handler is just deploying another function. And you deal with everything at this function level and never the entire "monolith".

            You get all the advantages of breaking free from a "monolith": independent iteration, scale, federated management. Consider a big team that writes hundreds of routes/handlers; you can then federate the ownership easily.

            Not great for solo-dev kinds of projects, I agree.

cwyers 5 years ago

This article hits one of my biggest pet peeves -- _tell the reader what you're about_. Presumably, the reason you have this on the Hasura blog is at least in part lead generation, right? And it worked, in that I showed up on Hasura's website never having heard of them before. But the article makes no effort to tell me what Hasura is before throwing me in the deep end. A link wouldn't go amiss, either. (Yes, I know there's one in the header. It's better to have one in the article text too. Links are free; better to have more than fewer.)

dane-pgp 5 years ago

For apps that just do basic things like read and write to a database (e.g. DynamoDB), would it be possible to go a stage beyond serverless and have the code that runs in the user's browser talk to the database directly?

Obviously you'd have to be careful about permissions, and integrate with Cognito, but there are REST APIs for talking to AWS services so I'm sure there are use cases where even the lambdas are not necessary.
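I.e. something like this (a sketch only; the identity pool ID and table name are made up, and the Cognito role's IAM policy would do the real gatekeeping):

    import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";
    import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

    // Browser code: temporary credentials come from a Cognito identity pool,
    // so no application server sits between the client and DynamoDB.
    const client = new DynamoDBClient({
      region: "us-east-1",
      credentials: fromCognitoIdentityPool({
        clientConfig: { region: "us-east-1" },
        identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder
      }),
    });

    export async function loadNote(noteId: string) {
      const { Item } = await client.send(
        new GetItemCommand({
          TableName: "Notes", // hypothetical table
          Key: { noteId: { S: noteId } },
        })
      );
      return Item;
    }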

I don't know what such an architecture would be called, other than "serverlessless".

  • rkangel 5 years ago

    This is exactly the model that Firebase is aiming to support. Cloud Firestore is accessed directly through the various client libraries (Android, iOS, JS). Firestore has a permissions system to control who can access what, based on Firebase Auth user IDs, and then when you need little bits of server-side logic you can put it in Cloud Functions.
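    In client code that looks roughly like this (v9 modular SDK; the project config and collection are placeholders), with Firestore security rules, not a server you run, deciding whether the read is allowed:

      import { initializeApp } from "firebase/app";
      import { getAuth, signInAnonymously } from "firebase/auth";
      import { getFirestore, doc, getDoc } from "firebase/firestore";

      const app = initializeApp({ apiKey: "placeholder", projectId: "demo-project" }); // placeholder config
      const db = getFirestore(app);

      export async function loadNote(noteId: string) {
        await signInAnonymously(getAuth(app)); // Firebase Auth identity is what the rules check
        const snap = await getDoc(doc(db, "notes", noteId));
        return snap.data(); // allowed or denied by Firestore security rules
      }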

  • eranation 5 years ago

    Other than what others mentioned (Cognito with temporary credentials), I think AppSync might be close to what you are looking for: https://aws.amazon.com/appsync/ It has a GraphQL API and supports DynamoDB as a data source.

  • cldellow 5 years ago

    Depending on your threat model, absolutely. Cognito supports issuing temporary IAM credentials, so you can have granular permissions, billing and auditing.

    I debated doing this for a side project but decided it was a little too risky for my use case. For a corporate/intranet thing, though, it'd absolutely be reasonable.

  • tirumaraiselvan 5 years ago

    Apart from permissions, which can perhaps be solved by temp credentials etc., how do you prevent 1) DDoS: someone fiddling with the queries to cause a DB outage, and 2) scale problems: browsers can't leverage connection pooling, so I'm not sure the DB can handle so many client connections.

    • fulafel 5 years ago

      How do serverless apps usually prevent DDoS? If it's using API Gateway features, couldn't you just proxy DynamoDB calls through API Gateway?

      I think connection pooling is not relevant to DynamoDB.

      • tirumaraiselvan 5 years ago

        1) DDoS not just at the API Gateway level, but also at the database level. Suppose you fiddle with the query so it returns a million rows or does some horrible aggregated join. You can slow down the entire DB. You need to hide the query from the client.

        2) Yeah, connection pooling is apparently not relevant for DynamoDB because it is HTTP-based; I wonder how they implement transactions then. How can I run code while holding an open transaction?

        • fulafel 5 years ago

          I think DynamoDB lacks joins. Generally, good point. Though I suspect many traditional web app backends are vulnerable to this kind of "crafted high-overhead API call" DoS too. I guess you could throttle the calls based on duration using some kind of token bucket scheme...
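          Something like this toy duration-based token bucket, purely illustrative (the names and numbers are made up):

            // Each call's "cost" is its measured duration, so crafted high-overhead
            // calls drain the caller's budget much faster than cheap ones.
            class DurationBucket {
              private tokens: number;
              constructor(private capacityMs = 2000, refillMsPerSec = 200) {
                this.tokens = capacityMs;
                setInterval(() => {
                  this.tokens = Math.min(this.capacityMs, this.tokens + refillMsPerSec);
                }, 1000);
              }

              async run<T>(call: () => Promise<T>): Promise<T> {
                if (this.tokens <= 0) throw new Error("throttled");
                const start = Date.now();
                try {
                  return await call();
                } finally {
                  this.tokens -= Date.now() - start;
                }
              }
            }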

  • debaserab2 5 years ago

    It's never really direct, though. There's an HTTP intermediary - either you're controlling it, or Amazon is.

  • zinckiwi 5 years ago

    Until the introduction of Cloud Function support, I believe that was essentially the execution model for Firebase.

  • dustingetz 5 years ago

    You can’t trust the browser client at all, but setting that aside, it isn’t very reliable. It's much better to queue up commands and then do the real work on the server (what happens when some third-party service is down?).

infocollector 5 years ago

Are there any good NextJS tutorials out there that you can link here? I am looking to learn NextJS. Thanks

  • Scooty 5 years ago

    The nextjs website has a tutorial that I found very helpful:

    https://nextjs.org/learn/

    • barbecue_sauce 5 years ago

      I dislike the forced login to read the tutorials. I get that it's a Zeit project, but this smacks of user farming.

      • Scooty 5 years ago

        It bugs me too. When I found that link to post, the first thing I thought was "I'm gonna get a comment about this shitty forced signup". It's a good guide though. In a few hours I went from knowing very little about Next (not a React/web beginner though) to getting my hands dirty in the internals.

    • siquick 5 years ago

      Definitely one of the better official tutorials out there

  • matrix0978 5 years ago

    and you call yourself an info collector...