TheJam.dev is a 2-day virtual conference focused on building modern web applications using full stack JavaScript, static site generators, serverless and more.
As websites grow, they start to take on a lot of complexity to handle the nuance of real world workflows. Durable compute patterns offer a way to simplify this chaos. Netlify’s Async Workloads offers a new way for developers to create durable workflows that can scale to all different use-cases without increasing site complexity.
Sean Roberts is a Technical Advisor and Principal Engineer at Netlify. He focuses on finding ways to make the web easier to build, more secure, and faster.
Sean Roberts 0:00 I'm super excited to be here to talk about this. I'm Sean, a Principal Engineer and Technical Advisor here at Netlify, and you can find me on most social media. I've been building web systems for the better part of two decades now, so I've seen some interesting stuff; it might be fun to go back through the CFE archives and see how things looked a while ago. And I've been able to help build some really great experiences across the web. At Netlify, my focus is working across the platform and the product with a keen focus on web architecture and web performance, and nowadays more web-first, AI-type systems. So let's dive into it. We're on the downhill side of 2024, so all of those aspirations we had on New Year's Day are looking like they'll have to be re-established for next year. But this is definitely a fun and exciting time of year, and there's a lot on people's minds. Hopefully we're all able to stop, identify the things that bring us energy, and show gratitude for those people and things. My favorite part is definitely catching up with loved ones we don't get to see most of the year; for many, the big highlight in November is that time to get some excellent food. And of course, Black Friday and Cyber Monday are on lots of people's minds. Now, I bring this up because at this time of year, this is what we spend a lot of time thinking about at Netlify, not because it's the time to buy a bunch of gifts, but because Black Friday and Cyber Monday represent a really crucial moment for our customers and the community we serve. A large number of our customers are online retailers.
And so if you're trying to grab some bargains this time of year, you're definitely benefiting from all of our focus on making sure those sites can maximize those key opportunities. What I mean by key opportunities is moments like Black Friday, or launching a new product, or receiving an award or a burst of attention, when you get an influx of expected or unexpected traffic to your site and you want to capitalize on that moment. Black Friday might not be your moment; it could be many different things. But what's true across all of those moments is that you want to build systems that can meet the demand in a way that captures as much of the opportunity presented to the business or the site as possible. And that's really the focus of this talk: not just the technology, but the intent behind why it exists, allowing developers and customers to maximize their ability to capture the full opportunities that come to their sites. When we think about these opportunities, the big things are handling scale and resiliency. But let's assume for now that you've covered the basics: you're able to scale up automatically where possible, and you've set a target to ensure you can handle at least that much capacity and throughput on your system. Most serverless offerings, including those at Netlify, make this pretty trivial. Ultimately, what we're looking at past that is how you handle unexpected or transient challenges, things that are probably out of your control.
You still want to capture as much of the opportunity as possible, even while something is going wrong. There are many ways to ensure that, but a tried and true method for handling these resiliency challenges is leveraging event-based architectures and durable compute. So what do we mean here? In an event-based architecture, if you're not familiar with it, instead of a one-to-one mapping between a client and a server transaction, a call to an endpoint, you're essentially broadcasting an event, and things that are subscribed to that event listen for it and run a process in reaction. These architectures have unique qualities that a traditional server model doesn't offer, and there are real benefits there. On the durable compute side, this is where things get really interesting, especially in a serverless environment: you want to ensure that when things go wrong, the system will continue despite the failure, so if a problem crops up, you're able to pick up and continue forward. Let's take an example of a key moment where you really wish the system was more durable. Say you launched a new website, and you had signed up for some newsletter provider's free plan, and that plan starts getting rate limited right as you get an influx of traffic. If you don't build your systems using these durable models and these decoupled transaction models, then once you start hitting those rate limits, all those sign-ups, all those attempts to give you business, awareness, or contributions can be lost.
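The broadcast-and-subscribe idea Sean describes can be sketched in a few lines of plain TypeScript. This is an illustrative in-process model of the pattern, not the Async Workloads API; all names here are made up for the example.

```typescript
// Minimal pub/sub sketch: instead of calling an endpoint directly, a
// producer broadcasts an event, and any number of subscribers react to it.
type Handler = (data: unknown) => void;

const subscribers = new Map<string, Handler[]>();

function subscribe(eventName: string, handler: Handler): void {
  const list = subscribers.get(eventName) ?? [];
  list.push(handler);
  subscribers.set(eventName, list);
}

function publish(eventName: string, data: unknown): void {
  // The producer does not know, or care, who is listening.
  for (const handler of subscribers.get(eventName) ?? []) {
    handler(data);
  }
}

// Two independent consumers react to the same event.
const seen: string[] = [];
subscribe("user-signed-up", () => seen.push("send-welcome-email"));
subscribe("user-signed-up", () => seen.push("update-crm"));
publish("user-signed-up", { userId: 42 });
```

In a durable system, the broadcast and each subscriber's work would be persisted and retried independently; that decoupling is what keeps a rate-limited newsletter provider from losing sign-ups.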
So we're going to talk about how to leverage these patterns so that even when transient issues and unexpected things happen, you're still able to have a successful key moment. There are lots of ways to achieve durable compute and event-based architectures, and like most things, you'll find a range of complexity as you start adopting different solutions. Ultimately, the important thing is that you're looking to these types of patterns to solve these problems and make sure you capture that full moment. At Netlify we built Async Workloads, which lets you create serverless functions that operate using event-based workflows and are durable by default. This is essentially our take on making sure that developers in our community have the tools they need to build resilient sites: beyond just being able to scale, they can capture moments when problems outside of scaling come into play. What we're going to do next is walk through a bit of how this works, and I'll cover how these systems abstract a lot of the complexity of queuing, retries, and orchestration. From there, we'll examine how this applies to a real-world scenario that captures a key business moment. Async Workloads, as I mentioned, is an event-based serverless architecture, so you will be sending events and subscribing to events. It's durable by default, with persisted state that allows you to retry and continue forward. In addition, it supports multi-step workflows: if you have ten steps and step five starts failing, then on retry you don't have to redo the first four. There are a lot of really cool things that happen there. It really expands the window for logic as well.
For example, being able to say: after this user signs up, I want to notify them in seven days that they need to upgrade to continue. That's the ability to write logic that will run in seven days without having something sitting there waiting. You'll see that when we put it in the logic, it's really just saying, in seven days, do this thing, and it will just do it for you. And if total failures happen, say you have a multi-hour or multi-day outage, or something wild is going on, all of that work is still persisted, so when those things are resolved, those events can be retried. So that's what it's doing. We'll walk through what that actually looks like, and then apply it to a scenario.
So let's move over now. The first step when you're setting up Async Workloads is really simple. We recently launched our new extensions and our SDK v2, and if your team goes to Extensions on your site, you'll find Async Workloads. You can install and configure it from there. I've already installed it, but that's really all you need to do. Then we go over to the code side and start writing these async workloads. Let me pull up my terminal. In my terminal here, just to get our bearings: this area is my running local server, and any additional commands or calls I want to make, I'll make down here. So this is what will be triggering things, and this is how those things react, but we'll get more to that. Okay, so first of all, this is the most basic serverless function there is; nothing unique going on here. If you set up Netlify and start writing a serverless function, this is probably where you would start, and Netlify makes it super easy to write this function and start using it. So if I go over to Chrome, to my local server, and invoke this basic serverless function, it says hello world, exactly as I asked it to. And looking at the logs, we'll see that my log call down there logged "hello from serverless function." Now, to start creating async workloads, we npm install the Netlify Async Workloads package, and then we can start building with it and compare these two functions.
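The starting point from the demo can be sketched like this. It assumes the web-standard Request/Response signature that modern Netlify Functions support; in a real function file the handler would be the default export, but here it is a named constant so it can be called directly, and the path and greeting simply mirror the demo.

```typescript
// Sketch of the most basic serverless function: take a request, log a
// line, return "hello world". Nothing durable or event-based yet.
const logs: string[] = [];

const basicFunction = async (_req: Request): Promise<Response> => {
  logs.push("hello from serverless function"); // mirrors the demo's log line
  return new Response("hello world");
};
```

Invoking it is a plain HTTP round trip, which is exactly the one-to-one transaction model the rest of the talk moves away from.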
So this is the basic serverless function that you would call from an HTTP endpoint, and this is the basic async workload function, which is really just a callback wrapped in this async workload helper. Like I mentioned, because this is an event-based architecture, something has to produce an event, and this function declares that it consumes that event and is listening for it. This is all you need to create an async workload function, and this is all you need to get durable compute and an event-based architecture. That's it. I can't emphasize enough how much complexity was removed compared to setting up various providers, various queue systems, and things like that. When we invoke this function, by default it will attempt to run five times, with an exponential backoff schedule, and it will go into a dead-lettered state if it totally can't work; from there, there's an API you can call to retry it. All of that resiliency is packaged up in a very straightforward API. So now let's actually invoke it. Like we mentioned before, if we go to the serverless function's endpoint and reload the page, we'll see that it was called. From there, in that same serverless function, we're going to
create a client and send the event, which in this case is named "awl-1." A workload can subscribe to many events, but here it's just "awl-1," and I'll save that. When I go over to Chrome and reload it, I'll find that the terminal says "hello from serverless function," which is the first thing the serverless function logs, and then it invoked my async workload, which says "hello from async workload." So again: the serverless function gets called, logs "hello from serverless function," and sends the event, which invokes this callback to log "hello from async workload." That is the most basic setup for getting a durable, event-based architecture on your system. But to see how this really comes together and why it starts to matter, take this next example. We have an async workload here that logs the attempt it's currently on, and then, just to simulate a transient issue, we throw an error while the attempt number is below a threshold, so it should throw an error twice, then retry again and make it to the bottom. This one is called async workload two, and for this I'm going to hop over to my terminal to make it easier to see.
Async Workloads comes with a small CLI: once you've installed the package on your site, you can use these commands to send events to your local server. In this case, I'm triggering an async workload event called "awl-2," which is what our workload subscribes to. When I send that, it will first download the API keys I need to make the call, and then it will run it. So let's look at what's happening. Here's when it started: we can see that the async workload threw an error, which we expected, then error one, error two, and then it made it to the end. Looking at the code again, whenever we call this, it fails twice and then is able to continue. And you can imagine this could have been anything: a fetch to your API, a race condition, anything that could trigger an error here. The retries happened automatically, and when it picked back up, it was able to continue forward. Additionally, we can set max retries, to say I only want to retry 10 times, and I can also provide a backoff schedule, to say I want this to always retry in five minutes. So not only can you describe how the workload functions, you can also tune how the retrying takes effect. That's the very basic stuff, but let's go one step deeper: async workloads come with the ability to create step functions. Steps are discrete pieces of a process that you want to run only once within a given workload.
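The retry semantics described here, several attempts with exponential backoff and a dead-letter state on total failure, can be modeled in plain TypeScript. This is a simulation of the behavior for illustration only; the function name, option names, and delays are assumptions, not the Async Workloads API.

```typescript
// Model of "durable" invocation: retry a workload up to maxAttempts times
// with exponential backoff, then mark it dead-lettered if it never succeeds.
type Outcome = { status: "completed" | "dead-lettered"; attempts: number };

async function runDurably(
  workload: (attempt: number) => Promise<void>,
  maxAttempts = 5, // the talk's default: five attempts total
  baseDelayMs = 100,
): Promise<Outcome> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await workload(attempt);
      return { status: "completed", attempts: attempt };
    } catch {
      if (attempt === maxAttempts) break;
      // Exponential backoff between attempts: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  // In the real system this state is persisted, so a dead-lettered
  // workload can be retried later via an API call once the issue is fixed.
  return { status: "dead-lettered", attempts: maxAttempts };
}
```

A workload that throws on its first two attempts, like the demo's "awl-2", completes on the third attempt without any retry logic of its own.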
There's a function available on the callback called step.run; we give it an ID and the callback that's going to run. In this process, we have four different steps. Step one runs; when it's done, the whole workload gets reinvoked, and we do not call step one again; then step two runs, and the same thing happens. Down here we have something a little more complex, where we want to run two steps in parallel. This could be processing a batch of things: we declare these as steps and wait for them all to complete. So let's invoke this, async workload event three. Looking at our output, we see step one run, then step two, then three and four run in parallel. And this, again, is how we handle issues: if we're running these four steps and step two fails, and we'll show what that looks like, we won't repeat step one. If we have a computationally expensive task, or something we don't want to run again, we can wrap it in a step and ensure it only runs once; the queuing and retrying are all handled for you. All you had to do was identify the step and provide the logic. In addition to defining steps, it also supports serializing each step's return value. So in step one, instead of just logging, we also return the string "one return value"; step two returns "two return value"; and the same for three and four. And beyond each step returning something, you'll see that steps two and three also log the value they got from the previous step.
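The step semantics from this demo can be sketched as a memoizing runner: a completed step is never executed again, and independent steps can be awaited together. This models the behavior only; the real step.run signature and reinvocation machinery may differ.

```typescript
// Each step runs at most once per workload; completed results are cached
// so a retry or reinvocation replays them instead of re-running the work.
const completed = new Map<string, unknown>();

async function step<T>(id: string, fn: () => Promise<T>): Promise<T> {
  if (completed.has(id)) return completed.get(id) as T;
  const result = await fn();
  completed.set(id, result);
  return result;
}

const order: string[] = [];

async function workload(): Promise<void> {
  await step("step-1", async () => order.push("step-1"));
  await step("step-2", async () => order.push("step-2"));
  // Steps 3 and 4 are independent, so run them in parallel.
  await Promise.all([
    step("step-3", async () => order.push("step-3")),
    step("step-4", async () => order.push("step-4")),
  ]);
}
```

Calling the workload a second time, as happens on every reinvocation, executes no step bodies at all, because every step ID is already in the cache.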
In these async workload functions, the steps are discrete, but each time a not-yet-completed step finishes for the first time, the whole workload gets reinvoked. That gives each step a clean slate, but even though we're reinvoking the full workload, we're not losing the context of what happened in previous steps, because their return values are retained. So let's look at this: here we're invoking async workload four. When we invoke it, step one runs; then step two runs and also logs the return value from the first step; and the same for step three, which runs and logs the return value from step two. This really lets you compose complex and interesting workflows by defining these discrete steps. Lastly, I'll show one more part of these steps: I'm going to add that arbitrary error state again, so step two is going to fail twice. So when we run this again,
excuse me, let's see: step one ran. We see step two start running but throw an error, and we still got the return value from step one, so the whole thing gets retried. Step two runs again, because step one did not need to run again. We see step two retry, and we still have that value from step one. This happens one more time until we finally stop throwing the error, and then steps three and four return, all while retaining the serialized responses from the previous steps. All right, two more quick things. I mentioned the ability to handle how time is reflected in your workflows. In addition to defining discrete steps, you can also define discrete moments where you need to wait for something. So we could have a workflow that signs the user up, then sleeps for 10 days, then sends a follow-up email. This workload includes the full picture of what's going on: step one, then the definition of how long to wait, and then it will automatically continue after that period of time. Here it's 10 seconds, but it could be 10 days or even 10 weeks, and it will keep that state stored and persisted until it's time to pick up and run. If you've ever had to build a setup, serverless or on your own servers, where you have to do something weeks in the future and still retain access to information about what happened weeks before, that's a tough challenge, and it gets complex really fast.
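The failing-step demo above can be sketched in the same memoized-step style: step two throws twice, the whole workload is re-run, but step one's cached return value is replayed rather than recomputed. Again, this models the behavior; names here are illustrative, not the real API.

```typescript
// Completed steps are replayed from cache across workload retries, so
// later steps keep access to earlier return values without re-running them.
const cache = new Map<string, unknown>();
const log: string[] = [];

async function step<T>(id: string, fn: () => Promise<T>): Promise<T> {
  if (cache.has(id)) return cache.get(id) as T; // replay, do not re-run
  const result = await fn();
  cache.set(id, result);
  return result;
}

let attempt = 0;

async function workload(): Promise<string> {
  const one = await step("step-1", async () => {
    log.push("ran step-1");
    return "one return value";
  });
  return step("step-2", async () => {
    attempt++;
    if (attempt <= 2) throw new Error(`transient error ${attempt}`); // fails twice
    return `saw "${one}" then two return value`;
  });
}

async function runWithRetries(): Promise<string> {
  // On each failure the entire workload is re-invoked, as in the demo.
  for (;;) {
    try {
      return await workload();
    } catch {
      /* retry */
    }
  }
}
```

After three attempts the workload completes, step one has executed exactly once, and step two still sees step one's serialized return value on every retry.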
The last little note here is that you can also make your events fully typed. In this case, we declare a workload event with a name and its event data type, and we can see that the event data inside the workload is fully typed, along with things like event filters; if I change this to something else, we get an error because that's not the correct value for this event. On the client, like we mentioned before, you can type these as well. Here I'm importing my typed event, and we can see that I can't send the wrong type. And if I had required data, rather than everything being optional, it would also tell me I have to send that data along. So this is really powerful, and it really makes these composable workflows work well. Let me swap back over.
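The typed-events idea can be sketched with an event map: each event name is tied to its payload type, so sending the wrong shape, or omitting required data, fails at compile time. The generics below are an illustration of the technique, not the actual client's type definitions.

```typescript
// Map each event name to the payload shape it requires.
type EventMap = {
  "user-signed-up": { userId: number; email: string };
  "order-placed": { orderId: string };
};

const sent: Array<{ name: string; data: unknown }> = [];

// The generic ties the data argument to the chosen event name.
function send<K extends keyof EventMap>(name: K, data: EventMap[K]): void {
  sent.push({ name, data });
}

send("user-signed-up", { userId: 1, email: "a@example.com" }); // OK
// send("user-signed-up", { userId: "1" });      // compile-time error
// send("order-placed", { userId: 1 });          // compile-time error
```

The same event map can type both the sending client and the subscribing workload, so producer and consumer cannot drift apart silently.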
All right, that was the core functionality of Async Workloads. There's a lot more it can do, but I'm not going to keep walking through it alone; definitely hit us up, and our documentation has plenty of information on how to really leverage these things. But let's tie this back to November and one of the most critical moments for web businesses, and see how this applies to Black Friday and Cyber Monday. There's a lot that goes into e-commerce, but we're going to take one of the hardest and arguably most important parts: the checkout. A lot happens at checkout; this is high level and hand-waves a lot of the micro-nuance, but I think it illustrates the point well. When we're checking out, we need to create the order, reserve the inventory, process the payment, update the order once it's complete, and finally send the receipt. How hard could that be? What could go wrong? There are many different things here: some are obvious and easy to test for, and some are hard, or rather only surface once you start getting to a certain level of load, or when other externalities come into play. Just a few examples: an API you depend on might have a region outage; you might get rate limited because you selected a free tier; or maybe whoever signed up for the API key didn't realize it was going to expire the day before Black Friday.
At one of my first jobs working in e-commerce, someone inadvertently deleted all of the categories in the shop. It didn't delete the products, but all the category APIs were just gone, and that caused a lot of issues in internal processes, not just the ability to find products, which was fine, but when you started purchasing and bringing things together, internal systems would break, and we lost those sales until the whole thing was resolved. It's the things you do not expect, and it's being prepared for those moments.
So, one more dive into the code. Say we look at a very standard, and arguably the cleanest-looking, checkout API you've ever seen; they don't really look this good, but again, I'm hand-waving to illustrate. Like we mentioned, we need to create the order from the cart, reserve the inventory, process the payment, update the order status, and then send the receipt email. Any one of these steps can have issues. Any one of them can take 30 seconds to resolve, or 30 minutes, or 30 hours, and not planning for that fact is leaving literal money, in this case, on the table. So how would we go from this to leveraging Async Workloads to make it more resilient and event-based? What we've done here is move the full workload into an event handler that responds when the checkout event happens. You still have an API endpoint that authorizes the user and does some checks and balances up front; then it triggers this event and tells the user their order is submitted. The workload runs through the process and follows up asynchronously if it's unable to resolve. This is how pretty much every large e-commerce player in the industry does it: push the work to the background so the system can handle issues and be resilient to different scenarios, while maximizing the ability to capture that payment at checkout. Just moving it into an async workload is a good step: by default it will retry up to four more times, five attempts total, with a backoff schedule, and it will try to get this checkout to work. And if it completely fails,
say one of these things just isn't going to resolve in that window, then it gets pushed to a dead letter queue, a fatal area of persistence, and you can use the API to trigger retries once the issue is known to be resolved. But we can make this even better. Say you had a blip on processing the payment: even though you really want these to be idempotent, you might not want to repeat these processes on retries, and that's where discrete steps come into play. All we did between those two versions was wrap each important capability in a step. We're still creating the order, reserving inventory, processing the payment, and so on, but now they're wrapped in steps, so if we get all the way down to sending the email and we weren't able to send the receipt, the workload will retry on the backoff schedule you chose, or the default one, but it will not redo any of the other processes that have already completed. It just makes sure the full process gets completed. And as a bonus: say we also want to send a follow-up email a week after someone purchases. Obviously you'd want this to be a bit more complex, but waiting a week and then doing one more step is as easy as saying, I want this workflow to pause for a week, and this is the thing I want it to do next; and now that's a step, which is included in the retries and all of that.
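The step-wrapped checkout can be sketched with the same memoized-step idea: if sending the receipt fails, a retry completes only that step and never re-charges the card. All stage names and the simulated email outage below are illustrative placeholders, not real integrations.

```typescript
// Checkout workload where each stage is a step: completed stages are
// cached, so a retry after a failed receipt email does not repeat them.
const done = new Map<string, unknown>();
const calls: string[] = [];

async function step<T>(id: string, fn: () => Promise<T>): Promise<T> {
  if (done.has(id)) return done.get(id) as T;
  const result = await fn();
  done.set(id, result);
  return result;
}

let emailAttempts = 0;

async function checkout(cartId: string): Promise<void> {
  const orderId = await step("create-order", async () => {
    calls.push("create-order");
    return `order-for-${cartId}`; // hypothetical order ID
  });
  await step("reserve-inventory", async () => calls.push("reserve-inventory"));
  await step("process-payment", async () => calls.push("process-payment"));
  await step("update-order-status", async () => calls.push(`update-${orderId}`));
  await step("send-receipt", async () => {
    emailAttempts++;
    // Simulate a transient email-provider outage on the first attempt.
    if (emailAttempts === 1) throw new Error("email provider 503");
    calls.push("send-receipt");
  });
}
```

The first invocation fails at the receipt step; the retry replays the four completed stages from cache and only re-attempts the email, which is exactly the property that keeps a payment from being processed twice.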
So, like I mentioned before, the core focus of these systems is providing an easy-to-understand system that gives you and your team the resiliency to capture the complete opportunities that make their way to your site. I thought this would be a great time of year to talk about the impact it could have on the biggest event in November for retailers, Black Friday, but it's also super powerful for a lot of other use cases that would otherwise be really hard to handle in a serverless environment. AI workflows are top of mind right now, and these can really help simplify how those pieces work together over periods of time. The same goes for ETL data pipelines: if you're taking a bunch of data or user input, processing it, creating embeddings, and sending it to a provider, those are workloads that are very straightforward to define and run this way. And again, if issues happen that you didn't expect, say you get rate limited by your AI provider or any other system, it will continue and pick up after the issue is resolved. So it's a very powerful system, and it's really about providing our customers access to the event-based, durable compute offering that's essential for maximizing the ability to capture a full opportunity. It's available on all Netlify plans, so go try it and experiment with it. Check out the docs; there's a guide that walks you through the basics and how to set up the more complex stuff, and we'll be providing more resources for people to try it out. And with that, I'll say thank you. I hope it's been insightful.
Brian Rinaldi 35:59 Awesome, thanks, Sean, that was great. I've been doing a lot of work with step functions, and this obviously seems very similar. Before I start asking my questions, I'll put it out to the audience: if you have questions, please put them in the chat and I will ask them of Sean. So, one of my questions: each of these followed a straightforward path. What if I needed a decision path? Let's say I run one step and it returns a value, and if it's this value, go down this path, and if it's that value, go down that path. Is that built in? Or would I, say, trigger different events that kick off different workloads?
Sean Roberts 36:47 Yeah, it's all built in. On the steps side of things, what you're doing is saying, I have this block of logic, and I'm identifying it with some ID key, and that thing might return a value. There isn't a required order for any of these, so you could have a step that returns a value and then any number of other steps that you only run depending on that value. That's totally fine. This is all dynamic at runtime, so you're not configuring all the possible routes and paths ahead of time, which really reduces the complexity. You're just saying, I have this step that I only want to run once, and this is what it's called, and then you invoke it. It's just function calling at the end of the day in Async Workloads.
Brian Rinaldi 37:58 Right, because I know when you do step functions in AWS, for instance, you have to lay out that workflow ahead of time, and there's a decision point where it splits off, and things like that. This seems like a much more straightforward way of doing that. But I imagine even in that scenario, you might want to go down a different path if, say, the payment processing fails, right? Exactly.
Sean Roberts 38:27 And especially on the AI agent side, which is where a lot of the use cases I've been working on come from, it's: I need this thing to make a decision about something, and then, depending on what it decided, which is very non-deterministic, go do this other thing. As long as you can identify it, that's the important part about step functions: if you identify a step properly, then when retries happen, that's how the system knows it's already completed, by getting a result from something that's already been identified. But yeah, it's really straightforward to do those things.
Brian Rinaldi 39:09 Yeah, for sure. And I'm curious, is there a limit on how long that pause can be? Let's say I have a 30-day subscription thing and I set it to pause and run. Is there a limit to how long, or how many times, I can keep going?
Sean Roberts 39:35 No limits there. I don't think we set one; if we did, it would have been something like a year, but that was mainly to prevent accidents, like scheduling something for 10 years because you added a zero. You can schedule things far out. In fact, to help automatically clean things up, there's an event that runs internally every 30 days to clear out all the dead-lettered, stale events that failed and that you didn't action, so everything in that part of the system stays fresh. So yeah, 30 days, three months, all good.
Brian Rinaldi 40:23 Nice, yeah. Nick was commenting that he could use this for handling subscription stuff and things like that. So yeah, that's awesome. I've seen a lot of this kind of need, and the ability to have these automatic retries makes it really useful. There are so many different processes: you gave a good example, but there are tons of others, even batch processing, which you mentioned.
Sean Roberts 41:01 A common one that comes up often, that I love this for, is OAuth and token handling. Most providers need you to refresh a token after some period of time: kick off a flow, get the token, and then wait that period. Here you can just schedule it, say "schedule a refresh-token event in this number of days," and it'll do that. If you've ever had to keep these things fresh in the background, it's kind of painful; you have to have something regularly handling it. In this case it's literally just: get my refresh token, make this API call, do it again in seven days, and it's done.
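The scheduled-refresh pattern Sean describes can be illustrated with a toy in-memory scheduler. The event name, the `sendEvent` shape, and the scheduler itself are all hypothetical stand-ins, not the Async Workloads client; the point is only that "refresh again in N days" becomes a one-line scheduled event rather than a background job you maintain:

```javascript
// Toy scheduler: records events with a future run time.
class ToyScheduler {
  constructor(now = 0) {
    this.now = now;
    this.queue = [];
  }
  sendEvent(name, { delayMs = 0, data = {} } = {}) {
    this.queue.push({ name, data, runAt: this.now + delayMs });
  }
  // Names of events whose scheduled time has arrived by `at`.
  due(at) {
    return this.queue.filter((e) => e.runAt <= at).map((e) => e.name);
  }
}

const DAY_MS = 24 * 60 * 60 * 1000;

function scheduleTokenRefresh(scheduler, expiresInDays) {
  // Re-enqueue the refresh a day before the token actually expires.
  scheduler.sendEvent('refresh-oauth-token', {
    delayMs: (expiresInDays - 1) * DAY_MS,
  });
}
```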
Brian Rinaldi 41:55 Can you... I mean, I know it's somewhat predictable, but could the amount of time you tell it to wait be dynamic, or does it have to be a fixed value?
Sean Roberts 42:10 it can be done. Yeah, it’s all. It’s, there’s none that’s like, there’s, again, I think there’s a total maximum. I have to look up the value. But again, that’s just to prevent people from having, you know, accidentally going to something crazy far. But no you can. It’s all dynamic. You can use milliseconds as an integer, or you can use this short string version, like, you know, 30 days would be like three, zero d like that string representation.
Brian Rinaldi 42:43 But I could, like, like, if I get, because I know some systems, like, if you get a token, it’ll tell you, like, oh, by the way, this token is good, like, when it expires, so you could then set the to retry to run based on the return value. Maybe, yeah.
Sean Roberts 42:59 And, speaking of handling these resilient issues, like if you start getting rate limited and you start seeing back pressure information from an API that says, hey, you have to slow down. You have to stop for five minutes, you can actually grab that response and throw there. You can check it out in the documentation, but you can throw a specific error that says retry with delay. So instead of doing the default back off, you can catch that and say, I’m gonna throw error with retry with delay for the five minutes, and then it’ll wait for at least that amount of time, and then pick up and continue. So that when you get rate limited, you’re dynamic to when you continue forward, you’re not just retrying and ignoring that fact. So it’s all dynamic.
Brian Rinaldi 43:47 Oh, that’s that’s actually really sweet, because I have run into that where, like, I’m calling an API and and I get rate limited, and I had to use like, a library and stuff to try and figure out how to get it to, like, pause and wait and things like that. So this is, would be much easier. Okay, that’s, that’s cool. I and I really, I think you all came up with a very simple kind of way to write these, you know, like, I, I’ve, as I was mentioned in the comments. Like, you don’t have to learn in in AWS world, it’s called, like, ASL, right, which is like the way you write in like a JSON format. You write out the whole step functions, and here you just, you just writing your code.
Sean Roberts 44:34 Yeah, exactly. Another nice, powerful little feature that really show showcases. This is the configuration block that says, you know, I subscribe to this event. You can also provide an event filter, and that event filter will run, you know, when the router is trying to figure out what workload should handle this event. And you. You have to, it’s a JavaScript function. You can write whatever you want in JavaScript like so yours receive the event data. So you can say, for example, I want to run this premium feature events, but my event filter, you know, when they send the user, I want to make sure that user is on an enterprise plan, or it won’t run this function. So you can become really dynamic and and you didn’t have to, we didn’t have to introduce some sort of middle layer step so that you can define what variables are and and on all this stuff. You just write JavaScript, and it, you’ll it allows you to terminate that events much earlier. So you don’t even have to process any of that stuff. You know. It saves on other things like database connections and and things for for work that’s not necessary.
Brian Rinaldi 45:51 That’s awesome. Well, it looks really cool. Actually, I’m curious to check it out for some stuff myself. So yeah, and I saw like, well, apparently Nick is, Nick is, you know who we were talking about earlier. He’s in the audience. And he’s like, already. He’s like, I’ve got these ideas. I I will bet you we end up with a two fold, two stack episode that that with Nick Taylor building, hey, sick,
Sean Roberts 46:18 let’s do this. Nick. I’m for it. And just to say, hey, you know, this is we released it not too long ago. We’re working on a lot of cool enhancements to it, and I’d love to talk to anyone who wants to explore it more and try it out. And even, even if you don’t want to go this route, if this concept of using event based architecture and durable compute stuff is interesting. Architecturally, I’m happy to talk about that using async workloads or not. You know, trying to help make sure people are are really capturing their moment on the web is like something I’d really love to talk to them about.
Brian Rinaldi 46:53 Nice, yes, I will. So, yeah, they can reach out to you. Are you? Do you have any, like, contact information that you
Sean Roberts 47:02 Yeah, so which is what my kids think I do? That’s what do you think Daddy does? And they said JavaScript. So that’s my that’s my tag.
Brian Rinaldi 47:14 Great, that. That is a good story. So, yeah, okay, so reach out to you at JavaScript all the channels. Yeah, yeah. Cool, cool, awesome. Well, thank you, Sean, this was great. Like I said, I’m, I’m curious to give this a shot. It definitely seems like simple enough that I don’t have to, you know, I can jump right in and start building things. So really good job on that. Thanks.
Sean Roberts 47:41 Thank you.
TheJam.dev is a 2-day virtual conference focused on building modern web applications using full stack JavaScript, static site generators, serverless and more.