What if the AWS SDK was really fast? What if it had a much simpler interface, and great error handling? Or built-in debugging output? What if it had its own API for integrating with your unit tests? What if its docs were, like, ok? And what if you could build your own custom plugins?
Well, now it can: it’s called aws-lite! Learn more about why we built our own open source AWS SDK, and whether a new AWS SDK is right for you.
Ryan (he/him) is a co-founder of Begin, a platform for building Functional Web Apps on AWS. Ryan is also a maintainer of the OpenJSF Architect framework, and the reluctant creator of aws-lite, a fast, community-driven SDK for AWS. When he’s not computering, you may find him (and his family) climbing on some rocks.
Ryan Block 0:10
Hello! All right, so: we created an AWS SDK. We call it aws-lite. If you’d like to follow along at home, you can find it at aws-lite.org. In order to explain aws-lite, I have to talk a little bit about my company, Begin. We’re not going to get too vendor-y today, but it will help elucidate why we arrived at creating our own AWS SDK. Begin creates what we call Functional Web Apps, which are totally serverless, Lambda-based web apps; you can learn a little bit more about the FWA idea at fwa.dev. The way it works is that every web route has its own dedicated Lambda, so your GET /foo route maps to src/http/get-foo, and every Lambda has to be really fast, because this is all customer hot path. By default, FWAs provisioned with Architect and Begin use a variety of services in the AWS ecosystem, like DynamoDB, SNS, SQS, and many more. We had been doing that for a very long time with the AWS SDK, as you might expect. Back in late 2021, we started to observe some unacceptable cold start latency that really caught our attention. We were seeing simple operations in these customer hot paths on the web taking over a second; over a second to load a web page because of SDK calls was not acceptable to us. So we started tracing it, and we found it was the AWS SDK. We found that really surprising, because our expectation was that we could always rely on the AWS SDK to be performant. Back then, when we brought this up to Amazon, they advised that we do a deep require of a given client: if we wanted to use DynamoDB, don’t require aws-sdk; instead, require aws-sdk/clients/dynamodb. This is more common knowledge now, but back then it was not really widely known, and it did improve latency for us, so we saw some improvements. So, flash forward a little bit: SDK v2 is deprecated, and we’re all told we need to move to SDK v3.
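As a sketch, the deep-require pattern for SDK v2 that Ryan describes looks like this (assuming a project with the aws-sdk v2 package installed):

```javascript
// Requiring the entire SDK v2 package loads every service client,
// which inflates Lambda cold starts
const AWS = require('aws-sdk')
const dynamo = new AWS.DynamoDB()

// Deep require: load only the DynamoDB client
const DynamoDB = require('aws-sdk/clients/dynamodb')
const dynamoFaster = new DynamoDB()
```

Both clients behave the same at runtime; the difference is purely in how much code Node has to parse and evaluate at startup.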
Probably a lot of people watching this are still mid-flight in that transition. And there was another problem, which was that, once again, the client was creating extremely long, unacceptable latency for these customer hot paths. So we had this major performance regression upon switching to SDK v3, and it was made abundantly clear that we were not going to be able to get around it: Amazon is shutting down SDK v2; it’s not going to be available in Lambda in July. You have to move to SDK v3. And just a little icing on the cake: if you have not recently gone to the SDK v3 docs, well, gird your loins. They’re difficult, to say the least. I remember this old Alan Kay quote that I really liked, which is that people who are really serious about deploying to AWS should make their own AWS SDK. Just kidding, obviously Alan Kay never said that, and I really want to underscore the point: we did not want to do this. I did not want to spend my time building an AWS SDK, but we really did not feel like we had a choice. We believe in Lambda-based workloads, and we believe in web apps that are dynamic and powerful and built with Lambda, and those need to be able to talk to AWS services quickly. So we decided to build our own, and we called it aws-lite. It is Node.js-specific for right now, and that is really important, because the AWS SDK that Amazon makes is intended to be used in a variety of environments. We don’t have that requirement: we are shipping for Node.js. And because we’re doing that, we can make it super fast. We can really hand-author the code, and make sure it’s not only readable but performant. And because we care about open source, we can make it a community-driven project. So let’s talk about speed. This is one of the biggest reasons why we did this. It’s not the only reason, but we felt this could be done faster, and it is. So: we published benchmarks.
When I was first bringing up the project, I put a lot of time and effort into a benchmarking suite. We didn’t want to just make claims; we wanted to be able to prove with science, no smoke and mirrors, just how much faster this stuff is. So in this chart, you can see that time to respond with a single DynamoDB roundtrip is about twice as fast with us. And one of the cool things is that this compounds: the more complicated your backend is in Lambda, the more time you will save using aws-lite. But you don’t have to take our word for it; we are seeing this in the real world. This isn’t just a synthetic benchmark thing. I love this post from Phil, who showed what happened when he cut over his recurring Lambda executions. He had a scheduled job that would take between 15 and 20 seconds, and suddenly his Lambda durations were cut in half. So it’s not just about benchmarks; this is real. But it wasn’t enough to stop with speed, because I work in the AWS SDK all the time at Begin, and I have to use this stuff, so it needs to be ergonomic. One of the things I really wanted was a nice debug mode, so you can watch requests and responses flowing through the system as you’re using the SDK. It’s got a fully pluggable API, so if you don’t like the way we have structured our service plugins, you can make your own. You can also make your own plugins for your own services: aws-lite is not only compatible with AWS, but it can be used with services like Backblaze or, in theory, any other AWS-compatible API, and there’s a growing number of those. As a first-class concern, we automatically handle the serialization and deserialization of AWS-flavored JSON. If you know what that is and what that means, I’m sorry; but if not, it’s really nice, and it’s just not a thing you have to think about or deal with. We have unit test integration that is totally generic.
You can plug this into any unit test system or setup and be able to define mocks and responses really easily. And I think our docs are pretty good. I mean, there’s always room for improvement, and I’m not trying to brag here, but I think they’re pretty decent. Then I wanted to talk a little bit about what openness means to us. Openness isn’t just “is the source code published on GitHub”, which, of course, it is. If you’ve ever looked at the AWS SDK, everything is built and generated; it’s very hard source code to navigate, it’s all meta code. We have no build step and zero abstractions, so if you are curious about how it works, you can just dive right in. This is really simple, low-level stuff that we think is really important. We have an open contributor model, so we encourage people to come in: if you have a service that you really rely on in the AWS ecosystem, you can volunteer to maintain a service plugin. And again, if you don’t want to do that, you can maintain your own. Everything is licensed Apache 2.0. Our service plugin surface area, and the number of methods we support, is growing rapidly, and we try to do everything in the open: it’s all on GitHub, it’s all in Discord, and we welcome PRs. So openness is a big part of this. AWS is a gigantic ecosystem; we’re not going to be able, as a small company, to cover everything ourselves, nor do we intend to, so we really do rely on external contributors to come help make this project better. With that, I’m going to do a little bit of hacking with aws-lite on my computer here, we can see it do some stuff, and then let’s go to some Q&A afterwards. I find that most of the interesting stuff seems to happen when people start imagining a world without the AWS SDK, or beyond the AWS SDK, so I want to leave plenty of time for Q&A if possible. So here I’ve got my demo project. There’s nothing in it except two dependencies.
Ryan Block 9:40
I’ve got the aws-lite client and the DynamoDB plugin, so I’ll just create a new file here, import aws-lite, and let’s get going. The first thing I need to do with aws-lite is instantiate it. I’m going to set the region to us-west-2; that’s the one I’ll be interacting with. You do have to specify a region, though it will also pick that up from your AWS configuration if you have that set. This is going to use my default AWS credentials, so I don’t need to specify those here. And we’re going to specify plugins: that will be DynamoDB. Then over on this side, I’ve got a little script that will continually refresh in the terminal every time I save and make a change, so if you’re wondering why the terminal is constantly changing, that is why. I call it trr; if you ever want to use it, it’s on npm under trr. Oh yeah, don’t forget, you have to install your packages. Here we go. Cool, so let’s go make our first call.
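The setup described here looks roughly like the following. This is a sketch based on the aws-lite docs, not a transcript of the exact demo file; the region is the one used in the demo.

```javascript
import awsLite from '@aws-lite/client'

// Instantiate the client; region is required (or picked up from your AWS
// config), and credentials fall back to the default AWS credential chain
const aws = await awsLite({
  region: 'us-west-2',
  plugins: [ import('@aws-lite/dynamodb') ],
})
```

Once instantiated, the plugin's methods hang off the client, e.g. `aws.DynamoDB.Scan(...)`.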
Ryan Block 11:14
So I’m just going to do a DynamoDB Scan here, and we’ll give it a table name.
Ryan Block 11:30
I think that is the table name that we’re going to scan. How do we look? Cool. I’ll just explode that a little bit so it reads better. So this is the resulting scan of the DynamoDB table name I specified here: it’s got one row, an ID of foo, and data of ok: true. Pretty straightforward. Now let’s see what happens if we go under the hood and turn on debug mode. This is something I use all day long, every day, it turns out. Here, once we get it going, we see client instantiation information: what’s the current configuration, what are the methods available. We’ve got my credentials; we redact your keys, just in case. And then we actually print the raw request here, so you can see what’s going on under the hood. In this case, not a lot; it’s just a pretty simple request. And in the response, you see we’ve got the AWS-flavored JSON that I referred to earlier. Now, this is not my favorite to work with, so when I started on this project I wanted to make sure this came out a little more workable. In AWS SDK land, you have to instantiate a document client, and there’s a whole bunch of redirection that happens around there. Here, we just give you the data as you would expect it to be, and you can see that right here. So let’s see what happens when we want to write a row. I’m going to put a new item: a table name, and an item, so I’ll specify an ID of bar, since my primary key on that table is ID, and then I’ll do another data property, and I’ll call that ok: false. So now I’ve got two rows, pretty sweet. And again, we’ve got the whole request/response cycle going on over here: you can see my new row published to the PutItem method with its response, then the request from the scan and the response for that, and here we go.
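To illustrate what that automatic deserialization is doing, here is a hand-rolled sketch of unmarshalling AWS-flavored JSON into plain JavaScript. aws-lite's DynamoDB plugin handles this (and the reverse) for you, so this helper is purely illustrative:

```javascript
// AWS-flavored JSON wraps every value in a type descriptor, e.g. { S: 'foo' }.
// A minimal unmarshaller covering a few scalar types; a real implementation
// also handles lists (L), maps (M), sets, binary, etc.
function unmarshall (item) {
  const result = {}
  for (const [ key, typed ] of Object.entries(item)) {
    const [ type, value ] = Object.entries(typed)[0]
    if (type === 'N') result[key] = Number(value) // numbers arrive as strings
    else result[key] = value // S (string), BOOL (boolean), etc.
  }
  return result
}

// The demo's scan row comes back as plain JS instead of type descriptors:
console.log(unmarshall({ id: { S: 'foo' }, data: { BOOL: true } }))
// → { id: 'foo', data: true }
```

The document-client layer in the official SDK exists to do this same translation; aws-lite just folds it into the service plugin.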
Now let’s say I actually wanted to implement this as part of my business logic, and I wanted to have some control over this whole cycle in my unit or integration test suite. We can do that with aws-lite’s built-in testing API. So what I’ll do here: I’m going to disable that, and I’m going to turn on testing mode. To do that, I call awsLite.testing.enable(), and that turns on testing mode. And when I do that, this scan is not going to find anything, because I turned on testing mode and didn’t specify any mocks. So I’ll go ahead and do that now. The first property you specify in a mock is the method name, so I want to say this is going to be my response for a Scan. And I like the way that looked up here, so let’s say my business logic only cares about the Items array. I’ll just go ahead and specify the Items array; I’m not going to worry about the Count property or the ScannedCount property. And then I’ll do hello and whatever. Okay, I’m going to turn off debug here, because it’s not really going to add a whole lot of value in testing mode. But now you can see that my scan, which I had been running down here against live AWS, is returning the data I specified here in my mock. And what’s also kind of neat (let’s get rid of that for now) is I can do this sequentially. Say your business logic has multiple scan operations as it progresses: you can set up multiple responses in sequence, so I’ll turn this into an array. Let me space that out here, so I can see what I’m doing.
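The mock setup described here looks something like this. It is a sketch based on the aws-lite testing docs; the item contents are stand-ins for the demo's data.

```javascript
import awsLite from '@aws-lite/client'

// Enable testing mode: no live AWS calls will be made
awsLite.testing.enable()

// Mock the DynamoDB Scan method; passing an array queues
// sequential responses, one consumed per call
awsLite.testing.mock('DynamoDB.Scan', [
  { Items: [ { id: 'hello', data: { ok: true } } ] },
  { Items: [ { id: 'second', data: { ok: false } } ] },
])

// ... run business logic: the first Scan call gets the first
// mocked response, the second call gets the next one ...

// Turn testing mode back off to talk to live AWS again
awsLite.testing.disable()
```

Because the mocks live at the client level, this works with any test runner; no Sinon-style stubbing of module internals is involved.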
Ryan Block 16:37
So my first scan is still that hello/okay response, and if I run this a second time, you see the second in the sequence. This allows me a lot of granular control over how AWS requests and responses are tested. This is a first-class consideration of aws-lite; it’s not something we farm out to another library, and it’s baked really low-level into the system. Which is nice, because there are some tools out there, like aws-sdk-mock, and there’s a v3 mock as well; those use systems like Sinon and other mocking tools under the hood that are, in my opinion, a little complex and somewhat brittle. So again, this was something we wanted to make a really nice first-class consideration. And then if, say, I just want that mock for the first request, I can disable it. So then I’ve got my mocked response for the first time I run Scan, and then I disable testing, and it goes right back to talking to live AWS. So yeah: aws-lite. Super fast, open source in, I hope, a meaningful way, integrates nicely with your testing systems, and highly extensible. That is the project I’ve been working on lately, and I would love to answer questions about it.
Sean C Davis 18:32
Thanks, Ryan, that was great. We’ve got a few questions waiting for us, and folks out there in the audience, keep using the Questions tab and dropping your questions in there. I will filter them, and we’ve got about 20 minutes or so to chat here. I’ve got a couple already lined up from Brian, so let’s start there. Brian asks: what are the key differences that enabled you to get such a significant performance difference?
Ryan Block 18:58
Yeah. So, because most people are just consumers of the AWS SDK and don’t spend a lot of time getting into the weeds there, you don’t really notice how it was authored. Basically, the AWS SDK is authored almost entirely in meta code. And I have kind of an interesting real-world example that speaks to the differences between the AWS SDK and aws-lite. The reason AWS authored their SDK in meta code is that they have some requirements we don’t: they wanted to automatically publish types, and they wanted a system that could be used in the browser as well as Node, and possibly in other JavaScript environments. We don’t need to do those things. We are actually drafting on their types, which is really nice, and we thank them for publishing those types so we can make use of them. And we’re not targeting the browser, so we can write real JavaScript. The example I would give of how this meta code nets out: AWS’s CDN system is called CloudFront, and CloudFront is an older system in their ecosystem that utilizes deeply nested XML for all of its requests and responses. Writing a meta code system to handle the validation and publishing of super deeply nested XML from JavaScript, and interpolating that into XML, is quite tricky. When I was going through and creating our plugin for that, I observed that there was something like 10,000 to 12,000 lines of code for handling this deeply nested structure. When I authored it by hand, which was quite annoying (I’m not going to lie, it was not fun to do), my implementation was about 200 lines of code. And, I mean, maybe there are some bugs in there, I don’t know; I have not seen them, and I use it. You never know, there could be bugs in their system as well. But if you’re comparing my roughly 200 lines of code to their 10,000-plus lines of code, this is why, in a nutshell.
So yeah, we write code the old-fashioned way: with ChatGPT. Just kidding. We write it by hand, and it’s really fast. That’s kind of the condensed answer.
Sean C Davis 21:36
So you said you take advantage of their TypeScript types, which is great, because one of the questions I had was: did you put an emphasis on having parity in terms of method names? Are you matching those names for what you’ve included in aws-lite, so it’s easy to move from one to the other? Or did you take your own approach on the SDK design?
Ryan Block 22:05
Yeah. So for the first-class plugins, the ones that we publish, we use AWS SDK semantics. We use their method names, and we try to use all their property names. Occasionally there are things we do to improve upon them, and we have the ability to denote those in the types that we publish. But for the most part, we just re-export all their types, because we can: we’re following their semantics. Part of that is so we can draft on their types, which is nice; this project would be almost impossible for me to maintain with all of the API integrations as well as types for them, but drafting on their types allows us to focus solely on the logic. So, all the type stuff: if you use that, great, it’s there for you. If you don’t, no worries. We’ve got pretty good documentation that gets generated from the stuff we already maintain. I hope that answers the question.
Sean C Davis 23:29
Yes, definitely. Another one coming in from Brian: are there things that I cannot do with aws-lite that I would have to use the full AWS SDK for?
Ryan Block 23:46
There’s not many. I think it really just depends on the services you use. If you use the AWS SDK for a service that we don’t yet have a plugin for, and you don’t want to author your own, we do have an escape hatch for that: we have a low-level client that allows you to interact with any AWS service without having a plugin. Which is nice because, and we sometimes see this in our own work, we bump into something where we don’t have a plugin and we just need a single method; so we just do a raw request with aws-lite, and we’re done, it’s good. If you don’t want to use that escape hatch, if you want all the nice pre-built semantics of the AWS SDK, that might be a thing. It’s also still fairly early; we just introduced the library late last year. So for certain things, for example what is called the AWS credential provider chain, we don’t yet have full support. AWS has a system called IMDS on EC2; if you use EC2 (and this is a more serverless crowd, so maybe you do, maybe you don’t), we don’t yet support IMDS. We will, but we don’t right now. So there are edges like that. But generally speaking, I think if you are working on serverless-specific AWS workloads, it will probably cover most of your normal day-to-day use cases.
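The escape hatch Ryan mentions looks roughly like this. It is a sketch: per the aws-lite docs the client itself is callable for raw, signed requests, and the service and path shown here (Lambda's ListFunctions REST endpoint) are illustrative.

```javascript
import awsLite from '@aws-lite/client'

const aws = await awsLite({ region: 'us-west-2' })

// No plugin needed: make a signed, raw request against an AWS service API.
// Illustrative example: list Lambda functions via its REST API
const response = await aws({
  service: 'lambda',
  path: '/2015-03-31/functions',
  method: 'GET',
})
console.log(response.payload)
```

This is handy when a plugin doesn't exist yet and you only need one or two calls.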
Sean C Davis 25:21
It sounds like, do you have some level that you have to get to? I’m thinking, if you’re going to spin up a plugin for a new service, do you have some threshold where you say the MVP is X percent coverage of all the methods the AWS SDK offers? Or are you going all the way? Or does it just depend? “It depends” has been a theme today.
Ryan Block 25:53
Yeah, I think it depends, for the most part. And this might sound kind of self-serving, but: last night, for example, I was working on something for the aws-lite performance project, which is this adjacent project that does all of our benchmarking and scoring, so we have something to compare against. I needed a plugin for AWS IAM, and it didn’t exist, and I was just trying to get something done. So I authored the four methods that I needed, and I published it. So if you use IAM and you went to aws-lite, you might say: oh, this only supports four methods, and these are not even the methods I use. That’s totally fair; that would be a completely fair criticism, and I will take that one on the chin. But also: let’s not let the perfect be the enemy of the good. And we encourage people: if you need more methods, send us a PR. It’s all there. We’ll take a PR from anybody; just sign the CLA.
Sean C Davis 26:57
Yes, great. And speaking of that, I’m curious. You mentioned that there’s obviously this massive landscape of everything you would have to support if you were going to support all of the services, but there are also so many other tasks and places to spend your time when you’re supporting an open source project. It’s a big project to take on. How do you approach balancing and prioritizing your time so that you’re also able to get the work for Begin done, but keep this project moving forward?
Ryan Block 27:40
Yeah. I think it kind of takes care of itself, insofar as it’s got very clear priorities. We had a bug report in the last week where something was pretty badly broken under certain circumstances, and that was a pretty easy decision, right? I need to fix that. We prioritize bugs first at our company, culturally; we drop what we’re doing to focus on important bugs. So I did that. Most of the time, though, I don’t really see that in aws-lite; it’s been very, very stable. And if PRs come in with new methods or new plugins, we triage those within a day or two. But the project is still young, and we’re not inundated, so it hasn’t really been a huge problem. Over the fullness of time, as the project grows (and it has been growing, and usage has been increasing), I’m sure this will become more of a problem, and then we’ll have to designate additional people to contribute and work on it. But we’re just trying to solve one problem at a time.
Sean C Davis 28:54
Okay, yeah, that makes sense, for sure. Back to performance for a minute. When Brian asked about the performance comparison, you mentioned that the pure quantity of code was one piece, and the other piece was that you’re able to focus on and target just Node. I’ve seen different approaches, with people introducing different languages to try to make things even faster; there’s been a lot of talk about Rust over the last couple of years. So I’m curious if you’ve thought about other ways to improve performance, or whether you’re getting what you need from Node: it’s good enough, it’s solving your problems, and you’re going to continue down that path.
Ryan Block 29:41
Yeah. If there’s a thing that we can ever do with aws-lite that will increase performance, I will do that thing. Just as an example, I created an issue this morning for myself. We have two dependencies (this is part of the performance story as well: minimal dependencies), and those dependencies have no sub-dependencies. We rely on Michael Hart’s aws4 for signing the requests, a really great library, and we rely on ini for reading the AWS credential files on your local machine; we have to extract the credentials in order to make the requests. The ini dependency is 20 kilobytes. I think we can probably do that in 10 to 20 lines of code. Is that a gigantic savings? No, but it’s easy, it’s not a challenging thing to do, and it removes a dependency and removes file reads; it’s all incremental. So I am open to any performance increases that anyone has; I think there’s no performance increase too small. We don’t use things like Axios for HTTP; I use the raw Node HTTP built-ins. Everything we do is performance-optimized. If there’s a thing we can do in Node to make it faster and more efficient, I will do that thing. As I’m maintaining it, if I see something, I will try to do it; if I haven’t seen it, I want to hear about it. Part of the reason we talk about performance a lot is that we want to put people in that mindset. So if you’re an open source contributor and you’re looking at it and you think, oh, this part of the project could be optimized better: totally, I’m all about it.
Sean C Davis 31:27
And you said that the official SDK also supports the browser. Have you had requests come in for that? Or is that a place where you’re drawing a hard line, because you know that’s going to totally bloat performance?
Ryan Block 31:43
Yeah, I mean, I’m not really sure what the browser use case is outside of AWS’s own console; I think they may be the primary and possibly only user. The problem with the browser is that you’re interacting with AWS services, and in order to do that, you need credentials. You’re not going to send your AWS credentials to your customers’ browsers; interacting with these services requires a trusted backend process. So no, the browser is not really a consideration for us. We have had requests for Deno, and we have tested, and it does work in Deno really well. We have a little bit of a blocker right now just in order to get the Deno support stood up, but as soon as we get past that with the Deno folks, we’ll probably publish a Deno version on JSR. And I will also say: even if you don’t use aws-lite, it’s been kind of cool to see AWS respond to the existence of aws-lite. I’ve seen this in the aws-lite performance project, where, for the things that we benchmark, specifically a DynamoDB client, their times keep going down a little bit as the months have gone by. Our most important benchmark used to have them consistently over 1200 milliseconds, and I think they recently broke 1000; I think it’s in the high 900s. So they are making improvements because we’re out there pushing them. And if for no other reason than that, I think it’s helping contribute to the ecosystem.
Sean C Davis 33:32
Yes. That was actually a question I had: have you talked with AWS at all? And maybe another way to phrase this is: what does success for this project look like? Is it AWS adopting it? Is it just the official SDK getting better? What are the goals?
Ryan Block 33:53
Um, well, our goal was that we needed an AWS SDK that we could rely on not breaking us, and that was fast. The transition from v2 to v3 was pretty violent for us, because all of the code that we had written in Lambda that relied on AWS SDK v2 now had to be reauthored on the begin.com side. That meant a ton of unplanned work that provided absolutely no value to our business or to our customers, and that sucks. Philosophically, an approach we’ve always taken with our open source is that we don’t break people, or if we do, we do it minimally and as infrequently as possible. A hard break where we’re telling all of our customers they have to completely alter their business logic is pretty unacceptable to us, so having a really stable position to build against AWS moving forward was pretty important to us from a business standpoint. From that perspective, we’ve already achieved our objectives. We now have a tool where, when something comes up and there’s a method that we need for begin.com, if it doesn’t already exist, we build it; and if other people benefit from that, great, and if other people want to contribute to that ecosystem, even better. So in that regard, the project has already achieved its goals, and we’re using it in production; everything’s cool there. In terms of broader impact: yeah, if it pushes AWS to do a better job with their work, great. I haven’t seen it come forward in the documentation; the AWS SDK v3 docs are notoriously bad, probably some of the worst I’ve ever seen in any project anywhere, and I will go to the mat on that one. No one will argue with me if you’ve ever had to use them. It’s really bad; it is clearly by machines, for machines, not for people. So I won’t say that ours is the best. It’s just good enough,
I think, and I hope, for people to use. So in that regard, hopefully it can be a good resource for people, especially since we’re using their semantics; you might even be able to use the aws-lite docs to use SDK v3. So yeah, I think those are the goals. And if, along the way, people see this stuff and are philosophically aligned with it, and that makes them want to try out Begin, that’s cool, but that’s not the point. The point was so that we didn’t have to get broken by this stuff again.
Sean C Davis 36:32
Yeah, that was another question I had. It’s obviously a lot of work to put into this, and it was solving a problem, but has anything positive come out of it yet, other than just being able to continue serving your current customers?
Ryan Block 36:50
Yeah, I mean, I think it’s really streamlined our own development. Things that used to take a while when working with the SDK now don’t take as long. It’s a little bit of anecdata, so your own mileage may vary, but I’ve been working with the AWS SDK for a super long time, and I can do a rough comparison: okay, I’ve got this business logic, it’s written in SDK v2, and we need to do something with it. And I often find (I don’t know if I’ve ever found this not to be the case) that it is faster to build the aws-lite plugin from scratch to accomplish that than it is to attempt to port SDK v2 to v3, because of how difficult that thing is to use and how poorly documented it is. Some people may find that that is not the case, and that’s totally cool, but that has been the case for me. It has accelerated our move off of AWS SDK v2.
Sean C Davis 37:53
Yes, okay. Documentation has come up a few times, so my final question is: what have you learned about writing good documentation throughout this process?
Ryan Block 38:08
I try to hedge, because I never want to make the claim that it’s good; I would let other people be the judge of that. It’s tricky, you know. I have a background in writing too, and writing that helps people understand what you’re doing from an implementation perspective is really challenging. Being terse is not easy. So we do the best that we can, and if it’s not good enough, hopefully people will contribute and send us PRs and things like that. But I think really, what it starts from is a human being imagining another human being consuming this thing, and going from there. What we have in SDK v3 is, sadly, not that; it is really a system of types, by machines, for machines, and the docs are generated from that. So I think we should remember that these are tools for humans: code is for people, not for machines. If that’s the position we start from, we’re all going to be better off.
Sean C Davis 39:17
Love that. All right. Well, thank you, Ryan: thanks for the presentation, for the conversation, and for the work you’ve done for the open source community at large. Really appreciate it.
Moar Serverless will give you all the information you need to take advantage of serverless in your application development including new AI and edge capabilities.