
What the Heck is Edge Computing Anyway?

The Edge is the new frontier of computing possibilities, offering promises, opportunities, and its own set of challenges. In this talk, we’ll break down what it is, why it’s awesome, and how it fits into your application architecture.

We’ll cover things like:

  • What are the benefits?
  • What are the limitations?
  • When does it make sense?
  • When doesn’t it make sense?
  • How do you get started?

Austin is a web developer from Portland, Oregon. He’s been breaking and fixing websites for a decade and has worked with award-winning agencies, innovative tech start-ups, and government organizations.

His goal is to keep building cool things for the web and have some fun teaching folks like you through writing, open-source, live-streaming, The Function Call podcast, speaking, and running workshops.

Transcript

Austin Gil 3:51
Hey, thank you so much. Yeah, I want to take a moment really quick to say that I’m very grateful and honored to be here, and that you’ve given me this opportunity to talk today. I know that CFE.dev has been providing a lot of awesome, high-quality resources to people for free, which is amazing. So I am very happy to be part of that. So thanks.

Sean C Davis 4:15
Amazing. And I’m really excited to hear this talk. So we’re gonna dive into edge computing, which is something that I’ve been really interested in. I’ve read a lot about it and kind of kept up on the space, but haven’t personally put it into practice. So yeah, really excited to hear what you have. And with that, I’ll bring your slides up. I’ll get off the screen and come back on at the end, and we can have a discussion. And for any of you folks who are listening in, feel free to just keep the chatter going. If you’re on Crowdcast, at the bottom of the screen there’s an Ask a Question button which you can use to ask a question. I’ll be monitoring that with the chat, and then come back up and we’ll address anything that came up during the meetup at the end. All right, and with that, take it away, Austin.

Austin Gil 5:08
Yes. Hello, my name is Austin Gil. I’ve been working in the web development space for about 10 years, and I’ve been doing some open source stuff as well, mostly in the Vue ecosystem. And I call myself kind of a web educator, because I spend a lot of time creating content or working at meetups or doing presentations like this. And I’m fortunate enough to work at a company called Akamai, which is the largest and most established CDN, web application firewall, and edge compute provider, and all these cool things. And part of that is what we’re going to get into today.

So I’m very excited for this topic, because we’re going to be answering the question: what the heck is edge compute? Edge compute can also sometimes be referred to as edge functions or edge workers. So really quick, I’m just going to answer that question with a long string of text, which says that edge compute is the result of distributing serverless functions to thousands of locations around the world to handle requests from as close to the user as possible. And the result of this is programmable, dynamic, customizable responses with the least amount of latency, which improves your speed and performance. And that’s basically it. If you just wanted to come in and hang out to figure out what it is, that’s it. But that’s also really boring, and so I’m actually going to spice it up with a little bit of a metaphor.
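
Before the metaphor, here is roughly what that definition looks like in code: a minimal sketch of a distributed serverless function, using the Service Worker-style fetch handler that several edge platforms expose. The registration details and APIs vary by provider, so treat this as illustrative rather than any one vendor’s interface.

```js
// Minimal sketch of an edge function (Service Worker-style API;
// exact registration and APIs vary by provider).
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const { pathname } = new URL(request.url);
  // This runs on the edge node nearest the user, so the dynamic
  // response below is generated with minimal latency.
  return new Response(`Hello from the edge! You requested ${pathname}`, {
    headers: { 'content-type': 'text/plain' },
  });
}
```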

So today, we’re going to be talking about the process that we could take in starting a business to make dog hats for the world, which makes for just kind of a decent analogy, but it makes for really fun photos. So I like to use that. To describe edge compute, we really should start with defining what compute is. And it’s basically the amount of work that computers have to do in order to generate some output. Today, we’ll be talking about it in terms of HTML, but it could also be generating a PDF, or dynamic images, or JSON, or something like that. And where compute happens is really the interesting thing, because up until recently, we’ve had traditional servers, we’ve had client-side rendering, we’ve had static site generation, and we’ve had cloud functions. So we’re going to discuss all of those. Traditional servers are when you have a machine that’s running software that you decided upon, and it’s going to execute the code that you wrote to generate that HTML output. And there’s a lot of nice things about having traditional servers, in that you have predictable pricing, you have sort of unbounded runtimes, because they just stay up as long as they don’t crash, and you’re in complete control of everything. And that is a double-edged sword, because we’ll see in the cons that with traditional servers, despite the fact that they’re running all the time, you pay even if they’re not doing any work. They can also be a little bit more complicated to scale up or scale down, because you either have to network multiple machines or add more compute resources to the same machine. Additionally, traditional servers are going to be deployed to, let’s say, a single location: one machine in one part of the world. And that can add latency for users in other parts of the world trying to reach that server.

And again, because of the double-edged nature of you being in complete control, you are also responsible for maintaining the server software in addition to your application or business logic. So we can think of traditional servers sort of like a commercial workspace. Now, you can either go out and purchase a commercial workspace, like you own that building, or you can go out and rent one. And you provide all of the machinery and workers and people that go inside of that workspace, who are going to be responsible for receiving customer requests for knitted dog hats and then putting them together. And you’re going to be paying that bill for the month, regardless of whether you receive a bunch of users or clients coming in asking for dog hats or not. On the other hand, we have client-side rendering. And this is where you basically take some code and provide it to the user’s browser, and the user’s browser is going to be responsible for running the compute that generates the HTML for the page. Now, that could happen in JavaScript or WebAssembly, but the key thing is that this is code that runs on their machine.

Now, the pros here: once you have the code, the JavaScript to build the page, you’re not dealing with latency, because any subsequent compute that has to run is already provided on the machine. So there’s no latency, no performance hit for having to make a round trip to the server. Additionally, once you have these resources, with something like a service worker installed, this can work offline. So users can take the JavaScript that runs your application, go offline, and continue to use your application. The cons to client-side rendering are that it does require a greater initial download for the user to get that page. It also means that you can’t include secrets in the code that generates those pages, because any user can view the source of the page. You also don’t have control over the environment in which that compute is going to happen, because users can choose which browsers they want to use. And the performance is greatly going to depend on the user’s device. So you may want an application that is fast and responsive, but that may not be the case if a user is working with an older device, or if they have a lot of processes consuming the device’s resources at the time.
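
As a rough illustration of the offline point, here is a minimal service worker sketch: a cache-first handler that keeps the app shell available without a network. The file names and cached paths are hypothetical:

```js
// main.js -- register the service worker (hypothetical file names)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js -- cache the app shell on install, then answer cache-first
// so the app keeps working with no network connection.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('app-shell-v1').then((cache) => cache.addAll(['/', '/app.js']))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```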

So we can think of client-side rendering as, instead of having that sort of workspace, maybe we just provide a whole bunch of DIY kits, with the resources and the instructions. And we provide these to all of our customers so that customers can go off on their own and sort of knit their own dog hats. Next, we look at static site generators. This is where, instead of generating the output on demand, you go ahead of time, figure out all of the pages in your application that you’re going to build, and build them at once. And then you just generate static resources that you can deploy onto a server or various different deployment targets, with all of the pages already built. So at the time that a user requests a page, you don’t have to build it on demand; you can just serve a static asset. And technically, this is still under the umbrella of server-side rendering, because whether it happens on your local computer or on some remote computer, it’s still going to be running in an environment like Node.js, for example, or Go, or, I don’t know about PHP, to build a static site that ends up on some server.

So the benefits here are, once again, because those assets are already built and we’re not generating the code on demand, it’s going to have an immediate response time. You can also deploy it to things like a CDN, which we’ll look at in a moment. It’s also going to be very fast, cheap, and secure, because again, you’re just dealing with static assets that respond immediately, so you don’t need to worry about a whole lot of compute resources. The downsides are, because everything is generated ahead of time, before the request, you can’t really do anything dynamic based on the user request. So things like logins or authenticated content: you can only do those if you also combine it with client-side rendering or some other form of dynamic rendering that is going to do the compute. The other downside is, the more pages you have to build, the longer it’s going to take for those static site generators to build all of the pages in advance. So build time grows linearly with the amount of content that you have. We can think of static site generators like taking a whole bunch of plans for knitted dog hats, knitting them in advance, and just having them waiting around. So when someone comes to our commercial workspace and asks for a blue knitted hat for a lab in this size, we can just go into the back, grab the one that we have available, and provide it to them. We don’t have to make them wait in the lobby while we knit those dog hats and then hand them over.

Okay, and lastly, in terms of compute resources, there are cloud functions, which are also sometimes called lambda functions or serverless functions. And this is where you have a service provider, like a cloud provider, that is going to handle all of the compute necessary and scale the application up and down, based on the functions that you provide. So they maintain the machines and the server-side software to run those machines, and they receive requests and pass them along to the functions that you provide. And usually, they’ll give you some sort of URL where that function can receive requests. So the nice thing about serverless functions, or cloud functions, is that they’re very easy to provision. You can deploy new cloud functions without a whole lot of resourcing and networking and planning and architecture. They also have this concept of being infinitely scalable, meaning as the URL that’s tied to that cloud function receives more traffic, service providers can route that traffic to more and more machines that are running that same function.

Additionally, most serverless function providers follow a pay-for-what-you-use payment model, where if it’s not being used at all, you pay nothing, but if you use it a lot, you pay more. So that can make pricing a little bit less predictable, but also often affordable if you don’t have a lot of traffic. And because the service provider is maintaining the server software that’s running the code, you really only have to worry about the business logic, the actual function code that you are providing them, which can make maintenance very easy. There are some downsides to serverless functions. Those are going to be that you basically have to follow the conventions that the provider outlines in terms of naming and files and structure and things like that. They also follow this sort of stateless principle where, because of the nature of how they scale up and down, and because you can’t trust which machine that function is going to run on, you basically can’t rely on shared state in that function, so memory or file system, because that function needs to be able to run across multiple different machines.
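
As a small illustration of that stateless principle, here is a sketch (the handler signature is generic, not any particular provider’s) showing why module-level state can’t be trusted across requests:

```js
// Illustration of statelessness: this module-level counter lives only
// in the memory of whichever instance handles the request, so its
// value cannot be trusted across requests.
let requestsSeen = 0; // unreliable: each scaled-out instance has its own copy

export async function handler(request) {
  requestsSeen += 1;
  // Two consecutive requests may hit different machines and each
  // report "1". Durable state belongs in an external store instead.
  return new Response(`Requests seen by this instance: ${requestsSeen}`);
}
```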

And because the service providers are responsible for the server software, they get to determine what software is going to be available. So you have a limited choice of resources and languages to choose from. And generally, you choose what region you want your serverless function to be deployed to. So similar to traditional servers, that cloud function can be far away from the user that’s requesting it, and therefore it could suffer from latency. So I like to think of cloud functions as some robots that are trained to knit these dog hats. Think of it as sort of that same warehouse we were talking about, but instead of us getting the warehouse and providing all of the training and people and machines in it, it’s just a general warehouse that can build whatever, and all they need from you is the recipe, the instructions on what to make. So we provide them with the instructions to make knitted dog hats, and then whenever a customer comes to the warehouse, they can request a knitted dog hat, and the warehouse knows how to knit it and provide it.

Okay, so that’s as far as I’m gonna go with compute. I’m sure that there’s more, but that’s all that we need to know. What we’re going to move on to is defining what the edge part of edge compute is. And the edge you can think of as basically a network of globally distributed computers that are capable of handling user requests. And the reason why we discuss the edge is because of that discussion around latency. If we put things all around the world, as close to the user as possible, we can reduce the amount of time it takes for requests to reach the server and for responses to get back to the user. So that improves performance, which improves the user experience. The best example of the edge in terms of web development is a content delivery network, or CDN. And this is essentially thousands of connected computers that deliver resources to users when they request them. Now, these are great because they’re so close to the user, oftentimes within just a few miles, that they can really reduce the amount of latency. And this also works really well for anything that’s static. So in the way that we were discussing static site generators before, you can generate your entire site and distribute all of those static resources around the world, and your site is going to respond as quickly as possible to any request. The biggest downside to content delivery networks is that they are designed for static content specifically. So they work great for CSS or JavaScript or images or fonts, but if you have an application that has to respond with dynamic content, CDNs are not the best choice for that.

So CDNs I like to think of as, instead of like a big factory, more like convenience stores, where if we think of those static, pre-made dog hats that we had generated before, we can take those and distribute them to a whole bunch of convenience stores all over the country. And then people, instead of having to drive very far or, you know, order something from very far, can just walk down the street to their nearest corner store and pick up a dog hat that’s available. Okay, so with compute and edge out of the way, we kind of understand what’s going on. I also want to take a little sidetrack into talking about performance, because that’s going to be sort of a key principle. And I like to think of performance in this 3D concept, where we have distance, which is the amount of distance that a request and response have to travel. That’s also called latency. We have the download size, the amount of data that has to be downloaded in that request before the user can actually see what the page looks like.

And then we have the device, which is the hardware capability. So if you have a slow device, you could have slow performance. Also, hot tip to anyone creating slides: I definitely leaned more towards alliteration than coherence on this one. So, you know, little tip to any presenters out there.

The other thing to talk about in terms of performance is what’s commonly referred to as the speed of light problem. And that is that, although technology continues to improve every year, the speed of light is a constant. And so unless we can figure out a wormhole technology that can take a user request through a wormhole, around the speed of light problem, we are always going to have an issue where a request can only travel as fast as the speed of light. So when we consider an edge compute device, or an edge node, versus an origin, we can see that eventually, as technology gets better, it’s only going to hit a certain limit and be bounded by the speed of light. So whatever we can get closer to the user is going to be faster. Ideally, we would do things as close to the user as possible, similar to a CDN, and we would move more work onto the server so we’re not reliant on users’ devices. So we would do things sort of like cloud functions or servers.

And then lastly, we would probably send smaller assets. But that’s not something I really have a good recommendation for, because that’s going to be largely dependent on, or subjective to, your application. So I think at this point you can probably guess where we’re going: we’re going to talk about edge compute. We can think of edge compute, in that analogy, as if we had taken those robots that are trained to knit dog hats dynamically and placed them at the location of the convenience stores. So now, people don’t have to go, you know, all the way across town or across the country to get to the factory; they can just go to their convenience store. But they also aren’t limited to only choosing pre-built or static options of knitted dog hats. They can get, you know, whatever color they want, with whatever team logo they want, in whatever size they want. So it’s pretty great. So edge compute is, once again, a programmable runtime, sort of like cloud functions, that is globally distributed, sort of like a CDN, and lives somewhere between the client and the origin server. So this provides the opportunity to offer dynamic server-side functionality that executes as close to the user as possible. And a lot of edge compute providers also include reliable location information. That is really not doable on a CDN, because you don’t have dynamic compute, and it’s not necessary on a traditional server, because you know where the server is deployed. But it’s really helpful having this location information and doing some stuff dynamically against it. Also, a lot of service providers offer access to a key-value store in case you need to have persisted data on the edge.
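
Put together, those last two features might look something like the sketch below. The geo metadata and KV bindings are hypothetical stand-ins; every provider exposes these under different names.

```js
// Hypothetical sketch of two edge-specific capabilities: provider-
// supplied location info on the request, and an edge key-value store.
// Both the `geo` and `kv` bindings are assumptions for illustration.
export async function onRequest(request, context) {
  const country = context.geo?.country ?? 'unknown';            // location info
  const greeting = await context.kv.get(`greeting:${country}`); // edge KV read
  return new Response(greeting ?? 'Hello!', {
    headers: { 'content-type': 'text/plain' },
  });
}
```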

Austin Gil 23:32
So some of the benefits of edge compute are going to be that we can provide these dynamic experiences with less latency when compared to serverless functions, cloud functions, or servers. We also don’t ask the user to download as much when you compare it to client-side rendering. Also, doing less work on the client is better for the user’s device in terms of battery life and performance. There’s a lot of benefits to developers, too. Because these functions are easy to spin up, there’s a lower barrier of entry for a proof of concept. You also have consistent environments, unlike with cross-browser issues. You can have dynamic content based on the user’s location. And you don’t have to worry about secrets leaking, when compared to client-side rendering.

Austin Gil 24:25
You can use one of the most ubiquitous programming languages in the world, because almost every edge compute service provider offers JavaScript as a language that you can program in. And you don’t have to deal with the headache of maintaining server infrastructure or server-side code. Well, server-specific HTTP code.

And lastly, as far as the benefits go, we can think of the benefits to the owners. Because you can do dynamic compute at the edge, you can now reduce the amount of work that has to happen on the origin server. That can either improve the performance of the origin server, because more resources are available; improve its reliability, because it’s less likely to fall over as a result of too much scale; or you can even scale down an origin server and save costs, because now you have edge compute handling certain types of requests. Edge compute is also a subset of serverless compute (I forgot to say that), so it gets a lot of the same benefits as serverless compute, for example, automatic scaling to answer requests that are distributed globally. And additionally, most edge compute providers follow a pay-for-what-you-use model, which can be a cost-saving opportunity. But nothing’s perfect.

And I never like to present a topic as if it’s the end-all-be-all. There are definitely some issues with edge compute that are important to understand before adopting it. Firstly, a lot of the biggest edge compute providers use what are called V8 isolates, which is JavaScript running on the edge, but not in the same way that you might be familiar with if you’ve been using Node.js. Now, we are starting to see more compute resources coming to edge providers, and some are beginning to offer features that are available in Node.js. But it’s really important to understand when you’re working with a limited platform, and which features or APIs are or are not available. Also, most edge compute offerings don’t have as many resources to work with in terms of the amount of time that a function can run, so you don’t want to deal with long-running functions, and also in terms of the amount of memory they have available. So you generally are dealing with more limited time and memory resources. And lastly (this is also starting to change), most of the biggest edge compute providers only allow HTTP as the networking protocol. They don’t allow, like, TCP over IP, which can limit things like accessing a database. So if you wanted to make something like a Postgres connection, you would not necessarily be able to do that at an edge node. But you could still have an edge node that connects to, like, a proxy server that talks to the database.
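
That proxy workaround might look something like this sketch, where the edge function reaches the database indirectly over HTTP. The proxy URL and its endpoint are made up for illustration:

```js
// HTTP-only edge runtimes can't open a TCP connection to Postgres
// directly, so the edge function calls a small HTTP proxy deployed
// next to the database instead. The URL below is hypothetical.
export async function handler(request) {
  const res = await fetch('https://db-proxy.example.com/query', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ sql: 'SELECT * FROM products LIMIT 10' }),
  });
  // The proxy performs the query over TCP and returns JSON over HTTP.
  return new Response(await res.text(), {
    headers: { 'content-type': 'application/json' },
  });
}
```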

And we’re going to look at a little bit of the architectural limitations as well. So we were talking about how there’s a lot of benefits to being close to the user, but in fact, these architectural decisions are very important to think about, because close isn’t always going to be better. So I have a little key here that shows the origin server... or sorry, I have the names actually backwards. The computer there is going to be the user; the person that looks like a waiter in a tuxedo is going to be a server, because they’re like a server at a fancy restaurant; the edge is going to be the knife emoji, because it has an edge; and then the target should actually be the origin server, but that is the ultimate endpoint where a request may be going.

So consider an example of using a proxy service that’s in the same region. Think about: I want to talk to a database; let’s say that’s the target. Clients can’t talk directly to a database. Browsers? Yeah, I don’t think so. Things are definitely changing on that front, but traditionally, browsers cannot talk directly to a database. And so what you need is something to sit in between to receive that request, talk to the database, then return a response. So in a traditional environment, you’d have something that looks like this: a user with, potentially, in the worst-case scenario, a very long request that reaches a server; the server is deployed next to the database, so that’s a very short hop. With edge compute, that could look something like this.

And, again, I know I mentioned a database. Maybe it’s not a database but some other sort of service, depending on, you know, the networking protocol that you need to use. But in this case, the user might make a very short hop to an edge node. But if that target is still far away, that request still might have a long way to go. So you can see they can be pretty comparable in terms of performance. If we look at a proxy service that is far away from the eventual target, then we see that a user might be far from the proxy service, which might also be far from the target service. And so that could contain two long hops. Versus when a user is using an edge node, they can have a short hop, and then the edge node can go directly to the region that’s far away, which would be one long hop, which ultimately could be better than the prior. You could also have multiple concurrent requests, where, once again, it’s sort of similar to the proxy service, but we have multiple targets here. Really, the duration is going to depend on the furthest-away target. And versus an edge compute node, you still sort of reduce some time based off that short hop, despite the fact that one service may be very far away. But when we’re dealing with multiple sequential, or sort of waterfall, requests that have to go, let’s say, back and forth, grabbing some data from the same service, this is when it can get kind of funky.

Austin Gil 30:47
So we can have a request from a user that goes to the proxy, talks to a service, goes back to the proxy, talks to the service, goes back to the proxy, and then ultimately gets to the user. Worst-case scenario, there’s a long delay between the user and the proxy; that could be sort of bad. But when you compare it to edge compute, that same environment could be really bad, because if the user, and therefore the edge node, is far away from that service, you could have long waterfall requests between the two. Hopefully, that’s sort of clear. So once again, the key principle here is not that the edge is always going to be better, or that traditional servers are always going to be better. It’s that we have to think of edge compute as an addition, and not a replacement, to any one part of the compute continuum that we’ve been living with. Before, we’d have to consider all of the places that we would want to run that compute, which could be client-side, in JavaScript with a service worker, or in cloud functions, or on traditional servers.

Austin Gil 31:53
Now, we basically want to nestle edge compute in along that continuum. So we still have client-side JavaScript, client-side service workers, then maybe edge compute goes in between, and then you have cloud functions and traditional servers. Another way I want to sort of drive home what edge compute can do is by offering some of the most common use cases that we see. And that’s going to be things like modifying a request as it comes in, or a response as it goes back to the user. This might be to do something like injecting advertisements in a way where you won’t have to deal with client-side ad blocking. You could do fast static search: if you have a static list of things to search through, you could put those in an edge KV store and have a request come to that edge node, search through the KV values, and return that data without having to suffer from the latency (there’s a sketch of this below). So this could be something like an autocomplete suggestion, or a store locator, because those don’t change very frequently. Also, if you want to do anything related to geolocation: maybe based on the user’s location, you could preemptively guess what language they’re using. Of course, you want to defer to user-selected choices if you can.
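
Here is that fast static search sketch: an autocomplete-style endpoint searching a small, rarely-changing list held at the edge. The kv binding is a stand-in for whatever key-value API a given provider offers:

```js
// "Fast static search" at the edge: search a small, rarely-changing
// list (e.g. store locations) stored in an edge KV binding. The `kv`
// parameter is a hypothetical stand-in for a provider-specific API.
export async function handler(request, { kv }) {
  const q = (new URL(request.url).searchParams.get('q') ?? '').toLowerCase();
  const stores = JSON.parse((await kv.get('store-locations')) ?? '[]');
  const matches = stores
    .filter((s) => s.name.toLowerCase().includes(q))
    .slice(0, 10); // cap the number of suggestions returned
  return new Response(JSON.stringify(matches), {
    headers: { 'content-type': 'application/json' },
  });
}
```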

Austin Gil 33:22
Or you could do different dynamic logic based on the policies of a given region that the user is in. I also wrote a blog post on redirect management. So with redirects, if you have to redirect a user, they have to make a request to the origin server, which could be far away, and then the response comes back from very far away with the redirect rules, and then the user actually gets redirected to their ultimate destination. You could do that at the edge instead and cut out half of that long round trip. You can do token-based personalization, which is great for things like A/B testing or feature flags. You can do authentication with stateless tools like JWTs, JSON Web Tokens. Or if you wanted to do sort of an orchestration of an API, or serve as a proxy where a request comes in that has to orchestrate several sub-requests that need to communicate together and be aggregated, you can use an edge node for that.
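
The redirect-management idea might look something like this sketch, with the rules held at the edge so the user never pays the long round trip to the origin just to be told to go elsewhere. The rules themselves are illustrative:

```js
// Redirect management at the edge: the rules live here, so redirects
// are answered by the node closest to the user. Rules are illustrative.
const REDIRECTS = new Map([
  ['/old-pricing', '/pricing'],
  ['/2022-sale', '/deals'],
]);

export function handler(request) {
  const url = new URL(request.url);
  const target = REDIRECTS.get(url.pathname);
  if (target) {
    // Answered entirely at the edge; no origin round trip needed.
    return Response.redirect(new URL(target, url.origin), 301);
  }
  return fetch(request); // everything else continues on to the origin
}
```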

Austin Gil 34:23
It’s also really nice because you can store secrets like API keys in your edge node, and not have to worry about either leaking them to the client or having long round trips to the server. So I’m very excited about this. I don’t know if that’s coming across, but I think it’s really cool to have a new paradigm of compute options, and I think that this is going to open up what’s going to be, like, the next phase of web development. And you know, we’re gonna have dog hats all over the planet. And we can see that I’m not the only one. There are a lot of frameworks that are starting to adopt edge compute as either a potential target or even as the primary or main target of that framework. So here’s a list of several that already have some sort of edge compute deployment support, and there’s more coming.

Austin Gil 35:19
And like I said, this is very exciting, but it also brings up this sort of existential crisis in me, in that it feels like we’re adding a lot of mental overhead and architectural decisions, and a lot of weighing costs and benefits for performance. And I think, conservatively, you know, how much do we stand to gain? We’re talking latency of a round trip around the planet, maybe on the order of hundreds of milliseconds. So how much does that really matter? And this is where I always have to respond with what I call the compulsory slide with lots of stats. You know, when we’re talking about performance, there are about a million different resources you can look at that say, yes, performance impacts user experience, which impacts conversion rates, which impacts things like sales. And so we have hundreds of examples where performance improvements can affect your bottom line, which is important for a lot of people.

Austin Gil 36:29
And, in fact, in 2017, Akamai did a study on retail performance and found that for every one second that the Walmart website loaded faster, their sales increased by 2%. And if we consider that in 2021, Walmart made $500 billion, and we look at 2% of that, that’s $10 billion. So in theory, with, you know, the 2020 average salary of developers being $75,000 a year, they could have hired 133,000 developers. And if those 133,000 developers improved the performance by one second, they would have broken even, or actually would have made a profit. So that’s kind of wild to think about. And yeah, all this to say that I think edge compute is really cool and really exciting, but there are still, you know, the right use cases for it, and times when it’s going to be useful and when it’s not. And I just want to make sure that we’re educating people on what it is and what it’s not, and what you can use it for and what you probably don’t want to use it for.

Austin Gil 37:47
So is it worth it? Well, it’s going to depend on your use case. But if you want to use it, I would really hope that you try out Akamai EdgeWorkers. Because now I get to give a little spiel about this, and I want to make sure that this comes across: as you know, I do have a biased opinion (I work for Akamai), and that bias is going to show, in that I just know more about what EdgeWorkers have to offer. I don’t want to disparage any of the other providers. But Akamai does have the largest number of servers. There are over 250,000 servers; I think it’s even way more than that now. So we’re talking orders of magnitude larger than the next closest network. So if we’re talking about something where speed is worth investing in, then I think it’s worth using the largest network available. Akamai also has a really cool architecture around EdgeWorkers, in that they have more lifecycle hooks that you can tap into, including doing things dynamically after the client has requested something, before the origin gets the request, after the origin responds, or before it goes back to the client.

Austin Gil 39:00
And we also have... I mean, there’s a whole bunch of other benefits there that are worth mentioning. I also just want to point out that according to a lot of third-party auditors, Akamai is the leading security provider. So I think it’s really cool to have your edge functions running on not just the largest network, but also the safest. And again, you know, that might just be because I’m on the team, but that’s what I hear, and I really believe it, and I hope that comes across authentically. So anyway, that’s going to be it. I just need to get to the last slide that I have here somewhere. Oh, okay, bunch of cuties. This is not it. Hold on. Okay, here we go. So yeah, just wanted to say thanks. I hope you enjoyed the presentation. I hope that you learned something, because that’s really my job here. And if you did learn something and you want to say thanks back to me, there’s a couple of links there. This second link: if you are getting started with your web development journey and you want to run some servers yourself, I have a link to get $100 of credit for anyone that wants to set up a new Linode account. I’m a big fan; Linode offers great cloud computing services. And if you want to check out more information on Akamai’s edge offerings, just head over to akamai.com. Or go ahead and find me in all of these places that I am online, except for that last link. That last one is a link to my dog’s Instagram. But you should check that out, too. So that’s all I got for you today. I hope you enjoyed the presentation, and I hope that you learned something, once again. And if you have questions or comments or concerns, please reach out to me. I would love to interact with you. And that’s all I got.

Sean C Davis 40:53
Amazing. Thank you, Austin. This was great. I have, yes, I have so many questions. But we did get one from the audience. This is from Martin, and so I think I’m going to start here and then we’ll see where it takes us. So Martin says: I’m still a bit uncertain about the distinction between cloud functions and edge compute. Is edge compute basically just cloud functions, but on the edge, as opposed to a specific regional server?

Austin Gil 41:24
Yeah, so that’s, I mean, more or less it. There’s going to be some nuance there, depending on the provider. You know, I did mention Akamai at the end there, but I’ll try not to be too specific about which providers we’re talking about. So there’s going to be some difference, but ultimately, you can think of edge compute sort of like a subset of cloud functions, in that they follow a lot of the same concepts: you provide a function to a service provider, and the service provider is responsible for deploying that function, routing traffic to it, scaling it up and down, and managing the servers on which that function runs. They’re also going to be similar in that they are stateless, and they are infinitely scalable, at least with the service providers that I’m familiar with. But there are some limitations; or rather, the differences are mostly around the limitations. A cloud function provider probably offers more languages to choose from, and more options for the sort of resources that are available. An edge function is mostly going to be limited in the amount of runtime a function can execute compared to a cloud function; it’s going to be limited in the number of programming languages that are available, JavaScript being the most common; and it’s going to be limited in the amount of memory it can use when you compare it to a cloud function.

Austin Gil 43:07
And then, again, this is where the landscape is sort of shifting: in terms of the networking protocols that are available, a cloud function would have access to something like TCP over IP, which is necessary to make a database connection. A lot of the biggest edge function providers only provide HTTP as a networking protocol, so an edge function can connect to other servers, but it can’t connect directly to a database, for example. And even then, there’s a big distinction to be made around talking to databases from an edge server, both in terms of what’s available with networking protocols, what’s available in terms of timing (because, you know, it takes time to make the connection to the database), and in terms of the architectural considerations on whether that’s even a good idea.

Austin Gil 44:10
So again, going back to that waterfall, or consecutive request, example: this is really important to consider. If I have to make a single request to a database, run a query, get the response from that query, and give it to the user, it probably doesn’t make a big difference whether I do that from an edge node or from a cloud function that’s sitting next to the database. But if I have several consecutive requests to make, and an edge node is sitting far away from a database: say I have to go and fetch a list of blog posts, then from that list of blog posts figure out the categories, and then from those categories figure out, I don’t know, the most popular author in those categories. If that’s, like, three requests that each depend on the previous request, and the user is far away from wherever the database is, that could be a lot of back and forth. Whereas with a cloud function, maybe the user is far away from the cloud function, but the cloud function would theoretically, or should, be deployed next to the database that it’s going to talk to. So it could be one long request to the cloud function, and then several short hops between the cloud function and the database. I hope that answered it. It’s kind of hard to do with just my hands waving around, but, you know, that’s a very important thing to consider.
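
A sketch of that waterfall pattern: three requests, each depending on the previous one. Run far from the data, every await pays a full round trip; run next to it, each hop is short. The endpoints are made up for illustration:

```js
// Sequential "waterfall" requests: each fetch depends on the one
// before it, so total latency is roughly 3x the distance to the data.
// The API endpoints below are hypothetical.
async function mostPopularAuthor() {
  const posts = await (await fetch('https://api.example.com/posts')).json();

  const ids = posts.map((p) => p.id).join(',');
  const categories = await (
    await fetch(`https://api.example.com/categories?posts=${ids}`)
  ).json();

  const author = await (
    await fetch(`https://api.example.com/top-author?categories=${categories.join(',')}`)
  ).json();

  return author; // total latency ~ 3 x (round trip to the data source)
}
```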

Sean C Davis 45:39
And I know you spent a lot of time talking about how there’s so much nuance, and so many factors, when it comes to deciding between edge, cloud function, serverless function, etc., you know, various computing options. You basically just suggested that you might have a lot of dependent requests, or dependent things to do in one request, and then it makes more sense for a cloud or serverless function in that case. But let’s take an example: okay, say that function were part of maybe a broader API, and then there’s another route or another service that you want to provide in that API that is really simple. I only need to touch the database once. Do you run into a scenario where you’re introducing all this nuance and making this decision every time, for every function, and saying, okay, this function should be on the edge, this other one shouldn’t, all within the same umbrella of "this is my API"? Or is it better to look at maybe everything that your API is trying to do, or everything your project is trying to do, and put everything in one place or the other? I’m not sure if that made sense? Yeah, I think that makes sense. It’s basically trying to come up with a strategy for deciding, across the continuum of compute, where to place compute. Right? Right. So I would... I don’t have any anecdotal data, but I think that the best approach here is not to try and address the problem before it exists.

Austin Gil 47:37
I would say that what we’re seeing from actual customers is that they find areas where they have opportunities to improve, and use edge compute there, where it makes sense. So rather than creating an API and trying to plan for all of the places where edge compute is theoretically going to be the best fit, figure out where you have measurable issues, and figure out if those issues are primarily latency-related. And if they are, then it might make sense to consider edge compute for those use cases, so that way you can reduce the latency and have, you know, faster responses there. One other example, actually, that I want to mention, because it wasn’t in the presentation, or another thing to consider, is, you know, when we’re dealing with edge compute and we’re talking about that slide with all of these framework authors and tools that are available.

Austin Gil 48:42
It’s kind of interesting, because we’re in a weird place right now where there’s a lot of promise for edge compute, but I think a lot of people aren’t necessarily painting the whole picture or telling the whole story. And that is: these framework authors envision and promise having your applications run with their tools, deployed to whatever target edge provider you want, but that isn’t the whole story, because at the end of the day, a lot of the limitation here is going to be the database. And I could be wrong, but I’d say 90% of the applications that people are building rely on some sort of stateful data, and that stateful data usually has to live somewhere. It has to live in a database. Sure, you could rely on some of the... well, I personally think that relational databases are where most data is being stored, at least in the applications that I like to build. And relational databases can’t be stored in key-value stores. So you generally are going to have to have one source of truth where that data exists.

Austin Gil 49:51
Now, you could have a distributed data architecture. I have one database that I’m doing all of my writes to. And maybe I want to improve the latency story from that database, because whether it’s cloud functions, or edge nodes, or, you know, your favorite framework running at the edge, they all still have to come back to this database, and the problem that we’re trying to solve is that latency issue. One solution is: I still have one database, that’s the sort of main database, and when I write to it, it’s responsible for replicating the data to a network of other databases, right? So if I do that, then I can technically have several databases distributed around the world that are going to be responsible for reads. And then I have a network of edge nodes that are reading from the nearest data source, which could even be a KV store deployed at the edge, right? So you could have reads be extremely quick, and then have writes be, you know, still kind of slow; a write has to go around the world to that database, and the database is responsible for replicating the data around, but at least the reads are going to be fast. So that’s kind of one of the current challenges that we’re facing, and that’s one of the stories that I don’t see a lot of these pro-edge frameworks, the frameworks that are targeting the edge, really communicating. So I’m hoping, as, like, my job as a web educator, to be unbiased and to be authentic about it: I’m pro-edge, I think that edge is going to be cool, and I also think that people should be equipped with the knowledge and skills and tools to address it responsibly.
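
That read/write split might look something like this sketch; the endpoints and the nearest-replica lookup are hypothetical:

```js
// Read/write split: all writes go to a single primary, which fans the
// data out to read replicas; reads go to whichever replica (or edge KV
// copy) is nearest. All endpoints here are hypothetical.
const PRIMARY = 'https://db-primary.example.com';

async function write(path, data) {
  // Slow but authoritative: one source of truth accepts every write.
  return fetch(`${PRIMARY}${path}`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(data),
  });
}

async function read(path, nearestReplica) {
  // Fast and eventually consistent: served from the closest copy.
  return fetch(`${nearestReplica}${path}`);
}
```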

Sean C Davis 51:45
So do you see an evolution of how databases function? Or like, is that going to change? I mean, what you describe with, okay, you can achieve this architecturally today and reduce, you know, minimize your latency. But is there a future where the database is just distributed on your behalf by Edge providers?

Austin Gil 52:10
I’ve had a lot of conversations... so let me say that I am probably not the leading authority, enough to be able to answer this question, but I’ve had a lot of conversations with people that are incredibly intelligent and thoughtful on the subject. And it depends on the database. It depends on the database and your needs: whether you can deal with absolutely consistent data versus eventually consistent data. That’s a consideration to make, right? If I need something that’s absolutely 100% true, bank transactions or whatever, I need a single source to read and write from, right? Eventually consistent is fine for data that’s, like, okay to be out of sync for, I don’t know, seconds or whatever. So eventually consistent versus not, that’s a consideration to make. I also think that there’s a challenge for completely distributed systems that are relational versus not. If you can have a database that can live within a KV store, we already have that. EdgeKV, with EdgeWorkers, already is, I think, an eventually consistent KV store, so it’s good for any of those use cases that you may have. If we’re talking about relational data, then that gets interesting, because, again, I think most of the applications on the internet are built on relational databases. And that challenge gets a lot greater, partly because of this system that I was describing, where reads can be distributed just fine, but writes can be tough, because you don’t want to write to two different databases that are then going to have to figure out how to sync up.

Sean C Davis 54:09
Yeah, yep. Yep, that makes sense. That makes sense. Okay, and you also were talking about some of these framework providers today, and how they’re making use of this technology. And I feel like that’s where this gets a little fuzzy for me. You know, let’s take... oh, I was playing with a combination last week, starting to get into SvelteKit, and SvelteKit has this ready-made deployment strategy for Vercel. And you can pre-render, so you can go the SSG route if you want, but it is built out of the box to do, I think they’re still calling it server-side rendering within that framework. So is that using these similar technologies? Is that kind of what you’re talking about? Or does it depend on the framework, or the deployment provider, and all that?

Austin Gil 55:06
Yeah. That’s kind of a difficult question to answer. What these frameworks are generally going to be able to offer is some sort of SDK with which their technology can communicate with the underlying provider. But a framework is not going to be responsible for actually deploying itself to a provider, right? I mean, maybe they could; they would need your deployment keys to deploy to Akamai EdgeWorkers or whatever, right? But I don’t think that framework authors are trying to target the actual deployment process, besides the ones that have their own service to offer, like Vercel, for example, and Next.js. So yeah, it’s kind of hard to say. I think right now, the role of framework authors is to provide deployment targets that can communicate with the underlying runtimes, because, like I said, a lot of these runtimes are going to be the V8 isolates that don’t necessarily offer all of the tools or all of the APIs available in something like Node. And these framework authors are now targeting Node and, you know, EdgeWorkers and Vercel and Cloudflare and Fastly and whoever else. So, yeah, I don’t know if that answers your question.

Austin Gil 56:43
But I think what’s more interesting is that these frameworks are not going to solve the problem of where that compute should be happening. A lot of them are offering some story around client-side rendering, you know, through hydration. So they kind of ship both sides: they do the work on the server, generate the initial page, and then send the JavaScript to handle all the other work on the client. But some work is better suited to the server, and some work is better suited to the edge, and some work is better suited elsewhere. And as the community moves forward, it’s going to be interesting to see if these frameworks are going to be able to say, automatically or with some sort of hint: this should run on the client, this should run on edge compute, this should run on a cloud function, or this should run on a traditional server. You know, I don’t know how possible that’s going to be, because in addition to that, you then have to do the hard work of the infrastructure planning for which service providers you’re using for all of those compute environments. Yeah, so it’s an exciting time, because these are hard problems. And at the end of the day, it goes back to this existential crisis of: we’re solving very hard problems, or thinking about very hard problems, to save 300 milliseconds, you know? Right. Right. So, yeah, it kind of puts it into perspective. But I think, again, depending on people’s use case: 300 milliseconds for some people is a long time, and for other people, it’s not even worth worrying about.

Sean C Davis 58:41
Yeah, so actually, that’s a great point. We were talking about enterprise developers backstage before the show, and that 300-millisecond consideration: it’s like, that’s a huge, huge problem to solve for a Walmart, but probably not something you should consider for your blog site. You know, that’s basically what you’re saying. I mean, it can be fun to get in there and figure out how you can optimize the performance of your website, but what you’re essentially saying is: the problems you’re solving, you’re solving largely for applications at scale. Is that accurate?

Austin Gil 59:25
I mean, yeah. The people on my team, the people that are doing more of the work, and, you know, the people that I look up to and admire, are solving very hard problems, and they are doing it in a way better way than I could. You know, I’m just here to hopefully take some of the knowledge that they impart on me and put it next to cute pictures of dogs in a way that hopefully helps people understand, you know. But yeah, there’s definitely some very fascinating problems and very fascinating challenges. And I don’t know how much time we have, but there’s another sort of interesting use case and challenge with edge compute. Because edge compute is a tool that you don’t have to only think of as a server, right, where a request comes in and it does all the compute and then returns a response. Edge compute largely is something that sits in between a client and a server. It could be in front of a cloud function, it could be a traditional server, but you have the client, and the request passes through an edge node and then either continues on to that origin server and comes back through the edge node to the client, or not. And there’s a lot of magic that can happen in that little bit of compute. And one of them is, again, that sort of modifying the request and response.

Austin Gil 1:00:48
So you can take HTML that comes back from the server, and you can manipulate it. You can change it; you can do a search and replace. And the interesting thing about search and replace is, because you can’t take the entire response object and load it into memory (that could consume all of the resources for the edge worker), you have to do it using streaming data. So it’s streaming through from the response to the client, which means you’re getting it one chunk of data at a time. Which makes it really interesting if you want to search for a string, and that string happens to come partway through one chunk and then partway through the next chunk. Now you’ve traversed the boundary of chunks, so how do you do that? It’s very challenging. And we have some folks on the team that have come up with examples and solutions in our GitHub repos, if you want to take a look. I think there’s also some tooling that might make that easier in the future. But yeah, there are just some fascinating challenges to deal with, and it’s a cool space to be in and be involved with.
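
A simplified sketch of that chunk-boundary problem: a TransformStream that carries a small tail between chunks so a match straddling two chunks isn’t missed. A real implementation would also worry about encodings and about replacements that reintroduce the search string:

```js
// Streaming search-and-replace without buffering the whole body.
// We hold back the last needle.length - 1 characters of each chunk,
// since that's the most a partial match can span into the next chunk.
function streamingReplace(needle, replacement) {
  const keep = needle.length - 1; // max chars a partial match can span
  let carry = '';
  return new TransformStream({
    transform(chunk, controller) {
      const text = (carry + chunk).replaceAll(needle, replacement);
      carry = keep > 0 ? text.slice(-keep) : '';
      controller.enqueue(keep > 0 ? text.slice(0, -keep) : text);
    },
    flush(controller) {
      if (carry) controller.enqueue(carry); // emit the held-back tail
    },
  });
}

// Usage: pipe the origin response body through the transform.
// originResponse.body
//   .pipeThrough(new TextDecoderStream())
//   .pipeThrough(streamingReplace('dog hats', 'DOG HATS'))
//   .pipeThrough(new TextEncoderStream());
```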

Sean C Davis 1:01:55
For sure. For sure. Okay. Yeah, let’s end this with one last question. So, I know... I thought you did a great job of not promoting Akamai too much, but you’re also very passionate about everything that you’ve got going on. And so I will ask, you know, as biased or unbiased as you want to be: okay, so you outlined everything that Akamai is doing really well. If we were to step back from that and say, I’m a person who knows I need to use edge computing, and there are all these providers, which one should I choose? What do you feel are generally the most important factors in making that decision?

Austin Gil 1:02:48
Okay. I think being completely unbiased is just not going to be possible here, primarily because, by nature, I’m most exposed to one environment. The big three are going to be Akamai, Cloudflare, and Fastly. I have used Cloudflare in the past, and I’ve liked them. I work at Akamai, and I’ve spent over a year learning more about what makes them, you know, great. And I’ve never used Fastly. So I think, just by the nature of my experience, there’s no way of escaping the bias. I’ll be upfront about that. But I will say that there are a lot of unbiased third-party companies that do these audits. And I think, to answer the question of how do I choose the best: it’s largely going to depend on who you are and your use case. You know, I think companies like Cloudflare have a very low cost to get involved. It’s very easy to get in and kick the tires and try it out, and I think that they have a lot of customers in that sort of realm.

Austin Gil 1:04:11
But again, if we’re talking edge compute, and we’re saying that it’s worth investing to bring down, again, potentially 300 milliseconds; if 300 milliseconds is a long time for you, you’re probably dealing with a scale of traffic that is worth investing in. So you want to work with the best. And, you know, again, admittedly biased, I think that Akamai, even through unbiased third-party resources, consistently comes across as number one in terms of security, and the network is the largest. I mean, I really don’t want this to come across as disparaging someone else; I don’t want to throw mud, you know, but it’s hard without making a comparison. But let’s say, compared to the closest competitor, Akamai has a network that is, again, orders of magnitude larger. And I don’t mean that as an exaggeration; that’s, like, hard facts: there’s hundreds of thousands of servers compared to thousands of servers. That was the last I saw; I could be wrong, and I reserve the right to be wrong. But consistently in this regard, if you think of the largest companies in the world and which service provider they use, generally speaking, Akamai is going to be in the lead. And I’m sorry that we don’t hear more about it in the developer community, but I think that’s probably because it’s mostly an enterprise-focused company. That said, you know, if you’re an enterprise building enterprise-level applications, I think Akamai is going to be the best. If you’re not, then one of the other providers might also be an excellent choice. You know, again, I’m not going to yuck what other people are doing, especially because I’m not in a position to speak to their strengths or weaknesses. I just don’t work for them, and so I don’t get to see behind the curtain, you know?

Sean C Davis 1:06:21
Yes. Fair, fair. Actually, I just had a couple other questions pop up, I think all related here. So I’m going to try to figure out... maybe you can help me work through these. Okay, so they’re all from the same person. It says, first: Firecracker versus Lambda? And then asking about cloud offerings called microVMs. And then the third question, I think, is bringing it together, saying: how are these all different? And yeah, how is a microVM different? How does it compare to Lambda? So maybe asking about the difference between Firecracker, Lambda, and microVMs, or I don’t know if there’s overlap there.

Austin Gil 1:07:04
Okay. I think these are mostly going to be questions in the context of, like, the AWS space. I believe that that’s right. Yeah. Which I probably am not the best person to answer. Yeah, I’m gonna go ahead and just say that I’m not going to be the right person to ask those questions. I don’t know enough. I mean, I’ve worked with Lambda, but I don’t know enough about microVMs or Firecracker to have a strong, informed opinion. Okay, fair, fair, fair. Yeah. Sorry. Right. But yeah, no, no, that’s...

Sean C Davis 1:07:42
I appreciate that. Much, much better than, you know, just making up an answer, I suppose.

Austin Gil 1:07:47
Right. I was tempted to, but yeah, it wouldn’t be a good look.

Sean C Davis 1:07:52
All right. All right. Well, this was great. Awesome. I really appreciated the presentation, and you answered a lot of my questions, but now I feel like I have so many more. So, you know, we’ll have to keep this conversation going at some point. But I appreciate you being here. And thank you to all of you in the audience, and we’ll see you next time.

Austin Gil 1:08:15
Yeah, thank you so much, again, for having me. It’s been a pleasure, and I truly admire what CFE is doing and feel very honored and humbled to have been invited to come on and speak.

Sean C Davis 1:08:27
All right, thanks.
