
AI - Have it your way

AI can be leveraged to add intelligence and capabilities to any number of robotics, IoT, and automation projects.  However, we are often using AI with our most confidential, personal, and protected data - so we don’t always want to send this data out to a large corporation. Running our own on-premise AI infrastructure is an option that is becoming more practical, but can incur large up-front costs and sometimes underperform.  Cloud providers offer dedicated GPU instances, but the costs for these services add up quickly.  Can we leverage serverless AI to find a good balance of performance and cost?

Join Viam’s Principal Developer Advocate Matt Vella to learn about:

  • Choosing when to use local or serverless AI solutions
  • How to integrate AI with your hardware and smart machine projects
  • How to deploy end-to-end AI assistants

Matt Vella is a Principal Developer Advocate for Viam, the software platform for smart machines. Matt has spent decades leading teams and building platforms and software tooling across numerous industries. These days when not building robots you might find Matt on a home improvement project, gardening, or walking in the woods behind his house in the Berkshires of Massachusetts.

Transcript

Matt Vella 0:10
All right, hello everybody. Thanks so much for having me. Really happy to be here, excited for this talk and all the other talks here today. I am Matt Vella. You're probably wondering why I kicked that off with an arguably good, or terrible, commercial from the 1970s. Of course, I have a loose point; I will get back to that, but in the meantime, let me introduce myself. I am a software engineer by trade, a lover of pretty much everything outdoors, a part-time mad scientist, and I'll give a cookie or a token to anybody who can actually catch the reference of what I'm doing and what that's related to in pop culture on the left side of the screen there. I work at a company called Viam, where I'm a Principal Developer Advocate. If you're not familiar with the company, Viam is a software platform for robotics and smart machines, and it's really meant to simplify the development lifecycle of your build. The idea is that there are components and services, you compose them in a no-code way, and that surfaces an API. It's a standard API that you can then use with any of our SDKs, so you can control the thing you're building in the way that you want to.

I am sometimes known to say that we live in extraordinary times. What do I mean by that? I am referring to technology. Essentially, we're at a point where hardware compute is cheap and commoditized. I can get a tiny little computer with very surprising capabilities, like the ability to run machine learning models on the edge, on a device that costs $20. Open source software keeps getting better and more ubiquitous; our choices are tremendous today. And all of this is augmented by cloud services that are readily available and, again, improving and getting higher level. That is the story of technology: things become higher level, easier to use, and our choices really go up. And AI is real. This is kind of a new one; two years ago I probably wouldn't have had this bullet point on a slide. Sure, machine learning models were real and being used, but with the rise of ChatGPT, LLMs, and everything we've seen happen over the past two years, people are using AI in very real ways.

And yet, I cannot keep this guy off my countertops. This is my friend, my cat, Onyx, and pretty much whenever we're not around, he's jumping up on countertops and trying to get at food. Sometimes we wake up in the morning and there's cat hair on our kitchen table and countertops. It's kind of gross. My wife is cleaning it off every morning, and I'm complaining that there's a chemical smell in the kitchen. Thankfully, I work in technology, and maybe I can use technology to solve this problem. When I started thinking about this problem, I honestly wasn't doing a lot with AI, and I was looking at this like, wow, this is a big menu, and maybe this is page one of many. Maybe it's one of those big diner menus: you go to a diner and you're kind of overwhelmed, flipping through the pages, and there are so many categories and so many choices. That's kind of what we're faced with in AI today. Not only do you have to choose a large language model, and there are ten new ones that were released this week, but also: how do you run those large language models? What are the frameworks that give you additional abilities or ways to augment what you're trying to do? Traditional computer vision models?
And then, of course, there are the languages. Sometimes you're actually restricted to Python and C++, and sometimes that's not a great thing; maybe you want to integrate with some other language, and that integration is a little bit tricky. So here's a somewhat dated concept you might have seen; it used to be called the Iron Triangle of project management. The idea was you have to pick two: you can't have good, inexpensive, and getting the project done on time, you basically have to choose two. I tend to think this is a little bit dated, because technology, again, is getting to a place where things are higher level, more abstracted, and it's easier to find a toolkit that helps you combine these things. But what does this look like for AI projects? Well, we probably want accuracy. We want the AI, whether it's a large language model or something else, to respond pretty quickly. We probably have budget constraints, and oftentimes we want privacy. Those are what I would consider our constraints for AI projects, and if we can meet those constraints, working within that menu, or maybe going off that menu, then maybe we can have it our way.

All right, how about this? "The sun is shining, the coffee is brewed, and all is right in the world. Good day, friend." That is Sam, a simple companion robot that I built with probably $50 worth of parts last year, and I figured this is maybe a good tool, maybe something I can use to keep my cat off the countertop, again using cheap commodity parts. There's a little single-board computer in there, I think a Radxa Zero, about a $30 computer. So what would Sam need to actually have it my way and get this project done for me, so that maybe he's keeping Onyx off the countertops?

Well, first he needs the ability to interpret the environment: is there a cat, and is it on the counter? The traditional way somebody might tackle this would be to use an object detection model, essentially a machine learning model that understands different objects, including maybe a cat. Typically it works like this: you have a camera, you pass an image from the camera to a machine learning model, and it gives you the dimensions of any detections, including maybe a cat. That's well and good. However, it starts to get a little bit tricky if the cat is partially behind the table, so now the bounding box is overlapping with the table, and by the way, the table is not necessarily showing up in the image as a perfect square. Now math starts to get involved: how do we actually calculate whether the cat is on the countertop or on the table? What if Sam is now turning his head? The calculations change because his perspective changes. What if I bump Sam on the counter and he moves over by a few inches? It gets really complicated. Is there another option? The answer today is yes, and the answer might be a visual large language model, sometimes referred to as a multimodal large language model, or VLM. Moondream is one example of a small but capable VLM. It can basically run on any CPU; speed is going to vary, of course, but I have run this on a MacBook and on single-board computers, so we can try running this locally. And then what else does Sam need to do? Well, he might need the ability to speak, right? He might need to say, "Onyx, I see you, get down." So how can we use this?
Well, we could try Google text to speech, and that would probably sound something like this: "Onyx, get down from there." Would that be effective in getting him off the counter? I don't know, maybe not. It's not that expressive, and we actually have a Google Nest device in the kitchen, so he's probably used to that voice already. Can we try generative text to speech, and can we run that locally? Well, yeah, I tested it, and it sounds something like this: "Onyx, get down from there." So, a bit better. And I can vary the voices; I can actually rotate the voices and keep Onyx on his toes. So yeah, let's stick with that.

Okay, let me pause here. Why is all this interesting? Well, for me, I think it's interesting because today these great products, like Google Nest or Amazon Alexa, do what they do really well, but they're rigid. I can't actually tell one to keep a cat off a countertop or, you know, watch for a young child playing with a gas stove. But many capabilities granted by newer ML and AI technologies are within reach, and all these capabilities can be used in really any scenario. So I'm showing you this home scenario that is very real to me, but there are scenarios for industry, for maybe a workplace, and maybe you're building a real product and you want to use some of these capabilities to accomplish your goals. We've seen some AI systems come to market recently, like the AI Pin and the Rabbit, and these are really interesting proofs of concept. They're attempting to do some real things, but at the moment they're not super useful for doing things that people really need done. But all that is going to change, because all this is within reach, and so you might go out and build the product that actually is doing real things with AI. All right, so here I have a little video where I'm actually using ChatGPT

Matt Vella 10:04
and its visual capabilities. You can see it takes about five seconds, and it's accurate: it's saying, yeah, the cat is on the table. Here's an image where the cat is not on the table; ask the same question again, it takes about five seconds, but it's accurate in its answer. So great, we can prove that a visual language model can service this need. And yeah, this is ChatGPT. It's capable, it's obviously state of the art, it's still the large language model, now multimodal large language model, that everybody's looking at. The speed is a little slow, five seconds, but probably acceptable; we might just be checking once a minute or so to see if Onyx is on the table. Privacy concerns? I mean, maybe I don't love sending images of what's happening in my kitchen every single minute of the day to some corporate entity. I'm sure they're taking all the right precautions, I'm sure they have all the right policies, but if I could avoid that, would I? Yeah, probably. But this might be the real kicker, the real problem here: pricing. Right now this is the cost to process a 640 by 480 image. If I was doing this twice a minute, constantly, it would be $6 a day, which adds up to over $2,000 a year. Is it worth that amount of money to keep my cat off the table? Probably not, and even if I reduce it to just checking once every two minutes, it's still pretty expensive.

So what about using a local VLM? Well, is it capable? Yes. Here's one called uForm that's small, and I showed you Moondream before. There are a number of these that have recently come out, and I've tested them, and for these use cases, yes, they're very capable. For more complex use cases you might need a bigger model, but for what I'm trying to do, absolutely capable. Privacy concerns? No, I'm running this on my own hardware. It could be my laptop, or better yet, a single-board computer that's always up and running within my home network, so I don't have any privacy concerns. So how would I actually test this? Here I'm showing you the Viam platform. uForm is actually exposed as a vision service module within Viam, so here I am configuring it. This is kind of how Viam works: it allows you, again in a no-code way, to configure the different things that make up your machine. Here I have a camera, I actually have a face detector for something else I'm trying to do, and I just configured a vision service that essentially wraps that uForm VLM. And here I've got the Viam documentation, and the reason I'm showing you this is that Viam essentially exposes an API for each type of component and service. This is a vision service, and every vision service can expose API methods for detections, classifications, and segmentations. In this case it only exposes classifications, because we are classifying the image with a VLM; we're actually describing the image, and we can also pass in a question like, what is the person in this picture doing, or is the cat on the table? All right, so what does it look like to actually use this VLM with Viam? Here's a Python program using the Viam Python SDK, and it's really just two lines of code that matter. First I've connected to my machine, which is important too, but then I instantiate the vision service and call a single method, get_classifications, passing in the image and the question, and it returns an answer.
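A minimal sketch of what that kind of program could look like with the Viam Python SDK is below. The machine address, API key values, the configured service name, and the idea of passing the question through the service's extra parameters are assumptions for illustration here, not necessarily the exact code shown in the talk, and a recent SDK version is assumed.

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient
from viam.media.video import ViamImage, CameraMimeType


async def main():
    # Connect to the machine; address and API key values are placeholders.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>"
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # Get a handle to the vision service that wraps the VLM, by its configured name.
    vlm = VisionClient.from_robot(machine, "uform-vision")

    # Read an image and ask the question. Passing the prompt through `extra`
    # is an assumption about how this particular module accepts it.
    with open("cat.jpg", "rb") as f:
        image = ViamImage(f.read(), CameraMimeType.JPEG)
    result = await vlm.get_classifications(
        image, 1, extra={"question": "Is the cat on the counter?"}
    )
    print(result)

    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())
```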
All right, so what does that look like in action? I have this little single-board computer, an Orange Pi 5. It's actually got a pretty good processor. This is about $120, so for single-board computers it's not the most expensive, but it's not the cheapest either, and you can see it's actually answering the question correctly. Is the cat on the counter? Yes, it is, it's on a wooden dining table. How about with this picture? No, there's no cat on the table or countertop. The bad news: it took 37 to 40 seconds. So, fast enough? Not really, not for this use case. Are there use cases where I'm maybe just scanning a room and seeing if there are changes over time, and I'm only doing that once an hour or once every ten minutes? It might be fast enough for those, but I wanted something a bit more real time.

All right, so this is a serverless conference, of course, and it just so happens that while I was thinking about this project, I thought: are there other options, other than going to a big provider like ChatGPT, where it's too expensive and there are privacy concerns, or running locally? And I came across this company called Modal (modal.com), and what's really nice is they have per-second pricing, it's serverless, and they have all these options for very powerful NVIDIA GPUs. So what would this look like pricing-wise? Before testing it, I did a little back-of-the-napkin calculation, and if I were running this once every two minutes, assuming a five-second response, it would cost me maybe about $35 to $50 a year, depending on which of these GPUs I'm using. Is that palatable? Probably. And honestly, cloud services go down in price over time, and large language models and VLMs are also getting faster and more capable, so that price could actually go down over time. So the two questions are: is it fast enough, and does it keep the cost down?

What does this look like? Well, Modal basically has this way of describing what they call an app, and an app is essentially built around an image. You can see I'm pulling in this image: it's Ubuntu 22.04, it's CUDA capable, which means it can leverage the GPU of the NVIDIA architecture, and I'm adding a Python version and some dependencies. Then I'm setting up llama.cpp in the image, which is a way to interface with a large language model or a VLM, and I'm pulling in the model. In this case I'm using Moondream, because it integrates really well with llama.cpp. I'm pulling it into the image so that whenever the image starts up, it's already pre-baked; it doesn't have to take the time to install all of this. And I'm calling this app "moondream". How do I use this app? Well, first I need to define some functions. There's a start function, which is essentially what happens on a cold start: when this thing is woken up and there's not another instance already running, it brings up llama.cpp and loads the model into memory. And then there's really the method we're going to be using, which is called completion. I pass an image and a question, and it leverages that Moondream model to give us an answer.
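A rough sketch of what a Modal app along those lines could look like follows, assuming llama.cpp is used through the llama-cpp-python bindings. The base image tag, GPU type, model URLs, and class/method names are illustrative assumptions rather than the exact code shown on screen.

```python
import modal

# Build a CUDA-capable image with llama.cpp's Python bindings installed and the
# Moondream weights baked in, so cold starts don't have to install or download
# anything. The base image, packages, and model URLs are placeholders.
image = (
    modal.Image.from_registry(
        "nvidia/cuda:12.1.1-devel-ubuntu22.04", add_python="3.11"
    )
    .apt_install("build-essential", "cmake", "wget")
    .run_commands(
        'CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python',
        "mkdir -p /models",
        "wget -O /models/moondream2.gguf <MODEL-URL>",
        "wget -O /models/moondream2-mmproj.gguf <PROJECTOR-URL>",
    )
)

app = modal.App("moondream", image=image)


@app.cls(gpu="T4")
class Moondream:
    @modal.enter()
    def start(self):
        # Cold start: load Moondream (and its vision projector) into memory once,
        # so subsequent calls to the warm container are fast.
        from llama_cpp import Llama
        from llama_cpp.llama_chat_format import MoondreamChatHandler

        handler = MoondreamChatHandler(
            clip_model_path="/models/moondream2-mmproj.gguf"
        )
        self.llm = Llama(
            model_path="/models/moondream2.gguf",
            chat_handler=handler,
            n_ctx=2048,
            n_gpu_layers=-1,
        )

    @modal.method()
    def completion(self, image_b64: str, question: str) -> str:
        # Answer a question about a base64-encoded JPEG image.
        result = self.llm.create_chat_completion(
            messages=[
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": f"data:image/jpeg;base64,{image_b64}"
                            },
                        },
                        {"type": "text", "text": question},
                    ],
                }
            ]
        )
        return result["choices"][0]["message"]["content"]
```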
And deploying it is really simple; there's a really nice CLI for Modal. And no, I'm not associated with Modal, I just ran into their service and tried it out for this project, and it's great to use, a great developer experience, actually. So here I am deploying this app that I just described, and you can see the app was deployed. Here I'm actually using the Python command line to show you how to use it. I import modal, I load an image, in this case the cat-on-the-counter image, I instantiate the class that I just defined, the Moondream class, and then I just remotely call that completion function, "is the cat on the floor?", and it returns an answer. So I am now running AI in a serverless way.
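Roughly, the deploy-and-call flow could look like the sketch below. The app name, class name, and question follow the description above, while the file name, image path, and the lookup pattern are illustrative assumptions about how a deployed Modal class might be called from a local Python session.

```python
# Deploy the app from the command line first:
#   modal deploy moondream_app.py

import base64
import modal

# Look up the deployed class by app name and class name.
Moondream = modal.Cls.lookup("moondream", "Moondream")

# Read and base64-encode a test image (file name is illustrative).
with open("cat_on_counter.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Call the completion method remotely; the first call pays the cold-start cost.
answer = Moondream().completion.remote(image_b64, "Is the cat on the counter?")
print(answer)
```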

Matt Vella 18:54
Okay, so of course I am building my project with Viam, so how would I make this available to my Viam project? Well, I could use that code directly in my project, but even better, I can create a Viam module. A Viam module essentially just wraps what I showed you and exposes it through a get_classifications method. Why is that nice? It's nice because that way I can swap out models in my Viam project essentially just by going to the configuration screen and choosing which model I want to use. So here I have uForm, the one I set up previously, and here is the Moondream Modal module that I just created, and to configure them I just, again, choose from a drop-down menu. Then to use them, it's exactly the same code: I instantiate both of them as a vision client, and I ask the questions with this get_classifications method. So yeah, very repetitive code, but again, it's kind of nice, because I can swap out these models, and no matter where they're running, what underlying model they're using, and even what framework they're using to serve the model, the code that actually uses it for the real use case stays the same.

All right, so performance, that's what we're looking at here, right? I sped up this video a little bit just so we could see the results pretty quickly, and here we go. This is a cold start, running inference with Moondream serverless, taking about seven and a half seconds. Not bad; do I wish it was a few seconds faster? Yes. Without a cold start, however, it's really good: two and a half seconds, which is actually faster than what we saw with ChatGPT. And again, for comparison, here are the models I'm running just locally, clocking in around 30 seconds. So what is this showing us? It's showing us that we can actually choose both within the menu and off the menu. There are tons of items on the menu; however, there are frameworks that help you get going with those tools and using them the way you want, faster. Do you still have to understand what models and frameworks are out there? Yes, you have to do a bit of research up front. But there are tools out there, like Viam and Modal, that make it easier to then go ahead and try a given tool. You might try a tool and say, this is actually not the right large language model, maybe it's not accurate enough, or it's not fast enough. You might say, just like what we just saw, running it locally is not fast enough, let me try something else. So for the most part, you can actually have it your way. I realize I ran through that presentation a few minutes faster than I expected, but I will leave you with this nice AI-generated art and a call for any questions.
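To illustrate the model-swapping point from this turn: a minimal sketch where two differently backed vision services are queried with exactly the same code. The service names ("uform-vision" running locally, "moondream-modal" backed by the serverless deployment), machine address, and API key values are placeholders.

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient
from viam.media.video import ViamImage, CameraMimeType


async def check_counter():
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>"
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    with open("cat.jpg", "rb") as f:
        image = ViamImage(f.read(), CameraMimeType.JPEG)

    # Same calling code for both services: only the configured name differs,
    # regardless of which model or framework serves the answer underneath.
    for service_name in ("uform-vision", "moondream-modal"):
        vlm = VisionClient.from_robot(machine, service_name)
        result = await vlm.get_classifications(
            image, 1, extra={"question": "Is the cat on the counter?"}
        )
        print(service_name, result)

    await machine.close()


asyncio.run(check_counter())
```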

Sean C Davis 22:21
Thanks, Matt, that was great. It gave me so many ideas of things I can do with AI. And really, that's where the conversation has shifted: AI has infiltrated all of these conversations, especially in the tech and serverless world as well. I like the play on "Have it your way" and the Burger King menu. I've done some research into AI, and initially it feels more to me like a Cheesecake Factory menu than a Burger King menu; there are just pages and so many options there. You mentioned you have to do research, and of course you have to do research, but if somebody is wanting to get into AI to build something like what you showed, where do you recommend they start?

Matt Vella 23:16
Yeah, I would honestly recommend starting small, right? Like with anything, if you're going to use a new framework or a new technology, it can feel pretty daunting at first, even if it's something that's purpose driven. What I would say is: find the thing that you're seeing people use for a specific purpose, start there, and, like any project, work iteratively. When I was looking at some of the frameworks recently for another project, like LlamaIndex, well, LlamaIndex is pretty big, it does a lot of things. LangChain is another one; it does even more things. But really what I was trying to do was essentially function calling with a large language model, and there are other tools out there that do just that, and that might be enough. That way you're reducing the overhead in terms of complexity, of learning all the ins and outs of a framework, or how to navigate a larger framework that can do many more things for you. So I guess: just work iteratively.

Sean C Davis 24:23
That's fair, that's fair. And you mentioned that a limited part of the menu tends to be the languages that are available, C++ and Python. Is that generally a universal condition with all of these different providers today, or if someone is exclusively a JavaScript developer, are there avenues for them to dip their toe into AI with just JavaScript?

Matt Vella 24:48
Yeah, well, it depends. If you're using a cloud service like ChatGPT, there are of course APIs that you can leverage in any language. If you're wanting to run something locally, you might be looking at a framework like llama.cpp, which has bindings for Python and might have bindings for a couple of other languages, but oftentimes it comes down to C++ or Python, and you're seeing that landscape change over time. One nice thing about Viam is that if you're using a Viam module that wraps any of those things, it automatically exposes a gRPC API that you can then leverage with any of the Viam SDKs. So I was using Python in those examples, but I could just as easily use our TypeScript SDK, or Dart and Flutter if I'm building an application. So there are tools like that that wrap what's underneath and present an interface that is usable in the way that you want to use it.

Sean C Davis 25:55
Interesting, okay, that makes sense. Do you know if there was a reason that we started with these languages in AI? Is it more of a performance thing, or something else?

Matt Vella 26:09
Yeah, honestly, I've never researched it, so I can only speculate. C++, of course, is the thing that's built for performance. My understanding is that Python bindings to C++, which is what most of the Python packages are, are fairly performant; there's not a lot of overhead. Surprisingly, some performant, compiled languages, like Go, for example, my understanding is they have bindings to C++ that are a bit less performant than Python's, even. But I also have to assume that a lot of this is coming out of the world of data science, right? Data scientists have been working in Python for quite some time now, so it's kind of a natural progression. But it's pretty interesting: Python seems to be the go-to for almost any of these things. And the nice thing there is, if you don't know Python, it's pretty easy to learn; at this point it's kind of the lingua franca of programming languages.

Sean C Davis 27:10
Yes, yes, yes. Brian had a question in the chat about sites or newsletters, basically keeping up with the progression and evolution of AI. Do you have a go-to newsletter, blog, website, etc., where you're getting your AI updates?

Matt Vella 27:32
Yeah, it's a good question. I have a hard time pointing to a single source. If I have to, I'd say Reddit. Every morning I'm kind of trawling through and seeing what people are posting in various subreddits there. So I guess that's my go-to.

Sean C Davis 27:54
Okay, okay. And there was lots of chatter when you showed the differences in pricing, and I had a very similar immediate reaction: between ChatGPT and serverless AI, that was basically an order of magnitude difference in terms of cost. I have kind of a two-part question for you. One is, do you have a sense of why that range is so different today? And coupled with that, do you expect something like ChatGPT to get significantly less expensive over time, or maybe the other way around, with some of the other models getting more expensive? Where do you think it's trending?

Matt Vella 28:44
Yeah, good question. So I would think: well, why is it expensive to do image processing today with ChatGPT? My guess is that it's a fairly newly released feature. I think it was added with GPT-4 Turbo, and now they're one iteration past that with, what is it, GPT-4o, I think. And when ChatGPT first came out with just chat abilities, if you were using their API, their APIs were actually pretty expensive, and already they've dropped tremendously in price. If you were doing just generative language AI without the image processing, I bet it's actually pretty competitive with the serverless option; I haven't done that math. So to answer your question, yes, I think it's probably going to come down in cost over time. Then again, of course, they are the go-to for many corporations that are using large language models today, their infrastructure is massive, they of course need to make money, and they probably can, since they have customers willing to pay at a larger scale.

Sean C Davis 30:01
Yeah, yeah. So as I was watching your demonstration of the cat on the counter, which, by the way, I can totally relate to, waking up in the morning and being like, why are there so many paw prints on my counter? What happened here last night? I could definitely use that device. Now that you've gone through that experience, what other ideas and inspiration have hit you recently? What's the next thing you're going to build with AI?

Matt Vella 30:31
Yeah, honestly, I want an AI assistant that can do simple tasks for me, and, again, I work for a smart machine and robotics company, so sometimes actuation will be involved. I was thinking, could I build something like a simple door lock, but a little more custom than the door locks you can get today, and maybe it's actually using AI, so periodically it's switching out the secret questions and answers that you might need to get in the door? This is kind of a last-resort type thing; we all have keys, but we forget our keys sometimes, right? So something like that. I was also thinking, outside of the home, could you leverage AI to essentially lead somebody, a visitor to an office, to somebody's desk? In my mind I'm relating this to The Hitchhiker's Guide to the Galaxy; I'm not sure if you're familiar, but there was a concept in there called genuine people personalities. The idea was that anything that's a machine had a personality, and that is very possible with AI today. Now, in those books it was kind of an annoying thing; the personalities were always getting on people's nerves, or maybe they were too human, and they were complaining about opening the door for you constantly. But I kind of like that idea: could we have these small, inexpensive devices that are actually doing real things for us, things that are, to start, really low-lift?

Sean C Davis 32:14
Yes, yes, I love that. I just read a book, and I'm going to say the title wrong, something like A Long Way to a Small, Angry Planet, or something like that long title. But there was an AI machine with a personality, and that was kind of a core part of that book. When that was written, it maybe felt like it was farther off, and now it feels like we're probably not that far away from that being some sort of reality, which is interesting. It's definitely going to change life for a lot of us, for sure.

Matt Vella 32:48
Yeah, the AI thing is really interesting. I love it as a human-computer interface, and I think that's fairly benign. But then when it comes to things like art and music: was I happy I could create this image here with ChatGPT in, you know, three seconds, and it's actually pretty decent? Yes. But do I love that it's getting harder and harder to tell what's human-made and what's AI-made? That's a little weird.

Sean C Davis 33:20
Yes, yes, for sure. Okay, so over in the chat here, I wanted to point out that a couple of folks have answered where there are JavaScript or TypeScript outlets to AI. Ray says that Google Gemini has a great JavaScript SDK. Jen said Skuid cloud uses a TypeScript framework for AI that works with multiple LLMs. And then Ray also said that for folks who are mostly comfortable in JavaScript, learning Python is definitely worth the effort. So thanks for those comments. And then I'm going to wrap with one last question from Brian, well, more of a comment from Brian, who says that the hard part for him in getting into AI is that a lot of the services have no or very limited free levels to just experiment with, and that's something that I have experienced as well. I'm just kind of curious to get your take on that, based on your experience of evaluating these different service providers. Did you have to come out of pocket to do these trials? Were some of them free? And coupled with that, how did you get to those estimates, based on the work you were doing?

Matt Vella 34:41
Yeah, good question as well. The answer there is also: it depends. When ChatGPT first came out, I think there was even a waiting list for the API. It was the new thing, and I was willing to actually pay for that API usage; it was fairly minimal, I think it was like $20 a month. So sometimes it depends on desire, right? But with other services there actually are free tiers. Modal, the one I was just showing, has a $30 credit per month, which is really nice; I've spent $0 out of pocket so far testing. And that's good, because there is some complexity, even though it wasn't that bad, in getting this thing running: is it actually running, does it wake up correctly? It took me some time, and the dollars were adding up, but not much; I got up to, I think, $10 before I had it working the way that I wanted. And same with Viam. Viam also has, I think, a $5 a month credit, but with Viam, everything you're running on a machine is open source and isn't costing you anything. What costs money is the cloud services: if you're storing data in the Viam platform, if you're using training for machine learning, that sort of thing. So it really depends. I would say there are often options, and often those options are open source projects that you can run locally, but of course then there are the trade-offs that we talked about earlier.

Sean C Davis 36:18
For sure, for sure. Well, thanks, Matt, thanks for sticking around and answering these questions, and thanks for that great presentation.
