From Click-Ops to Chat-Ops: AI's Double-Edged Promise
00:00:00:00 - 00:00:20:13
The techniques and capabilities are changing so fast, even within the last month, right? It's a string. It should be, you know, an enum, or it should be something else, right? So essentially, this is just replacing click-ops with chat-ops. And it missed some very important steps. Maybe an LLM to retrieve this information. And developers interacting in natural language might be the future.
00:00:20:14 - 00:00:47:02
If you need to parse the source file and figure out what classes are defined within it, and you have access to a TypeScript compiler, use the TypeScript compiler. Right. Who's the owner of this code generated by the LLMs? Instead of casual Fridays, there should be a no-LLM Friday. Maybe five years down the line, nobody really knows how Kubernetes works except for the Kubernetes maintainers and developers, and that grows into something futuristic that we all can be worried about at some point.
00:00:53:17 - 00:01:16:06
Hey, everyone! Welcome to the first-ever episode of the AI in DevOps podcast. In this podcast, we explore how AI is reshaping the way we code, build, deploy, and operate. I am Rohit, your host, co-founder at Facets.cloud. Why is this podcast something that we wanted to work on? So, everyone is talking about AI like it's magic, right?
00:01:16:07 - 00:01:36:23
But not enough people are talking about how it actually fits into day-to-day DevOps: even the messy parts, the trade-offs, and the real impact on teams and how they operate. So that's what we're here for. We want real stories, honest takes, and zero fluff. And this is a perfect segue into introducing our guest for the day, Vincent Desmet.
00:01:36:24 - 00:01:54:08
He works as a DevOps engineer at Handshakes, not at an AI company. Moreover, he is very opinionated and straight-talking, and has very hot takes, most of them spot on given his years of experience in this space. Let Vincent introduce himself. Vincent, tell us a little bit about your past and what you work on.
00:01:54:15 - 00:02:17:11
Yes, thank you very much for this introduction. I started in the DevOps space when I began learning about Docker at a French startup, and then went on to help organize some meetups. I'm based in Vietnam, and I helped run some Docker meetups in Ho Chi Minh City. Around 2015, Google introduced Kubernetes, and that was a very exciting framework.
00:02:17:11 - 00:02:41:04
After proposing to talk about it at one meetup, I got really excited about the capabilities that Kubernetes offered, mostly thanks to people like Kelsey Hightower and Brendan Burns. I watched many of their presentations introducing the concepts behind Kubernetes, back when Kelsey was still at CoreOS. I was briefly a Docker captain as well.
00:02:41:04 - 00:03:01:13
I looked into the way the Docker client for Windows was written and found that the Samba password for the share was actually accessible to users. That's something they patched later on. So it was a very interesting early stage, identifying some of these problems, and then also being asked to be a Docker captain.
00:03:01:19 - 00:03:27:17
I then moved to Singapore to focus on actually running Kubernetes at startups, together with Terraform, mostly on AWS. So most of my experience is in Terraform, Kubernetes, and AWS. I stopped being a Docker captain but focused more on the cloud native landscape, running Kubernetes meetups in Singapore, where I had joined a startup, and later, during Covid, I moved back to Vietnam.
00:03:27:17 - 00:03:47:20
I'm working from home now, and of course I'm very excited about the capabilities that AI opens up for us. That's what we are also exploring currently at Handshakes, where we mainly use Terraform and AWS, and we're integrating AI into our daily work as developers and DevOps engineers.
00:03:47:22 - 00:04:11:00
That's a great point. And I would also love for you to talk a little bit about TerraConstructs and your work on it. Okay. So, I've been working with Terraform since about 2016, running it since version 0.7, when Terraform backends were not really established yet; the concept of having an S3 bucket to store the state was not as polished as it is today.
00:04:11:00 - 00:04:38:04
Also, before Terraform 0.12, the HashiCorp configuration language was very static: it didn't have list comprehensions, and the for-each statement didn't exist. At that time we used Terraform heavily, and we got used to evolving our Terraform configurations. I saw that adoption with other engineers; we actually ran workshops at the first startup on how to write Terraform.
00:04:38:04 - 00:05:04:17
And we built Terraform modules for the teams to deploy their classic golden-path product deployments. But Terraform modules were always a bit of a difficult beast to manage. A module is a simplification of what you might refer to as a class, or a bundle in Ruby, but it's actually very limited in its capabilities.
00:05:04:17 - 00:05:24:06
So when I came into contact with CDK at another company, I noticed that it was introduced by the engineers themselves; it was not coming from the DevOps team. The engineers working on it decided that they wanted to use AWS CDK, and my initial feedback to them was: please, can we just stick to Terraform?
00:05:24:06 - 00:05:44:07
Because we have a lot of tooling around Terraform, like Atlantis for Terraform automation. You're introducing a new tech stack here; can we maybe look at Terraform CDK instead of AWS CDK? Funny enough, you know how this goes: one week later a pull request comes in. Guys, I've done everything with AWS CDK, so great.
00:05:44:07 - 00:06:11:08
Now, please, can you automate this? Can you put it into the CI/CD pipeline? I was initially not very happy, but then, working with it, building constructs, building reusable patterns, I very quickly noticed that a lot of the product engineers jumped in and introduced new features that they wanted. And they could do it easily, because they were able to just build a TypeScript class or a utility function, the same way that they write their product features.
00:06:11:08 - 00:06:37:27
There were a lot of Kotlin engineers and TypeScript engineers, and they were already writing unit tests, so they were already using those tools, and they adopted these things very quickly. And I loved CDK. I loved the adoption. Within months we got a lot more adoption by product engineers than with our Terraform, which had existed for two years and where most of it was delegated to platform engineers to go figure out exactly which API resources we needed to create to do something.
00:06:38:01 - 00:06:57:18
I know this podcast is about AI, so maybe we need to get back to that, but basically, to cut things short: I loved the way of CDK, but I did not like CloudFormation so much, and it also caused problems interacting across the different technology stacks. So that's where I started playing with the idea. I use Terraform CDK, but obviously it's lacking a lot of the capabilities of AWS CDK.
00:06:57:18 - 00:07:24:09
So I started copying over the beautiful work AWS has done on top of the construct programming model and reworking it into the actual Terraform resources, the Terraform CDK resources. That's what TerraConstructs is: currently, mostly a port of AWS CDK on top of Terraform CDK. And after doing this manually for seven months, I'm now focused on AI workflows to automate this work.
00:07:24:09 - 00:07:52:02
Because a lot of this is very tedious and repetitive work, so AI workflows are really accelerating, or empowering, the engineers doing these very tedious tasks. Yes, so you were asking me to talk about TerraConstructs. Yeah. So I am genuinely curious to learn how much of TerraConstructs is now being vibe coded. Okay, so I have not yet adopted the vibe coding hype.
00:07:52:02 - 00:08:10:11
I am very firmly asking LLMs to generate the code separately, in a chat window, and then I read every line and I understand every line, and then I take the parts I like. I will admit that some parts of the workflow that I wrote, I actually asked the LLM to generate. Like, hey, I need a way to extract, you know, using the TypeScript compiler,
00:08:10:11 - 00:08:28:12
I need a way to extract all of the classes that are being used. Like, I have an import statement; I want to figure out where this class is being used throughout the TypeScript file. And a lot of that code was vibe coded. Okay, so I like it, but not in my daily job; rather in writing the AI workflows that ask the LLM to generate the conversions.
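The class-usage extraction Vincent describes can be sketched with the TypeScript compiler API. This is an illustrative sketch, not code from TerraConstructs; the function name and what counts as a "usage" here are assumptions.

```typescript
// Sketch: given a TypeScript source text and a module specifier, find every
// place a class (or any named import) from that module is used in the file.
import * as ts from "typescript";

export function findClassUsages(sourceText: string, importedFrom: string): string[] {
  // setParentNodes=true so we can inspect node.parent below.
  const sf = ts.createSourceFile("input.ts", sourceText, ts.ScriptTarget.ES2020, true);

  // Collect the names imported from the module we care about.
  const imported = new Set<string>();
  for (const stmt of sf.statements) {
    if (
      ts.isImportDeclaration(stmt) &&
      ts.isStringLiteral(stmt.moduleSpecifier) &&
      stmt.moduleSpecifier.text === importedFrom
    ) {
      const bindings = stmt.importClause?.namedBindings;
      if (bindings && ts.isNamedImports(bindings)) {
        for (const el of bindings.elements) imported.add(el.name.text);
      }
    }
  }

  // Walk the AST and record every use of those names outside the import itself.
  const usages: string[] = [];
  const visit = (node: ts.Node): void => {
    if (ts.isIdentifier(node) && imported.has(node.text) && !ts.isImportSpecifier(node.parent)) {
      usages.push(node.text);
    }
    ts.forEachChild(node, visit);
  };
  ts.forEachChild(sf, visit);
  return usages;
}
```

A real workflow would likely use the type checker to resolve aliases and re-exports; this syntactic walk is the minimal version of the idea.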
00:08:28:12 - 00:08:51:25
I used some vibe coding there, but in my day job I still very much want to understand every line of code that is generated. But I do find that it really helps late at night, when I'm thinking about some algorithmic problem and I really want to meet a deadline. I know that oftentimes you go to sleep, you wake up refreshed, and you can do it, but sometimes you just want to push for a deadline.
00:08:51:25 - 00:09:14:07
And there I really find vibe coding useful: you can just throw the problem at the LLM, go away, get a cup of (not coffee that late, but water), and then come back and see something, and read what the LLM produced. Often it gives you multiple options and the trade-offs between them. I really find that helpful, but I don't trust it to completely generate my code, because I want ownership over it.
00:09:14:07 - 00:09:15:10
I want to control it.
00:09:15:10 - 00:09:38:22
Right, that matches my personal experience as well. I did start off with, as you said, working in a chat window, getting some snippets generated, and then integrating them into my code myself. Then, slight segue, I also tried out tools like k8sgpt, which give you a chatty interface to Kubernetes, to get actions done or retrieve information from the cluster.
00:09:38:24 - 00:10:06:01
Then I found that I slowly started weaning off those and going back to my CLI; I kind of felt that my CLI hands were faster than typing into a chat. But of late, with the advent of MCPs, and a little bit more control over what I can let LLMs do and not do, we have dabbled into getting LLMs to write complete Terraform modules, with some safeguards on top.
00:10:06:01 - 00:10:26:16
For example, we make sure that the tools provided to the LLM invoke terraform validate or run Checkov, so that I have certain guarantees that it will go back, correct itself, and generate something up to my standards. Right. So that is one direction in which I have found a lot of promise. I agree that losing ownership of your code is a danger with that.
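A guardrail like the one described, a tool that runs `terraform validate` and feeds diagnostics back to the model, could look roughly like this. The `Runner` indirection and the function name are assumptions for illustration; a real MCP server would wrap this in its tool-call handler, and Checkov could be chained the same way.

```typescript
// Sketch of a guardrail for LLM-written Terraform: run `terraform validate`
// and return any diagnostics so the model can correct itself on the next turn.
import { execSync } from "node:child_process";

export type Runner = (cmd: string, cwd: string) => string;

// Default runner shells out; tests (or dry runs) can inject a fake one.
const shellRunner: Runner = (cmd, cwd) => execSync(cmd, { cwd, encoding: "utf8" });

// Returns null when the module is valid, otherwise the diagnostics text
// that should be fed back to the LLM.
export function validateModule(dir: string, run: Runner = shellRunner): string | null {
  try {
    const out = run("terraform validate -json", dir);
    const result = JSON.parse(out);
    return result.valid ? null : JSON.stringify(result.diagnostics, null, 2);
  } catch (err: any) {
    // terraform exits non-zero on broken configs; surface whatever it printed.
    return String(err.stdout ?? err.message);
  }
}
```

The `-json` flag on `terraform validate` is real and returns a `valid` flag plus a `diagnostics` array, which is what makes this loop machine-checkable.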
00:10:26:18 - 00:10:44:17
Yeah. So this is a direction that we have been exploring at Facets as well, and it looks promising. Actually, Vincent, I think you have also recently done a lot of research on how to get good infrastructure as code generated, and on the pitfalls associated with it. So what did you find? Yes.
00:10:44:17 - 00:11:03:29
Before I get into that, two things. First, I think it very much depends on your mood, on how fresh you are. Do you want to think about this hard, or do you want to let an LLM do the thinking real quick for you? Because in five seconds it can come up with some ideas that help you get started.
00:11:03:29 - 00:11:21:24
Right. And the second thing is, everything we discuss may completely change next month. If we go back a month, there was no 1-million-token-context Gemini; now everyone is pushing the 1-million-token-context Gemini. The techniques and the capabilities are changing so fast, even within the last month, right? We can go back and look.
00:11:21:24 - 00:11:41:08
I looked at podcasts about this topic from a month ago, and I'm thinking to myself: oh my God, so much has changed over one month. So yes, like you said, I think it is improving quickly, and whatever we found works today, we always have to re-evaluate: is this still the best way of doing things?
00:11:41:08 - 00:12:03:03
Is this still the best way of doing things? Maybe today I can just take the whole document and paste it into ChatGPT and it will then. Or I can put it as a markdown in my editor somewhere and point to it, and then tell the agent, go ahead and read this, and then go and generate me some more files to the point of what you just ended, which what are some techniques that we can use to improve the quality of the code that is being generated?
00:12:03:03 - 00:12:24:18
I mean, everyone always says context is king, right? And that's definitely something I always do: try to find relevant documentation. I often go from a documentation website to the GitHub page that backs it, because a lot of documentation websites today have an "edit me here" link; at least with open source, there's some link back to a GitHub repo where I find the original markdown with some front matter.
00:12:24:24 - 00:12:47:20
All of that is great. I just take all of it, raw-copy it, and put it straight into the context for my LLM to act on, so it's more knowledgeable about what type of decisions or what type of code it's going to write. Also, every major AI provider, like Anthropic's Claude, OpenAI, and Google Gemini, has published prompting guidelines to control the quality of what you're getting.
00:12:47:20 - 00:13:08:08
So does that answer your question about techniques to improve the quality? I think it does, right. And I think you've also done some research on DSL-specific generation, like Terraform's, which is a niche of sorts; we do not have a lot of Terraform code out there to learn and understand from.
00:13:08:08 - 00:13:29:01
So, along those lines, you have some insights, right? Yes. A lot of companies today rightly point out that most Terraform infrastructure as code is private to companies, because infrastructure is a sensitive topic. A lot of the really production-ready and compliance-ready infrastructure code is not publicly available.
00:13:29:01 - 00:13:47:22
And LLMs are trained on what's publicly available, or on what is available to the company training them; I don't know how they get access to some of the data, but okay. So, another point that was highlighted by one of the posts I follow was that there are, what, less than ten or so?
00:13:47:24 - 00:14:17:23
I don't know how many exactly, but there's a limited amount of publicly available, production-ready modules. For modules, I go to the work of Anton Babenko, who is putting a lot of effort there into making them production-ready, good code. But still, compare that with what you just highlighted: generic programming languages like Python, JavaScript, and so on, versus the amount of code currently available to train on for the HashiCorp language or Kubernetes YAML.
00:14:18:01 - 00:14:40:06
I mean, there's a big difference in the amount of data to train on. So I have a different opinion there from what some companies are focused on, which is: hey, we need to use more deterministic metrics to guide the model. Some of the advice was to give the models grammar definitions, so that the model can follow those grammars and generate code accordingly.
00:14:40:10 - 00:14:59:08
I am more of the opinion that we should look at LLMs like junior engineers today. In the same way, I saw that AWS CDK empowers product engineers who are not necessarily experts in infrastructure as code: they suddenly became way more powerful when they started using a generic language like JavaScript, in this case TypeScript, or Python.
00:14:59:08 - 00:15:18:01
So I think that's where AWS CDK is very valuable. And it's also another reason why I focus on TerraConstructs: because I believe that instead of using DSLs, we should be focused on building more of these higher-level libraries. That's basically where I wanted to go with that. Yeah, that's interesting. So you would rather have LLMs write
00:15:18:01 - 00:15:46:01
Node, I mean TypeScript or Python, that uses CDK or CDKTF, rather than vanilla Terraform. That's right. So you can argue that if you train it properly, it can generate perfect Terraform: valid Terraform that is able to run against the cloud and deploy infrastructure. But rather than generating Terraform, which I think is a low-level abstraction of direct resources against AWS
00:15:46:01 - 00:16:08:12
CDK, sorry, against cloud-provider APIs, I think leveraging higher-level libraries, where more of the idea of what the API resources represent is captured, and where, in this case, CDK is maintained by the vendor themselves, is better. Yes. They really have a very good interface, which is the same for developers
00:16:08:15 - 00:16:32:04
and the same for LLMs. These interfaces, these JSDoc documents, really help: a proper software development kit lets you create infrastructure the same way you would click a button in the AWS console. You get a piece of infrastructure with all of its capabilities, connected to the others. Unlike Terraform resources, where you have a string variable, and that string variable has documentation that you need to go to the website for,
00:16:32:10 - 00:16:57:01
which then explains that the valid values for this string are "all" or "none". And then, you know, it's a string; it should be an enum, or it should be something else, right? That's better captured in software development kits, and also better visualized in an actual programming language. Yeah, that is actually interesting. It makes me think: as I mentioned, we did dabble into getting Terraform modules generated by LLMs with MCPs.
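The string-versus-enum point can be made concrete with a small sketch. The names below are invented for illustration and are not the real AWS CDK API, but they show why a typed SDK surfaces the valid values where a raw string cannot.

```typescript
// Terraform-style input: any string compiles; a typo only fails at plan or
// apply time, if it fails at all.
type RawProps = { objectOwnership: string };

// SDK-style input: the valid values are part of the type itself, so the
// editor, the compiler, and an LLM all see the full allowed set up front.
export enum ObjectOwnership {
  BUCKET_OWNER_ENFORCED = "BucketOwnerEnforced",
  OBJECT_WRITER = "ObjectWriter",
}
export interface BucketProps {
  objectOwnership: ObjectOwnership;
}

// Render the typed property into the Terraform-style attribute line.
export function renderOwnership(props: BucketProps): string {
  return `object_ownership = "${props.objectOwnership}"`;
}
```

With `RawProps`, `{ objectOwnership: "alll" }` type-checks fine; with `BucketProps`, the same typo is a compile error, which is exactly the kind of determinism an LLM-assisted workflow benefits from.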
00:16:57:09 - 00:17:23:03
So one of the instructions we gave it was: hey, when you create variables, model your module first, and make sure that all of the validations required for your variables are there. And the moment it adds the resources, I also prompt-engineered it to make sure that it adds all the relevant lifecycle hooks, as well as pre- and post-condition validations.
00:17:23:03 - 00:17:51:18
So this made it write much more mature Terraform than a typical human would. Right. And yeah, I see what you mean. So rather than doing this, what you suggest is: write in whatever programming language the LLM is fluent in, and use the underlying SDK to synthesize the Terraform. Yes. The reason I believe that is more powerful and more generic is that, aside from the LLMs being trained on it, you also get more determinism from the SDK and the library itself.
00:17:51:24 - 00:18:09:02
The idea here is not to have the completely non-deterministic behavior of an LLM that is not necessarily linking your Terraform resources together correctly; again, because a lot of it is strings, and maybe you are doing a lot of prompt engineering to ensure that the strings are valid and that the resources are linked together correctly.
00:18:09:02 - 00:18:34:06
But it's still non-deterministic. Again, everything I say today could be untrue in a couple of months, because the LLMs may have become so powerful that they can actually handle a lot of that; but then we can discuss code ownership more later. If you actually have a library, an SDK or a CDK, that has validated patterns, where the code it generates is validated with E2E testing and is trustworthy,
00:18:34:06 - 00:18:59:08
and it is maintained by the cloud provider themselves, ideally, like AWS CDK, then you get a lot more powerful, I mean, maintainable code, because you're working with an actual higher-level library that achieves these capabilities in a well-tested way, and in a way that is more native for LLMs to work with. That's my opinion. Yeah, that's very thought-provoking, to say the least.
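A minimal sketch of that idea: a tiny, hypothetical construct library that synthesizes Terraform JSON and links resources through generated reference tokens instead of hand-typed strings. Real CDKTF and TerraConstructs APIs are far richer; every class here is invented for illustration.

```typescript
// A bucket construct that knows its own Terraform address, so dependents
// never hand-type the "${aws_s3_bucket...}" interpolation string.
export class Bucket {
  constructor(public readonly id: string) {}

  // Reference token the library guarantees is spelled correctly.
  get arnRef(): string {
    return `\${aws_s3_bucket.${this.id}.arn}`;
  }

  toTerraform(): object {
    return { aws_s3_bucket: { [this.id]: { bucket: this.id } } };
  }
}

// A dependent resource linked deterministically via the construct, not a string.
export class BucketPolicy {
  constructor(public readonly id: string, private readonly bucket: Bucket) {}

  toTerraform(): object {
    return {
      aws_s3_bucket_policy: {
        [this.id]: { bucket: this.bucket.arnRef }, // generated, never typo-able
      },
    };
  }
}

// Merge every construct's block into one Terraform JSON document.
export function synth(resources: Array<{ toTerraform(): object }>): object {
  return { resource: resources.map((r) => r.toTerraform()) };
}
```

The point of the sketch: whether a human or an LLM writes `new BucketPolicy("p", logs)`, the cross-resource reference is produced by tested library code, which is the determinism being argued for.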
00:18:59:08 - 00:19:24:13
And yeah, as a segue from there: you are also passionate about enabling the product developers to make some contributions themselves, the true platform engineering concept, where they get involved, they can raise pull requests, maybe they write construct code. You know, how do you think that is going?
00:19:24:15 - 00:19:48:12
So now, with the advent of AI, how do you think that is going to change? The way you, as a platform engineer, deliver your services or your automations to the developers to consume: how do you think that will change? Is it just going to get a little faster because AI is there, or is it going to fundamentally change what your platform engineering would look like in, let's say, a few years' time?
00:19:48:19 - 00:20:16:01
Yes. I think we can see that LLMs have the capability to guide developers better into adopting these IaC patterns, to develop their own, or to take more responsibility for their infrastructure in the cloud. It may change the role of platform teams slightly, because basically, I think the role of a platform team has always been to provide tooling, golden paths, and guardrails to ensure that these are secure.
00:20:16:04 - 00:20:39:04
As you mentioned as well: even though you are trusting an LLM to write your infrastructure as code, you're still giving it tools to run Checkov, which is a tool to statically validate or evaluate the security of the generated configuration. Right. So these guardrails can be completely shifted left, completely integrated at the early stage, when you're writing the configuration.
00:20:39:08 - 00:20:57:20
But they are also still applicable towards the deployment side of things, where, when things are running, you have platforms like Wiz, or other cloud scanners I won't name, basically security tools that live in the cloud to validate that there is no configuration that violates policy.
00:20:57:20 - 00:21:24:17
So we do this early with the tooling you mentioned, and as platform engineers and security engineers (I think oftentimes these roles are combined), to make sure we are compliant, we are still putting these security guardrails in place. I think the focus might pivot more towards making sure that we are providing the context. As a platform team, currently, we are building tools to provide context to developers, so that they are aware of, say, how the landing zone is provisioned.
00:21:24:19 - 00:21:44:22
This is how I provision my default stack, or preferred, golden-path stack. It's really about providing context to developers: where do they go for the dashboards, for observability? That's a platform engineer's job today, right? And now maybe we will be more focused on integrating these context providers with the LLMs that the engineers are using.
00:21:44:26 - 00:22:05:04
So the engineers will use LLMs, and they need to pull in the context that the platform team is building. Right, right. So potentially, in the future, platform engineers could be developing MCPs that are made available to the development teams, who can interact with them in natural language, in plain English, express the desired state, and get things done.
00:22:05:04 - 00:22:31:29
But then again, the MCPs should include all the guardrails and validations to make sure that everything propagates the right way. Right? Yes. And that's very often internal to each company, and something that is not easily opened up by most companies. Similarly, in the past a lot of companies preferred to run their own CI/CD runners, preferred to manage their own security data, or personally identifiable data from clients;
00:22:31:29 - 00:23:02:28
they are responsible for this, right? So that's why a lot of these context providers, Model Context Protocol providers, will need to be managed internally, I think, because it would be very difficult to trust third parties to provide them. Right. I think there can be contracts too. It depends on the size of your organization and its capabilities, and on how much of this will be internal versus vendor-provided.
00:23:03:01 - 00:23:22:26
Yeah. And also, while we are on the topic, there are many, many product companies coming up with products that are more chat-ops: get me an S3 bucket, and it is able to spin it up in AWS, and things like that. Do you think this is going to devolve into basically the click-ops of the AI era?
00:23:22:26 - 00:23:42:03
So essentially, this is just replacing click-ops with chat-ops. Do you see that as a threat or a danger, where companies take shortcuts and end up with unmanaged infrastructure that is all provisioned and managed by LLMs? So I think this can be tied back to the larger approach of the company itself towards these LLM platforms.
00:23:42:03 - 00:24:07:26
If these companies provide or integrate LLMs as a way of working with the infrastructure within the company, and by this I mean actual, proper contracts in place for how engineers can use those LLMs, then they can probably control more how these LLMs will be used to provision infrastructure, and avoid these problems. But if the companies are not taking the usage of LLMs into account, ignoring the problem and telling people: you cannot use LLMs.
00:24:07:29 - 00:24:36:03
But I constantly see people finding backdoors to use LLMs to help them in their day job. So I think it's very important for organizations to consider the usage of LLMs and provide a proper protocol around their use, to avoid the problem that I think you're describing: people using LLMs in their own ways, without any guidance, and then ending up with difficult-to-manage infrastructure or difficult operations.
00:24:36:06 - 00:24:58:15
Yeah. In my experience, LLMs also excel when you have clear-cut boundaries and clear expectations on what to do, right? Which is where having an underlying orchestrator of some sort is probably still necessary, so that the LLMs do not directly call your cloud APIs and provision an S3 bucket, because you cannot have the same predictability when you do something in staging.
00:24:58:21 - 00:25:17:06
The same thing then goes into production, right? Right. So you ideally need an orchestrator of some sort that helps generate a central source of truth, validates it, passes it through the quality gates, and only then propagates it to your environments; something like that. So I think LLMs will also excel in that setup, in terms of getting the right result out of them.
00:25:17:08 - 00:25:40:07
And it also helps you create maintainable, manageable, auditable infrastructure in the long run. I think it ties back into how much capability, I mean, how many permissions, you give to the engineers, right? As you mentioned, an LLM may need to go through an orchestration framework for validation and feature gates or quality gates, but I'm also thinking about organizations and how they manage developer access.
00:25:40:07 - 00:26:02:29
Right. Most organizations, at least past a certain stage, do not give developers direct production access, often because of compliance reasons and things like that. Not because they don't trust developers, but because there must be some type of quality gate, no matter whether they're using an LLM or not. So when that's the case, there will ideally be a certain procedure that they are following.
00:26:02:29 - 00:26:38:14
For example, if they're going to deploy to production, they may need to have applied their infrastructure as code to staging first. So they can still use LLMs to generate the code and then submit it. I think the difficult part will be to have the proper review processes in place. If we're not going with what you mentioned, an orchestration system that accesses the cloud, which I think is potentially a future situation, then for the near-future adoption of LLMs, aside from organizations clearly putting in place a protocol for how to integrate with LLMs and what data engineers can share with
00:26:38:14 - 00:26:58:04
those LLMs, in terms of making sure that there is no sensitive data directly pasted into them: as long as that's in place, and LLMs are integrated into today's pipelines, where there's often infrastructure as code behind changes, it will come back down to existing procedures, like pull request flows for making changes to infrastructure, reviews, and so on.
00:26:58:04 - 00:27:19:09
So it's easy to imagine that maybe you have an AI agent, a personal assistant, for the platform engineer as well as for the product developer. And the AI agent could utilize the modules or the constructs that the platform engineer has provided and compose whatever the developer requests in natural language.
00:27:19:11 - 00:27:41:04
And similarly, the platform engineer could have certain guidelines baked in, based on which the module or the construct gets generated as well. So this is one way we could easily imagine how the roles would change and what this might look like in the future. And there could also be an assistant that helps with the review process.
00:27:41:09 - 00:28:07:04
Because if you're working with Terraform, you know that sometimes some changes in code can generate a very big plan, but it's not always clear whether there's anything risky within that plan. So even part of the review process can involve an LLM. And it's very easy today, if you have a self-hosted Atlantis, for example, to ask Atlantis to send the Terraform plan to an LLM for a summary before it gets posted back into the pull request.
00:28:07:07 - 00:28:28:13
But I found that one of the very important things there is to not only provide the plan, but also the context of that plan. Like, if the intention is to migrate a database: maybe create a new instance of the database in a new subnet, and then later promote it to master and deprovision the old database.
00:28:28:13 - 00:28:43:03
When you're doing a database migration, it is very important for the LLM to be aware that that's the goal, and that maybe in the first phase you don't want anything to happen to the old database. You definitely don't want to destroy it or recreate it. You really just want a snapshot to be restored into the new subnet.
00:28:43:08 - 00:29:02:29
And if you made some mistake, maybe the old database is pointing to the new subnets and gets destroyed. I found that giving a plan to an LLM without context, without the actual intention of the plan, is very dangerous, because I actually had situations where it did not notice things. I asked it to analyze any risks, and it just gave me some high and medium risks.
00:29:02:29 - 00:29:21:22
A very nice analysis that it generated, but it totally did not take into account the context, and it missed some very important steps. Exactly. That's a great point. And I think if LLMs were to review some of the changes, passing on the information about the intent behind them, what the actual intent is, would help the review a lot.
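A minimal sketch of what passing the intent alongside the plan could look like. The prompt template and function name here are illustrative, not part of Atlantis or any real integration; the resulting string would be sent to whatever LLM does the review.

```python
# Sketch: wrap a Terraform plan with its human intent before asking an LLM
# to review it, so the model knows what the change is *supposed* to do.

def build_review_prompt(plan_text: str, intent: str, warn_about: list[str]) -> str:
    """Combine the raw plan, the stated intent, and explicit danger signals."""
    warnings = "\n".join(f"- {w}" for w in warn_about)
    return (
        "You are reviewing a Terraform plan.\n"
        f"Intent of this change: {intent}\n"
        "Flag as HIGH RISK anything that contradicts the intent, especially:\n"
        f"{warnings}\n"
        "--- PLAN ---\n"
        f"{plan_text}\n"
        "--- END PLAN ---\n"
        "Summarize the changes and call out anything conflicting with the intent."
    )

prompt = build_review_prompt(
    plan_text="~ aws_db_instance.old  subnet_id: a -> b (forces replacement)",
    intent="Phase 1 of a database migration: create the new instance only; "
           "the old database must not be modified.",
    warn_about=["any destroy or replace of aws_db_instance.old"],
)
print(prompt)
```

Without the `intent` line, the replacement of `aws_db_instance.old` looks like routine churn; with it, a reviewer model has a reason to flag it.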
00:29:21:22 - 00:29:39:15
And that is actually also where, since we have written some amount of validations which are kind of static, it would review the plan and look for some red flags. So even there, I think AI could chip in, with some context, with some information on what this activity is supposed to be.
00:29:39:15 - 00:30:02:07
And, based on that, review the plan, rather than applying a certain set of deterministic checks which someone again has to review anyway. Yes, I found that these LLMs are very good at summarizing. The question then is always: do I trust the summary? Did it really summarize it correctly? So I often asked them to summarize and then manually went through every single change.
00:30:02:07 - 00:30:32:22
Like, I look for minuses in the Terraform plan, and I make sure that the destroyed resources are really the ones the LLM listed. I did this like 5 or 6 times, and then I started to trust it. Yeah, it took me a while. But I think that's the tradeoff between deterministic and non-deterministic: having some static deterministic checks and combining them with newer non-deterministic LLM checks. I think this is crucial to build trust, because I talked to a lot of people and they don't trust it at all, and I think rightfully so.
00:30:32:22 - 00:31:00:20
We don't trust what the LLM generates. Yeah. So I think we have to find the context in which to invoke the LLM so that, at least in my mind, we leave it to make specific decisions rather than broad ones. So I don't want to send the whole plan and just say, hey, review this. Maybe I would iterate through the plan myself in a deterministic piece of code, then feed this information into the LLM and get it reviewed.
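A sketch of that deterministic pre-filtering: walk the JSON output of `terraform show -json tfplan` and extract only the destroys and replacements before anything goes to an LLM. The helper name is made up, but the `resource_changes` and `change.actions` fields are from Terraform's documented JSON plan format.

```python
import json

# Deterministically extract the risky changes from a Terraform JSON plan,
# instead of pasting the whole plan at an LLM and hoping it spots them.

def risky_changes(plan_json: str) -> list[str]:
    plan = json.loads(plan_json)
    findings = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        # "delete" alone is a destroy; ["delete", "create"] is a replacement.
        if "delete" in actions:
            kind = "replace" if "create" in actions else "destroy"
            findings.append(f"{kind}: {rc['address']}")
    return findings

sample = json.dumps({
    "resource_changes": [
        {"address": "aws_db_instance.old",
         "change": {"actions": ["delete", "create"]}},
        {"address": "aws_sns_topic.alerts",
         "change": {"actions": ["update"]}},
    ]
})
print(risky_changes(sample))  # ['replace: aws_db_instance.old']
```

Only this short, already-filtered list (plus the intent) then needs to be handed to the LLM for judgment.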
00:31:00:20 - 00:31:21:03
Maybe that makes more sense than just sending the whole thing and getting it done without giving the LLM direct, very clear instructions. Like everyone always says, context is king: the ability to maximize the usage of the context window, even with the newer 1-million-token context windows, very big memories for these LLMs.
00:31:21:03 - 00:31:48:00
Now, how focused are they still, right? Because people say GPT-4.1 introduced a 1-million-token context window, but recently people are actually finding that past 500,000 tokens it loses some of its focus. It doesn't really remember everything that you sent to it. So the ability to really provide the context of the intent, and detailed instructions on exactly what you want it to focus on and what dangerous things you want it to warn about.
00:31:48:02 - 00:32:19:21
It's definitely interesting. People make jokes about prompt engineering, but it's not something to joke about, because it is actually quite finicky and it requires careful consideration. And if you've never looked into a prompt engineering guide or run through some examples, I really recommend spending some time reading a little bit about that, because it really helps with the actual output. The way that you describe things, the way that you engineer the prompt, matters a lot.
00:32:19:21 - 00:32:38:07
Yes. And oftentimes it's better to give it tools to retrieve the information that it wants, and that works very well. I mean, with MCP and several of the tools that we have, right? Give it the tools to retrieve things, and it usually does a good job of it. And you don't overload the context with all the information that you have.
00:32:38:07 - 00:33:00:17
That's true. I mean, it's true that if we decide exactly what we give to the LLM, we are not taking into account all of the possibilities. We may be missing some potential resources that the LLM needs. And by giving the LLM the ability to call out to tools, like we said, with the MCP protocol, we give it tools and we describe exactly what each of those tools does.
00:33:00:17 - 00:33:25:23
The LLM, often through chain of thought and reasoning, is able to decide what information it needs and to find out things that we cannot think about in advance. So tool use is a very important innovation in agentic AI workflows. I have not tried it a lot myself; I'm a bit concerned about the amount of token usage and how much back and forth the tool usage involves.
00:33:25:27 - 00:33:48:00
Definitely. If you're writing applications that you're exposing, where other people may be using them, I think it's very hard to control the costs if you give full freedom to the LLM in terms of tool use. I know a lot of these frameworks do have certain restrictions on how many times it can call a tool and how long it can keep generating tokens, because every token it generates costs money, of course.
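A rough sketch of such a restriction, assuming a hypothetical framework where the model either answers directly or requests a tool call. `call_model`, the returned tuple shape, and the tool registry are all stand-ins, not any real framework's API.

```python
# A tool-use loop that stops after a fixed number of tool calls or once a
# token budget is spent, so a looping model cannot burn money indefinitely.

def run_agent(call_model, tools, prompt, max_tool_calls=5, token_budget=10_000):
    history, spent, calls = [prompt], 0, 0
    while True:
        # call_model returns (text, tokens_used, tool_request_or_None)
        text, tokens, tool_request = call_model(history)
        spent += tokens
        if tool_request is None:
            return text  # the model answered directly
        calls += 1
        if calls > max_tool_calls or spent > token_budget:
            return f"stopped: budget exceeded after {calls - 1} tool calls"
        name, args = tool_request
        history.append(f"tool {name} returned: {tools[name](args)}")

# Toy model that keeps requesting the same tool forever:
looping_model = lambda h: ("", 100, ("lookup", "x"))
result = run_agent(looping_model, {"lookup": lambda a: "data"}, "question",
                   max_tool_calls=3)
print(result)  # stopped: budget exceeded after 3 tool calls
```

Real frameworks expose knobs like this under various names; the point is that both the call count and the token spend need a hard ceiling before you expose the loop to other users.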
00:33:48:03 - 00:33:48:21
Yeah,
00:33:48:21 - 00:34:13:22
A quick pause before we continue. I want to share something we are building at Facets Cloud that ties closely to today's conversation. One of the biggest challenges we see across engineering teams today is that infrastructure, deployments, and configurations are still handled as separate processes. This leads to drift between environments, tons of manual effort, and endless back and forth between developers and DevOps.
00:34:13:25 - 00:34:50:15
At Facets, we are building an AI-powered orchestrator for modern infrastructure, bringing all of these moving parts together in a single, intelligent, declarative model. With Facets, teams can move to true self-service infrastructure without losing governance, security, or control. DevOps teams at a Fortune 50 automotive giant, Mobile Premier League, Waymo, and several other fast-scaling companies are already using Facets to cut DevOps tickets by 95%, accelerate cloud migrations, and free up engineering time to focus on building products, not pipelines.
00:34:50:17 - 00:35:01:11
If you are curious, you can learn more at facets.cloud. We're always happy to chat about how teams are evolving their infrastructure game. All right, back to the conversation.
00:35:02:07 - 00:35:30:15
Right. So while we are on the topic of using tools to retrieve information, one thought that crossed my mind is that some of the concepts that we have in ops today, or platform engineering today, might actually become obsolete. For example, cataloging is one thing. We have products like Backstage, which essentially is a portal through which developers can access information; it's cataloging all of the inventory that you have and putting it in context.
00:35:30:16 - 00:36:07:14
Right. Maybe an LLM to retrieve this information, and developers interacting with it in natural language, might be the future; that's something I keep thinking about. The way we think about these cataloging tools might have to change. Yeah, this is something that crossed my mind. I am sure that internal developer platforms, and the way that they are providing information to developers, are definitely going to change with the adoption of these LLMs, with tool use and the ability to go into the company's databases and whatever context providers or tools are available to them, based on the user query.
00:36:07:14 - 00:36:31:26
And through natural language, able to go and find the information that the user wants, and to get them started with the task they want to work on. I was also very frustrated because I spent a lot of time learning how to do project bootstrapping with things like Cookiecutter. But more recently, in the AWS CDK ecosystem, there's a tool called Projen, also created by Elad Ben-Israel, who created AWS CDK, and it shares a lot of the concepts behind it.
00:36:32:04 - 00:36:53:19
So Projen itself allows you to programmatically define the layout of a repository. You can define that there will be a GitHub workflows folder for the GitHub CI/CD system; that, in case you're using JavaScript or Python, maybe you're using Poetry for Python package management, or for the environment management of your Python.
00:36:53:24 - 00:37:14:05
Or maybe you're using Node Version Manager and you need a Node version, and you know which package manager you chose in Node.js. All of that is controlled through Projen. So Projen gives you the ability to programmatically define every single file in the repository in a way that is templatable, but also in a way that it can be updated programmatically.
00:37:14:05 - 00:37:37:26
So very much the same as how the CDK is able to generate Terraform or CloudFormation resources and then configure them, Projen is able to define what files will be created, and then you can even functionally call into a file and say: actually, I want to override this particular part of this YAML or this JSON for my particular use case, even though the golden path generation for the template is like this.
00:37:38:00 - 00:37:58:10
But me personally, I want to change this one part. And this is something that Cookiecutter and other template generators don't do, which is future maintenance: I change the root template, rerun it, and still all of those hooks and add-ons get reapplied on top of the latest template. So all of your local changes get reapplied, and you have a way of maintaining it in the future.
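To illustrate the idea in plain Python (this is not Projen's real API, just the concept): files are defined programmatically, local overrides live separately from the template, and regenerating from an updated template reapplies the overrides on top.

```python
# Concept sketch of Projen-style regeneration: golden-path template plus
# per-project overrides, where re-synthesis keeps both in sync.

BASE_TEMPLATE = {
    ".github/workflows/ci.yml": {"node": "20", "steps": ["lint", "test"]},
    "package.json": {"packageManager": "pnpm"},
}

def synthesize(template: dict, overrides: dict) -> dict:
    """Render the repo files: template first, then per-file overrides on top."""
    files = {path: dict(content) for path, content in template.items()}
    for path, patch in overrides.items():
        files.setdefault(path, {}).update(patch)
    return files

# One project opts out of a single part of the golden path:
my_overrides = {".github/workflows/ci.yml": {"node": "22"}}
repo = synthesize(BASE_TEMPLATE, my_overrides)
print(repo[".github/workflows/ci.yml"]["node"])   # 22

# Later the platform team updates the template; re-running the synthesis
# picks up the new step while the local override survives:
BASE_TEMPLATE[".github/workflows/ci.yml"]["steps"].append("audit")
repo = synthesize(BASE_TEMPLATE, my_overrides)
print(repo[".github/workflows/ci.yml"]["steps"])  # ['lint', 'test', 'audit']
```

This re-synthesis step is exactly what one-shot generators like Cookiecutter lack: there, the template is stamped out once and future template changes never reach existing projects.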
00:37:58:10 - 00:38:16:25
And I spent a lot of time learning Projen, and I think it's an amazing tool, though it's not easy to grasp when to write your own. And then the LLM comes along, and people say: I asked Claude to generate the whole repository for me. I say I want to use npm, I want to use Deno, I want to use this and that, and boom, it generates the whole repository for me.
00:38:16:29 - 00:38:39:23
So I think this kind of relates to the internal developer platform question, because it completely changes the capabilities that you want to focus on. Right? Because an internal developer platform is about creating golden paths, creating a single pane of glass into: where are my metrics, where are my monitors, where is my source code, who is the owner of the microservice?
00:38:39:23 - 00:38:58:18
Right, the catalog of features. And then also: what is my golden path? How do I start a new project? With the advent of LLMs being able to automatically generate a repository, if we are able to provide enough context so that the LLM knows how to bootstrap a repository according to our company's requirements, with tools.
00:38:58:20 - 00:39:21:26
Then what does the internal developer platform do, right? The user can just say: hey, I need to deploy a new Next.js project, get me started. And then the context tells it: we are using ECS Fargate, and we use Datadog, or we use Sentry, and here is how to integrate with ours; essentially, there's some secret that we need to fetch as part of the deployment.
00:39:21:26 - 00:39:39:13
So let's get going: this is how we bootstrap your repository, everything gets done for you, and this is how your dashboards will be created and where they will be visible. I think LLMs can take care of all of it. Right. So you have provided Projen as a sort of tool in the arsenal of the LLM.
00:39:39:13 - 00:40:03:19
And it can bootstrap repositories, and probably also roll out updates to the golden templates across projects as well, right? Yeah. But the thing is that the LLM can actually generate a repository quite well on its own. If you give it Projen, it will be way more deterministic, because the LLM will say: okay, I need to run Projen new, and it needs to be a Next.js project type, which, internally within our company,
00:40:03:19 - 00:40:20:16
this is the template to do it. And then the repository will be bootstrapped. There are a couple of prompts Projen can ask, like: okay, is it AWS? Is it ECS Fargate? Is it OpenNext? Is it Lambdas, is it serverless? And maybe based on that, it can push the information into Projen.
00:40:20:16 - 00:40:43:04
It can maybe answer all of it and then deterministically generate an absolutely perfect repository. But it takes a lot of work to do that with Projen. And it depends: is it sufficient to give just a few tools to the LLM, to let it know that if the user wants to use OpenNext and wants to do serverless with Lambda deployments into AWS, then these are the constructs that it should be using?
00:40:43:04 - 00:41:05:15
And so basically the LLM will be able to generate everything, but it will maybe not be as deterministic. I think that's where tool use and validation iterations can get it to the point that it's acceptable. I think this will be much easier to maintain than implementing Projen and very deterministically controlling what it does. I think Projen will be more accurate and you will have more control, but it's a lot more work.
00:41:05:15 - 00:41:32:16
You'll be able to quickly adopt this and bootstrap whatever you want, but you need to put in the guardrails, right? Yeah. I don't know if we should call it a pattern or an anti-pattern, but generating deterministic workflows is one cool thing that we should all be using LLMs for already, right? That I feel strongly about, because your day-to-day executions, especially when dealing with DevOps or infrastructure, should follow a deterministic path.
00:41:32:23 - 00:42:08:01
But the LLM could be the one that is putting it together. So that makes a lot more sense; it gives me peace of mind. Recently, and maybe we can share this as part of the podcast, there was a 12-Factor Agents proposal for LLMs. If you're familiar with the Twelve-Factor App for microservices, originally from Heroku, it's basically some very good patterns, some ground rules that you need to follow when you're building microservices. And they have created very similar factors for LLM applications, highlighting things like: where possible, prefer deterministic workflows.
00:42:08:01 - 00:42:23:15
Right. Don't always reach for an LLM for everything; it's going to take a lot more time and it's going to be non-deterministic. If you can do something deterministically, do it. Like, if you need to parse a source file and figure out what classes are defined within the source file, and you have access to a TypeScript compiler, use the TypeScript compiler.
00:42:23:20 - 00:42:47:06
Right. Don't send the whole source file to it and then maybe get a decent answer, maybe not; how do you control it? First off, it's going to take a lot more time, it's going to cost money, and you have a TypeScript compiler right there; just use that. So, being able to combine, like you said, deterministic tools that you are able to use and expose them to the LLM, so that it can do the high-level reasoning.
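The same principle in Python: instead of sending a whole source file to an LLM and hoping it lists the classes correctly, use the language's own parser from the standard library.

```python
import ast

# Deterministically list every class defined in a source string, the way a
# TypeScript compiler would do it for .ts files.

def classes_in(source: str) -> list[str]:
    """Return the names of all classes (including nested ones) in the source."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree) if isinstance(node, ast.ClassDef)]

source = """
class PlanReviewer:
    pass

class Summary:
    class Inner:
        pass
"""
print(classes_in(source))  # ['PlanReviewer', 'Summary', 'Inner']
```

Zero tokens spent, instant, and the answer is exact; the LLM only needs to be involved for the parts that actually require judgment.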
00:42:47:06 - 00:43:06:14
Because LLMs are not good with details, right? They come up with hallucinations when it comes down to details. Yes. Yeah. That brings me to the next thing: I think we have seen how LLMs integrate into our ops lives, and how the roles of platform engineers and the development teams change slightly.
00:43:06:14 - 00:43:26:16
So I think we all agree that it will be like an assistant to the platform engineer and an assistant to the developer. But I also think that it indicates a shift towards adopting a true DevOps culture within organizations. Maybe by magic, maybe. And if you don't use it the right way, you could end up in a mess as well.
00:43:26:19 - 00:43:49:03
But I think the true DevOps philosophy of not having siloed functions within a company, a DevOps team and a development team interacting with each other; now it could all be developers who are just contributing, and they own their code from the laptop to the cloud to production. Right. So, do you think AI can bring a true DevOps culture into organizations, and if so, how?
00:43:49:03 - 00:44:12:20
I mean, I think the how is the hard part, but yes, I think that's a very good point. With LLMs being our assistant experts next to the developers, able to help developers with areas that they are not so familiar with, such as ops and infrastructure, it will be truly adopting more of a shared-responsibility,
00:44:12:22 - 00:44:45:29
DevOps-culture mentality, where we are not living in silos and developers are also taking more ownership over the actual deployment of their workloads, and defining alarms and alerts around them. At the same time, I just recently watched a presentation, or a podcast, from the Chef co-founder, who mentioned that this may actually make it look like the developers care less about the infrastructure, because they just offload this task and basically push it further away rather than take responsibility. Because who's the owner of this code generated by the LLM?
00:44:46:03 - 00:45:10:07
And this is something I did not think about, something that we did not consider: is it really going to push developers to take more ownership over infrastructure if the infrastructure is generated by the AI expert, or is it going to take that away? Like, if you see the hype around vibe coding, where people say: I'm not even going to look at the code anymore that the LLM is generating, right?
00:45:10:07 - 00:45:40:05
I just give it what I want and it's just going to generate code for me, and I'm pretty happy with that. A lot of people are like that. And I think you also mentioned earlier that as we are using LLMs, sometimes we kind of lose full control of what is being generated, right? And if we trust these expert systems so much, they generate code that we really don't understand anymore. Because when I ask it to generate code to filter a TypeScript source file, after a while I really don't know what exactly it's calling under the hood to do this filtering.
00:45:40:05 - 00:45:57:08
And when it gets stuck, I don't really know how to fix it either. Right. So I think it's a double-edged sword. On one side, we will see the ability for engineers to do more of their own infrastructure as code because they have an expert AI, but I think it's also the responsibility of the engineer to actually try and understand what the code does.
00:45:57:08 - 00:46:16:07
And a lot of times I agree with that, because they often give you an explanation, right? They say: this is what I'm doing, and this is why I'm doing it. And I think this is why having a chat interface around it is also very good. I also heard someone else say he has a rule that once a week he takes one day where he disables all LLMs completely in his IDE, everywhere.
00:46:16:14 - 00:46:37:04
And if he feels that he cannot do his job without the LLM, he's in trouble. And I think that's a good idea. Maybe instead of casual Fridays, there should be a no-LLM Friday, right? I think amongst the pitfalls that we must discuss, one of them is definitely this: there is a risk of overdependence and of losing some of the core skill sets, right?
00:46:37:08 - 00:46:54:07
Maybe five years down the line, nobody really knows how Kubernetes works except for the Kubernetes maintainers or developers. That is scary to me, honestly. And to be fair, even before AI, once Docker came around, or Kubernetes came around, nobody really knew how to write a systemd unit file anymore, or even what it is.
00:46:54:07 - 00:47:20:26
Right. And people have lost and forgotten about a lot of the complexities that have been abstracted away from them. And LLMs kind of take almost all complexity away from you, and you sort of lose the core skill sets. I see this as one big threat, an existential threat, for engineers and probably humanity. So I do understand the suggestions that, hey, you must take a day off, but I don't see a solution here.
00:47:20:27 - 00:47:46:09
Right. I think it is bound to happen that people will not know. I mean, it's very easy... I see a lot of blog posts of people saying: hey, I have this weekend project. I'm married, I have children, I have a limited amount of free time aside from my day job, and I have only 1 or 2 days to do some passion project. And LLMs have been a godsend, because I don't need to learn everything myself.
00:47:46:12 - 00:48:07:09
Somebody wrote some type of interpreter for the PlayStation to be able to run old game ROMs on the PlayStation, for example. He had to write a decoder for the electric signal sent over the cable, and he was able to do it in one day, because the LLM understood exactly how this particular electric signal needs to be sent.
00:48:07:13 - 00:48:25:11
And in that case, I think it's a godsend for people and hobby projects. But at the same time, as engineers, if we want to maintain ownership and be able to understand, I think we do need to pause and really take the time to understand things. And another thing: I think it's a godsend for security engineers in the future.
00:48:25:11 - 00:48:52:29
SecOps, I think that's the future for them. Maybe there will be specialized security LLMs as well, but I think security engineers will definitely have a lot to do. So yeah, right. And probably it's already a nightmare for them as well. The security itself, maybe information leaks; even in this call we suggested passing a Terraform plan to an LLM, quite harmless at that, but it is potentially going to expose some sensitive information to an LLM.
00:48:52:29 - 00:49:23:00
And these are some of the risks: people might innocuously, I mean unintentionally, come up with these cool ideas that might have some security implications here and there. And I think that is another thing to keep in check. That's definitely also why I suggested within our company to use a Bedrock-hosted LLM, or a model, maybe even an open one, that we can take and run internally, so that we keep control of the data, not only the data that we send out.
00:49:23:00 - 00:49:43:27
This is also why, as an organization, we must have rules in place around LLM usage. And we cannot ignore it, because people will definitely use it. But another thing: if you're building with LLMs and you're exposing an LLM interface, there have been cases of companies or organizations exposing internal details that the LLM had access to, by exposing an LLM interface to external parties.
00:49:43:27 - 00:50:06:09
And there are bounties on jailbreaking LLMs, and so far I don't think there's been a good solution to prevent jailbreaks. So if you're exposing LLMs, you have a lot more risk of exposing internal data, or whatever the LLM has access to. Yeah. So I think, security-wise, there are some exciting avenues as well as a lot of security nightmares.
00:50:06:11 - 00:50:27:07
And depending on what your role is, it's either exciting or a nightmare. And I think rogue agents get into something futuristic that we all can be worried about at some point. Rogue agents, as in agents that are no longer under the control of humans; it's all sci-fi. I mean, sometimes, when I... so, I played around with this Kubernetes MCP server that is out there.
00:50:27:07 - 00:50:43:15
I just gave it a playground cluster, so it connected to it, and I opened up Claude and asked it something. It starts running so many of these commands, and if I just click allow for this chat, it feels really scary, honestly. Yeah, it seems to work on its own.
00:50:43:15 - 00:51:23:02
It seems to have a mind of its own. So I don't know about AGI, but I think even otherwise, the hallucination itself is dangerous. I have seen situations where it should be doing only read operations and it still goes on about creating pods or whatnot, right? So the rogue agents might not just be things that are not under our control; it might simply have misinterpreted our instructions and started going a different way. The moment it starts pulling Docker images of itself into the cluster and starts replicating, maybe it goes to Shodan, finds open Kubernetes ports, and starts running onto other
00:51:23:02 - 00:51:46:22
clusters. So that would be... this is something where a lot of this accelerate movement is accelerating the adoption of AI, and then there's the decelerate side, right: take a step back, consider the dangers that we are opening up. That's not something I spend a lot of time on, but this is one of the biggest concerns, right? That at some point somebody makes a mistake. Yes.
00:51:46:27 - 00:52:05:19
Yeah. So what's next in AI for you? Do you have a project in mind that you would pursue with AI? Or maybe a project that you had in mind that you would have done a certain way, which has now changed because AI potentially changed the way that works? Is there something immediately on your horizon that you would employ AI for?
00:52:05:25 - 00:52:27:23
I mean, it's definitely true that, being on the DevOps side of things, managing and deep-diving into operational issues, I have not always had the opportunity to actually build out a lot of full-fledged frameworks; things I would not even have considered, because I don't have the time to do that. And with LLMs today, I definitely find that I'm way more eager to go and do something big that I never thought about doing.
00:52:27:23 - 00:52:44:03
So LLMs have definitely enabled that: the ability to just go, okay, I don't have an idea how to do this, but I'm pretty confident that I can do it now, because maybe I kick off a deep research query, and after ten minutes it gives me ten different resources on how to do exactly what I want.
00:52:44:03 - 00:53:14:05
And, you know, a decently working piece of code. So I can start with that and then iterate on it. Another thing that I'm exploring: I want to, of course, also explore more of the MCP use cases, where I want to have a specific tool related to a framework like the CDK. So basically, build an MCP server that gives the LLM the capability to very quickly look up information about the constructs, the higher-level constructs, in a CDK framework.
00:53:14:09 - 00:53:39:13
You know how a lot of MCPs now are focused on writing Terraform? I don't know of many existing efforts into providing MCP information specifically for the AWS CDK ecosystem. There's this concept of the JSII manifest, which is a full dictionary of every single resource that is available within a CDK package. So I think a similar tool is something that I would be excited about.
00:53:39:13 - 00:54:03:16
And basically, you plug it into your Claude or your code editor, and you ask it: okay, I need to create an EKS cluster with Argo CD, and I want it to hook up to GitHub. And it just automatically... here's a tool where you can find all of the Terraform provider resources for the CDK, for GitHub and Argo CD Helm charts and AWS, for example.
00:54:03:16 - 00:54:27:00
And it can go and find every single resource and glue together a nice little stack that provisions all of it. I think that would be exciting. Yeah. I mean, from my experience, writing an MCP that can retrieve the list of all the CDK constructs that are available, with their schemas or class definitions, and providing that information via an MCP tool to Claude...
00:54:27:06 - 00:54:52:05
And you just describe in words what you want to set up, and I'm sure Claude would do a great job at it. In our experience, we have tried the same with Terraform modules, and it does an exceptional job in such situations. And that's definitely something... I hope we have time today. I wrote a retrieval-augmented generation flow, a RAG flow, that uses the Terraform documentation.
00:54:52:05 - 00:55:26:08
So I've embedded it into a vector database. I'm not using MCP at this point, but I am using the JSII manifest to retrieve all of the class information, embedding it into the vector database and then retrieving it. So I think MCP is a more dynamic way of retrieving context for augmentation. Right. So I think... it's been an interesting one hour with you, Vincent, to say the least. There are a lot of takeaways here, some clarity on how to think about LLMs.
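A toy sketch of that RAG flow: "embed" documentation snippets, then retrieve the closest one for a query. A real setup would use a proper embedding model and a vector database; bag-of-words cosine similarity stands in for both here, and the doc snippets are made up.

```python
import math
from collections import Counter

# Toy retrieval-augmented generation index: embed docs, retrieve the nearest
# one for a natural-language query, then hand that snippet to the LLM.

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = [
    "aws_db_instance: provides an RDS database instance resource",
    "aws_eks_cluster: manages an EKS Kubernetes cluster",
    "helm_release: deploys a Helm chart into a Kubernetes cluster",
]
index = [(doc, embed(doc)) for doc in docs]  # the "vector database"

def retrieve(query: str) -> str:
    qv = embed(query)
    return max(index, key=lambda item: cosine(qv, item[1]))[0]

print(retrieve("how do I create an EKS cluster"))
# aws_eks_cluster: manages an EKS Kubernetes cluster
```

The MCP approach described above replaces this pre-built index with live tool calls, so the model fetches exactly the class information it needs at the moment it needs it.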
00:55:26:10 - 00:55:45:01
In our future lives. It's very clear to me that you should think of it as a junior assistant to you, at least as of today, and model your workflows accordingly. I think that will be great. And several of the pitfalls and caveats that we need to keep in mind and stay disciplined about.
00:55:45:03 - 00:56:03:29
That's also clear now after this conversation. Thanks a lot, Vincent, for your time. Excited to have had you; we will have you again on the podcast sometime later. Yes, thank you. Thank you very much for inviting me. I really enjoyed discussing this as well, and I learned a lot from your experiments, things I hadn't thought about.
00:56:03:29 - 00:56:31:09
Like, I definitely need to explore agentic systems a lot more. I am still very much in control of the AI, and I don't trust it fully yet. But I see a lot of people ahead of me on that. Learning to love the... there's this movie whose name I can't think of right now, but anyway, yes, how to accept AIs and love them is definitely something that I want to improve on.
00:56:31:12 - 00:56:33:22
Yeah. Thank you so much. Thanks again.
00:56:33:23 - 00:56:35:08
Okay.
00:56:35:08 - 00:56:56:21
All right, folks, that's a wrap for today's episode of AI in DevOps. I hope you found it fun and thought-provoking. It's okay if it left you with more questions than answers; that's kind of the point. We are all trying to figure out the role of AI in DevOps. If you have somebody in mind who you think we should invite onto the podcast, and you would like to listen to them, send us their coordinates.
00:56:56:22 - 00:57:08:21
We'll invite them. And wherever you're listening, whether it's Spotify, Apple, or any other platform, just click the subscribe button so that you get notified whenever we have a new guest on the podcast.
