Kubernetes, Container Runtimes and Why They Matter? | E8 | StackState

43 min listen

Zandré Witte - Tech Lead Integrations Team (StackState)

Even though more and more companies are moving towards containerized environments, it can still be a somewhat difficult space to navigate. Therefore, we're excited to have Zandré on the show to talk about Kubernetes and observability for Kubernetes.

Some topics Zandré and Anthony discuss:

  • What does Kubernetes do and why is it better than other solutions out there?

  • Why keeping up with the rate of change in Kubernetes environments is more complex but also more important than ever

  • Container runtimes: what are they, and are they just a buzzword or do they actually matter?

You can find a written transcript of the episode below. Enjoy the recording!

Episode transcript

Zandré (00:00): I mean, I had some very brief nightmarish flashbacks of copying files, PHP backends, over to servers. Yeah. Those days are gone, fortunately.

Annerieke (00:16): Hey there, and welcome to the StackPod. This is a podcast where we talk about all things related to observability, because that's what we do and that's what we're passionate about, but also what it's like to work in the ever-changing, dynamic tech industry. So if you are interested in that, you are definitely in the right place.

Annerieke (00:35): In today’s episode, we’re talking to Zandré Witte. Zandré is a senior software engineer and technical lead of our integrations team here at StackState, and that means he and his team work on integrations of the observability platform. One of the important features of our product is the ability to integrate with Kubernetes, and that means Zandré is the perfect person to talk to about observability for Kubernetes in general, and more specifically, container runtimes. Because first of all, what does that mean, a container runtime? And second of all, why do they matter for observability? Well, without further ado, let’s get into it. 

Anthony (01:17): Hi, my name is Anthony Evans, back again with another episode of The StackPod. Thank you for joining and listening to us today. I'm joined today by a fellow StackState colleague of mine. As you know, we interview people, both internal and external, in and around StackState, tech and IT in general, to get their viewpoints and see what people do. And today we're going to meet with Zandré. Zandré is located out of our European offices and he's part of our development team, but maybe, Zandré, you can do yourself more justice than I can. Maybe give yourself a little bit of an intro, what you do and what you're working on.

Zandré (02:04): Sure. You were doing great, by the way.

Anthony (02:07): Thank you.

Zandré (02:10): No. Yeah. So, I'm Zandré, originally from South Africa, hence a bit of the accent. I have been at StackState now for nearly four years, just coming up on that. Currently I work as the tech lead on the integration/ingestion team. Well, let me maybe backtrack a bit. So, of course, I started out in South Africa, worked for a bunch of startups, and actually started out doing front-end work. Did a few front-end designs even, but progressively moved my way more towards back-end things. I found logs quite interesting to look at. And I made my way all the way to the Netherlands and joined StackState because I really believed in the difference the product was going to make in terms of visualizing environments. Having been in that struggle myself, it's a way to, in one sense, give back to the software community, building tooling that makes our lives better, as part of StackState.

Zandré (03:22): Yeah. So, I lead the integrations team. We touch a large variety of integrations, of course: Kubernetes, OpenShift and AWS being some of them, the more, let's say, interesting and challenging ones. But as part of StackState we also bring all the data points together, and that means we also look at a bunch of legacy systems, to bring you this unified observability interface. So, yeah. That's a brief summary of where I've come from and what I'm doing right now. Yeah.

Anthony (04:01): That's cool. Where in South Africa did you grow up?

Zandré (04:05): So, I was born and raised in Cape Town, in an area they call the Northern suburbs, which is really a working man's place. But, yeah. Beautiful city of course. Beautiful country. Yeah.

Anthony (04:20): I had a lot of roommates in London who were specifically from Cape Town. It would either be Cape Town, a few people from Durban, and then very few people from Johannesburg. And they would get into the UK because of an ancestry visa or something, and so they would use that to just come to London. So, even though I've never been to South Africa, I know all about Boerewors, the sausage, and the 'braais' that they do down there in South Africa.

Zandré (05:44): It's funny that you mentioned Boerewors because the area I grew up in, and I didn't know if this would translate as well, but there's always a joke from the people that live in the suburbs close to Table Mountain, which is the more posh area. They would say there's people from behind the Boerewors curtain, which means like... Yeah. Anyway, so I know it well.

Anthony (06:09): Yeah. It's funny. It's funny. How did you end up in the Netherlands? Did you choose to go there? Did you go because of StackState? How do you end up in the Netherlands?

Zandré (06:21): Yeah. So, much to the disappointment of my Dutch colleagues, the Netherlands was not my top place to go in the world. But I've always wanted to work abroad somewhere. Being carjacked at some point loses its distinctive flavor. No, I'm just kidding. But so I wanted to work abroad, challenge myself, see what more the world has to offer in terms of technology, and just learn from different people. And the timing to come here was just perfect. I had just got married to my wife. We were starting a new life together. And I was contacted by a recruiter to come work for a totally different company actually, but in the Netherlands, and we thought, we're young, just got married, we want to see the world, now is a great time to do it. And so I ended up here in the Netherlands, in Rotterdam, which became my hometown, although it no longer is. I've moved out of the Rotterdam region by two kilometers, so I can no longer say Rotterdam is my hometown, but that's how I ended up here. It's a great country, love it. Yeah. Get to see the world.

Anthony (07:45): Yeah. It's one of those countries where, I think it's one of the easiest, especially if you're an English speaker, I think it's one of the easiest countries to immigrate to, because even if a Dutch person speaks bad English, their English is usually 50 times better than if I was to go to, say, France, where they just refuse to speak English, and you have to adapt to their culture. Whereas I think the Dutch have a little bit more of an embracing culture. They're proud of their background and their language and whatnot, but they don't get defensive when they have to talk English or-

Zandré (08:29): No. Indeed. Yeah.

Anthony (08:32): They're pretty good, I found. In fact, to a fault. If you try and speak Dutch, they will just look at you and say, "What are you doing? We speak English."

Zandré (08:41): Well, the funniest thing is like, I think in some sense, Dutch is the hardest language to learn. Not because the language is hard, but because they will immediately pick up you're not Dutch and convert to English.

Anthony (08:55): Yeah.

Zandré (08:56): And we've had times where all I said was yes, and then they're like, "You're English. All right. Let's go." And I'm like, "I'm trying, man. Give me time." So, it's so funny. But, yeah. It's a cool place.

Anthony (09:10): That's cool. That's cool. But, yeah. No. It's interesting to hear different people's backgrounds, and especially when you take a leap to immigrate. South Africa, it is what it is, but it's a different culture all together.

Zandré (09:26): Yeah.

Anthony (09:29): Yeah. Yeah. Well, I will tell you what, let's talk a little bit about tech.

Zandré (09:31): Let's do.

Anthony (09:31): You've spent a lot of time actually working with Kubernetes, and it's a relatively new technology, but it is a technology that's being adopted across the board. And for people who maybe aren't so technical and want to know what Kubernetes is, how would you describe it when you compare it to, say, a virtualization platform like VMware or all these other technologies that do similar things? What does Kubernetes do and how does it do it better than other solutions that are out there, or the old way of doing it?

Zandré (10:14): Yeah. Well, that is a really tricky question. Also, it is something that we're figuring out over time. So, for the longest time software moved really slowly. And I mean, I don't mean that development was stagnant, I mean more in the sense that change, and adoption of change, was quite slow. But containerization changed the ball game a lot in that respect. Instead of having to set up VMs and spend a bunch of time getting something running, I can have it on my machine in seconds. So, that meant that the rate of development and the rate of change have been increasing exponentially over the last few years. So, I think the gap that Kubernetes saw in the market was not containerization, because that already existed. It was solving the pain points that containerization brings, because with every new inception of technology it's not all roses and fairy tales, there are hard problems to solve.

Zandré (11:29): And I think the gap that they saw is that there was nobody doing a good enough job at orchestrating these types of environments. And so, their main goal was to set up some sort of environment to say, we will orchestrate it for you. You bring us your containers and we will make sure it runs where it should be, and we will take care of the things around that. And I think it has really changed software as an industry. If I think of just my relatively short career within tech, the landscape is vastly different from what it used to be seven, eight years ago. So, it is changing the game, and it still is, and it's quite an interesting space to watch. And of course, in essence, being in this fast-paced, fast-moving environment brings challenges, but it really also creates a bunch of great opportunities, and it allows us to create technology and software that wouldn't have been possible a few years ago. So, it's a really exciting space. Yeah.

Anthony (12:40): The way I often view it is that basically Kubernetes is the next iteration of an operating system, because that's what it is, it's an orchestrator. And it allows you to focus on very specific services within the ecosystem of an application. So, I remember in the old days, working with Java applications that ran within Tomcat, that then connected to a MySQL database, and if we were doing an upgrade, we would literally have to restart Tomcat with a new JAR file, and it would then pick up the JAR file, or whatever, and upgrade.

Anthony (13:28): And then you'd have to worry about propagating the JAR file across the other Apache Tomcat nodes that you've got. Whereas now with Kubernetes, you don't need to do that. So, if you want to make an improvement to a service, like if it's a shopping cart within the amazon.com experience, you can make a change to the shopping cart process without taking down the entire environment. You can bleed people off the old functionality and then iteratively get them onto the new functionality, and it's all seamless. And it doesn't require hours of maintenance and hours of downtime in order to just push a simple code change.
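
To make the shopping-cart example a bit more concrete, here is a minimal sketch of the kind of Kubernetes Deployment that enables this; the service name, image and numbers are hypothetical, not from the episode. The point is only that bumping the image tag triggers a gradual, zero-downtime rollout rather than a restart of the whole environment.

```yaml
# Minimal sketch of a rolling update for a hypothetical "shopping-cart" service.
# Kubernetes replaces pods gradually, so traffic "bleeds" from the old version
# to the new one without taking the whole service down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shopping-cart
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep old pods serving until new ones are ready
      maxSurge: 1         # bring up one extra pod at a time
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
        - name: shopping-cart
          image: registry.example.com/shopping-cart:2.0.0  # bump this tag to roll out a change
          ports:
            - containerPort: 8080
```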

Zandré (14:12): Right. Yeah. I mean, I had some very brief nightmarish flashbacks of copying files, PHP backends, over to servers. Yeah. Those days are gone, fortunately. We live in a world where we don't have to do that anymore. Yeah.

Anthony (14:37): What would you say, do you have any kind of experience in that domain? Like keeping up with the rate of change, and have you experienced any of those challenges yourself?

Zandré (14:47): Well, for us, I mean the challenge is always different. It's not so much that we develop an application for Kubernetes; we have the added complexity that we're actually trying to monitor Kubernetes at the same time. And so, the way you just phrased it now is actually perfect, because it's no longer... If we take your example, you put a new version of your application on Tomcat, the change is immediate, and the impact of it is also immediate. And the effect is known, let's say. Now you have that spanning multiple teams, potentially multiple clouds. One of the teams that never talks to you changes something that shouldn't affect you, but it always does. And so, now the amount of data I need to make a decision about why something went wrong just expanded exponentially.

Zandré (15:52): I need to know what's going on in my AWS, I potentially need to know, let's say, what's going on in Google Cloud, and what happened on Kubernetes. What is this team next to me doing? All of these things create a bunch of extra communication, which can just be a time waste and a time drain. But also, in order to know where things went wrong, I actually need to know what happened. And that now is the complexity. Like, what is going on in my landscape over time, and how do I backtrack to what the cause of this issue was? The other thing is that it's not so immediate. As you said, you're deploying a new version of your checkout service; that's not live everywhere at the point where you press the button. It actually takes time for that change to apply.

Zandré (16:43): And that means that while you're making a change right now, you might only see the effects of that change an hour or two hours down the line. And then I have to backtrack what happened in this very fast-changing landscape. Which one of these changes actually caused the issue that I'm looking at? So, security groups, for instance, are a great example; that has happened many times. Restricting of policies or things like that, where I used to be able to do this, and now I'm no longer able to. What changed? Oh, someone changed an IAM role. So, things like that really become a tricky problem to solve indeed.
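
One small, built-in aid for the "which change caused this?" problem Zandré describes is recording a change cause on each Deployment revision, which `kubectl rollout history` then prints next to each revision. A minimal sketch with a hypothetical checkout service; it only covers Kubernetes rollouts, not the IAM or security-group changes he mentions, which is exactly his point about needing a wider view.

```yaml
# Sketch: annotate each Deployment revision with why it happened. Kubernetes keeps a
# revision history per Deployment, and the kubernetes.io/change-cause annotation is
# what `kubectl rollout history deployment/checkout` shows per revision.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  annotations:
    kubernetes.io/change-cause: "bump checkout to 2.4.1, tighten payment timeout"  # hypothetical text
spec:
  revisionHistoryLimit: 10   # how many old ReplicaSets (revisions) to keep around
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:2.4.1
```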

Anthony (17:23): Yeah. And then, well, we actually spoke with a FinTech company the other day; they told us that all of a sudden their AWS VPC was no longer routing traffic. And they still don't know what happened. They opened a ticket with AWS, and they don't know what happened. And so, when you're missing that play-by-play... I think a lot of tools really focus more on the algorithm of picking up something abnormal, and then once they flag an abnormality, they focus on bringing in as much data as possible. But when you get into Kubernetes, a container that ran two weeks ago, that maybe ran a bunch of code that changed indexes on a database or whatever, all of a sudden could have a bottleneck effect two weeks later when it's Black Friday and everybody's trying to get on and submit things to the shopping cart.

Anthony (18:32): And then all of a sudden it takes 50 seconds to do that in the database, as opposed to a fraction of a second before. And so, without that play-by-play and seeing those changes, you won't have any ability to accurately go back, quickly at least, and see what's going on. You'll just be left with a bunch of queries and a database. And hey, guess what? All of a sudden now you need a full stack developer just to monitor Kubernetes, because they need to understand the database languages of the different database services that are being used. The different programming languages, like Node.js or Java or whatever the developers decide to use as their programming language for their code.

Anthony (19:20): And then you also have to learn Kubernetes, and YAML files and pods and namespaces and privileged access, stuff like that. And ensuring that you're picking the right container image. And that when somebody made their changes, the replica set and the deployment were also updated, to take into consideration the new container with the updated code. It just becomes so complicated. There's a reason why the average Kubernetes developer/administrator is paid 185 grand a year in the States. And I still find it funny, because they ask for 10 years of experience with Kubernetes. Kubernetes is only six years old. So, I don't know what to tell you, but...

Zandré (20:13): The magic of job descriptions and requirements. Yeah. Indeed. Yeah.

Anthony (20:19): Yeah.

Zandré (20:22): I think it is such a... The space is so dynamic and the concepts are great, Kubernetes really did a great job at laying out these different parts. But you don't care about them when everything is working. Like I don't care what's happening in my replica set or whatever when things are working. But the moment things go wrong, I have to know, like you say, I have to know all of those things. And that can be quite a challenge.

Anthony (20:54): Yeah. I think you can tell that Kubernetes was created by developers for developers in a way, because there really isn't a lot of sexiness in the form of a user interface that can allow you to easily navigate all these relationships and all this data. It is very much command line operated. You've got to learn kubectl. Even Docker, which has Kubernetes built into it on Windows and Mac, still doesn't show relationships. And it's still very much a command-line-interface type thing, where you have to build out Dockerfiles and stuff like that, and share local data with the container for whatever reason. And we were talking about this prior to this recording, and you made a good point around the fact that people are now moving towards the functionality of software, as opposed to the sexiness of software.

Zandré (22:07): Yeah.

Anthony (22:08): What do you think are some of the reasons why people really gravitate toward Kubernetes, despite the fact that it doesn't really have an easy-to-use user interface?

Zandré (22:19): Yeah. Well, I think you get a bit of those guarantees. So, of course you have a very fast environment, you have things that can fall over, you have constant changes. But you also have an orchestrator that's really good at its job. And so, to some extent, there's less for you to care about. While we spoke about this bunch of things, so it might sound a bit contradictory, there are a lot of things that Kubernetes does for you and does well. And I think what plays nicely within this space is that, in some sense, the software industry was forced onto this route. Like, we cannot keep running stuff on VMs where we have to do all this manual work. So, this is the inevitable route we're going. Okay, let's start building tooling around that, that helps us look at the logs, that helps us keep track of the metrics, and things like that.

Zandré (23:25): So, I think there's of course Kubernetes, but there's also a great community and community tools around this that help you do your job. It's not perfect, as you mentioned. Let's say something like k9s to look at what's going on in your Kubernetes environment: it does a good enough job to get you away from kubectl, but you sit in that interface and you're still missing a bunch of things. How are my containers related to one another? Which services are they calling? What did the interface I'm looking at today look like yesterday? For those sorts of things I still have to scrape a bunch of places to figure out what is going on. But all of that being said, it's two parts, to answer your question. One is, yeah, there's a bunch of these benefits. The other thing is, I have no option, in some sense. So, yeah.

Anthony (24:32): Well, my introduction to Kubernetes was really through Docker. And when I first was getting into it, I didn't really know what containers were. I just thought that they were basically mini virtual machines. Which is true in a way, but they're a very focused set of components from an operating system. So, let's say you've got a container, it runs on Linux. You can literally strip away all of the capabilities from the operating system that you don't need, and just keep the core capabilities you need to execute your code. Like, whatever version of Python or whatever version of Node.js needs to be in the template, and then it simply runs your code. And then if you put it in a stack, you've all of a sudden got this really efficient way of being able to make changes, but also of stuff spinning up and spinning down with very little compute needed.

Anthony (25:31): If this was all virtualized, we'd be spinning up entire Windows server operating systems. They would be running. And because each one is an entire operating system, there's a security risk: if I can get into that system, I can do whatever I want in there. I can run commands. I can do PowerShell. I can just destroy the data. I can shut off ports, I can do whatever I want. Whereas with the YAML files, you define a very specific template and set of boundaries for the container, so there's an additional security layer, whilst still giving you that flexibility of being able to deploy code without taking down the entire system.
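
As a rough illustration of the "template and set of boundaries" idea, here is a minimal Pod spec that strips privileges and caps resources; the names, image and limits are hypothetical, and a real policy would be tailored to the workload.

```yaml
# Sketch: a Pod whose container runs with reduced privileges and bounded resources,
# so a compromised container has far less room to move than a full VM would.
apiVersion: v1
kind: Pod
metadata:
  name: cart-worker
spec:
  containers:
    - name: worker
      image: registry.example.com/cart-worker:1.3.0
      securityContext:
        runAsNonRoot: true               # refuse to start as root
        allowPrivilegeEscalation: false  # block setuid-style escalation
        readOnlyRootFilesystem: true     # container filesystem is read-only
        capabilities:
          drop: ["ALL"]                  # drop all Linux capabilities
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"
```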

Zandré (26:15): Yeah. Indeed. Yeah. I think in some sense, it's like a toolbox. I recently redid our bathroom, just follow me on this-

Anthony (26:29): Kubernetes? That's great.

Zandré (26:30): So, I just wanted to highlight my pain of doing a bathroom, and then I feel better about myself.

Anthony (26:50): Yeah. Just get that out there so you can tell your wife next time-

Zandré (26:55): Yeah, exactly.

Anthony (26:55): ...I want to pay a contractor next time.

Zandré (27:02): They do it. Yeah.

Anthony (27:02): So, our marketing person specifically was interested in talking about container runtimes. And a lot of people talk about container runtimes and don't really know what that actually means. It's like a buzzword. Could you give me a definition? If somebody says 'container runtime' to you, what does it mean?

Zandré (27:33): Yeah. So, it has now become a hot topic, because now you see, for instance, OpenShift, it's been a while, but people are catching on to it right now. OpenShift has moved to CRI-O, which is a container runtime. Kubernetes has now ditched the Docker runtime in favor of only running containerd. And so, for a long time, Docker was the only container runtime, maybe you can add rkt in there as an exception. But if you were running containers, you were running Docker for the longest time. So, container runtimes, I mean, it was a singular thing. Maybe you could call it 'the container runtime' up until very recently. But actually, the concept has been around for a long time. And so, there's two parts really to it.

Zandré (28:25): The one part is something that's called the OCI, the Open Container Initiative. As containers started developing, and you saw these different flavors of runtimes coming in, they said, okay, hang on. Love the innovation that's going on here, but do we need to define what it means to be a container? What makes up a container? What sort of actions can I do with it? How do I interact with it? And it's been around for, I think, seven or eight years, but they are still defining, what does it mean to publish a container? Something you spoke about earlier was signing packages. That's something containers don't have yet. I'm running a container with code; I don't know, is this the official image? Because I can only look at the repo and just say, looks like it. But that's actually also something that will become a part of the OCI, like signed containers, things like that. So, that's the one side of this. Something can run containers. Okay. Let's first define what containers are.

Zandré (29:34): And then when you come into the Kubernetes space, for a long time Kubernetes talked directly to Docker. And yeah, maybe the important thing to mention here, as we touched on just now, is that Kubernetes is only orchestration, and so it doesn't have the ability to run containers itself. It uses container runtimes to do that. But they had Docker as their container runtime for a long time; it communicated directly with Docker. And as more of these container runtimes started popping up, they did the same thing that the OCI did and said, okay, hang on. We, as Kubernetes, want to give flexibility, because that's part of the buy-in to what Kubernetes is: the fact that I can switch out network layers, I can switch out etcd for a different datastore, things like that. We want people to be able to run Kubernetes, but run it using a container runtime that's different from Docker, if that use case is there. And we can talk about why people do that as well.

Zandré (30:43): But then they defined the CRI, which is the Container Runtime Interface. That's like a general API for: how do I talk to container runtimes? How do I tell them, hey, start container X? That sort of spec. And that started this conversation, I think: the ability to run different container runtimes. Why would I want to do that? What benefits do different container runtimes give me? And I think it really took off, which is why you're now seeing a bunch of these sorts of discussions and people speaking at conferences about container runtimes, where previously that might have been a small crowd. Now a lot of people have an interest in it. Firstly, it's now really easy: there are actually two great specs that you just have to implement and you can create your own container runtime. But also, security becomes more important, and so there's a bunch of reasons to do it. Some of it is that Docker is really heavy.

Zandré (31:56): And so, something like containerd says, well, if you are running Docker, you are running part of us anyway. Might as well ditch Docker, only use us, things like that. And then of course you get some performance improvements, because you're running a smaller subset of what you used to run. And then next to that, there are also security concerns. We spoke about VMs just now and what people can do when they get into your VM. To a lesser extent that is still possible with containers, and so you get things like Kata Containers, gVisor and Firecracker, a bunch of these things, that say, okay, let's isolate even more what people can do once they get into a container. To use the toolbox analogy, let's trim down the toolbox even more, to make sure that if someone gets in here, their attack surface is really small, and so I can feel more at ease about what is going on. A very long-winded answer and a bit of history there, to say that this is why container runtimes have now become a hot topic: because now I have options, where previously it was Docker. So, yeah.
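
For the sandboxed runtimes Zandré mentions, Kubernetes exposes the choice per workload through a RuntimeClass. A minimal sketch, assuming a cluster whose nodes already have gVisor's runsc handler configured in their CRI runtime; the pod and image names are hypothetical.

```yaml
# Sketch: a RuntimeClass whose handler maps to a sandboxed runtime (gVisor's "runsc"),
# plus a Pod that opts into it. The node's CRI runtime must already know the handler.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-job
spec:
  runtimeClassName: gvisor   # run this pod's containers inside the gVisor sandbox
  containers:
    - name: job
      image: registry.example.com/untrusted-job:0.1.0
```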

Anthony (33:12): Yeah. No. That's interesting, because when I see container runtimes, I'm usually drawn to Docker just simply because it's the thing that I use, I'm most comfortable with it. But then having said that, I've used EKS and I've never needed to use a Docker container. I can just spin up using the native protocols. Yeah. No. It's interesting. Yeah. Okay.

Zandré (33:54): Yeah. So, that's a bit of history and also why this discussion now becomes really interesting. And then next to that, of course, whenever you talk to people now about container runtimes, I mean, people that are involved in container runtimes care about it, which of course makes sense. But for the typical user of Kubernetes, it makes no difference. It is such a low level; I just want to run containers and I want this thing to orchestrate them for me. But the thing that is important here is that if you want to start observing what's going on in your platforms, the container runtime is really a key part of that. If you want to run some security checks, for instance, on your containers, then you pick a container runtime that provides you more security, or you use a type of isolator that does that for you. So, the discussion is becoming more important, but also the observability of it is becoming more important.

Zandré (35:02): How do I deal with an ever-evolving landscape, where the number of container runtimes can just grow, let's say, almost infinitely? Ideally not. But right now we have three main ones that are being used in the world: CRI-O, containerd and still Docker. Interestingly enough, the usage of Docker went down by 30% in a year. And so, you're seeing a big impact: because people are starting to use containerd and CRI-O more, the usage of Docker is really going down. And so, for us as an observability platform, we have to keep up with that trend. We have to say, okay, then we talk to CRI-O, then we talk to containerd, and we get you the most up-to-date info about what's going on in your landscape.
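
Which runtime a node actually talks to comes down to a small piece of kubelet configuration pointing at a CRI socket, which is exactly the kind of difference an observability agent has to detect. A rough sketch, assuming a recent Kubernetes version where this lives in the kubelet configuration file (older releases used the --container-runtime-endpoint flag); socket paths vary by distribution.

```yaml
# Sketch: the kubelet's view of "which container runtime am I talking to?"
# Field placement has shifted across Kubernetes versions; paths below are common defaults.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# For CRI-O the endpoint would typically be:
#   unix:///var/run/crio/crio.sock
```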

Anthony (35:55): Yeah. I think what Docker is going to be challenged with, especially when it comes to OpenShift more than anything, is that the OpenShift salespeople are going to tell the people who are operating their systems, when they're trying to sell OpenShift: listen, Docker is completely community driven. You could be using an image that has malicious content on it. Just because you're downloading a Windows container doesn't mean somebody hasn't embedded some kind of malicious activity into that container image. And then on top of that, you've got the asset management side of things. So, if I'm using Windows containers from Docker, that goes onto my Microsoft bill at the end of the year, because every container is effectively a Windows operating system as far as they're concerned.

Anthony (36:54): And so, they're just going to keep charging you every time you spin up a unique container. Whereas with OpenShift, you get the support, it's Red Hat focused. You get the security layer as well, because you are actually building out your own container policies and images. In a way you're limited, because you can't just import a bunch of open stuff. I mean, they're building out their marketplace and things like that, but it's nowhere near as big or as community driven as Docker is.

Zandré (37:30): Right.

Anthony (37:31): It's been around longer.

Zandré (37:33): Yeah. Yeah. And I mean, that sells greatly to the small software company, because it's open source and I get a bunch of support, but for corporate companies, that's not a great selling point. And so something like OpenShift, yeah, it's a perfect market for them. And that's also part of why they ditched Docker in favor of CRI-O, because it just plays nicely with the whole Red Hat enterprise ecosystem.

Anthony (38:07): Yeah. Well this has been a fascinating conversation. We're actually running out of time right now, but I think we spoke about a lot of stuff. I don't know if it got too technical at certain points, or not. But I do really appreciate you taking the time, and at the very least, running through some of this information. Do you have anything to share, any last tidbits or anything that you want to share with the rest of the world?

Zandré (39:44): Putting me on the spot. No. No. I think it's just, it's exciting, this change, but it also presents some real observability challenges, and that is a small plug there for us, but that's really what we're focusing on. In some sense, StackState set out to monitor complex, changing environments, with the goal of being able to monitor these types of things, and the industry is really moving into that space. And so, without a tool like StackState you're going to have a really hard time keeping your eyes on what's going on in your environments. So, yeah. That's why we're here. We're building hard.

Anthony (40:40): Yeah. It is funny how the world has just become more and more change oriented. I remember the traditional days when the banks would do an upgrade, it would be like, hey. Let's get the change approval board all together, let's get the maintenance hours. And I'm like, man. Things are moving away from that. And those traditional ITIL ways of doing things-

Zandré (41:09): Yeah.

Anthony (41:10): ...don't scale, because people are just more expectant than ever of innovation, and easier, newer ways of doing things, more reliable ways of doing things.

Zandré (41:23): Yeah.

Anthony (41:25): And we're getting more and more digital. I mean, like TV streaming.

Zandré (41:33): In some sense, it's quite funny, but this instant gratification culture is driving the change in software as well. People want their new Mac delivered to them this evening. And so, yeah. You need the tools like this to be able to create those sort of platforms.

Anthony (41:52): Yeah. So hopefully this session has helped with that a little bit. But again, thank you so much for your time. Really appreciate it.

Zandré (41:29): Great. Thank you.

Anthony (42:30): Cool. Well, take it easy and I'll speak to you soon, or later today or whenever.

Zandré (42:31): Awesome. Thanks.

Anthony (42:32): Thanks. 

Subscribe to the StackPod on Spotify or Apple Podcasts.

About StackState

StackState’s observability platform is built for the fast-changing container-based world. It is built on top of a one-of-a-kind “time-traveling topology” capability that tracks all dependencies, component lifecycles, and configuration changes in your environments over time. Our powerful 4T data model connects Topology with Telemetry and Traces across Time. If something happens, you can “rewind the movie” of your environment to see exactly what changed in your stack and what effects it had on downstream components.

Curious to learn more? Play in our sandbox environment or sign up for a free, 14-day trial to try out StackState with your own data.

