Transcript
Losio: We are going to discuss modernizing DevOps with AI, boosting productivity, and redefining developer experience.
I would like to clarify what we mean by modernizing DevOps with AI and what we mean by redefining the developer experience. It's nothing new, nothing surprising, to say that generative AI is changing the way we think about software development and developer experience, transforming workflows and changing how our teams approach innovation. In this InfoQ Live roundtable, we are going to explore and discuss the impact of AI-driven tools in modernizing our DevOps practice and, in some ways, redefining our DevEx.
My name is Renato Losio. By day, I'm a cloud architect at Funambol, and I'm an editor here at InfoQ. I'm joined by four experts in the field of generative AI and DevOps. They come from different companies, backgrounds, and sectors, and they will help you understand the best practices and the challenges in modernizing DevOps in our workflows. I would like to give each of them a chance to introduce themselves and share their professional journey in improving developer experience.
Andersson: My name is Jessica. I've been working with platform engineering and developer experience for the last few years, both hands-on and as a leader. I'm also a CNCF ambassador and a speaker at tech conferences. I try to share experiences and anchor them in real-life use cases, because I think we can learn a lot from each other.
Bonzelet: My name is Christian. I work for the Bundesliga, the professional soccer league here in Germany, not as an active player, but behind the scenes on the tech side. I'm an AWS solutions architect, helping all of our teams get the most out of AWS, design solutions, tackle business problems, and be a trusted advisor. In my background, I worked a lot in the media and entertainment industry here in Cologne, with several broadcasting stations. I'm joining this roundtable more from the architect side than the engineering side, but I still have a lot of insights and experience to share on how this shapes my work.
Bajpai: I’m Garima Bajpai. I’m the founder for the DevOps Community of Practice here in Canada, which has several chapters: Toronto, Ottawa, Edmonton, Atlantic provinces, Montreal. I’m also the chair for the ambassador program at the Continuous Delivery Foundation. I have written two books, “Strategizing Continuous Delivery in the Cloud”, and my latest book, “CI/CD Design Patterns”. My call to action is leadership and communities. I hope that through this roundtable, we will bring some more perspective around leadership and open-source communities as we go along.
Verma: This is Shobhit. I'm trained as a statistician, but over the last 20 years or so I've become what I call a full stack entrepreneur: I have degrees in computer science, I've worked in quantitative finance, I did my own startup, and I finally settled on machine learning, data science, and now AI. I work for Harness. At Harness, we are building a couple of AI agents, and one of them competes with GitHub Copilot, helping you write high-quality code faster in the IDE of your choice. I lead that team at Harness, and I'm happy to share what we have observed building a product with AI.
GenAI and DevOps – Today’s Landscape
Losio: The topic of generative AI, and of generative AI applied to development, as you mentioned with Copilot, has been a really big topic in the last couple of years, to the point that you probably have two kinds of practitioners: the ones that are overwhelmed by the options, and the ones that pretend it is not happening. I'd like to dig a bit deeper and see how generative AI is transforming the way teams approach DevOps modernization today. We know the long-term questions. I don't want to discuss whether it's going to replace developers and take our jobs tomorrow. Let's start from today: where are we, and what can we do today with generative AI in terms of DevOps?
Bonzelet: What I see when working with our teams, and also from my own experience, is that a few years back it started with things like chatbots. People were using a side tool next to their IDE, trying to generate infrastructure-as-code templates, code snippets, whatever. It has since turned into a more integrated experience; more of these things are now built into IDEs. My observation is that it goes beyond just generating code. That's why we're talking here about DevOps and, in my understanding, the software development life cycle as a whole. That changes how engineers, and I as a solutions architect, look at these tools.
For me, it’s less about things like generating code, but it’s more like getting me more productive with maybe the things that I’m not good at. I’m not a good coder, but maybe I need some help in writing documentations, or unit tests, or whatever, and I see that it’s broadened a bit what generative AI impacts.
Andersson: One of the things I wrote down and wanted to mention, and I think Christian touched on it as well, is that bringing GenAI into the tools we use for work, like the IDEs, has been key to more people using it seamlessly in their daily work. That is something I have seen being a bit of a game changer. Some tools have been available and integrated longer than others, but we now have a broader range of GenAI tools available inside our IDEs, which means that rather than a one-off "ask a question, then go do some real work again", you get more of a back and forth, integrating with the code you've already written in a way that wasn't possible before.
Then you can also get the effect Christian is talking about, like getting help to write some tests. I see a lot of use around getting started with the boilerplate. I don't know about the rest of you, but a blank page is the worst thing for me, whether it's code, slides, documentation, anything. Getting the boilerplate up, getting something I can then fine-tune, adjust, and work with a little bit, helps me a lot. I see other people being empowered by that as well.
Losio: Given your experience, what are the most common use cases you see today? What's the current status?
Bajpai: I will start with the leadership perspective on generative AI and similar emerging technologies. For every organization, every leader, every individual in the team, the journey will be unique in the beginning. If you think about general, role-based use cases for individuals, people have highlighted, as Jessica rightly pointed out, that it becomes a lot easier when you don't start with a blank sheet. If you are a developer or a tester, automated code generation is one of the key capabilities that individuals are piloting with the help and support of these tools.
Then, if you are an architect, as Christian pointed out, a lot of DevOps architecture and automation opportunities can be broken down into tasks, and you can have a companion for your automation journey. I would also highlight deployment and release strategies. This is another area where generative AI can contribute a lot and where feedback loops can be generated. The release manager's role is getting more complicated with the kinds of releases being handled: a complex mix of subscription-based software, special software, custom software.
Release managers can look at these capabilities for their role and see how much support they can get. Individually, I think there's a lot of potential in using these tools for efficiency. Most organizations right now are piloting these capabilities for efficiency and productivity purposes, but it's not only a productivity tool as such. When you start to explore more and get more mature in this area, it has the potential to disrupt organizational models, for example, or to refactor your legacy code. There's a lot more that can be done in this space going forward.
Losio: I will come back to the topic of refactoring code. I'm curious to know, what are the most common ways you see development teams actually using generative AI today to improve or modernize their DevOps?
Verma: What I see overall is that things are changing a lot. You can't really take a metric from the past and just compare it with today; if you do, you will see a lot of counterintuitive results. If people were previously spending, let's say, 60% of their time on activities considered developer toil, now they may be spending more. That doesn't mean the toil is increasing. It means people are shipping so much more, and faster, that the fun activity of writing code gets done much faster, and the remaining work just happens to be more of the non-fun part of DevOps. That is where the next wave of impact is going to be.
Another thing I've noticed is that coders are getting more confident undertaking projects that involve languages they have less experience in. Most programmers are good programmers: they know computer science very well, they know how to test code and how to create the logic for it. Before GenAI, if you didn't know a language, you didn't want to risk anything. You would just wait: let's hire someone who knows Kotlin before we start a project in Kotlin. What I've seen both internally and externally is that people are now willing to take that risk first before saying no. It's a quick iteration.
They use generative AI and their years of experience to guide it the right way, with the right prompts and the right context, so that generative AI has a chance to create something of value. Then they have very good ways to test it before deciding: do we really need to depend on someone, or are we done? That's a new way of working that I've seen in the industry, and I'm very welcoming of it.
Losio: I can actually see your background as a statistician as soon as you start defining the context with numbers.
GenAI – Prototype vs. Adoption?
So far we've talked about prototyping, and I'm fully behind that idea: there's a new language, I want to try it, I can get started, I can avoid the empty page. But now I want to ship something to production, or do something more significant. Garima mentioned earlier, for example, the topic of upgrading legacy code. I still remember Amazon, and Microsoft as well, taking the same approach: here is a tool that can upgrade all your old Java code, and it took us five hours to do all our old code internally. You see the presentation and you think, that's great. Would you do it internally? When I go to the development team, they say it's not going to happen. I think teams are risk averse. How do you get to the next stage? Do you still see this just as prototyping, or do you already see adoption? What advice would you give?
Bonzelet: I see this not only as a prototyping tool, but as something that companies should take seriously, thinking about how these new tools will shape their workflows, their ecosystems, the way teams work. I don't want to put generative AI on a pedestal or build fairytales around it; I'd rather be realistic. Like any tool we look back on in our engineering practice, it comes with a change in how we work.
Modern IDEs, for example, years back, also changed how we think about boosting our productivity. It's the same here, and I think engineering teams should not be alone in this. They need guidance and also trust from their leadership, from their managers. Therefore, I also advocate playing around with this, gathering real-life experiments, and looking at both the positive and the negative sides of how it shapes your workflows. Playing around is really a central part of this.
Losio: I don’t know if you want to add something or if you have any experience already on the production side. Because when I see those statistics mentioning as well, again, it’s like 92% of the companies are going to use generative AI in their development team in the next one year. That probably is true, but my skeptical point is, what’s the level? How can we get them to the production level, to really a meaningful approach? I’m not saying that prototyping or testing is not good. I love the idea of testing something and play around, but I like to try to see if you have advice or if you have experience of what should we do in terms of adoption, in terms of getting to the next step.
Andersson: Something I came to think of when you asked the question, Christian, was, how willing are people to run this in production? I think that depends a lot on how mature the company is when it comes to continuous delivery, for instance. If they feel confident in the tests, alerts, and monitoring they have set up, then maybe they will feel more confident rolling out automated updates. We've had this discussion before with update bots, such as Renovate Bot, for rolling out new, upgraded versions of your dependencies.
How confident are you having those roll to production without a human reviewing them before they go out? If your answer to that question is "not that confident", you're probably not going to roll AI to production either; you probably want to look at it first. Something I see and hear a lot in the areas where I've been is that companies are a little hesitant about how much of their own code they are willing to send to these generative AIs, and what kind of information they are willing to put into these chatbots, in order to preserve confidentiality, secrets, those kinds of things. A lot of companies are still unsure and worried about this. It has not been fully resolved, and people are not ready to move all in and fully commit. To reach those adoption numbers, that needs to be figured out as well. Not everyone has that ready yet.
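A concrete, if simplified, illustration of the confidentiality concern Andersson raises: before any code context leaves the company network for an external model, it can pass through a redaction step. The patterns below are a hypothetical sketch; a real setup would lean on a dedicated secret scanner such as gitleaks rather than a handful of regexes.

    import re

    # Hypothetical patterns for illustration only; a real deployment would
    # use a dedicated secret scanner rather than a handful of regexes.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
        re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    ]

    def redact(code: str) -> str:
        """Mask likely secrets before the snippet is sent to an external model."""
        for pattern in SECRET_PATTERNS:
            code = pattern.sub("[REDACTED]", code)
        return code

    snippet = 'db_password = "hunter2"\naws_key = "AKIAABCDEFGHIJKLMNOP"'
    print(redact(snippet))  # the password assignment and the key are masked

None of this resolves the policy questions the panel raises, but it is the kind of guardrail that turns "sending context to a chatbot" into a reviewable, auditable step instead of an ad hoc one.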
Bonzelet: Personally, I would like to see less news like "X percent of companies use generative AI in production". I would like to read more about the impact. That companies use generative AI in the background is great, but I'm missing the "and with this, they improved their lead time by 12%, or reduced mean time to repair by some percentage". That makes the story whole, and it gives you an idea of where companies are focusing. Was it code generation that impacted lead time? Was it unit test generation? Or was it operational things like incident response optimization? That's something every company should keep in mind when they adopt generative AI: what do they actually want to improve?
Bajpai: Christian, you rightly pointed out that the feedback loop is very important. One such feedback loop we rely on from a DevOps practitioner's perspective is the DORA report. Last year's DORA report highlighted that over-indulgence in AI is hurting developer productivity and performance. There is a substantial amount of systematic thinking in how you onboard these new emerging technology capabilities. We need some new players, like Shobhit mentioned with Harness, for example. I'm happy to see those kinds of initiatives coming up, because it gives you a choice, and the chance to do some due diligence at the leadership level about which capability, which tool, which skill is needed for your unique journey towards this whole emerging technology integration.
GenAI Development, and Pushing to Prod, in a Confident Way
Losio: I would like to go back to something Jessica mentioned about CI/CD and how confident a company is. I was thinking about testing as well: do you have any tool or any idea for how to push someone forward? Can I do A/B testing? How do you combine generative AI development with pushing to production in a confident way?
Verma: I think there are two independent aspects of testing. One is software testing itself. Through generative AI, you can write better tests, you can write more tests, you can have coverage where you did not have it before. That's always very helpful because, as we discussed earlier, you can do a lot of innovation as long as you have a good handle on whether it's breaking things or not, whether it's stable or not. It reminds me of a video I saw of a young boy trying to climb something. The parent standing behind him just let the kid try again and again. You can see in the video that once the kid could land back safely without falling, the parent was ok.
You can try it 100 times, it doesn't matter, and the kid was successful in the end. That's the point: if you can fall safely, you can try a lot of different things and learn a lot of different things. The same is true with AI. If you have good test generation capabilities, good tests in general, good coverage, then you can take more risks with AI, and you will know whether you're going in the right direction or not. More often than not, you can do an analysis after the fact to understand what happened, what approach the developers took, what kind of approach is good for your specific organization and your code, and what process needs to evolve further. Those discussions are more qualitative in nature, but they will have more impact than anything else.
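Verma's "fall safely" analogy translates directly into practice: a test safety net is what lets a team take risks with AI-generated changes. As a minimal sketch of the test-generation side, the snippet below asks a model to draft pytest cases that a human then reviews and runs. It assumes the OpenAI Python client, and the model name is a placeholder for whatever your organization has approved.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    source = '''
    def parse_semver(version: str) -> tuple[int, int, int]:
        major, minor, patch = version.split(".")
        return int(major), int(minor), int(patch)
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write pytest unit tests, including edge cases."},
            {"role": "user",
             "content": f"Write pytest tests for this function:\n{source}"},
        ],
    )

    # The draft is a starting point, not a verdict: read it, run it, and
    # check what it covers (e.g. inputs like "1.2" or "a.b.c") first.
    print(response.choices[0].message.content)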
Common Challenges Devs Face When Using AI Productivity Tools
Losio: I was wondering if anyone has specific advice on, I wouldn't say the negative part, but the most common challenges developers face today when using AI productivity tools, and how we can hopefully overcome them. What is the most challenging part? We all say the tools are there, you can just use them. But apart from the fact that in a large organization you might have, as Jessica mentioned before, many constraints on what you can and cannot use, when you can use them, what are the challenges? Beyond prompt engineering, what's the next step?
Bonzelet: I see two challenges. One is more organizational: it's about compliance and the confidence that your source code is protected. One of our most important assets is the code we use to build our solutions, and we want to ensure it's protected. Companies want to ensure that the code and the instructions they put into these tools, and how they interact with them, are not getting shared, not getting used for the service's own purposes, not getting leaked. There are a lot of compliance aspects here that I encourage every company to take seriously.
The second challenge is more general to AI: we as engineers like deterministic behavior, and AI is not deterministic. Whenever I put something into a chatbot, or have a comment block that generates code, today I get one answer and tomorrow I get a different answer, because the context, the surroundings the AI uses, might change over time. It's not really deterministic, and it's not really visible to me why things behave differently from day to day. There are some solutions, but those are the challenges I see.
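The variance Bonzelet describes can be reduced, though not eliminated, at the API level. As a sketch, again assuming the OpenAI Python client: a temperature of zero makes sampling near-greedy, and on supported models a seed requests best-effort reproducibility. The provider's backend can still change underneath you, which is what the fingerprint check below surfaces.

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0,        # near-greedy decoding, less variance
            seed=42,              # best-effort reproducibility, not a guarantee
            messages=[{"role": "user", "content": prompt}],
        )
        # If this fingerprint changes between runs, the backend changed,
        # and identical prompts may still produce different answers.
        print(response.system_fingerprint)
        return response.choices[0].message.content

    print(ask("Write a one-line bash command that lists the 5 largest files."))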
Bajpai: I bring the community perspective here. I think the debate between AI and open source has not settled. There's a lot of action in the open-source community on what an open-source AI definition should look like. There is an initiative from the OSI, for example, which has attempted to define this, and there are some evolutionary concepts in the making. People are also talking about fair source and ethical source. It's also a challenge from a leadership perspective when you consider building versus buying these kinds of capabilities. You have to have substantial trust in the tooling, in the partnership, in the co-creation that is happening with generative AI. That's another challenge.
Lastly, I would also point out that this whole space is dominated by a few tech giants. This also raises questions about how to navigate the space and how to build competitive differentiation from a leadership perspective. There's a lot happening as we speak. As I said, the individual and the organizational journey have to go hand in hand, and both sides of the coin have to come together. Those are some other aspects of this.
Best Practices in Modernizing DevOps
Losio: There’s actually one thing that Christian mentioned as well about the value of the code for the company, whatever. I was thinking more so in the space of DevOps, yes, it’s true. I was thinking, at the end of the day, if I have to do something in production or not in production, but I have to write my own Terraform, or CloudFormation, or Bash script, whatever I’m using, as a developer now, the chance to have some kind of generative AI assistant is always there. Even if you tell me that as a company, you’re not providing me a solution.
Somehow there’s even a higher risk that if I need help to say that this is broken, doesn’t work, either I use ChatGPT, I use GenAI, whatever I’m using, I might leak my code intentionally, or not intentionally, because the assistants are there, they’re on my mobile phone, they’re there everywhere. We tend to think about Copilot and alternative to Copilot, but there are different ways that I can interact. How can I teach a team, what should be the approach for a team to get help in modernizing their DevOps, but at the same time, what are the best practices? Do you have any advice in this space, or what’s the approach?
Verma: I want to share one thing here. Generative AI is still a fairly recent technology. Large players may have adopted parts of it, but many companies are a little late to the game in terms of adoption. If you look at the individuals themselves, though, they have been on it, because it's so much fun to play with ChatGPT, to play with different prompts, to learn what AI's capabilities are. Individuals are more aware of how to use these AI technologies, even if their organizations have not given them explicit permission. What I see very often, especially externally, is that people are going to use generative AI if it's going to make their work life easier, make them more productive, more competitive, however you want to take it.
Organizations need to understand this and create a path that is secure for these developers, rather than relying on a policy that says you're not supposed to use external tools. That is one of the big hurdles I see in adoption right now: organizations are a little late in realizing that this is already happening. It's not a button you have to flip; it's already happening. You just need to acknowledge it and create a path internally with less friction.
Bajpai: I think there are two parts to this problem. The first is awareness: how aware are you of your development posture? It's a double-edged sword; all these tools can enhance the portfolio of bad actors too. Awareness, education, and talking about these revolutionary or evolutionary changes is a must. The second aspect is that it's also creating new roles in the organization: AI moderators, machine learning engineers, even compliance engineers. These are real new roles that need to be in place to build the trust we need in these tools. It's part of leadership thinking: how do you bring in these roles, invest in these capabilities, and ensure people are behind this, so people understand this is needed for your secure code to scale.
Andersson: My background is platform engineering and enablement, trying to make it easy for development teams to do DevOps fully and own the full lifecycle of their applications. That's what I've been working with for a long time. When we talk about how teams can integrate AI, I think a lot about making it easy to do the right thing. If we want teams to work with AI, we must build and enable it in such a way that it's easy for them to use it safely, securely, and in alignment with how the organization wants it used. Rules, regulations, and policies, sometimes you need them for certifications and compliance.
In my experience, people tend to be very good at working with, and around, policies. If you just make it easy for them to do the right thing, that's so much better. I have absolutely no idea what the platform engineering or engineering enablement of generative AI will look like, because I don't think that is a solved problem yet. I'm very excited to follow along and see what companies will come up with in the next year or so.
Bonzelet: On the broader adoption of these new tools, I know from my own experience and from discussions with our teams: when I put myself in the shoes of an engineer years back, I was under pressure to deliver high-quality features in shorter times. When somebody now says, there's this new tool, there's this new AI, try it out, I was always hesitant as an engineer about new tools, because I don't know: will this boost my productivity, or is it a risk for me? I still have the pressure to deliver high-quality code in shorter times, and I have features to ship. What a lot of teams and companies miss is training, and giving engineers time to play around, and accepting that whenever we try out new tools, it will hurt our velocity in the first sprints or months, until we really gain traction.
The most daunting task in generative AI is prompt engineering. The tools are getting better at reducing the prompt engineering burden, with things like agents. Still, it gets frustrating very fast if I sit in front of a chat and don't know what to write or how to phrase my words to get a good output. Training, and giving engineers time and confidence in a safe space to try these things out, is an essential aspect.
Losio: You’re basically saying that, yes, my productivity will probably be better in the long term, but short term, even with generative AI, I have to consider that any tool, there’s a learning phase and my productivity might even go down for a while. Actually, most likely it’s going to go down.
Bonzelet: We need to protect our individual engineers from this. Productivity will go up, but we need to put it in context: what kind of productivity? Maybe my coding productivity won't increase because I'm the world's best Kotlin developer ever, I know all the API specifications, I have the muscle memory and I'm very fast, but maybe productivity in other areas will increase. That's the broader view: we need to put productivity in context.
Coding vs. Tedious, Undifferentiated Tasks
Losio: That brings me to an announcement I read that sparked some debate, which I'd like to share to get your take. Every time we discuss generative AI, we discuss how many hours we save. One claim I think is very significant in the DevOps space is that we spend just one hour per day coding, and most of the rest of the time on what were called, I think, "tedious, undifferentiated tasks": learning codebases, writing and reviewing documentation, testing, management, deployment, troubleshooting issues, or finding and fixing vulnerabilities. Here there are two camps. One says, yes, those really are undifferentiated tasks that shouldn't be part of DevOps work, and you're wasting time doing them. The other says that's the essence of the work: it's not that you're only spending one hour coding, that is your working challenge. I don't know where you sit on that?
Bajpai: I think the big question at hand is, once you generate that code, how do you retain the quality of code influenced and developed by generative AI without the documentation, without the work behind the scenes? That is where a lot of code maintainability and readability metrics come into play. How do you build confidence in the developer ecosystem that these tools are for good? It will take time, of course, as Christian mentioned, and as Jessica pointed out, we should make it easy for developers to play around with and adopt these tools.
In the long run, we also need to look at good quality metrics and take a more data-driven approach: how is this impacting your code maintainability, readability, and quality, and what more needs to be done? The applications and tools coming up have an opportunity to help in this direction too. I was reading one of the leading developer surveys, the Stack Overflow report, and it says that only 3% of highly skilled developers trust these capabilities. There is a lot of work to be done in this dimension. We have just started. A lot of work has to be done by the communities and the developers themselves to ensure we move the needle in the right direction.
Verma: I was fascinated by this discussion and was trying to predict what will happen in the future. When we talk about code documentation, maintainability, readability, and things like that, these are not necessarily easy to measure. You can have some lagging metrics, like how often the code changes or how much time developers spend making a change, and they help us measure something.
In the future, it’s entirely possible that we have additional metrics completely from the perspective of AI, and the metrics like AI maintainability, AI readability, and the documentation that AI understands. We might be faced with a choice where we can write a piece of code in a way that maybe is human readable because it’s a little bit more complex, but then there would be a choice to write it in a way where AI understands it. We give the scaffolding as part of the documentation so that the AI can understand it also.
Then, together with that scaffolding, maintainability is handed off more to AI. It's just a fascination, but it's possible that those kinds of decisions will become the reality of tomorrow. It's good to be in this field and see all these things in action. Also, we are talking about DevOps and development as independent roles, but the future might be one where people wear both hats: you do the DevOps, you do some development, you own the problem end-to-end. It's hard to imagine that world today, but tomorrow it might be a possibility. It's easy to make predictions when there's no money on the table, but I'm looking forward to seeing what happens next.
Bajpai: The Eureka moment for product companies, how the large-scale adoption and productization of these tools and applications will happen, is an opportunity space. As you mentioned, Shobhit, a lot can happen in the next two or three years. The other part we are overlooking is the services model: how to go to market with this kind of emerging technology integration is also a wide-open space. Services companies will kick in here and try to build some kind of business proposition on top of this. That's another perspective on what could happen. Nobody has a crystal ball, but these are a few key takeaways from the discussion we are having right now.
The Role of a DevOps engineer, in a GenAI World
Losio: I was actually thinking about the future. What do you see as the role of the DevOps engineer going forward? I know we're still coding, but think of generative AI as a tool that is going to take over, or at least help with, much of the automation, coding, and monitoring. As a practitioner myself, how is it going to change my job, not in the next 10 years, but in the next couple of years, if I don't want to look completely obsolete along with my tech stack?
Bonzelet: Helping is a better word than taking over, because it's really about considering this as an assistant that helps you boost productivity, not something that takes over or is better than you. It's not a competition; it's a collaboration with these tools. The role of an engineer will change in the sense that we will learn how to deal with these tools. I also think the market of vendors providing CI/CD solutions will understand the point we made earlier in the discussion: deep, high-quality integrations are essential so that engineers can focus on the things that matter.
Also, coming back to the earlier question: I have a hard time calling things like learning codebases and testing "undifferentiated tasks". There was an interesting comment from Toby, and I see this myself: writing documentation and learning a new codebase are part of my engineering role, and a very essential part. I know that when I read a codebase, it does not give my e-commerce company a thousand more in revenue right now, but in the long term it's very important, because then I can build better features. Roles will change, but it's not a competition. It's really about learning how to deal with these new tools and finding out where the tipping points are: where do I actually need these tools, and where do I not?
Practical and Immediate Action Items, with GenAI
Losio: On that note, something I want to ask about again is, not the negative part, but, as I said at the beginning, there are two camps: the ones that are just jumping in, and the ones that think it's not going to happen, or say, I'll wait. One of the problems in the last couple of years has been that everyone wants to be in this space. In the DevOps world, as in any similar world, AI has been added as an extra buzzword.
I’m not saying as a marketing activity, but from a practitioner point of view, from many conferences, many events, many tools, we’re like, is that just really trying to promote something or there’s something I can do today? I was wondering, going basically really to the core part of like, today I joined this roundtable. I’m excited about the opportunity of generative AI in my DevOps space. What tool should I embrace? What can I do, starting tomorrow? Should I just get the license of a generative AI code assistant? Should I start to use a cloud provider? One of those tools that they pretend to help you in your journey. Do you want to give some advice from a platform engineering point of view?
Andersson: I haven’t seen everything, but the most major AI tools I’ve seen right now that is available, that is clear to you that it is actually some AI, is the chat and code assistants, and those kinds of things. Integrating that into your IDE and starting to play around with that is probably the place where I would start. I would love to see some more real AI in some of the tools that we use for managing our applications, especially around monitoring and observability and alerting. I would love to see something. I haven’t seen anything that is clearly a game changer yet in the same way as I’ve seen those code assistants haven’t been recently, from my point of view.
Verma: I don’t think there’s a developer here who hasn’t tried AI, but the more you try the base models yourself instead of a product, because products tend to do a lot of things in the background and you may or may not always know what they’re doing. The best way to even understand what is possible today using AI is to directly chat with these LLMs, like foundation models from OpenAI, Anthropic, whatever your choice is. Try to give them the context yourself and prompt yourself and see how things are changing. That way you would know exactly what the limit is today for AI.
Then evaluating tools becomes a lot easier, because you develop a very good sense of what is or should be possible, and if a specific tool is not able to do it, you know it's not a limitation of AI but of that product. There are so many products selling AI that it's hard to differentiate just by trying them all. When you start playing yourself, you develop a lot of intuition about what's possible. I think that's more important than anything else.
How to Choose an AI Tool that’s the Right Fit
Losio: I have a follow-up question on this, because it's one of the challenges I've had myself. You mentioned OpenAI or Anthropic; one of the problems I've had at times is being overwhelmed by the options. I'm coming from the AWS world, and on Bedrock you have 10,000 different models. Where should I start? I know it probably doesn't matter where I start. If you tell me Copilot, or a specific tool in my developer experience, it's clear to me how to start; it's already there. But how do I test? How do I choose something? How do I prototype with it? How do I compare results?
Andersson: The one that you use is the best one. It doesn't have to be the most advanced one. It's the one that you use. If you're limited by requirements or regulations at your company, pick the one that they provide.
Losio: Don’t overthink about that part, you’re saying.
Andersson: Get started is more important. Definitely.
The Role of AI, for a Senior DevOps Engineer
Losio: In my day-to-day tasks where I try to use generative AI, I'm thinking of cases like Terraform, CloudFormation, Bash scripts, whatever it is. I don't know if I should see it more as another developer, a co-worker to brainstorm with before I write my code, or as someone I delegate some of the easier parts to. What should the direction be, especially for a senior, or supposedly senior, DevOps engineer?
Bonzelet: In general, I've observed when trying out tools in this area that it changed a lot how I work with them when I treat them like a human. What I mean is: imagine the chatbot is your company chat, and you're interacting with a colleague working from home. If you throw out just a sentence like "build feature A, B, C" and don't provide much context, it's likely to be misunderstood. If you write down what you want the AI to do the same way you would give context to a colleague who doesn't know the codebase, you get into a different kind of thinking and mental model: I need to spend more time writing good, precise instructions, instead of just throwing out "please create a Python function that does whatever". There are easy scenarios like summing up two numbers, but in general the features you develop are more complex than that.
It changes a lot if you put yourself in the mindset of having a conversation with somebody, rather than throwing poorly described instructions at the tool. I've read blog posts that consider these AI systems to be roughly at the level of a junior developer. I haven't really made up my mind about what level it is; I try to figure out what productivity gains I want to get from it. And reflecting on the earlier question: like Jessica said, get started as early as possible. Choose one problem that you want to solve and challenge the tool to see whether it helps you or not. A practical use case is really essential.
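A trivial way to see the difference Bonzelet describes is to put the two prompting styles side by side. Everything project-specific below is invented for illustration; the point is only the amount of context the model receives.

    # Two prompts for the same task; all names and conventions are invented.
    bare_prompt = "Add retry logic to the upload function."

    context = """
    Project: internal media-ingest service (Python 3.11, boto3).
    Conventions: exponential backoff, max 5 attempts, logging via structlog.
    Relevant code: upload_asset() in ingest/s3.py, which raises ClientError.
    Constraint: uploads are keyed by content hash and must stay idempotent.
    """

    contextual_prompt = (
        f"{context}\n"
        "Task: add retry logic to upload_asset(). Retry only on throttling "
        "errors, reuse an existing backoff helper if there is one, and state "
        "any assumption you make instead of guessing."
    )

    # The first prompt invites a generic, possibly wrong answer; the second
    # reads like a hand-off to a colleague who has never seen the codebase.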
Verma: Yes, absolutely agree. I want to share one more thing that I learned during my startup days. It was a failed startup, so there were a lot of lessons learned. One key lesson was: you don't hire if you cannot manage. It seems very simple, but when you're a startup, a small company, an entrepreneur, you have to do a lot of work yourself, and you will be tempted to hire for tasks you don't know or aren't an expert in. Maybe you don't know marketing, but you end up hiring a marketer. Or maybe you don't know how to code something specific, and you hire for that. That's all good as long as you know how to manage: you need to know how much effort the work takes, whether it is possible or not, and, when things go astray in velocity or quality, you need to know it's happening instead of not knowing about it at all.
The same is true for today’s agents or today’s AI. You can hire the AI to do something only if you know how to manage that AI. In other words, if you’re asking it to generate some code, you need to be able to have some code or generate some code to be able to test that ability as well. That way you’re managing that AI. If you are a good manager of AI, you can hire as much AI as you want.
Bajpai: Every individual and company will have a unique journey, and it depends on your risk appetite as well. Leaders in large organizations are looking at slowing down the hiring race for software developers. What are your low-hanging fruits? Identify them and experiment with things that are non-mission-critical, like generating application infrastructure as code, or templates, or doing log analysis. These are things you can do right away. Then you can start to think about modeling your workforce for the future.
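Log analysis is a good example of the non-mission-critical starting point Bajpai describes, because the worst case is a useless summary. A minimal sketch, assuming the OpenAI Python client; the file path and model name are placeholders, and anything sent out should first pass through the kind of redaction discussed earlier.

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    # Keep only the tail of the log: most build failures surface near the
    # end, and it keeps the prompt small. Scrub secrets before sending.
    log_tail = "\n".join(Path("build.log").read_text().splitlines()[-200:])

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "This CI build failed. Identify the most likely "
                       f"root cause and suggest one next step:\n{log_tail}",
        }],
    )
    print(response.choices[0].message.content)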
Bonzelet: We've barely discussed automation: having this run repeatedly in our toolchains, like log analysis in our CI/CD every time, or on every code review. That's also a very central aspect.
Andersson: I wanted to note, because we have used terms like experiment, try out, and test a lot in this session, and I saw a question asking, do you think GenAI is production-ready? Can you use it for real cases, or is it only for MVPs and testing things out? I wouldn't mind having generated code in production, but in my current situation, with our maturity model for continuous deployments, I would have it peer-reviewed by a developer and integrated into an existing codebase. I read somewhere a question like: do you see it as a universal software developer, as your co-worker, as something that does the cleaning for you, or as something you have to monitor every step of the way? I think you can apply generative AI for code, but you will probably review it, or have some guardrails for the changes it makes, before or while pushing it to production. I think it's production-ready.
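The guardrails Andersson mentions do not have to be sophisticated to be useful. Below is a hypothetical sketch of a pre-review gate for AI-authored changes; the thresholds, the protected paths, and the test command are all invented for illustration, not a standard.

    import subprocess

    MAX_CHANGED_LINES = 300  # oversized AI diffs deserve to be split up
    PROTECTED_PATHS = ("infra/", "secrets/", ".github/")

    def gate(changed_files: dict[str, int]) -> bool:
        """changed_files maps path -> lines changed. True means the change
        may proceed to human review; False blocks it outright."""
        if sum(changed_files.values()) > MAX_CHANGED_LINES:
            print("Blocked: diff too large for a single review.")
            return False
        touched = [p for p in changed_files if p.startswith(PROTECTED_PATHS)]
        if touched:
            print(f"Blocked: protected paths touched: {touched}")
            return False
        # The existing test suite remains the real safety net.
        return subprocess.run(["pytest", "-q"]).returncode == 0

    if gate({"app/handlers.py": 42, "tests/test_handlers.py": 60}):
        print("OK: hand over to the human in the lead for review.")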
Bonzelet: I like how we phrase it in our company: with AI in general, we talk about the human in the lead, not the human in the loop, because it emphasizes that you are in the front position to really do these reviews. You're not just somebody in the loop; you are actually the one taking the decision.
Coding and Undifferentiated Tasks
Losio: I just want to answer Tobias on the idea that everything except coding is boring and a waste of time. When I made that statement, I was referring to an announcement from a cloud provider, AWS, which made the point maybe a bit too aggressively: to stress that developers spend just one hour a day coding, it defined everything else not as waste, but as undifferentiated tasks. That's what I used to start the conversation. I don't agree with the statement, but that's where it was coming from.
The Maturity of AI Tools
I see a couple of questions about whether the AI tools are mature enough. It depends on the tool. Yes, try them out, maybe not on your most critical project, and start from there.
Verma: We have very concrete examples where humans were stuck debugging something and AI came to their aid, not only fixing the code but also explaining exactly what it was doing. The human in the lead could then verify whether that was the right thing to do or not. I think the human in the lead is the most important part. Taking help from AI will show a lot of value if you're willing to experiment.