Generative AI will eventually impact the entire DevOps life cycle, from plan to operate. I started as a developer but have been a product manager for most of my career; for me, the ‘Holy Grail of DevOps’ would be a world in which product managers (PMs) and business analysts (BAs) could define the future state of a business process and press a button to deliver it, without any developers, designers or testers involved.
This dream is not practical in the near term, nor is it really desirable in the long term. PMs and BAs are good at understanding the needs of users and translating them into features, but they are not interaction designers. AI may be able to generate code and tests, but designing the architecture for both will still require a deep understanding of the system to structure the solution properly, especially in enterprise SaaS development. So my dream is a team in which BAs define the changes and a small group of very talented architects and interaction designers realize them in 10% of the time it takes today, without a large team to implement the details. This is similar to what has happened in manufacturing, where robots and numerically controlled machines do the heavy lifting with the help of operators.
Planning is Key
When generative AI is actually building the solution, proper planning becomes the key to success. A well-documented, well-thought-out plan will enable the AI to do most of the heavy lifting. So the dream team's focus should be on getting the plan right and documenting it at just the right level of detail. As my first Agile coach taught me 20 years ago: when it comes to documentation, you need just enough, just in time.
In enterprise SaaS platforms like Salesforce.com and ServiceNow, the business starts with an initial implementation and then incrementally customizes the software to better meet its needs over time. So, instead of thinking about building a system from scratch, we must think of digital transformation as the continuous improvement of an existing system over time. That means the AI must start with a good model of the:
● Existing business processes,
● Metadata configuration of the system and
● Dependencies between the settings.
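As a purely illustrative sketch (the structures and names here are my own assumptions, not any vendor's metadata API), that model could be as simple as a set of settings and processes the AI can traverse to answer "what breaks if I change this?":

```python
from dataclasses import dataclass, field

@dataclass
class MetadataSetting:
    """One configuration item, e.g. a custom field, workflow rule or permission."""
    name: str
    setting_type: str                                    # e.g. "custom_field"
    depends_on: list[str] = field(default_factory=list)  # names of other settings

@dataclass
class BusinessProcess:
    """A business process and the settings it relies on."""
    name: str
    settings_used: list[str]

def impacted_processes(changed: str,
                       processes: list[BusinessProcess],
                       settings: list[MetadataSetting]) -> list[str]:
    """Processes touched by changing one setting, directly or via one level of dependency."""
    affected = {changed} | {s.name for s in settings if changed in s.depends_on}
    return [p.name for p in processes if affected & set(p.settings_used)]
```

A real platform model would be far richer, but even this shape is enough for the impact analysis described below.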
Development in the New World
What does this mean in practice? Let’s look at a day in the life of the dream team.
The BA will start by telling the AI what changes are desired at a high level. The AI will help refine the requirements by filling in missing details. This will happen through an iterative process of expanding, refining and summarizing until the requirements are clearly defined. Just as lint tools check source code today, the AI will examine the requirements to ensure the user stories are complete, with acceptance criteria well defined. During this process, the AI will do a detailed dependency and impact analysis to provide a heads-up to the BA when there are any unwanted side effects.
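To make the lint analogy concrete, here is a hypothetical completeness check on a user story; the fields and rules are assumptions for illustration, not a real tool:

```python
REQUIRED_FIELDS = ("as_a", "i_want", "so_that", "acceptance_criteria")

def lint_story(story: dict) -> list[str]:
    """Return a list of problems with a user story; an empty list means it passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not story.get(f)]
    for i, criterion in enumerate(story.get("acceptance_criteria", []), start=1):
        if "given" not in criterion.lower() or "then" not in criterion.lower():
            problems.append(f"acceptance criterion {i} is not in given/when/then form")
    return problems

draft = {
    "as_a": "billing admin",
    "i_want": "to bulk-approve credit memos",
    "so_that": "month-end close is faster",
    "acceptance_criteria": ["Approving 50 memos takes one click"],
}
print(lint_story(draft))  # flags the criterion that lacks given/when/then structure
```

The difference is that the AI would not just flag the gaps; it would propose wording to fill them as part of the conversation.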
This refinement session will be more of a conversation between the BA and the AI than a one-sided writing session. After several iterations, a complete set of user stories will be created with all of the necessary details spelled out.
Once the requirements are complete, the BA will be joined by a user interaction designer and software architect.
The interaction designer will define the interaction spec through a combination of natural language description of the new processes and a functional prototype. Both of these elements will be created and refined through interactions with the generative AI. Prototypes will no longer be simple graphic artifacts to be coded by front-end engineers. The AI will generate a working prototype in the framework of your choice on demand through a combination of verbal requests and traditional point-and-click.
The architect will review the plan and begin designing the solution's architecture, which will result in a natural-language software spec that documents the data model changes and high-level specifications for the classes that make up the solution. It is likely that the data model changes will be created in a dev environment on demand, with the ERD (entity-relationship diagram) updated to reflect the new model.
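As an illustration of what "created in a dev environment on demand" could look like, a data model change from such a spec might be expressed declaratively and applied to a sandbox; the format, object and field names here are invented, not Salesforce or ServiceNow artifacts:

```python
# Hypothetical change spec the AI could apply to a dev sandbox and then use to
# regenerate the ERD; all names are illustrative assumptions.
data_model_change = {
    "object": "Invoice__c",
    "add_fields": [
        {"name": "ApprovalStatus__c", "type": "Picklist",
         "values": ["Draft", "Submitted", "Approved"]},
        {"name": "ApprovedBy__c", "type": "Lookup", "references": "User"},
    ],
    "regenerate_erd": True,
}
```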
Depending on the type of changes, the architect and interaction designer may collaborate and iterate on the specs until the “how” is well defined. When the design is ready, the dream team will meet to review the design and sign off on the approach.
Generate and Iterate
Once the plan is approved, it's time for the AI to generate a first-pass solution. We hope and expect that generating the solution will take only minutes. In this scenario, the dream team can take a coffee break and then come back to the meeting room to review the working solution.
There is nothing like playing with the actual solution to remind you of the edge cases you forgot or to realize how tedious it is to actually use a new feature. But that’s okay because the team can quickly update the requirements to account for these changes and push the generate button again.
In fact, they can tinker to their heart’s content. But like all great artists, they must decide when it is good enough to ship.
In the near term, we will not be able to blindly rely on generative AI, so the architects will have to do code reviews, just as they do today with junior developers. Likewise, the AI may not understand all of the possible dependencies in a system, so there will be a transition period during which the process is semi-automatic.
Eventually, the notion of a two-week sprint will seem anachronistic and a daily cycle of define and release to production will be commonplace. Members of the dream team will spend more time worrying about the high-level usability and effectiveness of their designs and less time struggling with actual implementation, which should make their jobs much more rewarding and the work product much better. I am looking forward to it.