Say ‘No’ to ‘NoOps’: Why We Can’t Afford to Let AI Run Wild

We have moved beyond the initial hype surrounding AI. Industries and roles across the board now recognize the necessity of integrating this technology into their workflows, leading to a substantial increase in adoption. Just last year, the global market for AI was valued at nearly $200 billion, and it is now expected to grow at an annual rate of 40% through the end of the decade.

This growth is prominent among software development teams. Research shows that 70% of software teams have adopted AI, and 30% have implemented a strategy around it. These teams see substantial gains in code-writing velocity, and the numbers bear this out: teams with an effective AI adoption strategy report development-speed increases of 250%.

AI technology is not perfect; it has limits and drawbacks. For example, while AI may speed up code writing, it can also increase code churn, and twice as much code means the possibility of twice as many mistakes. Popular developer platform GitHub recommends taking precautions when using code not written by a human. Whether code is written by humans or GenAI, checking it for quality is critical (a clean code approach).

Despite this reality, some envision a near future of ‘NoOps,’ where the development community can fully depend on AI through advanced automation, self-healing systems and intelligent monitoring. That may one day become possible to an extent as technology continues to advance and evolve, but our current environment is not yet equipped for this.

Right now, AI is not reliable or accurate enough to replace human developers and DevOps teams. The stakes are too high, and critical thinking and human oversight remain necessary, even as companies prepare for a future in which AI handles more IT operations. Believing otherwise puts your organization at risk of mistakes, poor-quality code and software and, ultimately, business loss.

Will AI ever fully replace developers? I lean toward a ‘no’. But here are some things to consider when thinking about ‘NoOps’ and AI adoption.

The DevOps Community Cannot Miss the AI Opportunity

AI is the present and the future. Development teams cannot afford to miss out on the value AI provides or to fall behind in learning its different use cases, quirks and drawbacks. Developers and engineers of all roles and skill levels must be well-versed in using these GenAI tools. If they aren't, they risk being left behind.

This is especially true as software becomes an even more critical business asset. Think about just how much we rely on technology in every aspect of our lives. The phones we use, the cars we drive, the smart devices we install in our homes: all of these are powered by software. This requires the companies delivering that software to ensure their foundational code is clean, meaning it is secure, reliable, maintainable and of high quality.

It is not just developers who can derive value from AI tools. When used correctly, AI coding assistants can significantly impact the creation, management and testing of software and software infrastructure to benefit DevOps teams. For example, AI can be invaluable in the CI/CD process, streamlining it with greater automated capabilities for building, testing and deployment. AI can also integrate new code into an existing environment, which speeds up deployment as well as reduces the risk of error.

AI automation also boosts agility and efficiency. By handling tedious, repetitive tasks, it reduces human error, spares teams the grunt work and frees them to focus on priority projects that benefit from human creativity, innovation and critical thinking.

With AI, developers can hone their code, ensuring quality in more complex parts of a project while reducing mistakes in ancillary pieces of code. Our reliance on software means code must be of high quality, which includes being secure by design. Preventing cyberattacks reduces costly, reputation-threatening incidents, and AI can significantly aid in this effort.

But remember, AI is not perfect. It can help streamline DevOps and the development process and will continue to do so as the technology keeps improving. However, human oversight is an undeniable necessity in software. Companies cannot afford to implement AI without proper safeguards. It is not just about having the right tools; it is about ensuring you are using them in the right way and checking their work, just as you would with a human colleague.

Responsible AI Usage Includes Human Oversight

Like any emerging technology, AI needs guardrails. Teams and businesses must ensure that employees use AI in a way that does not compromise output, quality or the company’s bottom line.

The reality is that you cannot replace human critical thinking with machine learning (ML) — not now or anytime soon. Humans still have to be at the crux of vital decision-making in development and DevOps processes. For instance, AI lacks the context of a codebase; it may not understand what you are trying to achieve with a project. Its output cannot simply go into deployment without examination. You need a human safeguard.
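The human safeguard described above amounts to a simple rule: no AI-generated change ships without a named reviewer's sign-off. A minimal sketch, with entirely hypothetical class and function names, might look like this:

```python
# Hypothetical sketch: AI output is quarantined until a named human
# reviewer approves it. All names here are illustrative, not part of
# any real tool's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AiChange:
    description: str
    approved_by: Optional[str] = None  # None until a human signs off

def approve(change: AiChange, reviewer: str) -> None:
    # Record which human examined and accepted the AI's output.
    change.approved_by = reviewer

def can_deploy(change: AiChange) -> bool:
    # The safeguard: no human sign-off, no deployment.
    return change.approved_by is not None

suggestion = AiChange("refactor billing module")
print(can_deploy(suggestion))   # blocked until reviewed
approve(suggestion, "maria")
print(can_deploy(suggestion))   # now deployable
```

The design choice is deliberate: approval is an explicit, recorded act tied to a person, which gives you both the examination step and an audit trail of who vouched for the AI's output.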

In fact, researchers at Vectara, a GenAI platform founded by former Google employees, quantified AI hallucinations to understand the limitations and failures of this technology. They found chatbots invent things between 3% and 27% of the time. To track these kinds of issues, they even keep a 'hallucination leaderboard' on GitHub. Think about how many times you have seen examples of AI chatbots and generators creating nonsensical content or responses to questions — the same concerns apply to software development and DevOps.

Working with AI also requires transparency and accountability. You cannot trust that AI is pulling data from an unimpeachable source; you must be able to verify the origins of AI contributions to ensure their accuracy. Human teams also need to know what these tools can do: their limitations, rules and specific use cases. This comes down to ensuring teams have proper education and training.

It is not enough, though, to learn only from current use cases. As these tools change and advance, the way we use them should be evaluated consistently. Where are we seeing drawbacks? Improvements? Failures? How can we best use this technology right now? These are the kinds of open and honest conversations developers and DevOps teams should constantly have to learn from each other.

We have no way of predicting the future or knowing what AI will look like in five, 10 or 15 years — which means we will always have to stay nimble and evolve alongside it. DevOps teams will need to continue refining and adapting best practices.

An Unpredictable, AI-Powered Future

AI now plays an unignorable, major part in the software landscape. As we move toward an AI-driven future, we need to keep understanding its implications and strive for the best outcome.

As we anticipate AI's growth and advancement over the next decade, it is important to recognize that we are not currently technologically equipped to grant these tools complete control over all processes. The idea of a 'NoOps' future is an intriguing one — but for the time being, it is simply not viable. Just as driverless cars are not yet unquestionably safer than human-driven vehicles, and chatbots cannot yet be fully trusted to answer live chat questions correctly, AI coding assistants cannot simply run by themselves.

Developers and DevOps teams cannot discount the need for human involvement in software. Like its human counterparts, AI is not perfect. Still, there is an undeniable opportunity at hand. We live in an exciting time of innovation, on the cusp of incredible transformation. But only by exercising some level of caution as we approach that transformation can we ensure we reap its benefits when it arrives.

Peter McKee is the Head of Developer Relations and Community at Sonar, where he leads a team of developer advocates in reaching and educating developers across their preferred forums to help them write better, more secure code. Peter is also the maintainer of the open source project Ronin.js and for over 25 years has built his career developing full-stack applications as well as leading and mentoring developer teams. Prior to Sonar, Peter was the Director of Developer Advocacy at JFrog and before that, he held multiple roles at Docker including Head of Developer Relations. When not building things with software, he spends his time with his wife and seven kids in beautiful Austin, TX.
