Microsoft recently announced that it has integrated ChatGPT into Power Platform, its low-code developer suite. This follows the news of ChatGPT-powered Bing search and Microsoft’s pledge to invest billions in its partner company, OpenAI.
ChatGPT, the chatbot built on a large language model (LLM), has reached viral status thanks to its ability to generate impressively complex text from simple prompts. And now, enterprises are hoping to cash in on its power. Microsoft’s inclusion of ChatGPT in its Power Virtual Agents and AI Builder tools will likely signal broader native adoption of AI models within low-code development environments throughout the industry.
However, not all is peachy in the world of generative AI. Users have compiled a laundry list of ChatGPT failures, ranging from simple arithmetic errors and inaccurate information to social biases and even one attempt to dissolve a journalist’s marriage. The AI sounds sure of itself, but the “hallucinations” it produces can be mind-bogglingly off-base. If such inaccuracies persist in production use cases, they could lead to false suggestions that confuse users and even harm a company’s reputation.
Below, I’ll consider the implications of this new paradigm of AI-driven development, exploring the benefits and potential risks of incorporating LLMs into low-code development frameworks. I’ll also highlight how this may shake up the competitive landscape and consider what leaders should keep in mind as they seek to leverage this new technology.
Benefits of Incorporating ChatGPT
Low-code development platforms (LCDPs) excel at abstracting complex functionality into usable components. They typically offer drag-and-drop capabilities and reusable templates that enable citizen developers and professional programmers alike. Incorporating ChatGPT into such a low-code environment has many potential benefits. For example, as a feature within Power Virtual Agents, ChatGPT can be used to create chatbots that connect to public company resources and internal company data. This could help users seamlessly develop a contextually aware AI trained on a company’s documents.
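To make the idea concrete, below is a minimal sketch of the grounding pattern such a chatbot might use: retrieve relevant passages from internal documents and feed them to the model alongside the user’s question. It assumes the openai Python package; the document list and keyword-based retrieval are hypothetical stand-ins for whatever indexing a platform like Power Virtual Agents wires up behind the scenes.

```python
# A minimal document-grounded chatbot sketch, assuming the openai
# Python package (and OPENAI_API_KEY set in the environment).
from openai import OpenAI

# Hypothetical internal documents; a real system would use an
# indexed document store with proper vector search.
COMPANY_DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Support hours are 9 a.m. to 5 p.m. ET, Monday through Friday.",
]

def retrieve(question: str, docs: list[str]) -> list[str]:
    """Naive retrieval: return docs sharing any word with the question."""
    words = set(question.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, COMPANY_DOCS))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Answer using only this company context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```

The point isn’t the retrieval algorithm; it’s that the model’s answer is constrained to company data rather than whatever it absorbed from the public web.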
Utilizing AI within low-code platforms can also spur development with intelligent inference and guidance. Such AI assistance can drastically accelerate development through enhanced auto-completion and smart suggestions, generating boilerplate code and offering recommendations far more tailored to the task at hand than static templates.
GPT-based chatbots trained on a company’s documents could be far more valuable than an off-the-shelf model without that context. In the realm of customer support, an in-house trained AI could summarize company information and automate manual support activities. Providing a low-code means of generating such chatbots also brings advanced AI to a broader range of users. All in all, this could advance the pace of development, especially for mobile interfaces, automated workflows and contextually aware chatbots.
Implications for the Low-Code Space
In the wake of ChatGPT, the market has been scrambling to respond. Tech giants have debuted their own generative AI to mixed reviews. And some, myself included, are left pondering how low-code and AI will coexist in this new era. Will natural language-driven code generation replace the need for programming entirely, whether it’s traditional or ‘codeless’?
This remains uncertain. But what appears most plausible is that these recent developments will raise the bar throughout the software industry and augment how LCDPs are built and used. AI has the power to enhance LCDPs in a few ways, including augmenting the developer experience, training bespoke ML models and building more intelligent end-user experiences.
Yet, with Microsoft funneling billions into AI research and development, it could be challenging for mid-size low-code platforms to keep pace without their own AI-driven workflows. Thus, the LCDPs that do not integrate robust AI of their own may lose out on new subscribers. Plus, engineers hoping to create AI models that run on internal datasets may prefer working with a larger cloud technology suite in which their data is natively stored.
Downsides of AI-Infused Programming
ChatGPT and other generative models are undeniably impressive, but their results can’t be fully trusted. And at the time of writing, ChatGPT usage within Power Apps is still marked experimental, which reflects where ChatGPT and generative AI stand in general: still in the experimentation phase.
This doesn’t bode well for programmers relying on its outputs, given the potential for inaccuracies. Although ChatGPT’s results sound authoritative, they are generated from knowledge scraped from the public web, which often contains bugs, errors and inefficiencies.
The outputs of ChatGPT may even suggest entirely nonexistent features! This is, unfortunately, a current issue for geocoding API provider OpenCage. For certain prompts, ChatGPT routinely recommends that users integrate OpenCage’s “phone lookup service.” But … OpenCage doesn’t even offer this feature. The team has received so many complaints from angry developers that it had to issue a statement explaining what happened.
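A cheap defense against this failure mode is verifying that an AI-suggested endpoint actually answers before wiring it into an app. Here’s a hedged sketch using Python’s requests library; the URL is a made-up route of the sort a model might invent, not a real OpenCage endpoint.

```python
# Sanity-check an AI-suggested API endpoint before trusting it.
# The URL below is a hypothetical, model-invented route used
# purely for illustration; it is not a real OpenCage endpoint.
import requests

def endpoint_exists(url: str) -> bool:
    """Return False if the server can't be reached or answers 404."""
    try:
        response = requests.head(url, timeout=5, allow_redirects=True)
        return response.status_code != 404
    except requests.RequestException:
        return False

suggested = "https://api.opencagedata.com/phone/v1/lookup"  # hallucinated
if not endpoint_exists(suggested):
    print(f"{suggested} doesn't appear to exist; don't build on it.")
```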
With AI code generation, the focus shifts from hand-coding toward crafting and organizing prompts and debugging errors. And although this gives programmers newfound agility, it doesn’t solve every software development problem. The DevOps side of the equation remains: there will still be friction in deploying code and piecing together an optimal architecture. And given rising software supply chain vulnerabilities, code automation that pulls in third-party dependencies demands careful security forethought. There’s also the chance of breaking changes and the ongoing maintenance hurdles that come with any software.
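On the dependency side, one low-effort safeguard is confirming that an AI-suggested package actually exists on the public registry before installing it, since models are known to hallucinate plausible-sounding package names. A minimal sketch against PyPI’s JSON API follows; the second package name is deliberately fictional.

```python
# Check whether an AI-suggested package exists on PyPI before
# installing it; hallucinated package names are a known risk.
import requests

def on_pypi(package: str) -> bool:
    """Query PyPI's JSON API; a 200 means the package is published."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False

for pkg in ["requests", "totally-made-up-ai-helper"]:  # second is fictional
    print(pkg, "->", "exists" if on_pypi(pkg) else "NOT FOUND; do not install")
```

Existence alone doesn’t prove a package is safe, of course, but it filters out purely invented suggestions before they reach a build.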
Another important consideration is that low-code requires governance to remain secure. No-code business users may not have the security oversight to understand the implications of spinning up new services. And when you add AI into the mix, the technical implications escalate. AI can result in ethical violations, and bots have been found to communicate anger and irrational ideas when poked and prodded.
Are We All Just Hallucinating?
‘Hallucinating’ is when an AI model confidently spouts nonsense or inaccurate information. And although some models currently do exactly that, they are actively accepting feedback and being retrained; over time, their outputs will improve. For now, though, engineers should treat these models as experimental. Enterprises should use caution and test new AI innovations on internal processes first, and AI adoption must be governed carefully to avoid misconfigurations and abuse.
AI is here, and the future is now. But although it’s dismantling some coding barriers, developers still have their work cut out for them. This reality is a win for low-code solutions offering standard software delivery pipelines and centralized collaboration features. If LCDPs can maintain pace with AI and embed it into their workflows, both sides should fare pretty well in this new era.
Surprise — this entire article was written by ChatGPT. Just kidding! None of it was. I’m just tired of seeing this “gotcha” moment at the end of news segments…
This post was written by a human.