Blogs

4 Risk Factors Devs Should Consider for ChatGPT Integrations

It’s only been a couple of months since OpenAI introduced its latest low-cost API for developers to leverage ChatGPT in their applications and already many engineering teams have jumped headlong into new integrations. But developers experimenting with the possibilities of how ChatGPT can boost their software should temper that with some risk analysis and threat modeling before deployment. So say a range of cybersecurity and risk experts, who warn organizations that leverage the ChatGPT API in their software that they’ll need to navigate a minefield of privacy, data governance, and security operational risks.

“I use ChatGPT Pro and think it’s an exciting tool for holiday cards, poems for my kid, writing inspiration, first drafts and search,” says Patrick Hall, data scientist and co-founder of BNH.AI, a boutique law firm focused on minimizing legal and technical risks of AI. He’s also the author of the upcoming book Machine Learning for High-Risk Applications. “I get concerned about generative AI used as a consumer product in high-risk domains, such as consumer finance or the law.”

With that framing in mind, the following are a few of the areas that experts like Hall suggest that developers, DevOps teams and CTOs start thinking about to avoid some potentially costly unintended consequences of ChatGPT integration in their software.

Data Privacy and Governance Concerns

One of the most immediate and obvious risk concerns IT leadership, DevOps teams and privacy experts are worried about is what happens to sensitive data when users enter it into a ChatGPT-backed prompt—whether directly or through an app using the API. Be it a customer entering personally identifiable information (PII) into an external-facing app or an employee entering proprietary company information in an internal tool using ChatGPT’s API, these use patterns open up a world of data exposure and regulatory compliance concerns. The good news is that OpenAI does not use data submitted through the API to train its models, but this shouldn’t automatically give devs peace of mind. They’ll need to be sure that using generative AI in an app truly aligns with their company’s privacy policies and with any local, federal or international data privacy laws their firm is beholden to, said Hall.

“Although OpenAI appears to maintain a robust security and privacy program and there are steps you can take on your own to limit data sharing, some people are still worried,” said Walter Haydock, founder and CEO of StackAware.

Among Haydock’s side projects, he is currently developing a tool for sanitizing data before sending it to OpenAI. Called GPT-Guard, it’s still in beta and it would need to be run on a local machine or in the cloud to sanitize data before sending it via API. But it provides some rapid prototyping of the kind of methods and tools developers may need to think about as they design their applications to interact with ChatGPT.

Intellectual Property Infringement

Developers should also be thinking through intellectual property infringement concerns when it comes to the responses generated by an app tapping into ChatGPT, Hall said.

“Devs need to consider whether a generative AI system may be coughing up other people’s licensed or copyrighted intellectual property as responses,” he said.

This is likely going to be a tricky situation that will likely be tested by the courts in the coming years. For example, will applications using ChatGPT be required to compensate licensed copyright holders for certain kinds of responses using their material? These are the kinds of questions that developers and the organizations they work for should be asking themselves, said Frank Huerta, CEO of Curtail Security and a longtime DevOps expert.

“For example, what’s the mechanism for resolving IP and copyright disputes?” he asked. “Is ChatGPT going to have its own IP? Is it going to generate it—and will it put that in the public domain? Who owns the copyright to a generated response? None of that has been considered yet, and it’s going to come up fast.”

Adding New Cybersecurity Threat Vectors

In addition to all of the concerns about the data, developers integrating ChatGPT into their applications also need to consider what that does to the software’s attack surface. ‘Never trust user input’ has always been a good rule of thumb for developers, and that advice is just as true for large language model (LLM) prompts as anything else.

A recent example scenario laid out by a data scientist on Twitter showed one of the myriad ways that malicious actors could carry out prompt injections in the same way they’d carry out a SQL injection. In that scenario, if a recruiter used ChatGPT to crawl LinkedIn profiles for a certain keyword and send an email to the people whose profiles included that keyword, it’d be possible to add text to a profile that sends an email chiding the recruiter for using an automated tool to send out emails. That’s a fairly low-stakes example, but it shows the possibility for injection attacks, said Andy Patel, a security researcher for WithSecure who has been digging into AI risks.

“So, if you are integrating it, you should think about things like that. You should think about the fact that people will determine that you are using a large language model to do something, to automate something, and then they’ll essentially SQL injection attack it,” he said.

Problems With Bias

Finally, developers and software design teams should be cognizant of the potential problems that AI bias can introduce into their software if they lean on the generated output from ChatGPT to trigger actions or to inform the user.

“Generative AI systems continue to make wrong, offensive or otherwise problematic content. Being wrong can give rise to negligence issues and biased content can lead to discrimination issues,” Hall warned. “Don’t embarrass yourself, offend others or run afoul of the law by not testing and further protect yourself by screening the heck out of generated content.”

Ericka Chickowski

An award-winning freelance writer, Ericka Chickowski covers information technology and business innovation. Her perspectives on business and technology have appeared in dozens of trade and consumer magazines, including Entrepreneur, Consumers Digest, Channel Insider, CIO Insight, Dark Reading and InformationWeek. She's made it her specialty to explain in plain English how technology trends affect real people.

Recent Posts

Valkey is Rapidly Overtaking Redis

Redis is taking it in the chops, as both maintainers and customers move to the Valkey Redis fork.

15 hours ago

GitLab Adds AI Chat Interface to Increase DevOps Productivity

GitLab Duo Chat is a natural language interface which helps generate code, create tests and access code summarizations.

20 hours ago

The Role of AI in Securing Software and Data Supply Chains

Expect attacks on the open source software supply chain to accelerate, with attackers automating attacks in common open source software…

1 day ago

Exploring Low/No-Code Platforms, GenAI, Copilots and Code Generators

The emergence of low/no-code platforms is challenging traditional notions of coding expertise. Gone are the days when coding was an…

2 days ago

Datadog DevSecOps Report Shines Spotlight on Java Security Issues

Datadog today published a State of DevSecOps report that finds 90% of Java services running in a production environment are…

3 days ago

OpenSSF warns of Open Source Social Engineering Threats

Linux dodged a bullet. If the XZ exploit had gone undiscovered for only a few more weeks, millions of Linux…

3 days ago