Tag: Large Language Models

AI-Generated Code Packages Can Lead to ‘Slopsquatting’ Threat
AI hallucinations – the tendency of large language models to respond to prompts with incorrect, inaccurate, or made-up answers – have been an ongoing concern as enterprise adoption of generative ...

How to Extend an Application Security Program to AI/ML Applications
While many AI/ML application risks resemble traditional application security risks and can be mitigated with the same tools and platforms, runtime security for the new models requires new methods of securing ...