“It’s advisable to secure the systems around the AI agents in use, which include APIs, forms, and middleware, so that prompt injection is harder to exploit and less harmful if it succeeds,” said Chrissa Constantine, senior cybersecurity solution architect at Black Duck. She emphasized that true prevention requires not just patching but “maintaining configuration and establishing guardrails around the agent design, software supply chain, web application, and API testing.”
Noma’s researchers echoed that call, urging organizations to treat AI agents like production systems, inventorying every agent, validating outbound connections, sanitizing inputs before they reach the model, and flagging any sensitive data access or internet egress.
Sanitize external input before the agent sees it, suggested Elad Luz, head of research at Oasis Security. “Treat free-text from contact forms as untrusted input. Use an input mediation layer to extract only expected fields, strip/neutralize instructions, links, and markup, and prevent the model from interpreting user content as commands (prompt-injection resilience).”
Source link




