Artificial intelligence (AI) is steadily embedding itself into enterprise decision-making, shifting the debate from what AI can do to whether organisations can trust it to operate securely. This tension between rapid adoption and rising cyber risk was at the centre of a panel discussion, "AI Everywhere – But What About Security & Frauds?", held during a recent roundtable by digital transformation enterprise OpsTree, which brought together voices from law enforcement, enterprise IT, academia and technology services.
The discussion reflected a shared concern: while AI is accelerating productivity and automation, weak governance and unrestricted data access are creating new fault lines for fraud, data leaks and reputational damage.
For Avneesh Vats, CIO at Techno Group, the challenge was particularly acute in traditional, asset-heavy businesses that are now being pulled into a digital-first world. AI tools are increasingly accessible across organisations, often without guardrails. Employees routinely experiment with generative AI platforms, sometimes unknowingly exposing sensitive corporate data. With India’s data protection framework now in force, the cost of such lapses is no longer theoretical.
"Data is the crown jewel of any organisation" was the panel's underlying message. AI adoption, the panellists stressed, cannot come at the expense of control. The risk extends beyond technology failure to brand erosion and large-scale financial fraud enabled by leaked or misused data.
That risk is already playing out in the real world. Priyanshu Dewan, ACP at the Cyber Cell of Gurugram Police, highlighted the scale of cyber fraud being reported, particularly cases driven by social engineering rather than sophisticated hacking. The majority of frauds today rely on deception: impersonation, fear tactics and manipulation of authority rather than advanced AI systems. Yet the concern is that AI will soon amplify these attacks, making them faster, more personalised and harder to detect.
What makes enterprises especially vulnerable, Dewan noted, is the absence of verification protocols. Corporate frauds involving impersonation of senior executives and urgent payment requests have resulted in multi-crore losses, often without a single line of code being breached. AI, once layered onto such tactics, could dramatically increase their effectiveness.
This is where governance, not just technology, becomes central. Sandeep Rawat, Co-Founder and CTO of OpsTree Global, argued that organisations are no longer struggling with whether to adopt AI, but with how to operationalise it responsibly. Management enthusiasm for AI-driven outcomes often runs ahead of confidence in exposing those systems to real users. Trust, therefore, must be engineered into AI programmes from the outset.
OpsTree’s role, as Rawat outlined, has increasingly been to help enterprises design AI adoption frameworks that balance innovation with control. Rather than encouraging blanket data democratisation, the focus is on defining what data can be used, by whom, and for what purpose, supported by consent mechanisms, orchestration layers and clearly defined governance models.
This approach has been particularly relevant for organisations operating across complex environments. In sectors such as engineering, utilities and infrastructure where legacy systems, proprietary data and long project cycles dominate, AI cannot simply be “plugged in.”
One enterprise discussed during the panel has been working through AI use cases in areas such as tender analysis and investor communications, where data sensitivity is high and errors can have material consequences.
In these cases, OpsTree supported the organisation by first sanitising and categorising historical data, limiting training datasets to defined timeframes, and ensuring that proprietary information remained protected. AI models were deployed only after careful evaluation of what data genuinely contributed to outcomes, rather than maximising data ingestion. The result was targeted automation without compromising confidentiality.
Another theme that emerged strongly was the misconception that AI can replace human judgment entirely. Multiple panellists emphasised the importance of "human-in-the-loop" systems, particularly in regulated or customer-facing functions. AI outputs, whether in testing, analytics or customer engagement, must be observable, auditable and challengeable.
This philosophy also extends to cybersecurity. Zero-trust frameworks, data minimisation and anonymisation were highlighted as essential principles, especially for AI-native companies handling large volumes of consumer data.
Crucially, the panel converged on the idea that security and trust cannot be retrofitted. AI systems designed without governance inevitably create downstream risk, from hallucinated outputs to customer dissatisfaction and fraud exposure. In contrast, when security, consent and outcome accountability are embedded at the architecture stage, trust becomes an outcome rather than an afterthought.
As enterprises race to deploy AI under pressure from boards and competitors alike, the discussion served as a reminder that the real differentiator will not be speed of adoption, but the ability to deploy AI safely, responsibly and credibly. In that journey, cybersecurity and fraud prevention are no longer support functions; they are foundational to AI's long-term value.
The roundtable was supported by Financial Express as the media partner and Flexi Roundtables – Top 1% Leaders was the community and curation partner.
About OpsTree: The AI-augmented digital engineering company offers enterprise platforms such as BuildPiper and REMS. BuildPiper is an AI-powered DevSecOps platform that introduces an Agentic DevSecOps framework focused on autonomous, governed and secure software delivery with strong data sovereignty and in-house governance. REMS, meanwhile, is a reliability engineering and management platform designed to centralise SRE functions using AI-driven insights, predictive analytics and governed automation, helping enterprises improve resilience, compliance and operational efficiency.