How MeitY’s AI Governance Guidelines Are Reshaping Enterprise Data Strategy

India's new AI Governance Guidelines are forcing companies to rethink how they deal with data, consent, and accountability. Saurabh Saxena describes why the days of "build first, govern later" are over and how sound data foundations will determine the country's next phase of AI adoption.

India's push for responsible AI has brought data governance into the spotlight, and enterprises are adapting quickly. In this conversation, Saurabh Saxena, Regional Vice President of Sales at OpenText India, breaks down what the MeitY AI Governance Guidelines really mean for businesses. He explains how transparent data practices, increased auditability, and better-governed AI systems will become indispensable tenets for innovation, trust, and long-term competitiveness.

1. How will MeitY’s AI Governance Guidelines influence enterprise data governance and compliance strategies in India?

MeitY’s latest AI Governance Guidelines mark an important shift in how organisations manage and oversee their data and AI systems, and signal how India is shaping its place in global AI conversations. The new framework goes beyond basic compliance, setting stronger expectations around transparency, accountability, and fairness. It encourages organisations to adopt a proactive approach, ensuring that AI systems are safe, responsible, and grounded in trust.

For businesses, the era of “train first, govern later” is over. Data responsibility is now central to business strategy. How data is sourced, managed, and used, and how models evolve, will no longer be seen as technical details but as key factors that define the credibility of every AI-driven product, service, and decision.

The new guidelines focus on making consent clearer, giving people simple ways to raise concerns, ensuring incidents are reported quickly, and improving coordination among regulators.

We’re already seeing this shift in motion, with organisations focusing more on understanding their data flows, model behaviour, and decision traceability as they expand their AI capabilities.

Much of our work will now centre on helping teams lay that foundation so they can innovate confidently while staying aligned with evolving regulatory expectations. The ultimate goal is clear: trust will shape the next phase of AI adoption, and governance is the way to build it.

2. What are the key challenges enterprises face in building transparent and traceable information ecosystems that meet accountability standards?

Enterprises mainly face two broad sets of challenges: one technical, the other organisational. For transparency to move beyond aspiration, both need to evolve together.

On the technical side, one of the biggest challenges for organisations is simply keeping track of their data. Today, data sits everywhere: in the cloud, on-premises, across different tools and formats, and accessing it all together isn't easy. Most teams still don’t have one clear way to see how data moves, changes, or ends up training AI models. As systems grow, it becomes even harder to link training data to real-world outputs or to maintain clean records of who accessed what, when, and why. Add legacy systems, third-party data, and multiple cloud platforms into the picture, and complete, end-to-end traceability starts to feel almost impossible.
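To make the "who accessed what, when, and why" idea concrete, here is a minimal sketch of a lineage log that links dataset access events to the models trained on those datasets. All names (`LineageLog`, `AccessEvent`, the dataset and model identifiers) are hypothetical illustrations, not part of any specific product; a real system would persist these records in an immutable store rather than in memory.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AccessEvent:
    """One 'who accessed what, when, and why' record for a dataset."""
    dataset_id: str
    actor: str
    purpose: str
    timestamp: str


class LineageLog:
    """In-memory sketch of a lineage log linking datasets to models."""

    def __init__(self):
        self.events: list[AccessEvent] = []
        self.model_inputs: dict[str, list[str]] = {}  # model_id -> dataset_ids

    def record_access(self, dataset_id: str, actor: str, purpose: str) -> AccessEvent:
        # Every access is stamped with an actor, a purpose, and a UTC time.
        event = AccessEvent(dataset_id, actor, purpose,
                            datetime.now(timezone.utc).isoformat())
        self.events.append(event)
        return event

    def record_training(self, model_id: str, dataset_ids: list[str]) -> None:
        # Link a trained model to the datasets that produced it.
        self.model_inputs[model_id] = list(dataset_ids)

    def trace_model(self, model_id: str) -> list[AccessEvent]:
        """Return every recorded access event behind a given model."""
        datasets = set(self.model_inputs.get(model_id, []))
        return [e for e in self.events if e.dataset_id in datasets]
```

With records like these, tracing a model's output back to the people and purposes that touched its training data becomes a lookup rather than an investigation.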

Just as complicated are the organisational challenges. In many organisations, teams still operate in silos with limited coordination on data standards, consent practices, or accountability when something goes wrong. Governance is often viewed as a barrier to speed rather than a foundation for sustainable innovation. The MeitY Guidelines call for a shift in mindset; one where transparency and accountability are built into everyday operations, not treated as a tick-box exercise. Achieving this requires data, compliance, and business teams to work together, with clearly defined roles for monitoring models, resolving issues, and driving continuous improvement.

3. Why is responsible technology adoption fundamentally dependent on secure and well-governed data foundations?

Responsible technology begins with trusted data. If the data isn’t clear, accurate, and collected with proper consent, even the most advanced model can’t be trusted. Strong data practices create the foundation for fairness and accountability, helping turn responsible AI from a concept into real, practical action.

This means understanding the source of your data, how it’s classified, who can access it, and how it’s being used as models change over time. It also requires keeping a clear record at every stage, starting from collection and processing all the way to the final decisions made using that data. In India, the DPDP Act has made this essential. What was once seen as good practice is now a business and regulatory must.

We’re already seeing a pattern: companies that invest early in strong data governance are the ones scaling AI more confidently. They move faster, communicate more transparently with customers, and are better prepared for audits, accountability, and public trust.

4. How can organisations balance innovation with the ethical use of AI while adhering to these governance guidelines?

Balancing innovation with ethical AI isn't optional; the two have to work together, with governance supporting both. The simplest way to accomplish this is through a risk-based approach, which applies more stringent checks and accountability where the risks are higher while permitting greater latitude to experiment in low-risk areas.

Distinguishing experimentation from production deployment is one way to accomplish this. In controlled sandboxes, teams can use consented or synthetic data to safely test ideas. When a model is ready for real-world use, it should undergo extensive testing for bias, fairness, and errors, backed by explicit documentation of its intent. Rollback options, audit logs, and ongoing monitoring help identify and resolve problems promptly after deployment. This approach fits well with MeitY's emphasis on responsibility and efficient grievance resolution.

Building governance into AI systems from the start is equally crucial. Innovation becomes both responsible and frictionless when tools for audit trails, access control, and data tracking are integrated early on. A shared culture of accountability is also necessary, with teams from data, compliance, and business cooperating rather than operating independently.

5. What role does auditability play in enhancing citizen trust and regulatory compliance in AI deployments?

Auditability is what connects intent with proof. This is how governance evolves from a guiding idea into an everyday practice. In AI, it’s the cornerstone of transparency, accountability, and public trust.

When organisations keep clear, tamper-proof records of data access, model training, and how models change over time, they can show what happened, why it happened, and who was responsible. This kind of traceability isn’t just about meeting regulations; it reassures people that AI decisions can be checked, explained, and fixed when needed.
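One common way to make audit records tamper-evident is a hash chain: each entry includes a hash of the previous one, so altering any earlier record invalidates everything after it. The sketch below is a minimal illustration of that idea, with hypothetical names (`AuditChain`, the event fields); it is not a specific product's implementation, and production systems would add signing, persistence, and access controls.

```python
import hashlib
import json


class AuditChain:
    """Append-only log where each entry hashes the previous one,
    so any later edit to an earlier record breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any tampering shows up as a mismatch."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

The design choice here is that verification requires no trusted party beyond the log itself: an auditor can replay the chain and confirm that what happened is exactly what was recorded.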

The MeitY AI Governance Guidelines emphasise exactly this, calling for end-to-end visibility across the AI lifecycle. With strong auditability, organisations move from simply stating they are compliant to confidently proving it, while also speeding up investigations, reducing disputes, and building trust with both regulators and citizens.

6. How do these guidelines impact the way enterprises manage data privacy and security in AI systems?

The new MeitY AI Governance Guidelines set a higher standard for how organisations must manage data privacy and security in AI. They extend traditional data protection across the entire AI lifecycle and push companies to build privacy and accountability into their systems from the start, not add them later.

This means any use of personal data in AI must be lawful, based on clear consent, and fully traceable. Companies need to explain why they collect data, properly record consent, avoid collecting more data than necessary, and follow strict rules for storing and deleting it. These guidelines closely align with India’s DPDP Act, which focuses on consent, data localisation, and user rights.
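As an illustration of purpose-bound, recorded consent, here is a minimal sketch of a consent registry that only permits data use for the exact purpose a subject granted, and respects withdrawal and expiry. The names (`ConsentRegistry`, `ConsentRecord`) and fields are hypothetical, not drawn from the DPDP Act or any vendor API; a real implementation would need durable storage and legally reviewed purpose definitions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    """One grant of consent, bound to a specific purpose."""
    subject_id: str
    purpose: str
    granted_at: str
    expires_at: Optional[str] = None  # ISO 8601 UTC timestamp, if any
    withdrawn: bool = False


class ConsentRegistry:
    """In-memory sketch: record, withdraw, and check purpose-bound consent."""

    def __init__(self):
        self._records: list[ConsentRecord] = []

    def grant(self, subject_id: str, purpose: str,
              expires_at: Optional[str] = None) -> ConsentRecord:
        record = ConsentRecord(subject_id, purpose,
                               datetime.now(timezone.utc).isoformat(), expires_at)
        self._records.append(record)
        return record

    def withdraw(self, subject_id: str, purpose: str) -> None:
        for record in self._records:
            if record.subject_id == subject_id and record.purpose == purpose:
                record.withdrawn = True

    def may_use(self, subject_id: str, purpose: str) -> bool:
        """Data may be used only under an active, purpose-matched consent."""
        now = datetime.now(timezone.utc).isoformat()
        return any(r.subject_id == subject_id and r.purpose == purpose
                   and not r.withdrawn
                   and (r.expires_at is None or r.expires_at > now)
                   for r in self._records)
```

The key property is that a consent granted for one purpose (say, model training) gives no permission for another (say, marketing), which is the data-minimisation posture the guidelines point towards.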

The focus is also on securing the full AI pipeline, not just protecting stored data. Organisations must now guard against risks like model inversion, data poisoning, and adversarial attacks. This requires ongoing monitoring, encryption, strict access controls, and strong incident-reporting processes. These steps help ensure that if something goes wrong, such as an error or bias, it can be quickly found, fixed, and explained, building more trust and accountability.

7. In your view, how can enterprises prepare for the evolving regulatory environment without stifling technological innovation?

Enterprises don’t need to wait for every regulation to be clear before taking the next step. A more practical approach is to view compliance not as a checklist, but as a flexible foundation that can evolve with new laws and technologies.

A good place to start is getting the basics right, such as metadata models, data classification, access controls, audit logs, and model registries. These will always matter, no matter what new guidelines come in. Teams can then keep innovating by using sandbox environments where data scientists can test ideas safely with synthetic or consented data and apply stricter checks only when moving to real-world use.

Real progress happens when legal, risk, engineering, and data teams work together. When tools for data tracking, consent, and monitoring are built in from day one, compliance actually makes innovation easier instead of slowing things down.

By managing policies in one place and using flexible, modular controls, organisations can quickly adjust to new rules from MeitY, the DPDP Act, or sector regulators. The companies that stay ahead will be the ones that see governance not as a barrier but as something that helps build safer, more trusted innovation.
