Examining key pillars an organization should consider when developing AI governance policies.

In a recent CIMdata Leadership Webinar, my colleague Peter Bilello and I presented our thoughts on the important and emerging topic of Artificial Intelligence (AI) Governance. More specifically, we brought into focus a new term in the overheated discussions surrounding this technology, now entering general use and, inevitably, misuse. That term is “responsibility.”
For this discussion, responsibility means accepting that one will be held personally accountable for AI-related problems and outcomes—good or bad—while acting with that knowledge always in mind.

Every new digital technology presents opportunities for misuse, particularly in its early days when its capabilities are not fully understood and its reach is underestimated. AI, however, is unique, and its governance is especially challenging, for three reasons:
- A huge proportion of AI users in product development are untrained and inexperienced, lacking the caution and self-discipline of the engineers who were the early users of nearly all other information technologies.
- With little or no oversight, AI users can reach into data without regard to accuracy, completeness, or even relevance. This causes many shortcomings, including AI’s “hallucinations.”
- AI has many risks—a consequence of its power and depth—that are poorly understood by its many new users.
While both AI and PLM are critical business strategies, they are hugely different. Today, PLM implementations have matured to the point where they incorporate ‘guardrails,’ mechanisms common in engineering and product development that keep organizational decisions in sync with goals and strategic objectives while holding down risks. AI often lacks such guardrails and is used in ways that solution providers cannot always anticipate.
And that’s where the AI governance challenges discussed in our recent webinar, AI Governance: Ensuring Responsible AI Development and Use, come in.
The scope of the AI problem
AI is not new; in various forms, it has been used for decades. What is new is its sudden widespread adoption, coinciding with the explosion of AI toolkits and AI-enhanced applications, solutions, systems, and platforms. A key problem is the poor quality of the data fed into the Large Language Models (LLMs) on which genAI tools such as ChatGPT rely.
During the webinar, one attendee asked if executives understand the value of data. Bilello candidly responded, “No. And they don’t understand the value of governance, either.” And why should they? Nearly all postings and articles about AI mention governance as an afterthought, if at all.
So, it is time to establish AI governance … and the task is far more than simply tracking down errors and identifying users who can be held accountable for them. CIMdata has learned from experience that even minor oversights and loopholes can undermine effective governance.
AI Governance is not just a technical issue, nor is it just a collection of policies on paper. Everyone using AI must be on the same page, so we laid out four elements in AI governance that must be understood and adopted:
• Ethical AI, adhering to principles of fairness, transparency, and accountability.
• AI Accountability, assigning responsibility for AI decisions and ensuring human oversight.
• Human-in-the-Loop (HITL), the integration of human oversight into AI decision-making to ensure sound judgments, verifiable accountability, and authority to intercede and override when needed.
• AI Compliance, aligning AI initiatives with legal requirements such as GDPR, CCPA, and the AI Act.
Bilello noted, “Augmented intelligence—the use of AI technologies that extend and/or enhance human intelligence—always has a human in the loop to some extent and, despite appearances, AI is human-created.”
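To make the HITL element concrete, here is a minimal Python sketch of a confidence-based review gate: the AI's recommendation passes through only when its confidence clears a threshold; otherwise a human reviewer decides, and every step is logged for accountability. The names and the 0.90 threshold are illustrative, not a prescribed implementation.

```python
# Minimal human-in-the-loop (HITL) gate: an AI output below a confidence
# threshold is routed to a human reviewer, who may approve or override.
# All names and the threshold value are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must decide

@dataclass
class Decision:
    proposal: str            # what the AI recommends
    confidence: float        # the model's self-reported confidence
    decided_by: str = "ai"   # "ai" or "human" -- preserves accountability
    log: list = field(default_factory=list)

def hitl_gate(decision: Decision, human_review) -> Decision:
    """Pass high-confidence decisions through; escalate the rest."""
    decision.log.append((datetime.now(timezone.utc).isoformat(),
                         f"ai proposed {decision.proposal!r} "
                         f"(confidence={decision.confidence:.2f})"))
    if decision.confidence < CONFIDENCE_THRESHOLD:
        verdict = human_review(decision)   # human may accept or override
        decision.proposal = verdict
        decision.decided_by = "human"
        decision.log.append((datetime.now(timezone.utc).isoformat(),
                             f"human set {verdict!r}"))
    return decision

# Example: the reviewer overrides a low-confidence recommendation.
reviewed = hitl_gate(
    Decision(proposal="approve supplier", confidence=0.72),
    human_review=lambda d: "reject supplier",
)
print(reviewed.decided_by, reviewed.proposal)  # -> human reject supplier
```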
Next, we presented the key pillars of AI governance, namely:
- Transparency: making AI models explainable, clarifying how decisions are made, and making the results auditable.
- Fairness: proactively detecting and mitigating biases (a minimal measurement sketch follows this list).
- Privacy and Security: protecting personal data as well as the integrity of the model.
- Risk Management: continuous monitoring across the AI lifecycle.
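Of these pillars, fairness is perhaps the most directly measurable. As a minimal illustration, the sketch below computes the demographic parity gap, the difference in favorable-outcome rates between groups, on toy data; real bias audits use many complementary metrics, and the data and names here are purely illustrative.

```python
# Illustrative bias check: the demographic parity gap, i.e. the spread in
# positive-outcome rates across groups. A gap near 0 suggests parity; how
# large a gap is acceptable is a policy decision, not a technical one.
def positive_rate(outcomes, groups, group):
    picked = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(picked) / len(picked)

def demographic_parity_gap(outcomes, groups):
    rates = {g: positive_rate(outcomes, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: 1 = favorable model decision (e.g., resume shortlisted).
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(outcomes, groups)
print(rates)                      # e.g. {'a': 0.8, 'b': 0.2}
print(f"parity gap = {gap:.1f}")  # 0.6 -> flag for human review
```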
The solution provider’s perspective
Now let’s consider this from the perspective of a solution provider, specifically the Hexagon Manufacturing Intelligence unit of Hexagon Metrology GmbH.
AI Governance “provides the guardrails for deploying production-ready AI solutions. It’s not just about complying with regulations—it’s about proving to our customers that we build safe, reliable systems,” according to Dr. René Cabos, Hexagon Senior Product Manager for AI.
“The biggest challenge?” According to Cabos, it is “a lack of clear legal definitions of what is legally considered to be AI. Whether it’s a linear regression model or the now widely used Generative AI [genAI], we need traceability, explainability, and structured monitoring.”
Explainability lets users look inside AI algorithms and their underlying LLMs and renders decisions and outcomes visible, traceable, and comprehensible; explainability ensures that AI users and everyone who depends on their work can interpret and verify outcomes. This is vital for enhancing how AI users work and for establishing trust in AI; more on trust below.
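As a small taste of what explainability tooling looks like in practice, the sketch below uses permutation importance, one common, model-agnostic technique, to score how much each input feature drives a trained model's predictions. The model and synthetic data are stand-ins, not part of any system discussed above; a production audit would run against the real model and holdout data.

```python
# Model-agnostic explainability via permutation importance: shuffle each
# feature in turn and measure how much the model's score degrades. Large
# drops mark the features the model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data and model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {imp:.3f}")
```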
Organizations are starting to make changes to generate future value from genAI, with large companies leading the way.
Industry data further supports our discussion on the necessity for robust AI governance, as seen in McKinsey & Company’s Global Survey on AI, titled The state of AI – How organizations are rewiring to capture value, published in March 2025.
The study, by Alex Singla et al., found that even though gen AI is already in wide use, “Organizations are beginning to create the structures and processes that lead to meaningful value from gen AI”—including putting senior leaders in critical roles overseeing AI governance.
The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks. Overall, the use of AI—gen AI, as well as Analytical AI—continues to build momentum: more than three-quarters of respondents now say that their organizations use AI in at least one business function. The use of genAI in particular is rapidly increasing.
“Unfortunately, governance practices have not kept pace with this rewiring of work processes,” the McKinsey report noted. “This reinforces the critical need for structured, responsible AI governance. Concerns about bias, security breaches, and regulatory gaps are rising. This makes core governance principles like fairness and explainability non-negotiable.”
More recently, McKinsey observed that the implications of AI, and of Agentic AI in particular, are profound: “Agentic AI represents not just a new technology layer but also a new operating model,” Federico Burruti and four co-authors wrote in a June 4, 2025, report titled When can AI make good decisions? The rise of AI corporate citizens.
“And while the upside is massive, so is the risk. Without deliberate governance, transparency, and accountability, these systems could reinforce bias, obscure accountability, or trigger compliance failures,” the report says.
The McKinsey report points out that companies should “Treat AI agents as corporate citizens.” “That means more than building robust tech. It means rethinking how decisions are made from an end-to-end perspective. It means developing a new understanding of which decisions AI can make. And, most important, it means creating new management (and cost) structures to ensure that both AI and human agents thrive.”
In our webinar, we characterized this rewiring as a tipping point because the integration of AI into the product lifecycle is poised to dramatically reshape engineering and design practices. AI is expected to augment, not replace, human ingenuity in engineering and design; this means humans must assume the role of curator of content and decisions generated with the support of AI.
Why governance has lagged
With AI causing so much heartburn, one might assume that governance is well-established. But no, there are many challenges:
- The difficulty of validating AI model outputs when systems evolve from advisor-based recommendations to fully autonomous agents.
- The lack of rigorous model validation, ill-defined ownership of AI-generated intellectual property, and data privacy concerns.
- Evolving regulatory guidance, certification, and approval of all the automated processes being advanced by AI tools…coupled with regulatory uncertainty in a changing global landscape of compliance challenges and a poor understanding of legal restrictions.
- Bias, as shown in many unsettling case studies, and the impacts of biased AI systems on communities.
- The lack of transparency and “explainability” with which to challenge black-box AI models.
- Weak cybersecurity measures and iffy safety and security in the face of cyber threats and risks of adversarial attacks.
- Public confidence in AI-enabled systems, not just “trust” by users.
- Ethics and trust themes that reinforce ROI discussions.
Trust in AI is hindered by widespread skepticism, including fears of disinformation, instability, unknown unknowns, job losses, industry concentration, and regulatory conflicts/overreach.
James Markwalder, U.S. Federal Sales and Industry Manager at Prostep i.v.i.p., a product data governance association based in Germany, characterized AI development “as a runaway train—hundreds of models hatch every day—so policing the [AI] labs is a fool’s errand. In digital engineering, the smarter play is to govern use.”
AI’s fast evolution requires that we “set clear guardrails, mandate explainability and live monitoring, and anchor every decision to…values of safety, fairness, and accountability,” Markwalder added. “And if the urge to cut corners can be tamed, AI shifts from black-box risk to a trust engine that shields both ROI and reputation.”
AI is also driving a transformation in product development amid business compliance challenges, as explained by Dr. Henrik Weimer, Director of Digital Engineering at Airbus. In his presentation at CIMdata’s PLM Road Map & PDT North America in May 2025, Weimer spelled out four AI business compliance challenges:
- Data Privacy: the protection “of personal information collected, used, processed, and stored by AI systems,” which is a key issue “for ethical and responsible AI development and deployment.”
- Intellectual Property: “creations of the mind;” he listed “inventions, algorithms, data, patents and copyrights, trade secrets, data ownership, usage rights, and licensing agreements.”
- Data Security: ensuring confidentiality, integrity, and availability, as well as protecting data in AI systems throughout the lifecycle.
- Discrimination and Bias: addressing the unsettling fact that AI systems “can perpetuate and amplify biases present in the data on which they are trained,” leading to “unfair or discriminatory outcomes, disproportionately affecting certain groups or individuals.”
Add to these issues the environmental impact of AI’s tremendous power demands. In the April 2025 issue of the McKinsey Quarterly, the consulting firm calculated that “Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures by 2030…” (The article is titled The cost of compute: A $7 trillion race to scale data centers.)
Establishing governance
So, how is governance created amid this chaos? In our webinar, we pointed out that the answer is a governance framework that:
• Establishes governance policies aligned with organizational goals, plus an AI ethics committee or oversight board.
• Develops and implements risk assessment methodologies for AI projects that monitor AI processes and results for transparency and fairness.
• Ensures continuous auditing and feedback loops for AI decision-making (a minimal sketch follows).
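To suggest what the continuous-auditing element might look like at the code level, here is a minimal sketch, with hypothetical names throughout, that wraps an AI-backed decision function so every call is appended to an audit trail with its inputs, output, model version, and timestamp, giving auditors a record they can replay.

```python
# Illustrative audit trail for AI decisions: a decorator records each call
# to an AI-backed function. In production this would write to an
# append-only store; the function and model version here are hypothetical.
import functools
import json
from datetime import datetime, timezone

audit_trail = []  # stand-in for an append-only audit store

def audited(model_version: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            out = fn(*args, **kwargs)
            audit_trail.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "function": fn.__name__,
                "model_version": model_version,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": out,
            })
            return out
        return inner
    return wrap

@audited(model_version="risk-scorer-1.4.2")
def score_credit_risk(income: float, debt: float) -> float:
    return min(1.0, debt / max(income, 1.0))  # placeholder "model"

score_credit_risk(55_000.0, 21_000.0)
print(json.dumps(audit_trail, indent=2))  # the replayable record
```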
To show how this approach works in practice, we offered case studies from Allied Irish Bank, IBM’s AI Ethics Governance framework, and Amazon’s AI Recruiting Tool (which was biased against women).
Despite all these issues, AI governance across the lifecycle is cost-effective, and we offered guidance on measuring the ROI impact of responsible AI practices:
- Quantifying AI governance value in cost savings, risk reduction, and reputation management.
- Developing and implementing metrics for compliance adherence, bias reduction, and transparency.
- Justifying investment with business case examples and alignment with stakeholders’ priorities.
- Focusing continuous improvement efforts on the many ways in which AI governance drives innovation and operational efficiency.
These four points require establishing ownership and accountability through continuous monitoring and risk management, as well as prioritizing ethical design. Ethical design is the creation of products, systems, and services that prioritize benefits to society and the environment while minimizing the risks of harmful outcomes.
The meaning of ‘responsibility’ always seems obvious until one probes into it. Who is responsible? To whom? Responsible for what? Why? And when? Before the arrival of AI, the answers to these questions were usually self-evident. In AI, however, responsibility is unclear without comprehensive governance.
Also required is fostering a culture of responsible AI use through collaboration within the organization as well as with suppliers and field service. Effective collaboration, we pointed out, brings a diversity of expertise and cross-functional teams that strengthen accountability and reduce blind spots.
By broadening the responsibilities of AI users, collaboration adds foresight into potential problems and helps ensure practical, usable governance while building trust in AI processes and their outcomes. Governance succeeds when AI “becomes everyone’s responsibility.”
Our conclusion was summed up as: Govern Smart, Govern Early, and Govern Always.
In AI, human oversight is essential. In his concluding call to action, Bilello emphatically stated, “It’s not if we’re going to do this but when…and when is now.” Undoubtedly, professionals who proactively embrace AI and adapt to the changing landscape will be well-positioned to thrive in the years to come.
Peter Bilello, President and CEO, CIMdata and frequent Engineering.com contributor, contributed to this article.