Uncategorized - Engineering.com https://www.engineering.com/category/uncategorized/ Fri, 11 Jul 2025 19:29:24 +0000

AI governance—the unavoidable imperative of responsibility https://www.engineering.com/ai-governance-the-unavoidable-imperative-of-responsibility/ Tue, 08 Jul 2025 18:03:42 +0000 Examining key pillars an organization should consider when developing AI governance policies.

The post AI governance—the unavoidable imperative of responsibility appeared first on Engineering.com.

In a recent CIMdata Leadership Webinar, my colleague Peter Bilello and I presented our thoughts on the important and emerging topic of Artificial Intelligence (AI) Governance. More specifically, we brought into focus a new term in the overheated discussions surrounding this technology, now entering general use and, inevitably, misuse. That term is “responsibility.”

For this discussion, responsibility means accepting that one will be held personally accountable for AI-related problems and outcomes—good or bad—while acting with that knowledge always in mind.

Janie Gurley, Data Governance Director, CIMdata Inc.

Every new digital technology presents opportunities for misuse, particularly in its early days when its capabilities are not fully understood and its reach is underestimated. AI, however, is unique, and its governance is especially challenging for three reasons:

  • A huge proportion of AI users in product development are untrained and inexperienced, lacking the caution and self-discipline of the engineers who were the early users of nearly all other information technologies.
  • With little or no oversight, AI users can reach into data without regard to accuracy, completeness, or even relevance. This causes many shortcomings, including AI’s “hallucinations.”
  • AI has many poorly understood risks—a consequence of its power and depth—that many new AI users don’t understand.

While both AI and PLM are critical business strategies, they are hugely different. Today, PLM implementations have matured to the point where they incorporate ‘guardrails,’ mechanisms common in engineering and product development that keep organizational decisions in sync with goals and strategic objectives while holding down risks. AI often lacks such guardrails and is used in ways that solution providers cannot always anticipate.

And that’s where the AI governance challenges discussed in our recent webinar, AI Governance: Ensuring Responsible AI Development and Use, come in.

The scope of the AI problem

AI is not new; in various forms, it has been used for decades. What is new is its sudden widespread adoption, coinciding with the explosion of AI toolkits and AI-enhanced applications, solutions, systems, and platforms. A key problem is the poor quality of data fed into the Large Language Models (LLMs) that genAI (such as ChatGPT and others) uses.

During the webinar, one attendee asked if executives understand the value of data. Bilello candidly responded, “No. And they don’t understand the value of governance, either.”  And why should they?  Nearly all postings and articles about AI mention governance as an afterthought, if at all.

So, it is time to establish AI governance … and the task is far more than simply tracking down errors and identifying users who can be held accountable for them. CIMdata has learned from experience that even minor oversights and loopholes can undermine effective governance.

AI Governance is not just a technical issue, nor is it just a collection of policies on paper. Everyone using AI must be on the same page, so we laid out four elements in AI governance that must be understood and adopted:

Ethical AI, adhering to principles of fairness, transparency, and accountability.

AI Accountability, assigning responsibility for AI decisions and ensuring human oversight.

Human-in-the-Loop (HITL), the integration of human oversight into AI decision-making to ensure sound judgments, verifiable accountability, and authority to intercede and override when needed.

AI Compliance, aligning AI initiatives with legal requirements such as GDPR, CCPA, and the AI Act.

Bilello noted, “Augmented intelligence—the use of AI technologies that extend and/or enhance human intelligence—always has a human in the loop to some extent and, despite appearances, AI is human-created.”

Next, we presented the key pillars of AI governance, namely:

  • Transparency: making AI models explainable, clarifying how decisions are made, and making the results auditable.
  • Fairness: proactively detecting and mitigating biases.
  • Privacy and Security: protecting personal data as well as the integrity of the model.
  • Risk Management: continuous monitoring across the AI lifecycle.

The solution provider’s perspective

Now let’s consider this from the perspective of a solution provider, specifically the Hexagon Manufacturing Intelligence unit of Hexagon Metrology GmbH.

AI Governance “provides the guardrails for deploying production-ready AI solutions. It’s not just about complying with regulations—it’s about proving to our customers that we build safe, reliable systems,” according to Dr. René Cabos, Hexagon Senior Product Manager for AI.

“The biggest challenge?” according to Cabos, is “a lack of clear legal definitions of what is legally considered to be AI. Whether it’s a linear regression model or the now widely used Generative AI [genAI], we need traceability, explainability, and structured monitoring.”

Explainability lets users look inside AI algorithms and their underlying LLMs and renders decisions and outcomes visible, traceable, and comprehensible; explainability ensures that AI users and everyone who depends on their work can interpret and verify outcomes. This is vital for enhancing how AI users work and for establishing trust in AI; more on trust below.
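Explainability can be exercised with even lightweight, model-agnostic checks. As an illustration only (the toy model, data, and function names below are invented, not taken from the webinar), here is a minimal permutation-importance sketch: shuffle one input column, and if the model's error barely moves, that input is not driving its decisions.

```python
import random

def permutation_importance(model, rows, targets, feature_idx, trials=10, seed=0):
    """Average increase in mean-squared error when one input column is
    shuffled -- a model-agnostic probe of which inputs drive decisions."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

    baseline = mse([model(r) for r in rows])
    increases = []
    for _ in range(trials):
        column = [r[feature_idx] for r in rows]
        rng.shuffle(column)
        perturbed = [list(r) for r in rows]
        for row, value in zip(perturbed, column):
            row[feature_idx] = value
        increases.append(mse([model(r) for r in perturbed]) - baseline)
    return sum(increases) / trials

# Hypothetical model: leans entirely on feature 0, ignores feature 1.
model = lambda row: 3.0 * row[0]
rows = [[float(i), float(i % 2)] for i in range(20)]
targets = [model(r) for r in rows]

important = permutation_importance(model, rows, targets, feature_idx=0)
ignored = permutation_importance(model, rows, targets, feature_idx=1)
# `important` is large while `ignored` is 0.0, exposing which data the
# model actually relies on -- the kind of visible, traceable outcome
# explainability is meant to provide.
```

The same shuffle-and-remeasure idea scales to real models; production teams typically reach for an established library rather than rolling their own.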

Organizations are starting to make changes to generate future value from genAI, with large companies leading the way.

Industry data further supports our discussion on the necessity for robust AI governance, as seen in McKinsey & Company’s Global Survey on AI, titled The state of AI – How organizations are rewiring to capture value, published in March 2025.

The study, by Alex Singla et al., found that “Organizations are beginning to create the structures and processes that lead to meaningful value from gen AI,” even with the technology already in wide use—including putting senior leaders in critical roles overseeing AI governance.

The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks. Overall, the use of AI—gen AI, as well as Analytical AI—continues to build momentum: more than three-quarters of respondents now say that their organizations use AI in at least one business function. The use of genAI in particular is rapidly increasing.

“Unfortunately, governance practices have not kept pace with this rewiring of work processes,” the McKinsey report noted. “This reinforces the critical need for structured, responsible AI governance. Concerns about bias, security breaches, and regulatory gaps are rising. This makes core governance principles like fairness and explainability non-negotiable.”

More recently, McKinsey observed that the implications of AI are profound, especially for agentic AI. “Agentic AI represents not just a new technology layer but also a new operating model,” Federico Burruti and four co-authors wrote in a June 4, 2025, report titled When can AI make good decisions? The rise of AI corporate citizens.

“And while the upside is massive, so is the risk. Without deliberate governance, transparency, and accountability, these systems could reinforce bias, obscure accountability, or trigger compliance failures,” the report says.

The McKinsey report points out that companies should “treat AI agents as corporate citizens. That means more than building robust tech. It means rethinking how decisions are made from an end-to-end perspective. It means developing a new understanding of which decisions AI can make. And, most important, it means creating new management (and cost) structures to ensure that both AI and human agents thrive.”

In our webinar, we characterized this rewiring as a tipping point because the integration of AI into the product lifecycle is poised to dramatically reshape engineering and design practices. AI is expected to augment, not replace, human ingenuity in engineering and design; this means humans must assume the role of curator of content and decisions generated with the support of AI.

Why governance has lagged

With AI causing so much heartburn, one might assume that governance is well-established. But no, there are many challenges:

  • The difficulty of validating AI model outputs when systems evolve from advisor-based recommendations to fully autonomous agents.
  • The lack of rigorous model validation, ill-defined ownership of AI-generated intellectual property, and data privacy concerns.
  • Evolving regulatory guidance, certification, and approval of all the automated processes being advanced by AI tools…coupled with regulatory uncertainty in a changing global landscape of compliance challenges and a poor understanding of legal restrictions.
  • Bias, as shown in many unsettling case studies, and the impacts of biased AI systems on communities.
  • The lack of transparency (and “explainability”), with which to challenge black-box AI models.
  • Weak cybersecurity measures and iffy safety and security in the face of cyber threats and risks of adversarial attacks.
  • Public confidence in AI-enabled systems, not just “trust” by users.
  • Ethics and trust themes that reinforce ROI discussions.

Trust in AI is hindered by widespread skepticism, including fears of disinformation, instability, unknown unknowns, job losses, industry concentration, and regulatory conflicts/overreach.

James Markwalder, U.S. Federal Sales and Industry Manager at Prostep i.v.i.p., a product data governance association based in Germany, characterized AI development “as a runaway train—hundreds of models hatch every day—so policing the [AI] labs is a fool’s errand. In digital engineering, the smarter play is to govern use.”

AI’s fast evolution requires that we “set clear guardrails, mandate explainability and live monitoring, and anchor every decision to…values of safety, fairness, and accountability,” Markwalder added. “And if the urge to cut corners can be tamed, AI shifts from black-box risk to a trust engine that shields both ROI and reputation.”

AI is also driving a transformation in product development amid compliance challenges to business, explained by Dr. Henrik Weimer, Director of Digital Engineering at Airbus. In his presentation at CIMdata’s PLM Road Map & PDT North America in May 2025, Weimer spelled out four AI business compliance challenges:

Data Privacy, meaning the protection “of personal information collected, used, processed, and stored by AI systems,” which is a key issue “for ethical and responsible AI development and deployment.”

Intellectual Property, that is, “creations of the mind”; he listed “inventions, algorithms, data, patents and copyrights, trade secrets, data ownership, usage rights, and licensing agreements.”

Data Security, ensuring confidentiality, integrity, and availability, as well as protecting data in AI systems throughout the lifecycle.

Discrimination and Bias, addressing the unsettling fact that AI systems “can perpetuate and amplify biases present in the data on which they are trained,” leading to “unfair or discriminatory outcomes, disproportionately affecting certain groups or individuals.”
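The data privacy challenge above is often tackled before any data reaches an AI system at all. As a hedged illustration (the patterns and function below are hypothetical and nowhere near production-grade; a real deployment would use a vetted PII-detection library and locale-specific rules), personal data can be redacted from free text prior to collection or prompting:

```python
import re

# Illustrative patterns only -- real systems need far broader coverage,
# including named-entity recognition for personal names.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace detected personal data with typed placeholders before the
    text is stored, processed, or sent to an AI system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

msg = "Contact Jane at jane.doe@example.com or +1 555 123 4567 about the claim."
clean = redact(msg)
# The email address and phone number are replaced with [EMAIL] and
# [PHONE]; note that the bare name "Jane" survives, which is exactly why
# regex-only redaction is insufficient on its own.
```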

Add to these issues the environmental impact of AI’s tremendous power demands. In the April 2025 issue of the McKinsey Quarterly, the consulting firm calculated that “Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures by 2030…” (The article is titled The cost of compute: A $7 trillion race to scale data centers.)

Establishing governance

So, how is governance created amid this chaos? In our webinar, we pointed out that the answer is a governance framework that:

• Establishes governance policies aligned with organizational goals, plus an AI ethics committee or oversight board.

• Develops and implements risk assessment methodologies for AI projects that monitor AI processes and results for transparency and fairness.

• Ensures continuous auditing and feedback loops for AI decision-making.
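Two of these framework elements—continuous auditing and human oversight of AI decision-making—can be made concrete in a few lines. A minimal sketch, with an invented toy model and reviewer (an illustration of the idea, not a reference implementation from the webinar):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedDecision:
    inputs: dict
    model_output: str
    final_output: str
    overridden: bool
    timestamp: str

@dataclass
class GovernanceAuditLog:
    """Wraps a model so every decision is recorded and a human reviewer
    can intercede -- continuous auditing plus human-in-the-loop."""
    records: list = field(default_factory=list)

    def decide(self, model, inputs, human_review=None):
        raw = model(inputs)
        final, overridden = raw, False
        if human_review is not None:
            reviewed = human_review(inputs, raw)
            if reviewed != raw:
                final, overridden = reviewed, True
        self.records.append(AuditedDecision(
            inputs=inputs, model_output=raw, final_output=final,
            overridden=overridden,
            timestamp=datetime.now(timezone.utc).isoformat()))
        return final

    def override_rate(self):
        """Feedback-loop metric: how often humans overrule the model."""
        if not self.records:
            return 0.0
        return sum(r.overridden for r in self.records) / len(self.records)

# Usage: a reviewer rejects any approval of an incomplete application.
log = GovernanceAuditLog()
model = lambda inp: "approve" if inp.get("score", 0) > 0.5 else "reject"
reviewer = lambda inp, out: "reject" if not inp.get("complete") else out

log.decide(model, {"score": 0.9, "complete": True}, reviewer)   # stands
log.decide(model, {"score": 0.8, "complete": False}, reviewer)  # overridden
```

A rising override rate is exactly the kind of signal an oversight board would review: it means the model and its human curators are diverging.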

To show how this approach works in practice, we offered case studies from Allied Irish Bank, IBM’s AI Ethics Governance framework, and Amazon’s AI recruiting tool (which exhibited bias against women).

Despite all these issues, AI governance across the lifecycle is cost-effective, and guidance was offered on measuring the ROI impact of responsible AI practices:

  • Quantifying AI governance value in cost savings, risk reduction, and reputation management.
  • Developing and implementing metrics for compliance adherence, bias reduction, and transparency.
  • Justifying investment with business case examples and alignment with stakeholders’ priorities.
  • Focusing continuous improvement efforts on the many ways in which AI governance drives innovation and operational efficiency.
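A bias-reduction metric can be as simple as demographic parity: the gap in favorable-outcome rates across groups. A small illustrative sketch (the groups and numbers below are hypothetical, chosen only to show the measurement):

```python
def demographic_parity_gap(outcomes):
    """Gap between the highest and lowest favorable-outcome rates across
    groups; 0.0 means every group is favored at the same rate.

    `outcomes` maps a group name to a list of booleans (True = favorable)."""
    rates = {group: sum(v) / len(v) for group, v in outcomes.items() if v}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening-model decisions, split by applicant group,
# before and after a mitigation pass.
before = {"group_a": [True] * 8 + [False] * 2,   # 80% favorable
          "group_b": [True] * 4 + [False] * 6}   # 40% favorable
after = {"group_a": [True] * 7 + [False] * 3,    # 70% favorable
         "group_b": [True] * 6 + [False] * 4}    # 60% favorable

gap_before = demographic_parity_gap(before)   # 0.4
gap_after = demographic_parity_gap(after)     # ~0.1: measurable reduction
```

Tracking a number like this over successive model releases gives the "metrics for bias reduction" a concrete, auditable form.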

These four points require establishing ownership and accountability through continuous monitoring and risk management, as well as prioritizing ethical design. Ethical design is the creation of products, systems, and services that prioritize benefits to society and the environment while minimizing the risks of harmful outcomes.

The meaning of ‘responsibility’ always seems obvious until one probes into it. Who is responsible? To whom? Responsible for what? Why? And when? Before the arrival of AI, the answers to these questions were usually self-evident. In AI, however, responsibility is unclear without comprehensive governance.

Also required is the implementation and fostering of a culture of responsible AI use through collaboration within the organization as well as with suppliers and field service. Effective collaboration, we pointed out, leads to diversity of expertise and cross-functional teams that strengthen accountability and reduce blind spots.

By broadening the responsibilities of AI users, collaboration adds foresight into potential problems and helps ensure practical, usable governance while building trust in AI processes and their outcomes. Governance succeeds when AI “becomes everyone’s responsibility.”

Our conclusion was summed up as: Govern Smart, Govern Early, and Govern Always.

In AI, human oversight is essential. In his concluding call to action, Bilello emphatically stated, “It’s not if we’re going to do this but when…and when is now.” Undoubtedly, professionals who proactively embrace AI and adapt to the changing landscape will be well-positioned to thrive in the years to come.

Peter Bilello, President and CEO, CIMdata and frequent Engineering.com contributor, contributed to this article.

Advancing automotive electronics by thinking inside the box https://www.engineering.com/advancing-automotive-electronics-by-thinking-inside-the-box/ Mon, 16 Jun 2025 14:55:14 +0000 https://www.engineering.com/?p=140312 A look at connector solutions that help engineers meet growing in-vehicle demands.

TTI Inc. has sponsored this post.

Image: Molex.

As modern vehicles grow more sophisticated, automakers are integrating an increasing number of electronic features inside the cabin. Infotainment systems, steering wheel controls, LED lighting arrays, heads-up displays, smart mirrors, and power-operated windows and seats all rely on compact, high-performance electronics embedded throughout the vehicle interior.

Unlike “outside-the-box” connectors—those used in safety-critical environments and governed by standards such as USCAR — “inside-the-box” connectors are installed within sealed electronic modules. These internal modules don’t face the same thermal extremes but must still operate under conditions of shock and vibration within a limited space.

To address these constraints, Molex offers a range of miniaturized connectors engineered specifically for use within automotive electronic modules. These products support flexible configurations and incorporate features that help guard against common points of failure—offering practical solutions for a wide variety of in-cabin systems.

“Although automotive standards like USCAR and LV214 aren’t required, we still design our products and test them to many of these standards just as an added layer of assurance to de-risk the connectors in these applications,” says Nathan Piette, Group Product Manager for the Power and Signal business unit at Molex.

Image: Molex.

Key Molex “Inside the Box” Products

The Micro-Fit 3.0 connector system is a longstanding option in Molex’s compact connector lineup. With a 3.0 mm pitch and current ratings up to 10.5 A per pin, it comes in a wide range of configurations, including wire-to-wire, wire-to-board, and board-to-board. Designers can choose from termination styles such as through-hole, surface-mount, and compliant pin. Most versions are rated to 105°C, with some extending to 125°C. The system supports both V-0 and V-2 resin types and offers either tin or gold terminal plating. While tin is the standard choice for cost reasons, gold offers a corrosion-resistant alternative for harsher environments.

For additional retention strength, an optional terminal position assurance (TPA) feature helps ensure terminals are fully seated during assembly, reducing the risk of intermittent connections caused by incomplete insertion. TPAs also prevent terminals from backing out if cables are tugged or bent after installation.

Micro-Fit+ builds on this platform with improved current handling—up to 13 A per pin, with a 14-gauge option in development that will raise it to 15 A. It also reduces mating force by around 40% compared to standard Micro-Fit and other comparable solutions. Added features include a connector position assurance (CPA) mechanism to reduce the risk of unmating by providing a secondary locking feature, as well as TPA for terminal retention. The entire system is rated to 125°C.

“That’s a T3 level in USCAR automotive parlance,” says Piette. “A lot of inside-the-box applications only require 85°C or 105°C temperature rating. This is a super robust system that far exceeds the performance requirements of the typical use case.”

“It’s a premium product,” adds John Crimmins, Worldwide Account Manager at Molex. “It’s foolproof. You can’t mismate it. It’s the highest power in the industry for something that small.”

Where space constraints are more pressing, the Micro-Lock Plus series offers pitches as small as 1.25 mm and 2 mm, with 1.5 mm on the way. The mated retention force — the force required to pull the connectors apart once they’re engaged — is 49 N, which is unusually high for this class of interconnects.

“When you think of small connector systems, you might think they’re flimsy or maybe delicate,” says Piette. “This is a reliable, robust micro-miniature wire-to-board system.”

The product family also supports potting, so it’s well-suited for customers who use epoxy, conformal coating, or other techniques to seal their boards against environmental ingress. The 1.25 mm version supports up to 3.6 A per pin; the 2 mm version supports up to 4.7 A. The series also includes TPA features.

“Molex has the broadest portfolio of micro-miniature wire-to-wire and wire-to-board products from 2 mm pitch and below on the market,” says Piette.

The Pico-Clasp family is Molex’s flagship signal connector in the micro-miniature wire-to-board category. With a 1 mm pitch, it offers one of the most compact footprints in the portfolio. The series includes a wide range of layout and termination styles, including vertical and right-angle orientations, single- and dual-row formats, and surface-mount versions. A variety of locking features are also available, including friction locks for basic retention, and both outer and inner positive locks that provide audible feedback during mating.

“Our over-80-year history of connector design and manufacturing know-how really sets us apart—especially in the power and signal space,” says Piette. “These products are core to our product portfolios overall, and so widely used and applied in the market. We have had a lot of experience and feedback in developing and optimizing these. Micro-Fit has been out for decades. The rest of the world has since copied and pasted that design because of its industry-leading quality and capabilities.”

“While some competitors have a lot of these attributes, rarely do any of them have all,” adds Crimmins. “We have the most options, and they’re readily available through TTI with no lead time.”

To learn more, visit Molex at TTI.

Dealing with legacy software during a digital overhaul https://www.engineering.com/dealing-with-legacy-software-during-a-digital-overhaul/ Tue, 10 Jun 2025 14:50:12 +0000 https://www.engineering.com/?p=140448 Columnist and manufacturing engineer Andrei Lucian Rosca explains how legacy software and systems are important pieces of the digital transformation puzzle.

The big dilemma everyone faces when overhauling digital platforms is what should the business do with legacy software? In this context, the word “legacy” represents outdated tools, software or hardware that are still being used by companies and are still vital for operations. Their age and outdated nature pose various problems, such as high maintenance costs, security vulnerabilities and integration issues, but they can be integral to the day-to-day operations of a company.

Throughout my career, I have had exposure to several types of legacy software in different companies and industries. Organizations approach the transition to a digital platform in one of three ways: they view the legacy software as crucial and insist it must be integrated (high resistance), they keep using it in parallel with the digital approach regardless of the high cost, or they transition completely to a digital system, which is surprisingly the least common approach.

During my time working for a global automotive company, I encountered a semi-collaborative approach to data sharing and working together. The main problem for engineering was collaborating within a private ecosystem. This was caused by several factors—different rules and regulations at each location, legacy software and a legacy mentality driven by several acquisitions that were never fully integrated. Our north star became the migration to a digital platform that bridged the divide and got the locations to work together. This approach was ultimately successful, and members of the organization could easily work on their projects from any location.

Of course, issues appeared with the transition to a digital thread, including numbering schemes and streamlining or adapting processes to local needs. I learned that it is very easy to fall down the rabbit hole if you entertain every little detail; instead, your main focus should always be the agreed scope. During this transition, I had to quell a lot of debates on minor things that could have derailed our projects, and there were many open questions in the initial phase, such as our desired outcomes from moving to a complete digital thread, which software to migrate or discontinue, vendor selection and many others.

Indeed, it’s worth taking time at the beginning to design your solution as thoroughly as possible—it saves a lot of headaches down the road and, most importantly, saves money. The role of an engineer in this specific spot is to balance the budget against features. First and foremost, you must bridge the divide between design and manufacturing—one of the first things I learned as I was cutting my teeth in my first engineering job. You can design a product or a solution as neatly as possible, but in the end you must produce it, and producing a product is a whole other beast than drawing it on your computer. Understanding the design component and having at least a surface-level understanding of how the product is manufactured gave me enough credibility with the shopfloor people that I became the go-to person for the head of manufacturing to raise their topics and work with them to incorporate those topics into the implementation process.

One of the most important but frequently ignored topics is user acceptance. The people who work with a specific piece of software are usually subject matter experts (SMEs) and know it in detail. Because of this, it can be tough to gain their buy-in, but they are your most important asset in a legacy-software-to-digital-thread transformation. They have a depth of knowledge that is critical to a successful migration or transition. Who knows the legacy software’s outputs? Who knows how the processes were designed? Who knows which person downstream needs to be informed? The subject matter expert will make your life a thousandfold easier, so include them as early as possible, align on scope and have them help you build the solution.

If I were to choose one thing to avoid at all costs during a digital transformation, it would be ignoring parts of the organization. My success in this project was a result of the frequent consultation with the people handling day-to-day business of the organization. Since we started with several locations during ramp up, we ended up working very closely with people from all over the production process. This resulted in rapid feedback on anything that we did—especially on what we did wrong. That feedback is crucial, as we could incorporate it and adapt from sprint to sprint.

Legacy software is still present in many companies, but it should not be seen as a malign piece of a process that will kill a project before it starts. Rather, it is an important piece of the puzzle that fit the organization at a specific time in its existence. As organizations mature and digital becomes the new norm, legacy software should be treated as an important aspect of any migration scenario, even if it will ultimately be replaced.

Andrei Lucian Rosca is an engineer with a bachelor’s degree in mechanical engineering, focused on CAD software, with more than 10 years of experience in digital transformation projects in several industries, from automotive to consumer goods. He is currently exploring innovative solutions (e.g., IoT, AI) and how to include them in future projects.

How engineers can mitigate AI risks in digital transformation – part 2 https://www.engineering.com/how-engineers-can-mitigate-ai-risks-in-digital-transformation-part-2/ Wed, 04 Jun 2025 18:03:14 +0000 https://www.engineering.com/?p=140282 Exploring five more of the most common AI risks and how to mitigate them.

AI functionality is increasingly a component of digital transformation projects. Delivering AI functionality adds business value to digital transformation. However, engineers will encounter multiple AI risks in these projects. Engineers can use these risk topics as a helpful starter list for their digital transformation project risk register.

Let’s explore the last five of the ten most common AI risks and how to mitigate them. To read about the first five, click here.

Inadequate AI algorithm

The AI algorithms available to build AI models vary widely in scope, quality and complexity. Also, project teams often revise the algorithms they’ve acquired. These two facts create a risk of using an inadequate or inappropriate AI algorithm for the digital transformation problem.

Business teams can reduce their risk of using an inadequate AI algorithm by testing algorithms from multiple sources for:

  • Desired outputs using well-understood training data.
  • Software defects.
  • Computational efficiency.
  • Ability to work with lower quality or lower volume of data.
  • Tendency to drift when new training data is added.
  • Explainability.

AI algorithms are a family of mathematical procedures that read the training data to create an AI model.
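The first test above—checking for desired outputs using well-understood training data—can be sketched with plain Python. The candidate "algorithms," data, and names below are invented for illustration: two fitting procedures compete on data whose true relationship is known, and the one with the lower held-out error wins.

```python
def fit_mean(train):
    """Baseline 'algorithm': always predict the mean target."""
    mean = sum(y for _, y in train) / len(train)
    return lambda x: mean

def fit_linear(train):
    """Ordinary least-squares line through the training points."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

def holdout_mse(fit, data, split=0.7):
    """Fit on the first `split` fraction, score on the held-out rest."""
    cut = int(len(data) * split)
    model = fit(data[:cut])
    held_out = data[cut:]
    return sum((model(x) - y) ** 2 for x, y in held_out) / len(held_out)

# Well-understood training data: the true relationship is y = 2x + 1.
data = [(float(x), 2.0 * x + 1.0) for x in range(20)]
scores = {name: holdout_mse(fit, data)
          for name, fit in [("mean", fit_mean), ("linear", fit_linear)]}
best = min(scores, key=scores.get)
# The linear candidate reproduces the known answer with zero held-out
# error; the baseline does not -- exactly the comparison the bullet
# list recommends running across algorithms from multiple sources.
```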

Inadequate AI model

The risk of an inadequate AI model can result from many factors. The principal ones are an inadequate AI algorithm, problematic rules and insufficient training data.

Business teams can reduce their risk of using an inadequate AI model by testing the model repeatedly using the following techniques:

  • Fine-tuning model parameters.
  • Functionality testing.
  • Integration testing.
  • Bias and fairness testing.
  • Adversarial testing using malicious or inadvertently harmful input.

The AI model is the object saved after running the AI algorithm by reading the supplied training data. The model consists of the rules, numbers, and any other algorithm-specific data structures required to make predictions when the model uses real-world data for production use.
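Adversarial testing, in its simplest form, feeds malicious or malformed inputs to the model and verifies that it fails loudly rather than guessing. A minimal sketch (the model and its input contract are entirely hypothetical):

```python
import math

def classify(score):
    """Hypothetical production model: maps a numeric risk score to a
    label, hardened to reject malformed input instead of guessing."""
    if isinstance(score, bool) or not isinstance(score, (int, float)):
        raise ValueError("score must be numeric")
    if math.isnan(score) or math.isinf(score):
        raise ValueError("score must be finite")
    return "high" if score > 0.7 else "low"

def adversarial_test(model, cases):
    """Feed malicious or malformed inputs to the model; a case is
    'rejected' (safe) if the model raises a clear error."""
    results = {}
    for name, value in cases:
        try:
            model(value)
            results[name] = "accepted"   # slipped past the guardrails
        except ValueError:
            results[name] = "rejected"   # handled safely
    return results

report = adversarial_test(classify, [
    ("nan", float("nan")),
    ("inf", float("inf")),
    ("string", "0.9"),
    ("bool", True),
    ("huge", 1e300),
])
# The absurd-but-finite "huge" case comes back accepted -- a finding
# for the team to triage: should scores be range-checked to [0, 1]?
```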

Insufficient understanding of the data elements

Some data elements or features always impact AI model results more than others. When a project team does not sufficiently understand which data elements influence model results more than others, the situation creates the risk of:

  • Inaccurate tuning of the AI algorithm.
  • Disappointing or misleading model outputs.

Business teams can reduce their risk of misunderstanding data elements by:

  • Testing how dramatically model results change in response to small changes in value or distribution of values of specific data elements.
  • Confirming whether similarly named data elements across the data sources are in fact identical, to avoid misunderstanding their meanings.
  • Ensuring that the data quality of the most critical data elements is the highest.

Data elements are columns in a relational database.
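The first mitigation above—testing how dramatically model results change in response to small changes in a data element—can be sketched as a one-function sensitivity probe. The pricing model below is invented purely for illustration:

```python
def sensitivity(model, row, feature_idx, delta=0.01):
    """Relative change in model output when one data element is nudged
    by a small fraction -- a quick probe of which columns dominate."""
    baseline = model(row)
    perturbed = list(row)
    perturbed[feature_idx] *= (1.0 + delta)
    return abs(model(perturbed) - baseline) / (abs(baseline) + 1e-12)

# Hypothetical pricing model: column 0 dominates, column 1 barely matters.
price = lambda r: 1000.0 * r[0] + 2.0 * r[1]
row = [5.0, 5.0]

scores = {i: sensitivity(price, row, i) for i in range(len(row))}
ranked = sorted(scores, key=scores.get, reverse=True)
# ranked[0] identifies the data element whose quality matters most --
# the column where, per the last bullet, data quality should be highest.
```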

Inadequate team competencies

Given the high demand for AI and data science talent, it’s common for digital transformation project teams not to have all the technical competencies they’d like. Inadequate team competencies create the risk that the quality of AI model results is insufficient, and no one will recognize the problem.

Business teams can reduce the risk of inadequate team competencies by:

  • Proactively training team members to boost competencies.
  • Assigning enough subject-matter expertise for the various data sources to the project team.
  • Engaging external consultants to fill some gaps.

The required project team roles and related competencies are likely to include:

  • Business analysts.
  • Data scientists.
  • Subject-matter experts.
  • Machine learning engineers.
  • Data engineers and analysts.
  • AI architects.
  • AI ethicists.
  • Software developers.

Insufficient attention to responsible AI

In their enthusiasm for digital transformation project work, the team often neglects responsible AI even though they are not acting unethically. Responsible AI is about ethics. Ethics is an awkward, abstract topic for project teams.

Business teams can reduce the risk of insufficient attention to responsible AI by:

  • Scoping your fairness and bias assessment work based on the sensitivity of the data you will use.
  • Investigating the provenance of external data sources.
  • Evaluating the compliance and bias of external data.
  • Engaging with AI ethicists during design and testing.
  • Conducting a fairness and bias assessment of AI model results.
  • Designing a process to monitor AI model results regularly for compliance and bias once the AI application is in routine production use.
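One concrete piece of a fairness and bias assessment is comparing positive-outcome rates across groups defined by a sensitive attribute. The sketch below is illustrative only: the function names are invented, and the "four-fifths" threshold is a common US employment-selection rule of thumb, not a requirement from the article or any AI regulation.

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per group, e.g. approvals broken down
    by a sensitive attribute (1 = positive outcome, 0 = negative)."""
    rates = {}
    for g in set(groups):
        member_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(member_outcomes) / len(member_outcomes)
    return rates

def disparate_impact(outcomes, groups):
    """Ratio of the lowest to the highest group selection rate.
    The 'four-fifths' rule of thumb flags ratios below 0.8 for review."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())
```

A real assessment would go further, adding significance testing and domain review, but a ratio like this is a useful first monitoring signal once the application is in routine production use.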

If you come to believe that the team is consciously acting in an unethical way, it’s time to fire people.

The OECD principles for responsible stewardship of trustworthy AI are:

  • Inclusive growth, sustainable development and well-being.
  • Human-centered values and fairness.
  • Transparency and explainability.
  • Robustness, security and safety.
  • Accountability.

When engineers proactively identify and mitigate AI risks in their digital transformation projects, they greatly improve the odds of delivering the planned business benefits.

The post How engineers can mitigate AI risks in digital transformation – part 2 appeared first on Engineering.com.

In the rush to digital transformation, it might be time for a rethink https://www.engineering.com/in-the-rush-to-digital-transformation-it-might-be-time-for-a-rethink/ Tue, 03 Jun 2025 15:03:32 +0000 https://www.engineering.com/?p=140223 One of the main themes from the PLM Road Map and PDT North America event was just how much we still have to learn about going digital.

In the breakneck pace of digital transformation, is comprehension being left behind? Do we need a rethink? No one at PLM Road Map and PDT North America, a collaboration with BAE Systems’ Eurostep organization—a leading gathering of product lifecycle management (PLM) professionals—said that, at least not in so many words, but presentations by one user after another raised the issue.

In my opening presentation, I confronted these issues by positioning PLM as a strategic business approach, thereby joining it to digital transformation, which has been CIMdata’s focus for more than four decades. And in the conference’s thought leadership vignettes, multiple PLM solution providers stressed connectivity and new tools to aid understanding and comprehension; in these vignettes, many supported my positioning of PLM.

The issues of comprehension were presented to conference attendees from several points of view. Many presenters delved into data and information quality—accuracy, completeness, structure, ownership, possible corruption, its exploding volume, and the steady growth of regulation.

Some numbers that made many attendees uncomfortable:

• There are hundreds of engineering software tools, and new ones appear every week. Every engineering organization uses dozens of tools, systems, solutions, “apps,” and platforms; their constant updates are often disruptive to users.

• About 800 standards apply to engineering information and its connections to the rest of the enterprise, said Kenneth Swope, The Boeing Co.’s Senior Manager for Enterprise Interoperability Standards and Supply Chain Collaboration.

• 30 terabytes of data are generated in CAD and manufacturing for each of the hundreds of engines produced by Rolls-Royce PLC every year, reported Christopher Hinds, Head of Enterprise Architecture. Some output files from CFD analyses exceed 650 GB per part, he added.

Speakers also discussed how digital transformation is revealing the shortfalls in comprehension of data and information. “If we can’t agree on what data is, we can’t use it,” observed Swope. These shortfalls are caused by accelerated product development, shorter product lifecycles, and an explosion of product modifications and differentiations thanks to the software now embedded in every product.

A graphic construction of the comprehension challenges in digital transformation. (Image: CIMdata Inc.)

In my conference-opening presentation, “PLM’s Integral Role in Digital Transformation,” I stressed that companies need to think beyond digitizing data, that merely converting analog data to digital isn’t enough. Yes, digitalization is at the core of an organization’s digital transformation … but moving to a digital business requires rethinking many organizational structures and business processes as well as understanding the growing value of data.

So how does PLM fit into this? Only by seeing PLM as a strategic business approach can its depth and breadth in the reach of digital transformation be comprehended. PLM concentrates the organization’s focus on the collaborative creation, use, management, and dissemination of product-related intellectual assets—a company’s core asset. This makes PLM the platform for integrating external entities into lifecycle processes—thereby enabling end-to-end (E2E) connectivity … and the optimization of associated business functions and entities throughout the lifecycle.

Don’t forget, I cautioned, that the data generated from your products and services often becomes more valuable than the products themselves. Why? Because product data touches all phases of a product’s life, these digital assets play a central role in an enterprise’s digital transformation. Hence I warned that digital transformation will collapse without the implementation of the right set of data governance policies, procedures, structure, roles, and responsibilities.

Many presenters also noted how PLM and digital transformation are helping them deal with the challenges of stiffer competition, rising costs, downward pressure on pricing, customer demands for more functionality and longer service lives, data-hungry Artificial Intelligence (AI), and Product as a Service (PaaS) business models.

And while all these factors aggravate the issues I addressed, speakers expressed confidence that they will eventually reap the benefits of PLM and digital transformation—starting with getting better products to market sooner and at lower cost.

Another challenge with digital transformation and comprehension is the multitude of ways that presenting companies organize and identify their engineering systems and functions. All these manufacturers use basically the same processes to develop and produce a new product or system, but these tasks are divided up in countless ways; no two companies’ product-development nomenclatures are the same.

Sorting this out is crucial to the understanding and comprehension of the enterprise’s data and information. Gaining access to departmental “silos” of data is increasingly seen as just the beginning of digging information out of obsolete “legacy” systems and outdated formats.

Dr. Martin Eigner’s concept of the extended digital thread integrated across the product lifecycle. (Image: Eigner Engineering Consult.)

In the conference’s Day One keynote presentation, Martin Eigner of Eigner Engineering Consult, Baden-Baden, Germany, spoke on “Reflecting on 40 Years of PDM/PLM: Are We Where We Wanted to Be?” The answer, of course, is both yes and no.

Dr. Eigner expressed his frustration with PLM’s fragmented landscape. We are still tied to legacy systems (ERP, MES, SCM, CRM) that depend on flawed interfaces reminiscent of outdated monolithic software, he pointed out. As digitalization demands and technologies like IoT, AI, knowledge graphs, and cloud solutions continue to grow, the key question is: Can the next generation of PLM solutions meet the challenges of digital transformation with the advanced, modern software technologies available?

“The vision of PLM still exists,” Dr. Eigner continued, “but the term was hijacked in the late 1990s while the PLM vision was still being discussed. Vendors of product data management (PDM) solutions applied the term for their PDM offerings” which “mutated from PDM to PLM virtually overnight.”

“Ultimately,” he noted, “business opportunities and ROI will be significantly boosted by the overarching Digital Thread on Premise or as a Service,” leveraged with “knowledge graphs connected with the Digital Twin.” Applying “generative AI can optionally create an Omniverse with enhanced data connectivity and traceability.”

This stage of digital transformation, he summarized, “will improve decision making and support AI application development.” In turn, these “will revolutionize product development, optimize processes, reduce costs, and position the companies implementing this at the forefront of their industries. And we are coming back to our original PLM vision as the Single Source of Truth.”

Uncomfortably ambitious productivity improvements with AI and digital transformation. (Image: GE Aerospace)

The challenges of getting this done were addressed by Michael Carlton, Director, Digital Technology PLM Growth at GE Aerospace, Evendale, Ohio, using what he termed “developing a best-in-class Enterprise PLM platform to increase productivity and capacity amid rising demands for digital thread capabilities, technology transformation, automation, and AI.” His remedies included “leveraging AI, cloud acceleration, observability, analytics, and automation techniques.”

“Uncomfortably ambitious productivity improvements,” Carlton continued, include “reduction in PLM environment build cycle time, parallel development programs on different timelines, shifting testing left (i.e., sooner), improved quality throughout, automated data security tests, and growing development capacity.”

IDC slide showing how PLM maintains the digital threads that define the product ecosystem by weaving together product development, manufacturing, supply chain, service to balance cost, time, and quality. (Image: IDC.)

The issue of PLM and the boardroom was raised in a presentation by John Snow, Research Director, Product Innovation Strategies, at International Data Corp. (IDC), Needham, Mass. In his data-packed Day 2 keynote, Snow detailed how complex this issue is and the “disconnect between corporate concerns and engineering priorities.”

PLM, observed Snow, “maintains the digital threads that define the product ecosystem: weaving together product development, manufacturing, the supply chain, and service to balance cost, time, and quality.”

The opportunity for engineering in the boardroom is that “80% of product costs is locked in during design,” while the Cost of Goods Sold (COGS) is 10X to 15X higher than the Cost of R&D (CR&D), Snow explained.

“Poor product design,” Snow continued, “has an outsized impact on COGS, but good design does, too.” Thus, “increasing the engineering budget can have a big impact on profits (if properly allocated).” Current efforts to leverage design for manufacturing & assembly (DFM/A) are falling short, he added.
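Snow's leverage argument can be made concrete with a back-of-the-envelope calculation. All of the figures below are invented for illustration; only the 10X–15X ratio and the 80% share come from his talk.

```python
# Hypothetical figures illustrating why design quality has outsized leverage.
cr_and_d = 10_000_000        # annual Cost of R&D (CR&D), invented
cogs = 12 * cr_and_d         # COGS at roughly 10x-15x CR&D (Snow's ratio)
design_locked_share = 0.80   # share of product cost locked in during design

addressable = design_locked_share * cogs      # cost that design decisions shape
savings_from_design = 0.01 * addressable      # a 1% design-driven COGS cut
savings_from_rd_cut = 0.01 * cr_and_d         # vs. a 1% cut to R&D spend itself
leverage = savings_from_design / savings_from_rd_cut
print(leverage)  # 9.6: the design-driven saving is nearly 10x larger
```

With these assumed numbers, a dollar of COGS improvement earned through better design dwarfs the same percentage trimmed from the engineering budget, which is the case for increasing, not cutting, that budget.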

Hollister’s roller-coaster journey toward PLM showing key decision points; the loop indicates a stop and restart. (Image: Hollister Inc.)

Near the other end of the corporate size scale from GE Aerospace is Hollister Inc., Libertyville, Ill., an employee-owned medical supplies manufacturer of ostomy and continence products. Stacey Burgardt, Hollister’s Senior Program Manager for PLM, addressed PLM implementation challenges in her presentation on “The Role of Executive Sponsorship in PLM Success at Hollister.”

Burgardt, formerly R&D and Quality Leader, outlined Hollister’s PLM vision as three transformations:

• To product data centric from document centric

• To digital models from drawings, and

• To live collaboration and traceability from systems of record.

In her appeal to sponsors, Burgardt estimated total expected benefits through 2030 at $29 million. This sum included significant gains from improved associate efficiency, lower software costs, and reduced waste, scrap, and rework.

Unlike every other presenter, Hollister has yet to implement PLM, though not for lack of effort dating back to 2018. It is currently finalizing PLM solution selection and planning. Burgardt focused on the need for executive sponsorship and strategies to secure it. “Identify the right executive sponsors in the governance model,” she said, “including the CEO and CFO, the leaders of the main functions that PLM will impact, and someone who has seen a successful PLM and can advocate.”

“Be persistent,” she concluded, “and be adaptable.” Address sponsors’ concerns, and “if it’s not the right time, keep the embers burning and try again.”

And this led to my conference summation topic: sponsorship. The fact that PLM and digital transformation are now recognizably tougher and will take longer than once hoped led to my Executive Spotlight panel discussion at the end of Day 2: “The Role of the Executive Sponsor in Driving a PLM Transformation.” My four panelists agreed high-level sponsorships are indispensable … and we discussed how to identify, enlist, and maintain those sponsorships.

To conclude, looking back over the two days’ presentations, I think the answer is “yes” to my questions in the first paragraph. And the sooner this rethink gets going the better.

The post In the rush to digital transformation, it might be time for a rethink appeared first on Engineering.com.

VW’s digital journey balances bold moves with the realities of execution https://www.engineering.com/vws-digital-journey-balances-bold-moves-with-the-realities-of-execution/ Thu, 22 May 2025 15:44:40 +0000 https://www.engineering.com/?p=139765 Volkswagen’s digital trajectory reveals both the promise of technology adoption and the hurdles of industrial-scale implementation.

Inside the line at Volkswagen’s Chattanooga manufacturing plant. (Image: Volkswagen)

Volkswagen’s recent strategic moves highlight a company at the crossroads of transformation. On one hand, VW is making bold investments in AI-driven engineering and forging strategic alliances to position itself as a leader in next-generation automotive innovation. On the other, it faces the stark realities of large-scale execution—rising manufacturing costs, operational challenges, new electric vehicle (EV) entrant competition, and financial pressures.

To stay competitive, VW has embraced generative AI, digital twins, and software-defined vehicles. Announced in December 2024, its partnership with PTC and Microsoft to develop Codebeamer Copilot aims to revolutionize Application Lifecycle Management (ALM) with AI automation. Meanwhile, the adoption of Dassault Systèmes’ 3DEXPERIENCE platform signals a commitment to integrating model-based engineering (MBE) for optimized vehicle development.

At the same time, Volkswagen’s $5.8 billion investment in an alliance with Rivian showcases a strategic bet on the future of electric mobility. However, alongside these forward-looking investments, Volkswagen must grapple with fundamental execution challenges—managing rising production costs, navigating supply chain disruptions, and ensuring that its transformation efforts deliver tangible business outcomes.

Accelerating engineering transformation

Volkswagen’s collaboration with PTC and Microsoft to develop Codebeamer Copilot signals a strong commitment to leveraging generative AI in Application Lifecycle Management (ALM). Codebeamer is being augmented with AI-driven automation to enhance software development efficiency, a critical step as automotive manufacturers increasingly shift towards software-defined vehicles.

Software is no longer just an enabler; it is now at the heart of automotive product differentiation. For Volkswagen, a legacy automaker, competing with software-native disruptors requires a fundamental shift in how vehicle development is structured. Codebeamer Copilot represents more than an AI-enhanced ALM tool—it is part of a broader shift toward agile, continuous software deployment, ensuring that VW’s vehicles remain at the forefront of digital innovation.

Codebeamer is an ALM platform for advanced product and software development. (Image: PTC)

Simultaneously, VW’s adoption of Dassault Systèmes’ 3DEXPERIENCE platform aims to optimize vehicle development processes. This move reinforces the industry’s pivot towards integrated digital twins, where real-time collaboration and model-based engineering (MBE) accelerate product lifecycle governance. The 3DEXPERIENCE platform aligns with the growing need for cross-functional collaboration between mechanical, electrical, and software engineering teams, bridging gaps that have historically slowed down the development process. While these investments showcase Volkswagen’s intent to streamline development, execution remains key—successful deployment will hinge on cultural adoption and seamless integration with legacy systems.

Strategic EV alliances: the Rivian gambit

Volkswagen’s $5.8 billion partnership with Rivian announced in November 2024 signals a strategic hedge against legacy constraints. The alliance provides VW with access to Rivian’s advanced EV architecture, allowing the German automaker to accelerate its EV portfolio without reinventing the wheel. In return, Rivian gains the financial backing and industrial scale necessary to compete in an increasingly saturated EV market.

This collaboration is emblematic of a broader trend in the automotive industry: the shift from closed innovation models to open collaboration. OEMs are recognizing that building everything in-house is neither cost-effective nor agile enough for the rapid technological shifts defining the industry. By working with Rivian, VW positions itself to benefit from the startup’s agility while bringing its own mass-production expertise to the table.

However, alliances alone are not enough. To realize the full potential of this partnership, VW must overcome internal friction—balancing traditional automotive development processes with the more iterative, software-driven approach championed by Rivian. Success will depend on VW’s ability to integrate new ways of working without disrupting existing operations.

Executing transformation amid industrial pressures

While Volkswagen continues to push forward with its digital and electrification strategies, operational challenges remain a persistent theme. Rising material costs, supply chain bottlenecks, and production inefficiencies have placed significant financial pressure on the company. In 2024, VW reported 4.8 million vehicle deliveries—an impressive figure, but one that comes against the backdrop of increasing competition from Tesla, Chinese automakers such as local market leader BYD, and emerging EV startups.

Manufacturing complexity is another hurdle. Unlike Tesla, which designs its vehicles with highly streamlined production methods, VW is contending with legacy platforms that require significant re-engineering to accommodate next-generation propulsion systems and digital architectures. This tension between past and future is not unique to VW but serves as a reminder that digital transformation is as much about unlearning as it is about innovation.

To bridge this gap, Volkswagen must double down on operational efficiency while ensuring that its transformation investments deliver clear, measurable returns. This means refining its global production footprint, streamlining supplier relationships, and investing in workforce upskilling to ensure that its employees are equipped for the future of mobility.

Balancing disruption with execution

Volkswagen’s trajectory exemplifies the duality of digital transformation: bold investments in AI-driven engineering and strategic alliances, juxtaposed with the realities of industrial-scale execution. The success of these initiatives will depend on VW’s ability to navigate integration complexities, mitigate disruption risks, and sustain operational resilience.

For manufacturing engineering leaders, the key takeaway is clear: transformation is not just about adopting new technologies but ensuring their successful convergence with business imperatives. It requires a relentless focus on execution—aligning investments in AI, ALM, PLM, and EV strategy with pragmatic, scalable implementation roadmaps. The future of Volkswagen, and indeed the broader automotive industry, will be defined by those who can master this balancing act.

As digital and physical converge faster than ever, Volkswagen’s journey serves as a crucial case study that highlights both the promise and pitfalls of large-scale digital reinvention. The automaker’s success will hinge on its ability to harmonize technology adoption with industrial pragmatism, ensuring that innovation is not just pursued but effectively realized at scale.

The post VW’s digital journey balances bold moves with the realities of execution appeared first on Engineering.com.

Aras Software at 25: PLM transformation through connected intelligence https://www.engineering.com/aras-software-at-25-plm-transformation-through-connected-intelligence/ Sat, 17 May 2025 13:01:53 +0000 https://www.engineering.com/?p=139728 Its trajectory mirrors the wider PLM market shift—from rigid systems to flexible, integrated platforms.

Roque Martin, CEO at Aras Software, opened ACE 2025 by reflecting on Aras’ 25-year evolution—from early PLM strategy roots to hands-on innovation and enterprise-wide digital thread leadership. (Image: Lionel Grealou)

Nestled in Boston’s Back Bay during the first three days of April, ACE 2025 marked a key milestone: Aras’ 25th anniversary. It was a celebration of a quarter-century of innovation in the PLM space, built on the vision of founder Peter Schroer. What began as a small gathering has grown into a global forum for transformation. Aras Innovator continues to position itself as a challenger to legacy PLM systems, offering an open and adaptable platform.

“Building on the company’s red box concept,” as presented several years ago by John Sperling, SVP of Product Management, the Aras strategy is rooted in an overlay approach and containerization—designed to simplify integration and support relationship-driven data management. CEO Roque Martin described Aras’ evolution from its early roots in PDM and document control to today’s enterprise-scale PLM platform—enabling connected intelligence across functions and domains.

This trajectory mirrors the wider PLM market shift—from rigid systems to flexible, integrated platforms that support customization, adaptability, and data fluidity across engineering and operational boundaries.

AI, cloud, and the connected enterprise

Nowadays, it is close to impossible to discuss tech/IT/OT or digital transformation without exploring new opportunities from artificial intelligence (AI). Cloud and SaaS are established deployment standards across enterprise software solutions. Nevertheless, PLM tech solutions often lag when it comes to adopting modern architecture and licensing models.

The intersection of PLM and AI is rapidly redefining transformation strategies. Aras’ ACE 2025 conference embraced this momentum through the theme: “Connected Intelligence: AI, PLM, and a Future-Ready Digital Thread.” This theme reflects how AI has become more than an emerging trend—it is now central to enabling smarter decision-making, increased agility, and value creation from data.

While cloud and SaaS have become standard deployment models, PLM platforms have historically struggled to keep pace. Aras is challenging that with an architecture that emphasizes openness, extensibility, and modern integration practices—foundational enablers for enterprise-grade AI. In this landscape, the importance of aligning AI readiness with digital thread maturity is growing. PLM no longer sits at the periphery of IT/OT strategy—it is becoming the backbone for scalable, connected transformation.

Bridging old and new

Martin opened ACE 2025 by recalling that the term “digital thread” originated in aerospace back in 2013—not a new concept, but one whose visual metaphor still resonates. With the announcement of InnovatorEdge, Aras showcased the next leap in PLM evolution—designed to connect people, data, and processes using AI, low-code extensibility, and secure integrations.

With InnovatorEdge, Aras introduces a modular, API-first extension designed to modernize PLM without discarding legacy value. It balances innovation with compatibility and addresses four key areas:

  1. Seamless connections across enterprise systems and tools.
  2. AI-powered analytics to enhance decision-making capabilities.
  3. Secure data portals enabling supply chain data collaboration.
  4. Open APIs to support flexible, industry-specific configurations.

By maintaining its commitment to adaptability while embracing modern cloud-native patterns, Aras reinforces its position as a strategic PLM partner—not just for managing product data, but for navigating complexity, risk, and continuous innovation at scale.

Data foundations

As we stand at the intersection of AI and PLM, ACE 2025 made one thing clear: solid data foundations are essential to unlock the full potential of connected intelligence. Rob McAveney, CTO at Aras, stressed that AI is not just about automation—it is about building smarter organizations through better use of data. “AI is indeed not just about topping up data foundation,” he said, “but helping organizations transform by leveraging new data threads.”

McAveney illustrated Aras’ vision with a simple yet powerful equation:

Digital Thread + AI = Connected Intelligence

This means:

  • Discover insights across disconnected data silos.
  • Enrich fragmented data by repairing links and improving context.
  • Amplify business value using simulation, prediction, and modeling.
  • Connect people and systems into responsive feedback loops.
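The "enrich" step in McAveney's list, repairing links between fragmented data, can be illustrated with a toy example. Everything below (record shapes, part numbers, field names) is invented for illustration; real digital-thread links live inside PLM platforms, not Python dicts.

```python
# Three "silos" exporting flat records, to be linked by a shared part number.
requirements = [{"req_id": "R-12", "part_no": "P-100"}]
design_models = [{"part_no": "P-100", "model": "asm-100.prt"}]
test_results = [{"test_id": "T-7", "part_no": "p-100", "result": "pass"}]  # casing drift

def normalize(part_no):
    # Repair a broken link caused by inconsistent ID casing/whitespace.
    return part_no.strip().upper()

def build_thread(requirements, design_models, test_results):
    """Stitch silo records into one thread keyed by normalized part number."""
    thread = {}
    def node(part_no):
        return thread.setdefault(normalize(part_no),
                                 {"reqs": [], "models": [], "tests": []})
    for r in requirements:
        node(r["part_no"])["reqs"].append(r["req_id"])
    for d in design_models:
        node(d["part_no"])["models"].append(d["model"])
    for t in test_results:
        node(t["part_no"])["tests"].append((t["test_id"], t["result"]))
    return thread
```

Here the test record would have been orphaned by its lowercase part number; normalizing identifiers reconnects requirement, model, and test result into one traceable thread.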

Every mainstream PLM solution provider is racing to publish AI-enabled tools, recognizing that intelligence and adaptability are no longer optional in today’s dynamic product environments. Siemens continues to evolve its intelligent enterprise twins, embedding AI into its Xcelerator portfolio to drive predictive insights and closed-loop optimization. Dassault Systèmes recently unveiled its 3D UNIV+RSE vision for 2040, underscoring a future where AI, sustainability, and virtual twin experiences converge to reshape product innovation and societal impact. Meanwhile, PTC strengthens its suite through AI-powered generative design and analytics across Creo, Windchill, and ThingWorx. Across the board, AI is becoming the common thread—fueling a transformation from static PLM to connected, cognitive, and continuously learning platforms.

With so much movement among the established players, is Aras’ open, modular approach finally becoming the PLM disruptor the industry did not see coming? Gartner VP Analyst Sudip Pattanayak echoed this momentum in his analysis, emphasizing the need for traceability and data context as cornerstones of digital thread value. He identified four critical areas of transformation:

  1. Collaboration via MBSE and digital engineering integration.
  2. Simulation acceleration through democratized digital twins.
  3. Customer centricity driven by IoT and usage-based insights.
  4. Strategic integration of PLM with ERP, MES, and other platforms.

Sudip Pattanayak, VP Analyst at Gartner, highlighted that “PLM supports the enterprise digital thread” by building a connected ecosystem of product information. (Image: Lionel Grealou)

From a business standpoint, this translates to strategic benefits in risk management, compliance, product quality, and brand protection. For instance, digital thread traceability supports:

  • Warranty tracking and root cause analysis for recalls.
  • Maintenance, usage, and service optimization.
  • Real-time feedback loops from market to R&D.
  • Commercial impact modeling from product failures.

Pattanayak concluded that enterprises should not aim for total digital thread coverage from day one. Instead, the priority is identifying high-value “partial threads” and scaling from there—with AI capabilities built on solid, governed, and well-connected data structures.

The post Aras Software at 25: PLM transformation through connected intelligence appeared first on Engineering.com.

Trumpf AI assistant uses camera to improve laser cutting edges https://www.engineering.com/trumpf-ai-assistant-uses-camera-to-improve-laser-cutting-edges/ Tue, 06 May 2025 14:31:11 +0000 https://www.engineering.com/?p=139471 The company’s researchers cut thousands of parts to train its new AI assistant.

Farmington, Conn.-based manufacturing technology company Trumpf is introducing a new “Cutting Assistant” application which uses artificial intelligence to help users improve the quality of laser-cut edges.

Production employees take a picture of the component’s cut edge with a hand scanner. The AI then assesses the edge quality, evaluating it against objective criteria such as burr formation. With this information, the Cutting Assistant’s optimization algorithm suggests improved parameters for the cutting process, and the machine cuts the sheet metal once more. If the part quality still does not meet expectations, the user can repeat the process.

This solution is available for all TruLaser series laser cutting machines purchased as of May 2025, which feature a power output of 6 kW or higher.

“The Cutting Assistant is a great example of how AI-enabled tools can help overcome problems related to the skilled worker shortage and also saves time and money. When it comes to productivity, this application creates a competitive edge for fabricators,” says Grant Fergusson, Trumpf Inc. TruLaser 2D laser cutting product manager.

AI makes optimization suggestions

Materials that are not optimized for laser cutting often produce edges with wide variations in cut quality, forcing production employees to constantly change the technology parameters. This involves adjusting each individual parameter one by one, a process that demands a lot of time and employee experience. Because the Cutting Assistant is integrated into the machine software, optimized parameters can be transferred seamlessly without additional programming.
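The workflow described above reads as a closed optimization loop: cut, score the edge, adjust, repeat. The sketch below is only a schematic of that loop; the parameter names, adjustment rule, and quality scale are invented and bear no relation to Trumpf's actual algorithm.

```python
def optimize_cut(score_edge, cut_part, params, target=0.9, max_rounds=5):
    """Cut a part, score its edge from the scanned photo, nudge the
    cutting parameters, and repeat until quality meets the target."""
    for _ in range(max_rounds):
        part = cut_part(params)
        quality = score_edge(part)   # e.g. a 0..1 score from burr formation
        if quality >= target:
            return params, quality   # good enough: keep these parameters
        # Naive illustrative adjustment: slow the feed, raise gas pressure.
        params = {**params,
                  "feed_rate": params["feed_rate"] * 0.95,
                  "gas_pressure": params["gas_pressure"] * 1.05}
    return params, quality
```

The point of the product is that the scoring and the adjustment rule are learned from thousands of cut parts rather than hand-tuned one parameter at a time.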

While developing the Cutting Assistant, Trumpf experts cut thousands of parts and drew upon many years of expertise, using their extensive knowledge to train the software’s algorithm. This work on the Cutting Assistant did not stop on its release—data from applications in the field will also be incorporated into the solution to enable faster and more reliable results.

The post Trumpf AI assistant uses camera to improve laser cutting edges appeared first on Engineering.com.

American Aerospace Technologies launches AiRangerX https://www.engineering.com/american-aerospace-technologies-launches-airangerx/ Tue, 06 May 2025 11:12:06 +0000 https://www.engineering.com/?p=139468 AATI has launched AiRangerX, a certified and deployable AiRanger system designed for operation within the National Airspace System and internationally. AiRangerX is intended to support government and commercial partners with established technology aimed at improving operational readiness and mission integration. AiRangerX serves as a surrogate platform for testing and integrating AiRanger technologies such as autonomous […]

The post American Aerospace Technologies launches AiRangerX appeared first on Engineering.com.

AATI has launched AiRangerX, a certified and deployable AiRanger system designed for operation within the National Airspace System and internationally. AiRangerX is intended to support government and commercial partners with established technology aimed at improving operational readiness and mission integration.

Image: AATI ISR’s AiRanger

AiRangerX serves as a surrogate platform for testing and integrating AiRanger technologies such as autonomous navigation, long-range BVLOS operations, sensor systems, and command-and-control capabilities, without requiring full system deployment.

Accelerating global access to AiRanger technology

AiRangerX mirrors the operational design of AATI’s AiRanger UAS, supporting mission simulation, training, and system evaluation. It is intended for users in defense, homeland security, infrastructure monitoring, and emergency response to assess uncrewed aerial capabilities in field settings.

By offering global deployment within weeks, AiRangerX removes traditional barriers to adoption, helping decision-makers and operators experience firsthand how AiRanger enhances mission success.

Key features and benefits of AiRangerX:

  • Rapid global deployment: Operational anywhere in the world within weeks, enabling near-instant access to AiRanger’s capabilities.
  • Surrogate system, real results: Fully replicates AiRanger’s autonomy, command-and-control, and sensor integration in live demonstrations.
  • Full mission capability demonstration: Allows real-time scenario testing for ISR, BVLOS surveillance, disaster response, border security, and more.
  • AI-powered autonomy: Demonstrates adaptive mission behavior, intelligent rerouting, and reduced operator workload.
  • Sensor-driven insights: Simulates live EO/IR, thermal, radar, and other advanced payloads to showcase operational effectiveness in complex environments.
  • Interoperability testing: Supports integration with existing systems, networks, and mission workflows.

Shaping the future of airborne intelligence

AiRangerX reinforces AATI’s commitment to pushing the boundaries of airborne intelligence, autonomy, and mission agility. By bringing these capabilities directly to decision-makers and mission planners, AATI is setting a new standard for how advanced UAS technology is introduced, tested, and adopted.

For more information, visit americanaerospace.com.


Turning unstructured data into action with strategic AI deployment https://www.engineering.com/turning-unstructured-data-into-action-with-strategic-ai-deployment/ Fri, 02 May 2025 13:12:31 +0000 https://www.engineering.com/?p=139379 Transform industrial data from disconnected and fragmented to a more unified, actionable strategic resource.

The post Turning unstructured data into action with strategic AI deployment appeared first on Engineering.com.

Artificial Intelligence (AI) is driving profound change across the industrial sector, but its true value lies in overcoming the challenge of transforming fragmented, siloed data into actionable insights. As AI technologies reshape industries, they offer powerful capabilities to predict outcomes, optimize processes, and enhance decision-making. However, the real potential of AI is unlocked when it is applied to the complex task of integrating unstructured, "freshly harvested" data from both IT and OT systems into a cohesive, strategic resource.

This article explores the strategic application of AI within industrial environments, where the convergence of IT and OT systems plays a critical role. From predictive maintenance to real-time process optimization, AI brings new opportunities to unify disparate data sources through an intelligent digital thread, driving smarter decisions that lead to both immediate operational improvements and long-term innovation. Insights are drawn from industry frameworks to illustrate how businesses can effectively leverage AI to transform data into a competitive advantage.

From raw data to ready insights

In an ideal world, industrial data flows seamlessly through systems and is immediately ready for AI algorithms to digest and act upon. Yet the reality today is far different. Much of the data that businesses generate is fragmented, siloed, unstructured, and often not available in a timely manner, making it difficult to extract actionable insights in real time. To realize the full potential of AI, organizations must confront this data challenge head-on.

The first hurdle is understanding the true nature of “freshly harvested” data—the new, often unrefined information generated through sensors, machines, and human input. This raw data is often incomplete, noisy, or inconsistent, making it unreliable for decision-making. The key question is: How can organizations transform this raw data into structured, meaningful insights that AI systems can leverage to drive innovation?

The role of industrial-grade data solutions

According to industrial thought leaders, the solution lies in the deployment of "industrial-grade" AI solutions that can manage the complexities of industrial data. These solutions must be tailored to the specific requirements of industrial environments, where data quality and consistency are non-negotiable. Seamless enterprise-wide data integration is key, whether for predictive maintenance that connects sensor data with enterprise asset management, real-time process optimization that synchronizes factory operations with ERP and MRP platforms, or supply chain resilience that links production planning with logistics and inventory.

The first step in this process is data integration—the practice of bringing together disparate data sources into a unified ecosystem. This is where many organizations fail, as they continue to operate in data silos, making it nearly impossible to get a holistic view of operations. By leveraging industrial-grade data fabrics, companies can create a single, cohesive data environment where data from multiple sources, whether from edge devices or cloud systems, can be processed together in real time.
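As a minimal illustration of this integration step, the sketch below joins hypothetical edge-sensor readings with ERP asset records on a shared asset ID so each reading carries its enterprise context. All field names (`asset_id`, `vibration_mm_s`, and so on) are invented for the example, not taken from any particular platform.

```python
# Hypothetical sample records: raw sensor readings from edge devices
# and asset metadata from an ERP system, linked by a shared asset_id.
edge_readings = [
    {"asset_id": "PUMP-7", "ts": "2025-05-01T08:00:00", "vibration_mm_s": 4.2},
    {"asset_id": "PUMP-7", "ts": "2025-05-01T08:05:00", "vibration_mm_s": 6.8},
]
erp_assets = {
    "PUMP-7": {"location": "Line 3", "maintenance_due": "2025-06-15"},
}

def unify(readings, assets):
    """Join raw edge readings with enterprise asset context into one view."""
    unified = []
    for reading in readings:
        context = assets.get(reading["asset_id"], {})
        unified.append({**reading, **context})
    return unified

records = unify(edge_readings, erp_assets)
print(records[0]["location"])  # each reading now carries its ERP context
```

In a production data fabric this join would happen continuously across many sources, but the principle is the same: one unified record per event, not two silos.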

Data structuring—the secret to actionable insights

Once raw data is integrated, it must be structured in a way that makes it interpretable and useful for AI models. Raw data points need to be cleaned, categorized, and tagged with relevant metadata to create a foundation for analysis. This is a critical step in the data preparation lifecycle and requires both human expertise and sophisticated algorithms.
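The clean-categorize-tag sequence can be sketched as follows; the validity range, category rule, and metadata fields are illustrative assumptions, not a standard schema.

```python
def prepare(raw_points):
    """Clean raw data points, categorize them, and tag them with metadata."""
    structured = []
    for point in raw_points:
        value = point.get("value")
        # Cleaning: drop incomplete or physically implausible readings.
        if value is None or not (0.0 <= value <= 200.0):
            continue
        structured.append({
            "value": round(value, 1),  # normalize precision
            "category": "temperature" if point.get("unit") == "C" else "other",
            "meta": {"source": point.get("source", "unknown"),
                     "quality": "validated"},
        })
    return structured

raw = [
    {"value": 72.34, "unit": "C", "source": "sensor-12"},
    {"value": None, "unit": "C", "source": "sensor-12"},   # incomplete
    {"value": 999.0, "unit": "C", "source": "sensor-12"},  # noise spike
]
print(prepare(raw))  # only the valid reading survives, now tagged
```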

The structuring of data enables the development of reliable AI models. These models are trained on historical data, but the real power lies in their ability to make predictions and provide insights from new, incoming data—what we might call “freshly harvested” data. For example, predictive maintenance models can alert manufacturers to potential equipment failures before they occur, while quality control models can detect deviations in production in real time, allowing for immediate intervention.
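In miniature, a predictive-maintenance model is a baseline learned from historical data and applied to freshly harvested readings. The three-sigma control limit below is a deliberately simple stand-in for a trained model, and the vibration values are invented for the sketch.

```python
import statistics

# "Train" on historical vibration data by learning a normal operating band.
historical = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 4.1]

mean = statistics.mean(historical)
stdev = statistics.stdev(historical)
upper_limit = mean + 3 * stdev   # simple statistical control limit

def needs_maintenance(fresh_reading):
    """Alert before failure: flag fresh readings outside the learned band."""
    return fresh_reading > upper_limit

print(needs_maintenance(4.2))   # normal operation → False
print(needs_maintenance(9.7))   # drifting bearing → True, schedule service
```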

The importance of explainability cannot be overstated. For industrial AI applications to be truly valuable, stakeholders must be able to trust the insights generated. Clear, explainable AI models ensure that human operators can understand and act upon AI recommendations with confidence.

Operationalizing AI for real results

Having structured data and trained models is only part of the equation. The real test is turning AI-generated insights into actionable outcomes. This is where real-time decision-making comes into play.

Organizations need to operationalize AI by embedding it within their decision-making frameworks. Real-time AI systems need to communicate directly with production systems, supply chains, and maintenance teams to drive immediate action. For example, an AI system might detect an anomaly in production quality and automatically adjust parameters, triggering alerts to the relevant personnel. The ability to act on AI insights immediately is what separates a theoretical AI application from one that delivers real-world value.
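The detect-adjust-alert loop described above might look like this in outline. The damped setpoint correction is an invented placeholder for whatever control action a real production system would take, and the numbers are illustrative.

```python
def act_on_insight(readings, setpoint, tolerance):
    """When a reading deviates beyond tolerance, adjust the process
    parameter automatically and queue an alert for personnel."""
    alerts = []
    for r in readings:
        if abs(r - setpoint) > tolerance:
            alerts.append(f"deviation {r} vs setpoint {setpoint}; parameters adjusted")
            setpoint = (setpoint + r) / 2  # damped automatic correction
    return setpoint, alerts

setpoint, alerts = act_on_insight([10.1, 10.0, 12.8, 10.9],
                                  setpoint=10.0, tolerance=1.0)
print(alerts)  # one anomaly detected, acted on, and escalated
```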

Moreover, feedback loops are essential. The AI models should not be static but should continuously learn and adapt based on new data and operational changes. This iterative approach ensures that AI doesn’t just solve problems for today but continues to improve and optimize processes over time.
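A feedback loop in miniature: instead of a static model, the sketch below keeps an incrementally updated baseline (a Welford-style running mean) so the definition of "normal" adapts as new operational data arrives. The relative band of 20% is an arbitrary assumption for the example.

```python
class OnlineBaseline:
    """A model that keeps learning: its notion of 'normal' updates
    with every new observation instead of staying static."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Incremental (Welford-style) mean update.
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def is_anomaly(self, x, band=0.2):
        # Flag readings more than `band` (relative) away from the baseline.
        return self.n > 0 and abs(x - self.mean) / max(self.mean, 1e-9) > band

model = OnlineBaseline()
for reading in [5.0, 5.1, 4.9, 5.0]:
    model.update(reading)

print(model.is_anomaly(5.05))  # within the learned band → False
print(model.is_anomaly(7.0))   # drifted well outside it → True
```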

Generative AI: A catalyst for innovation and workforce augmentation

While AI’s predictive capabilities are often the focal point, generative AI holds particular promise for transforming industrial workflows. By augmenting human creativity and problem-solving, generative AI helps address the skill gap in the workforce. For example, AI-assisted design can produce innovative solutions that human engineers may not have considered.

However, the integration of generative AI into industrial settings requires careful consideration. As powerful as it is, generative AI can be more costly than traditional AI models. Its inclusion in industrial applications must be strategic, ensuring that the value it brings—such as faster prototyping or more efficient design—justifies the investment.

How to build a sustainable AI strategy for data insights

Turning fragmented data into actionable insights requires a strategic approach. Based on industry frameworks from ABB and ARC Advisory Group, here’s a blueprint for effective AI adoption in industrial settings:

  1. Begin by understanding what is to be achieved through AI—whether it is optimizing efficiency, reducing downtime, or improving quality control. Align AI initiatives with these objectives to ensure focused efforts.
  2. Assess the existing data infrastructure and invest in solutions that integrate and standardize data across your systems. A unified data environment is crucial for enabling AI-driven insights.
  3. Avoid generic AI solutions. Instead, select AI tools that address specific use cases—whether it is predictive maintenance or process optimization. Tailored solutions are far more likely to provide valuable, actionable insights.
  4. In highly regulated industries, transparent and explainable AI models are essential for building trust and compliance. Make sure AI systems provide insights that are understandable and auditable.
  5. AI adoption is not a one-time implementation. Begin with pilot projects, learn from the results, and scale up gradually. This approach allows businesses to optimize AI systems while minimizing risk.

Scaling AI for broader impact

Collaboration is key to successful AI adoption. Partnering with experienced software providers, AI developers, and industry experts can help organizations navigate the challenges of scaling AI across their operations. Moreover, integrating generative AI alongside traditional AI approaches allows companies to strike a balance between innovation and cost-effectiveness.

The promise of AI in transforming industries is undeniable, but to truly realize its value, organizations must overcome the data fragmentation challenges that hinder effective AI deployment. By integrating, structuring, and operationalizing data, companies can convert raw information into actionable insights that drive measurable results. The future of industrial AI is not just about predictions and optimization—it’s about continuous learning, innovation, and the strategic use of AI to create sustainable, long-term growth.

