Technology - Engineering.com

New materials for safer, better surgical procedures (July 15, 2025)
Medical Design and Outsourcing Managing Editor Jim Hammerand on how nitinol and other advanced materials are shaping the future of surgery.

Minimally invasive, catheter-based surgical procedures have drastically improved outcomes and recovery times in critical interventions such as heart valve replacement. As the technology advances, new procedures are emerging that promise to extend the same benefits enjoyed by cardiac patients to those suffering from renal, prostate and other diseases.

The shape memory alloy nitinol is a key technology in this revolution, and Medical Design and Outsourcing Managing Editor Jim Hammerand describes how it works, and why it’s effective, in conversation with engineering.com’s Jim Anderton.

***

Catch up on the latest engineering innovations with more Industry Insights & Trends videos and podcasts.

How AI agents can support design engineers (July 14, 2025)
Libraries, requirements and testing are just a few areas to get started with AI in engineering.

Design engineers number in the hundreds of thousands, if not millions, worldwide. That represents a massive pool of valuable knowledge—industry-specific processes, personal workflows, procedures and more—that could be leveraged through machine learning.

From my own experience as a design engineer, it was clear even then that parts of the job were repetitive and could be improved or automated. One traditional approach was to build up component libraries. However, these libraries were often specific to a plant or product line, and even within the same company, different teams had their own isolated systems.

Some experienced engineers had developed individual shortcuts or retained knowledge from repeated exposure to the design–rework–release cycle. While effective, this knowledge was often locked in the minds of a few senior engineers. If one of them left the company or chose not to share their methods, that expertise became difficult or impossible to scale across teams or locations.

This raises an important question: How can we reduce knowledge silos and make engineering know-how more accessible?

One answer is to use an AI agent—not to replace the engineer, but to assist them. An AI-powered digital assistant could speed up decision-making and help engineers understand the reasoning behind choices.

Take the automotive industry, for example. Consider the screw used to mount a headlight in a Ford Focus. It might be an M6x18mm screw—but why that specific part? The choice may involve testing data, torque specs, material considerations, weight limits, or economic factors. All of this information exists and could be made accessible to a machine-learning model to assist engineers during the design phase. If integrated with a CAD tool, the AI agent could suggest appropriate components based on context and past data.
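
To make the idea concrete, here is a minimal sketch of how such a suggestion step might look, assuming a simple in-memory parts library; the part numbers, fields and ranking rule are hypothetical illustrations, not any OEM's or CAD vendor's actual data or API.

```python
from dataclasses import dataclass

@dataclass
class FastenerRecord:
    """One historical fastener choice with the context it was validated in."""
    part_id: str          # hypothetical naming, e.g. "M6x18-8.8-ZN"
    max_torque_nm: float  # torque the joint was tested to
    panel_material: str   # material of the mounting panel
    mass_g: float         # fastener mass

def suggest_fasteners(library, required_torque_nm, panel_material, max_mass_g, top_n=3):
    """Rank past fastener choices against the current design context.

    A real agent would use far richer context (test reports, CAD geometry,
    supplier and cost data); this sketch only filters and sorts on three fields.
    """
    candidates = [
        rec for rec in library
        if rec.max_torque_nm >= required_torque_nm
        and rec.panel_material == panel_material
        and rec.mass_g <= max_mass_g
    ]
    # Prefer the lightest part that still meets the torque requirement.
    return sorted(candidates, key=lambda rec: rec.mass_g)[:top_n]

library = [
    FastenerRecord("M6x18-8.8-ZN", 9.0, "PA6-GF30", 4.1),
    FastenerRecord("M6x20-10.9-ZN", 12.5, "PA6-GF30", 4.6),
    FastenerRecord("M5x16-8.8-ZN", 5.5, "PA6-GF30", 2.8),
]
print(suggest_fasteners(library, required_torque_nm=8.0,
                        panel_material="PA6-GF30", max_mass_g=5.0))
```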

This concept can scale beyond automotive. With access to industry-specific libraries, an AI assistant could help engineers find relevant solutions quickly. It wouldn’t replace human insight but would reduce time spent on repetitive tasks—like browsing through component catalogs—and allow engineers to focus on creative problem-solving.

Companies often sit on vast data lakes of underused information. Training AI models on this data could improve efficiency and reduce costs. Consider a test engineer working in a lab: an AI agent could analyze past test results to flag potential points of failure in new assemblies, improving comparative analysis and cutting down on wasted manufacturing costs.

Other applications include:

  • Suggesting library components to reduce design time.
  • Recommending material thicknesses at the component level.
  • Providing context to help junior engineers understand the “why” behind a design decision.

Ultimately, AI agents aren’t here to replace people. They’re tools that, when used wisely, can foster both personal and organizational growth. The future isn’t man versus machine. It’s about using these tools to create a partnership where the human factor—context, ethics, judgment—drives the machine to be more useful.

AI can sift, learn and adapt. But it’s the engineer who decides what matters.

Autodesk mulling PTC takeover to create industrial software juggernaut (July 11, 2025)
The $20B bet could reshape the future of engineering software. We analyze the product mix, strategic fit and how it will affect engineers and end users.

Autodesk is reportedly considering the acquisition of PTC in what could be its largest-ever deal, rumored to be valued at more than $20 billion. Although it is still in early stages and may not materialize, the potential impact is already generating significant market and industry attention. Reports from Bloomberg, Reuters and others suggest the transaction could be structured as a mix of cash and stock, reflecting both the ambition and complexity of such a transformative move.

This is not just a transaction between two legacy software firms. It could represent a redefinition of the industrial software landscape: Autodesk, long focused on democratizing design via the cloud, meeting PTC, grounded in enterprise-scale digital transformation for manufacturers. The overlap is clear. The complementarity? Still to be proven.

Strategy, scale, and ambition

While both companies are respected in their domains, they differ significantly in size, culture, and strategic posture:

  • Autodesk reported more than $6.1 billion in FY2025 revenue (fiscal year ending January 2025), with a market cap of approximately $66.6 billion.
  • PTC reported $2.3 billion in FY2024 revenue (fiscal year ending September 2024), with a current market cap around $17 billion following the takeover speculation bump.

Autodesk is more than twice PTC’s size in revenue and has traditionally focused on AEC, creative design, and mid-market engineering. PTC, in contrast, is deeply rooted in industrial manufacturing, PLM, and IoT.

This is not a merger of equals. It reflects Autodesk’s strategic ambition to move deeper into the enterprise market. With PTC, Autodesk would gain credibility and capability in core enterprise workflows. This would mark a step change for Autodesk’s portfolio maturity—from cloud-native tools for SMBs to enterprise-scale digital thread and product lifecycle platforms.

Yet, the companies have very different go-to-market approaches. Autodesk has built its SaaS business around high-volume channels, while PTC’s sales motion is enterprise direct. That contrast creates opportunity, but also serious integration risk.

Market reactions and community feedback

PTC shares surged over 17% on July 9 after Bloomberg reported Autodesk was exploring a bid. They fell 7.5% the next day. Autodesk’s stock declined nearly 8% as investors assessed the strategic rationale and integration risks. These market movements highlight the scale and sensitivity of such a transformative bet.

In professional forums and industry circles, the deal has sparked debate. Many experts have expressed skepticism about strategic alignment. They point out potential redundancy between core CAD offerings (Creo vs. Inventor/Fusion 360) and PLM solutions (Windchill vs. Fusion Manage). Others note Autodesk’s limited experience in large, complex integrations, and voice concerns about its ability to manage an enterprise-scale acquisition.

One clear thread: this would be a high-risk, high-reward move. Autodesk has never made a deal of this magnitude. It could unlock new verticals—but also strain its operating model and alienate parts of its existing base.

Analysts also speculate on regulatory hurdles. The CAD and PLM market is already concentrated. A deal of this scale may face antitrust scrutiny, particularly in the US and Europe. Financing would also be a stretch, and shareholders will expect a well-articulated synergy plan. The rumored price tag of about $20 billion raises the stakes further.

Product portfolio and strategic fit

Autodesk has invested heavily in Autodesk Platform Services (APS), with Fusion 360 acting as its design collaboration anchor. PTC’s portfolio is broader in manufacturing and enterprise engineering, with Windchill+, Arena (PLM), Onshape (cloud CAD), and ThingWorx/Kepware (IoT/edge connectivity).

While the combination would offer end-to-end coverage from SMB to enterprise, the breadth creates duplication. Customers may worry about future roadmap clarity. Will Autodesk continue Fusion Manage or prioritize Windchill+? Can Creo and Inventor coexist? And does Autodesk have a plan for ThingWorx and Kepware, which do not align with its core portfolio?

Most experts believe those IoT assets will be divested. That opens new opportunities for companies like Rockwell, Schneider Electric, or Emerson—firms more focused on industrial automation and edge connectivity. These decisions will send strong signals to the market about Autodesk’s long-term intent.

Beyond the technology, there is a broader question: is Autodesk acquiring products, a platform, and/or an extended customer base? The answer is likely all three. It will determine how much integration effort is required—and how much customer disruption it might cause.

Execution and leadership will define the outcome

The true test will be execution. Autodesk has evolved into a cloud-first player over the past decade, but it has little experience with large-scale enterprise integrations. PTC, though smaller, brings a strong industrial culture and a distinct go-to-market strategy that may not align with Autodesk’s creative, SMB-rooted DNA.

Cultural integration, pricing model alignment, and partner ecosystem rationalization will be complex. If poorly managed, these differences could erode customer trust and delay value realization.

Leadership will play a pivotal role. PTC’s new CEO, Neil Barua, took over in February 2024 from long-time chief Jim Heppelmann. Barua, formerly CEO of ServiceMax (acquired by PTC in 2022), brings a sharper focus on customer-driven innovation and return on investment. His strategic priorities—and openness to integration—could influence how the two companies align.

ThingWorx and Kepware, once central to PTC’s digital transformation narrative, now appear most vulnerable to divestment. Their fate may define Autodesk’s long-term industrial strategy. Rockwell Automation’s recent exit from its $1B stake in PTC in August 2023 further suggests shifting alliances and possible competitive realignments in the broader industrial software ecosystem.

This deal, if it proceeds, will not go unnoticed. Siemens, Dassault Systèmes, and other PLM leaders are likely already reassessing their positions. A successful integration would escalate the digital thread race. A failed one could reinforce the limits of M&A in an already saturated market.

In the end, the acquisition is just the beginning. The real transformation will be defined by what Autodesk chooses to keep, integrate or let go.

Editor’s update, July 14, 2025: In the days after this story was published, Autodesk declared in a regulatory filing that the deal is no longer on the table and that it will instead focus on more strategic priorities, as reported by Reuters.

The most common problems for a maintenance team — and how EcoCare solves them (July 11, 2025)
What if maintenance didn’t have to feel reactive?

Schneider Electric has sponsored this post.

(Image: Schneider Electric.)

For many facility management teams, no two days look the same. Most teams start with a list of planned tasks, like checking a substation. On the way to task number one, the radio goes off. There’s a blocked toilet in the cafeteria. Minutes later, a vendor needs access to a locked mechanical room. Someone else reports a jammed door in the loading dock.

Across sectors—from healthcare to manufacturing—this reactive work is a constant. Teams are pulled in different directions all day long, coordinating with multiple companies on-site and managing incoming requests. The pressure is high to keep operations smooth without breaking the budget or falling behind on scheduled maintenance.

This article explores the most common maintenance challenges these teams face and how Schneider Electric’s EcoCare service offers a more structured, proactive way forward.

Unpacking the pain points in facility maintenance

Beyond the day-to-day multitasking, maintenance teams face persistent challenges that can make reliable operation difficult to sustain over time.

One of the most common issues is the volume of continuous, layered reporting. Facility directors report on budgets and compliance, while technicians log completed checks and vendor activity. “Oftentimes, teams have to juggle paperwork, spreadsheets and logins, which makes it hard to connect the tools that aren’t connected and track requirements to make the right decisions,” says India Gibson, launch leader for EcoCare at Schneider Electric. “That leads to a lack of visibility into how gear is performing. Without digital support and data, it’s hard to optimize your maintenance planning.”

That lack of visibility is compounded by the challenge of managing teams with different levels of field experience. Meanwhile, the scope of responsibilities continues to grow. “It’s very rare that a facilities person on site at an airport is only responsible for the switchgear and transformers,” explains Martin Thomson, senior manager of recurring services at Schneider Electric. “They’re often responsible for HVAC, lighting, and other systems across the facility.”

EcoCare helps address these overlapping challenges by combining connected technology with structured service support. One key benefit is automated reporting and visibility. “We are seamlessly building reporting of maintenance activity and providing a digital quarterly review of how the gear is performing,” says Gibson. “It’s a full solution to support overall site function.”

Instead of relying on fixed schedules, EcoCare enables a condition-based approach to maintenance.

 “We’re leveraging connectivity and digital readings,” explains Gibson. “We’re looking inside the gear to better understand data and how the gear is aging and performing. What that does is help us intervene before a failure can happen.”

The longer EcoCare is in place, the more it helps stabilize daily operations. As Thomson notes: “Because of our analytics and algorithms, we’re able to reduce the number of times that unexpected things go wrong on average. There are fewer instances where someone is headed to do job A and gets dragged urgently to job B. They get to stick to their schedule a bit more consistently. If something still does go wrong, the diagnostics are much better. Instead of an issue taking six hours to fix, maybe it only takes two, because we’ve already been able to troubleshoot the first five likely causes of that failure.”
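
As a rough illustration of condition-based maintenance in general (not Schneider Electric's actual analytics or EcoCare's interfaces), a monitor might trend a reading such as transformer temperature and raise an alert when the trend is projected to cross a threshold; the sample values and threshold below are invented for the sketch.

```python
from statistics import mean

def projected_threshold_crossing(readings, threshold, horizon_hours):
    """Flag gear whose trended reading will cross a threshold within the horizon.

    `readings` is a list of (hour, value) samples, oldest first. A least-squares
    slope is fitted and extrapolated; real condition monitoring would use far
    richer models, but the idea is the same: intervene before failure.
    """
    hours = [t for t, _ in readings]
    values = [v for _, v in readings]
    t_bar, v_bar = mean(hours), mean(values)
    slope = (sum((t - t_bar) * (v - v_bar) for t, v in readings)
             / sum((t - t_bar) ** 2 for t in hours))
    latest_t, latest_v = readings[-1]
    projected = latest_v + slope * horizon_hours
    return projected >= threshold, projected

# Hypothetical hourly winding-temperature samples (deg C) on one transformer.
samples = [(0, 71.0), (1, 72.4), (2, 73.1), (3, 74.9), (4, 76.2)]
alert, projected = projected_threshold_crossing(samples, threshold=85.0, horizon_hours=8)
print(f"alert={alert}, projected_temp={projected:.1f} C")
```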

Another benefit is access to Schneider’s remote engineering team, who support a range of skill levels. “They can answer questions from someone with 40 years of experience who might need help with digital analytics,” says Thomson. “They also have the practical experience for someone who’s 22 and understands the digital part but isn’t sure what physical action to take. They act as a bridge for that age and experience gap.”

EcoCare is designed to adapt to the needs of each facility. Customers are assigned a Customer Success Manager who helps coordinate service delivery—whether that means hands-off support or regular check-ins. For expanding operations, EcoCare’s remote monitoring hubs provide scalable coverage without requiring customers to increase their on-site headcount. “Schneider can take on that workload at a much more efficient cost,” Thomson adds. “You get the best of both worlds: data expertise and practical engineering experience.”

This approach has already helped facilities avoid serious disruptions. At a Nestlé Nescafé plant, EcoCare was fully implemented following a 14-hour shutdown caused by a short circuit in an unmonitored section. With 24/7 remote monitoring and emergency support, the site has since avoided five major incidents, resulting in an estimated $2 million in savings over four years.

In another case, EcoCare detected early signs of degradation in a transformer at a hospital in upstate New York. By resolving the issue proactively, maintenance teams were able to avoid a failure that would have impacted critical infrastructure and required emergency repairs.

“Without EcoCare, there are way more surprises,” says Gibson. “Teams are super reactive, working more than they need to, and dealing with scattered data all over the place. With EcoCare, it’s early alerts, centralized visibility and reduced risk, along with the help of our experts. So maintenance teams can finally move away from just surviving to optimizing their site.”

To learn more, visit EcoCare at Schneider Electric.

Who has the best customer support in 3D printing? (July 10, 2025)
What matters most in additive manufacturing service.

Inconsistency is one of the major bugaboos of the additive manufacturing (AM) industry.

In the absence of repeatability and predictability, confidence in a process as relatively young as 3D printing tends to waver. Arguably, that’s one of the reasons the wider adoption of AM has taken longer than anticipated. But it’s not just a matter of machine or material performance.

When equipment doesn’t function as expected, it falls to the OEM’s customer service and support team to diagnose the issue and provide a solution – and in a manufacturing environment, that should happen quickly enough that production isn’t compromised.

For a process as complex as 3D printing, customer support is especially vital to ensuring success. For a potential customer, it can even be the deciding factor in which OEM you end up using.

Whether in our personal or professional lives, we’ve all dealt with lousy customer service. If it’s bad enough, we might end up saying, “Never again,” and swear off the provider for good regardless of product quality. That’s easier said than done when we’re talking about industrial equipment with a six- or seven-figure price tag, so ideally you’ll know what to expect before you sign the check.

To that end, engineering.com is running a survey to gain a better understanding of customer support from the major 3D printer OEMs. We want to hear what you think, so follow the link below, answer a few questions and, in appreciation of your time, you can enter to win a $25 gift card from Amazon.

3D Printing Customer Support Survey

Siemens Realize LIVE 2025: AI-powered digital transformation is the path forward (July 9, 2025)
Complexity is not a problem to solve; it’s an advantage to harness.

At Siemens Realize LIVE Europe 2025, the message was clear: complexity is not a problem to solve—it’s an advantage to harness. AI, digital threads, and domain-specific PLM are no longer future concepts; they are converging into operational realities.

This year’s event illustrated how Siemens is doubling down on the strategy it has long articulated: enabling faster innovation by embedding intelligence, integration, and collaboration into the digital backbone of manufacturing and product development.

Record attendance at Siemens Realize LIVE 2025 in Amsterdam, with Jones discussing the key to mastering complexity; and of course, that includes the use of AI. (Image: Siemens.)

AI as a strategic accelerator

AI at Siemens has moved beyond pilots—now, the race is on for scale. As Bob Jones, Chief Revenue Officer and EVP of Global Sales and Customer Success, put it: “It’s not just about adopting AI—it’s about being the fastest to adopt it.” Speed matters.

Jones emphasized mastering complexity through AI. Siemens is embedding intelligence across the Xcelerator portfolio to boost speed, clarity, and confidence in decision-making:

  • Ask, find, act: Teamcenter Copilot and AI Chat allow users to query data in natural language, surfacing insights instantly.
  • Fix faster: RapidMiner spots quality issues and recommends improvements.
  • Make documents dynamic: AI extracts procedures from static PDFs to accelerate training and compliance.
  • Automate handoffs: Teamcenter, Rulestream, and Mendix streamline design-to-order workflows.

Joe Bohman, EVP of PLM Products, summed it up: Siemens is “training AI in the language of engineering and manufacturing.” This is not about generic automation—it is about embedding domain-specific intelligence aligned with physics, lifecycle context, and operational constraints.

Reinforcing that intent, Siemens appointed Vasi Philomin—former AWS VP of Generative AI—as EVP of Data and AI, reporting to CTO Peter Koerte. At Amazon, Philomin launched Amazon Bedrock and led foundation model development. His arrival signals Siemens’ commitment to scaling industrial AI as a foundational capability—not a feature—across the Xcelerator suite.

From static data to dynamic digital threads

A deeper shift is underway: from managing Bills of Materials to orchestrating Bills of Information. Is this just new language or real change? Either way, it reflects a move from static data capture to dynamic, role-based information delivery across the product lifecycle, enabling faster, more informed decisions at every stage.

Siemens is championing a PLM architecture that supports this shift, built around:

  • Secure, object-level data access tailored to specific roles and responsibilities.
  • Microservices and large language models (LLMs) delivering contextual guidance across engineering, manufacturing, and service domains.
  • A digital thread backbone connecting design, production, quality, and support in real time.

As advertised, this approach goes well beyond traditional traceability. It unlocks the ability to deliver the right data, in the right format, to the right person—when and where it is needed. It transforms engineering knowledge from static documentation into living, operational intelligence.

For globally distributed and regulated industries, this kind of digital continuity is no longer optional—it is a minimum requirement. Engineering is being redefined not just by tools, but by how data is structured, shared, and transformed into actionable insights that drive innovation and execution at scale.
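
A minimal sketch of the object-level, role-based delivery idea follows, assuming a single lifecycle record and a hypothetical role-to-field mapping; it is an illustration of the pattern, not Siemens' implementation or data model.

```python
# Hypothetical role-to-field entitlements for one product lifecycle object.
ROLE_FIELDS = {
    "design_engineer": {"part_number", "cad_revision", "material", "mass_kg"},
    "quality_engineer": {"part_number", "inspection_plan", "nonconformances"},
    "service_technician": {"part_number", "replacement_procedure", "spares"},
}

def deliver(record: dict, role: str) -> dict:
    """Return only the fields of a lifecycle record that the role may see.

    Object-level filtering like this is what lets one digital thread serve
    engineering, quality and service without exposing everything to everyone.
    """
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

part = {
    "part_number": "HSG-1042",
    "cad_revision": "C",
    "material": "AlSi10Mg",
    "mass_kg": 0.82,
    "inspection_plan": "IP-7741",
    "nonconformances": [],
    "replacement_procedure": "RP-220",
    "spares": ["HSG-1042-S1"],
}
print(deliver(part, "service_technician"))
```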

Rethinking CPG

Siemens is reimagining PLM for the CPG industry, extending Teamcenter beyond packaging to support end-to-end collaboration across R&D, regulatory, marketing, and supply chain. By integrating formulation and specification management—built on Opcenter RD&L—Siemens is positioning Teamcenter to compete with SAP PLM in process-heavy, compliance-driven sectors. The solution is promising but still maturing.

A recent partnership with FoodChain ID boosts this trajectory by embedding global regulatory intelligence into the digital thread, helping CPG companies design for compliance from the start.

Key focus areas include:

  • Formulation and specification support, bridging science-led R&D with enterprise PLM.
  • Cross-functional collaboration across R&D, regulatory, marketing, and sourcing in a shared digital workspace.
  • Recipe reuse across global sites, increasing agility and compliance—as demonstrated by Unilever.
  • Scenario modeling and digital twins, enabling design-for-supply-chain strategies.
  • Regulatory intelligence integration, to guide compliant product development from the outset.

While Siemens’ CPG capabilities are evolving, the strategy requires further clarity. The long-term goal is ambitious: to build a robust PLM backbone that accelerates product innovation while addressing regulatory compliance and supply chain complexity from day one.

Xcelerator-as-a-service and agent-driven automation

Building on this momentum, Siemens’ Xcelerator-as-a-Service approach follows a clear goal: keep things simple, flexible, and always up to date.

Key enablers include:

  • Lifecycle data management, with built-in traceability and change control.
  • Low-code tools, via Mendix, embedded across Teamcenter and Opcenter.
  • AI agents, reducing manual effort, streamlining workflows, and reinforcing governance.

The transition toward software-defined products is accelerating. Siemens is doubling down on:

  • SysML v2, enabling next-generation model-based systems engineering
  • Polarion, aligning software and hardware requirements in unified backlogs
  • Supplier frameworks, integrating BOMs and compliance for cross-domain coordination

This is more than a technical evolution—it is a strategic upgrade toward future-ready operations, where complexity is coordinated, traceable, and compatible by design.

Regulatory-ready digital twins and Industry 4.0 interoperability

Regulation is not lagging behind innovation anymore—it is driving it. The upcoming EU Digital Product Passport (DPP 4.0) marks a turning point. Siemens is preparing its customers to meet these mandates not as a constraint—but as a catalyst for trustworthy digital systems.

Their approach includes:

  • Asset Administration Shell (AAS): machine-readable digital twins that maintain continuity from design through operation.
  • OPC UA-based interoperability: enabling secure, standards-based data exchange across partners and platforms.
  • Embedded sustainability and compliance tracking: making ESG and traceability data a native part of the engineering model.

This is digital transformation built for permanence. With regulations requiring traceable, reusable digital records, AI can only accelerate what is built on the right data foundation.

Make no mistake. Complexity is not just managed; it is mined for advantage. The metaphor echoed at the event is apt: like bamboo, digital transformation takes time to root—but when it does, it grows fast and strong. For industrial leaders, the question is no longer why transform—but rather: How fast can intelligence be embedded into the product and value chain?

Virtual testing is transforming automotive engineering (July 9, 2025)
VI-grade’s advanced driving simulators and cloud-based data solutions enable engineers to shift from traditional sequential development cycles to more agile approaches that reduce costs and time.

We’ve been covering the “zero prototypes” design philosophy lately and following the VI-grade team’s progress on their journey, one they aren’t making alone. The company has numerous partnerships and nurtures an ecosystem of global stakeholders necessary to drive this ambitious effort.

On June 12, 2025, VI-grade hosted the North American Zero Prototypes Day at its SimCenter in Novi, Michigan, located within Multimatic’s facility. This day piggybacked on the company’s international Zero Prototypes Summit event in Udine, Italy, held in May. Videos of each presentation are available online, but here are some key takeaways from experts and exclusive interviews.

Virtual and physical testing are two sides of the same coin

“It’s quite clear that the industry is moving at a rapid pace towards virtualization,” said Omar Elsewify, chassis development engineer (Vehicle Dynamics) at Hitachi Astemo. “OEMs are pushing more and more for their suppliers to match their agility in terms of virtual capabilities. More and more, we’re seeing OEMs ask their suppliers to be involved in the validation and integration stage, as well as virtually developing and virtually validating. As those virtual requirements have increased for validation, so have the testing requirements, and what used to be acceptable in terms of just providing software-in-the-loop validation has now increased the requirements to provide hardware-in-the-loop capabilities, driver-in-the-loop capabilities, as well as vehicle-in-the-loop capabilities, in some cases.”

Astemo developed a high-fidelity AI-MBD damper model as a software-in-the-loop solution that runs in real time and captures real-world dynamics. The company uses VI-grade’s Full Spectrum Simulator (FSS) to validate the model with hardware-in-the-loop and driver-in-the-loop simulations.

“The damper for them is real, because in the end, they have to come up with a real product. But very often, they don’t have a real car, so they use a virtual vehicle model and perform an auto-in-the-loop simulation, in which you have the damper in the loop with everything else,” said Guido Bairati, VP of global sales at HBK (VI-grade’s parent company) and former managing director at VI-grade. “The driving simulator has the great advantage of putting the driver in the loop into the equation, so you can easily test your component with a real driver.”

Simulators are game-changing for design and development cycles, helping engineers test everything from components and powertrains to human-machine interfaces (HMIs), driver comfort, and more. David Trumpy, senior principal software engineer at Harman Becker Automotive Systems, discussed how his team integrates their sound design library into a VI-grade noise, vibration, and harshness (NVH) simulator, which stakeholders use to evaluate and provide feedback for future iterations.

“The simulator is sending the control data over to our library, and we send the audio samples back, so it’s added in just like another layer or another sound component. We find that the desktop simulator enables our team to manage multiple configurations, so they can switch between different versions, different software releases by using different sound objects,” said Trumpy. “We can share these sounds across team members. We have team members around the globe working on sound design and feature development, and then we can test over a wide array of use cases, like fixed driving modes. In the fixed driving mode, we can use playback from a recorded vehicle, and in free driving mode, we can interact with the vehicle like a driver in the loop. We can also live-tune our algorithms while we’re running in the simulator. So we can connect our tuning tool directly to our algorithm while it’s running and change parameters and hear the difference live in real time.”
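
As an illustration only (not Harman's sound library or VI-grade's simulator interface), the control-data-in, audio-samples-out loop can be pictured as a callback that maps vehicle state to an audio block, with a gain parameter that can be re-tuned while the loop runs.

```python
import math

class EngineSoundModel:
    """Toy sound component: vehicle state in, audio block out.

    A production sound library would synthesize from recorded engine orders and
    granular samples; this sketch just generates a tone whose pitch follows
    engine speed, to show where live-tunable parameters plug in.
    """
    def __init__(self, sample_rate=48_000, gain=0.5):
        self.sample_rate = sample_rate
        self.gain = gain        # live-tunable while the simulator runs
        self._phase = 0.0

    def render_block(self, rpm: float, block_size: int = 256):
        freq = rpm / 60.0 * 2.0          # crude stand-in for a firing frequency
        step = 2.0 * math.pi * freq / self.sample_rate
        block = []
        for _ in range(block_size):
            block.append(self.gain * math.sin(self._phase))
            self._phase += step
        return block

# Stand-in simulator loop: push control data each block, get audio samples back.
model = EngineSoundModel()
for rpm in (900.0, 2500.0, 4200.0):
    samples = model.render_block(rpm)
    print(f"rpm={rpm:>6.0f}  first_sample={samples[0]:+.3f}  n={len(samples)}")
model.gain = 0.7  # "live tuning": change a parameter and the next block differs
```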

Trumpy admitted that they aren’t ready to give up their test vehicles due to differences between the simulator setup and vehicle testing. He noted that data in the simulator can be overly idealistic, so high-fidelity powertrain models are essential for tuning algorithms, as well as testing a wide range of driving scenarios.

NVH simulation allows engineers to test sound and vibration and make adjustments in real time. This image was captured by Engineering.com at the 2025 North American Zero Prototypes Day event.

Though the goal of zero prototypes is to produce a first car that can be sold and driven off the lot, Bairati reminds us that virtual testing and physical testing are two sides of the same coin. Yet virtual testing technologies now go beyond cool features and cost savings; they help ensure survival in an increasingly competitive global market that is pushing companies toward virtualization.

“The industry is seriously pursuing zero prototypes to reduce costs and delays, with companies aiming for 100% virtual development,” said Bairati. “This urgency is particularly pronounced in China, where rapid development cycles are becoming the norm. They are talking about developing a car in 12 to 18 months. If you want to really develop a car in one year, there is no way but virtual.”

This technological evolution is fundamentally changing how engineers work, shifting from the traditional sequential V-cycle process to more iterative approaches.

“We are moving from a sequential type of development to more parallel development, enabled by continuous testing loops that provide faster feedback and more agile development cycles,” said Bairati. “Physical testing will not disappear, but it will change, and it will be used to create better virtual models. And then the virtual models will be used to optimize the physical test that you have to do at the end of a development cycle.”

Advanced simulators help speed up development

Admittedly, using a vehicle simulator is a fun experience, but the technology is significantly more advanced than any arcade game and is designed to collect specific, real-time data for engineers to improve and innovate.

“Our mission is to help you win the zero prototypes challenge, to help you innovate faster. And we do that through a combination of driving simulators, simulation software, and HiL systems,” said Dave Bogema, senior director of product management at VI-grade.

Two years ago, VI-grade announced its Compact FSS, which combines vehicle dynamics and NVH in one simulator. Last year, the company took another step forward by launching its Driver-in-Motion Full Spectrum Simulator (DiM FSS), which provides six degrees of freedom for a highly immersive experience, allowing engineers to evaluate several vehicle attributes simultaneously.

“Over the last 10 years, we have been developing simulators with higher peak accelerations, lower latency, and improved dynamics overall. This year, we’re announcing the next step in that evolution, the HexaRev,” said Bogema. “It’s a six-degree-of-freedom motion platform unlike any other.”

The design is simple, with six motors connected directly to the cockpit. There are no ball screws, gears, belts, chains or anything that could create extra noise or vibration. The system is quiet and provides an immediate response.

“When you have a traditional six-degree-of-freedom system, as you have multiple degrees of freedom active at the same time, your overall motion envelope kind of shrinks,” said Bogema. “The HexaRev maintains a much larger overall motion envelope, so it gives you a much bigger dynamic space to work with. If you combine the HexaRev with the Hyperdock, which is our carbon fiber cockpit that we launched last year, this turns it into an FSS simulator, able to simulate that whole range from zero to 20 kilohertz.”

This year, VI-grade also launched the Compact HMI, a highly configurable simulator that can switch between multiple cockpit configurations using a touchpad. It also serves as a real-time driving simulator, allowing the vehicle to be tested in various environments and scenarios, and to evaluate HMI concepts.

“When you’re evaluating HMI concepts, you really need the driver in the loop, because at the end of the day, it’s the driver who decides whether that HMI is good enough,” said Bogema.

Prioritizing humans in an AI-driven future

As vehicle design becomes increasingly complex, and engineers must design, develop, and test faster, VI-grade launched VI-DataDrive Cloud to help engineers extract insights from data quickly.

“VI-DataDrive Cloud is focused on VI-CarRealTime. If you look at how we normally use VI-CarRealTime, you would sit at a workstation, build your models, design your tests, run those simulations at your local workstation, and then analyze the results again at that local workstation,” said Bogema. “What we’re doing here is changing that paradigm, still allowing you to design the test at your local workstation, but then upload to the cloud and run lots of simulations in parallel. What this enables you to do is process a lot more data.”
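
The paradigm Bogema describes (design the test locally, then fan many simulation runs out in parallel) can be sketched with a generic job fan-out pattern; the solver call below is a stand-in that returns a fake lap-time metric, not the VI-CarRealTime or VI-DataDrive Cloud API.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_vehicle_sim(params: dict) -> dict:
    """Stand-in for one vehicle simulation run.

    In practice this would invoke a solver with a model and manoeuvre
    definition; here it just returns a fabricated lap-time metric so the
    fan-out pattern is runnable on its own.
    """
    random.seed(hash(frozenset(params.items())) & 0xFFFF)
    lap_time = 92.0 - 0.4 * params["rear_arb_stiffness"] + random.uniform(-0.2, 0.2)
    return {**params, "lap_time_s": lap_time}

def sweep(parameter_sets):
    """Fan a design-of-experiments sweep out across worker processes."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(run_vehicle_sim, parameter_sets))

if __name__ == "__main__":
    doe = [{"rear_arb_stiffness": s} for s in range(1, 6)]
    for result in sorted(sweep(doe), key=lambda r: r["lap_time_s"]):
        print(result)
```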

The solution also enables engineers to share data rapidly, collaborate more effectively, and make decisions faster. As Bairati mentioned, making sense of data is key for iterative development.

“Primarily, what I see is customers struggling with a large amount of data. All customers are telling us that generating data is not an issue. They actually generate from a virtual and physical test more data than they are able to manage,” he said. “The issue they have is how to make that data available within the organization and to look into the data. In the near future, there will be AI agents who are looking into data and telling engineers in which direction to go for the next development step.”

Bairati also said the ultimate goal is to create digital twins that can be continuously updated with real-world fleet data. And though AI isn’t yet widely used, it’s emerging rapidly and will eventually become ubiquitous in the virtual testing and vehicle development process.

However, AI won’t usurp human jobs and expertise. In fact, Iain Dodds, VI-grade’s technical director in North America, cautions against unchecked reliance on AI and even reliance on people who lack domain knowledge.

“You get people who don’t understand the domain coming in and just take it from a pure data perspective. And that gets scary,” said Dodds. “You’ve got to make sure the conclusions you get from your data are valuable and relevant, and not just data for data’s sake.”

Dodds demonstrated the VI-WorldSim graphics environment and explained how the product has matured, yet they’re just scratching the surface of its capabilities.

“WorldSim can create a graphics data feed from a sensor. It can simulate a sensor, camera, radar, LiDAR, and the quality of the data coming out can actually be used to do ADAS development,” he said.

Engineers can use WorldSim for data generation and algorithm validation, and can even integrate hardware components into the simulation. “The simulation is getting that good,” Dodds said, yet adopting the technology requires trust in the model and in the people developing the algorithms. Greg Stevens, research director of Mcity at the University of Michigan, addressed this during his presentation on “the curse of rarity.”

“If you have an autonomous vehicle, today it’s basically being driven with AI algorithms, and AI algorithms are really good at handling situations that they’ve seen before. They’re not really great right now at adapting to novelty, to situations they haven’t seen before,” said Stevens. “That’s where the curse of rarity comes in because the goal is to get enough training data to train your AI algorithm properly.”

Normal driving data is a dime a dozen. Engineers can collect data at any time of day or night to capture normal conditions. As Stevens noted, the challenge is “the safety-critical and near-miss data.” Accidents, especially fatal accidents, are rare, which makes them difficult to capture or recreate in physical testing.

“It just doesn’t scale. Simulation has to be part of the solution here,” he said.

Next steps on the zero prototypes journey

In addition to AI advancements and faster, more interactive design and development cycles, Bairati emphasized the growing importance of multi-attribute simulation for virtual testing and vehicle development.

“Historically, there were simulators for different disciplines. Think about Formula One. They only care about vehicle dynamics. They only care about lap time. They don’t care if a car is noisy or not,” Bairati said. “But customers are asking us for simulators that are able to study how the car handles, how the car rides, and how the car sounds, or the tire, not just the car, but also components at the same time. Because when you drive your car, you experience all these different attributes together simultaneously. You care about how the car handles on a winding road, but at the same time, you also enjoy if a car is silent and how it rides. So, we at VI-grade put a lot of effort in the last three to four years to have a multi-attribute simulator where you can study all different disciplines.”

To move such advanced technology forward, partnerships will continue to be a priority, such as that of VI-grade and Multimatic.

“Our relationship with VI-grade dates back decades,” said Peter Gibbons, technical director of vehicle dynamics at Multimatic. “Fifteen years ago, we commissioned the first viable simulator for vehicle development, and that happened in Toronto. From then on, it’s continued to the point of doing this cooperative relationship here, where basically VI-grade and Multimatic are joined as one for this facility.”

John Kipf, engineering director of operations at Multimatic, added that the partnership has many successful years ahead.

“We’ll continue to grow, particularly as we do a lot of ride-based things within the vehicle dynamics sphere, it’s an opportunity for us to explore and do more in that,” Kipf said. “That’s really how this place came to be. It’s been really good, and we support each other as much as possible.”

To learn more about VI-grade and Multimatic’s SimCenter Detroit, visit vi-grade.com/en/services/simcenters/simcenter_detroit.

Las Vegas Fontainebleaus: A recap of Hexagon LIVE 2025 (July 9, 2025)
Looking back and looking ahead at Hexagon’s latest annual industry event.

Las Vegas is a strange place.

For some, it represents the pinnacle of revelry: it’s where you go for birthdays, bachelor/bachelorette parties, or big celebratory weekends. The food and drink are plentiful and varied. You can’t walk more than a hundred feet without passing a slot machine. Opportunities for indulgence abound. The whole sentiment behind that well-worn phrase which begins, “What happens in Vegas…” is an invitation to cut loose and succumb to your most debaucherous impulses. It’s basically Bacchanalia 24/7.

Or it’s where you go for industry events.

At this point, I’ve been to Sin City more times than I care to count and I’ve yet to visit for the first reason. Whenever I’m asked if I’m travelling “For business or pleasure?” my answer has always been the former. This year, I came for Hexagon LIVE: one among the hundreds (thousands?) of attendees hosted at the Fontainebleau, the newest hotel on the strip. There were plenty of familiar faces and the usual trade show trappings, but there were also some big surprises and, of course, the food was amazing.

Here’s what I saw.

The world’s most sophisticated measuring tape

The Fontainebleau is a modern casino seeking to evoke the spirit of Old Las Vegas. Black and white photos of icons like Elvis and Sinatra line the walls. The opening night reception took place against a backdrop of live jazz, power bowls, and white-gloved hands holding glasses of champagne through a fake plastic hedge at the back of the room.

Packs of attendees with matching lanyards and matching looks of confusion streamed along labyrinthine marble corridors, trying to find their way to the reception. More than once over the course of the week, I heard someone in the elevator ask if we really had to traverse the casino to get between the hotel and the conference center.

I assume it was their first time in Vegas.

There’s a certain irony in the juxtaposition of a company that prides itself on precision hosting an event in a city so devoted to excess, but the spectacle of the venue ultimately accorded well with the vision presented in the opening keynote.

Ola Rollén, Hexagon’s former president and CEO and current chairman of its board of directors, took to the stage with his usual brand of laidback charisma, weaving a narrative that mixed the history of measurement – from the cubit and the furlong to the mile and the meter – with the history of his company, starting with its acquisition of Brown & Sharpe in 2000.

He presented the now customary metaphor of building a bridge between digital and physical worlds, buttressed by the idea that Hexagon (or more specifically, what it makes) is “the world’s most sophisticated measuring tape” with examples of coordinate measuring machines (CMMs) that boast sub-micron accuracy and satellite-based systems that can map geographic features from orbit to within a few centimeters.

All of this, while certainly technically impressive, was basically table stakes. But then Rollén made two announcements that I think most of us weren’t expecting.

Octave & AEON

The first was the introduction of a new spin-off built from some of the biggest pieces of Hexagon’s software business. Dubbed Octave, the new company will combine Hexagon’s Asset Lifecycle Intelligence and Safety Infrastructure and Geospatial divisions with ETQ and Bricsys.

The result will be a billion-dollar “start-up” (a sort of pre-fabricated unicorn) with approximately 7,200 employees.

At a time when consolidation is running rampant in virtually every sector, this divestment might seem unusual. Nevertheless, according to Octave’s new CEO, Mattias Stenberg, it’s the right move to make.

“Hexagon is an amazing company,” he said in a press conference. “But it’s also turned into quite a wide monster. So, I think what I and the board and several others have felt over the last couple of years is that it would be a benefit to focus like this. My message to customers is that we’ll have a bigger budget and more autonomy in deciding where to invest.”

The formation of Octave will have an impact in the near term, but the second (and arguably more dramatic) announcement was made with an eye to the future. With a theatrical flair, Rollén introduced the world to AEON, Hexagon’s own entry into the rapidly expanding population of humanoid robots.

While it may seem an odd choice for Hexagon to get into the robotics game, the move fits with the general enthusiasm for AI that seems to be gripping the industrial tech world. Moreover, Hexagon’s particular expertise in advanced sensor technology – one of the prerequisites for humanoid robotics in particular – makes the company well-positioned to develop a robot of its own, at least according to Rollén.

“Hexagon’s legacy in precision measurement and sensor technologies has always been about enabling next-generation autonomy,” he said. “Hexagon is one of the best-placed companies in the world to lead and shape the field of humanoid robotics.”

It’s hard to say this early on whether AEON will end up going beyond the few pilot projects announced with Schaeffler and Pilatus, but I will note that Rollén’s attempt to shake AEON’s hand as he left the stage was entirely unacknowledged by the machine, suggesting that it’s still a far cry from being able to operate independently.

Unless it was a deliberate slight, in which case we should all be worried.

State of manufacturing

Along with new product announcements and customer use cases, Hexagon LIVE included data from not one but two major surveys from the manufacturing sector. A global survey focused on executive perspectives while a second targeted the US and included insights from entry-level employees as well as management.

The global survey, entitled Advanced Manufacturing Report and conducted by Forrester, includes responses from 1,000 manufacturing executives. The US survey, conducted by Hexagon’s Manufacturing Intelligence division, is not yet available to the general public, but engineering.com got a sneak preview of the results as part of our attendance at Hexagon LIVE. Given that exclusivity, let’s focus on the results of the latter report.

Recent discussions about the prospects for U.S. manufacturing tend to concentrate on two main challenges the sector is facing: tariffs and talent shortages. While the former is a (relatively) novel issue, the latter has been under discussion for decades, going at least as far back as 1998. That’s when the National Skill Standards Board (NSSB) and the National Association of Manufacturers (NAM) began expressing concerns about a skills gap, with NAM stating that nine of its ten member associations were unable to find enough skilled workers to meet their needs.

But here’s one challenge for manufacturing that you might not expect to top the list: outdated technology. Nevertheless, that was one of the major findings of the US report, with 72% of respondents stating that outdated technology is preventing them from attracting and retaining workers. If that’s right, it implies that the underlying cause of the skills gap in manufacturing might not be misperception of the sector – as is often speculated – but a lack of new technology.

Paul Rogers, president and CEO of Hexagon, Americas and Asia Pacific, discusses the results of Hexagon’s US manufacturing survey at Hexagon LIVE 2025. (IMAGE: author)

Indeed, other results from the survey appear to support this conclusion, with 60% of respondents reporting that they’re doing enough to make the sector more appealing to new talent. However, there’s also a notable disconnect between executives and entry-level employees regarding the question of whether or not the perception of manufacturing is improving: 86% of executives say it is, while only 59% of entry-level employees agree.

What explains this apparent disagreement?

According to Stephen Graham, executive vice president and general manager of Nexus, Hexagon’s digital reality platform, the cause may be due to a discontinuity between generations.

“I’m not aware of us doing a survey back in 1998,” he said. “But I’ll bet that the perception [of manufacturing] has gotten a lot worse since then because now we have Gen Z coming in, and they’re used to using social media to collaborate on everything. Most manufacturing organizations don’t have technologies that are anything like that.”

Paul Rogers, Hexagon’s president and CEO for the Americas and Asia Pacific, echoed that sentiment in a press event discussing the survey.

“My kids are Gen Z,” he said, “and when you think of what they would identify with manufacturing, it’s dark places with sparks flying, dirty coveralls, things of that nature. But what they’re really expecting is a fully digital environment where everything is high-tech and automated. So, we have to change the perception for Gen Z but, more importantly, we have to change the perspective of the existing workforce and retrain them to think more digital.”

Rogers went on to contrast the user experience of industrial technology with that of consumer tech, where the latter tends to be much more sophisticated and, more importantly, intuitive. “I’ve talked to some major customers and they’re indicating that what they need is for someone to walk in off the street and be ready to go in a few hours,” he said.

Ultimately, this suggests that manufacturing’s perception problem and its technology deficits are interrelated. If that’s right, then both challenges will need to be addressed to deal with the skills gap. The adoption of new, more intuitive tech could help improve the perception of manufacturing – particularly for Gen Z – while more Gen Z members entering the manufacturing workforce could help accelerate that adoption in turn.

Hexagon LIVE 2025

There’s much more to cover from this year’s event, including some incredible stories involving additive manufacturing (stay tuned for those). But, as I headed home, I found myself looking ahead to wonder what next year’s Hexagon LIVE will look like.

Will those white-gloved hands holding champagne be replaced by AEON robots?

Will Octave and its components still be part of The Zone show floor or will the new company have its own event?

Perhaps most of all, I wondered this: How do you top the announcements of a billion-dollar spin-off and a humanoid robot in the same keynote?

I guess I’ll have to wait until next year to find out.

AI governance—the unavoidable imperative of responsibility (July 8, 2025)
Examining key pillars an organization should consider when developing AI governance policies.

In a recent CIMdata Leadership Webinar, my colleague Peter Bilello and I presented our thoughts on the important and emerging topic of Artificial Intelligence (AI) Governance. More specifically, we brought into focus a new term in the overheated discussions surrounding this technology, now entering general use and, inevitably, misuse. That term is “responsibility.”

For this discussion, responsibility means accepting that one will be held personally accountable for AI-related problems and outcomes—good or bad—while acting with that knowledge always in mind.

Janie Gurley, Data Governance Director, CIMdata Inc.

Every new digital technology presents opportunities for misuse, particularly in its early days when its capabilities are not fully understood and its reach is underestimated. AI, however, is unique, making its governance extra challenging for the following three reasons:

  • A huge proportion of AI users in product development are untrained, inexperienced, and lack the caution and self-discipline of engineers; engineers are the early users of nearly all other information technologies.  
  • With little or no oversight, AI users can reach into data without regard to accuracy, completeness, or even relevance. This causes many shortcomings, including AI’s “hallucinations.”
  • AI has many poorly understood risks—a consequence of its power and depth—that many new AI users don’t understand.

While both AI and PLM are critical business strategies, they are hugely different. Today, PLM implementations have matured to the point where they incorporate ‘guardrails,’ mechanisms common in engineering and product development that keep organizational decisions in sync with goals and strategic objectives while holding down risks. AI often lacks such guardrails and is used in ways that solution providers cannot always anticipate.

And that’s where the AI governance challenges discussed in our recent webinar, AI Governance: Ensuring Responsible AI Development and Use, come in.

The scope of the AI problem

AI is not new; in various forms, it has been used for decades. What is new is its sudden widespread adoption, coinciding with the explosion of AI toolkits and AI-enhanced applications, solutions, systems, and platforms. A key problem is the poor quality of data fed into the Large Language Models (LLMs) that genAI (such as ChatGPT and others) uses.

During the webinar, one attendee asked if executives understand the value of data. Bilello candidly responded, “No. And they don’t understand the value of governance, either.”  And why should they?  Nearly all postings and articles about AI mention governance as an afterthought, if at all.

So, it is time to establish AI governance … and the task is far more than simply tracking down errors and identifying users who can be held accountable for them. CIMdata has learned from experience that even minor oversights and loopholes can undermine effective governance.

AI Governance is not just a technical issue, nor is it just a collection of policies on paper. Everyone using AI must be on the same page, so we laid out four elements in AI governance that must be understood and adopted:

Ethical AI, adhering to principles of fairness, transparency, and accountability.

AI Accountability, assigning responsibility for AI decisions and ensuring human oversight.

Human-in-the-Loop (HITL), the integration of human oversight into AI decision-making to ensure sound judgments, verifiable accountability, and authority to intercede and override when needed.

AI Compliance, aligning AI initiatives with legal requirements such as GDPR, CCPA, and the AI Act.

Bilello noted, “Augmented intelligence—the use of AI technologies that extend and/or enhance human intelligence—always has a human in the loop to some extent and, despite appearances, AI is human-created.”
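
To make the human-in-the-loop element concrete, here is a minimal sketch of an approval gate that routes low-confidence AI recommendations to a human reviewer. The confidence threshold, the decision structure, and the reviewer callback are illustrative assumptions on our part, not part of any specific product or of the webinar material.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIDecision:
    """A decision proposed by an AI system, with its confidence score."""
    subject: str
    recommendation: str
    confidence: float  # 0.0 to 1.0

def hitl_gate(decision: AIDecision,
              review: Callable[[AIDecision], bool],
              threshold: float = 0.9) -> bool:
    """Auto-accept only high-confidence decisions; escalate the rest to a human.

    Returns True if the decision is accepted, by the system or by the reviewer.
    """
    if decision.confidence >= threshold:
        return True              # automated path; logging would happen elsewhere
    return review(decision)      # the human can accept or reject the recommendation

# Example: a cautious reviewer who rejects whatever is escalated.
def cautious_reviewer(d: AIDecision) -> bool:
    return False

proposal = AIDecision("supplier approval", "approve vendor 42", confidence=0.72)
print(hitl_gate(proposal, cautious_reviewer))  # False: escalated and rejected
```

The specific numbers matter less than the design point: the override path, and the authority behind it, is built in from the start rather than bolted on.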

Next, we presented the key pillars of AI governance, namely:

  • Transparency: making AI models explainable, clarifying how decisions are made, and making the results auditable.
  • Fairness: proactively detecting and mitigating biases.
  • Privacy and Security: protecting personal data, as well as the integrity of the model.
  • Risk Management: continuous monitoring across the AI lifecycle.
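
As one way to picture the risk-management pillar, the sketch below runs a rolling check of a model metric against an agreed threshold. It is our own illustrative assumption; the metric, the window size, and the threshold are invented for the example.

```python
from collections import deque

class MetricMonitor:
    """Rolling check of a model metric against a governance threshold."""

    def __init__(self, threshold: float, window: int = 50):
        self.threshold = threshold
        self.values = deque(maxlen=window)  # keeps only the most recent observations

    def record(self, value: float) -> bool:
        """Record an observation; return True if the rolling mean breaches the threshold."""
        self.values.append(value)
        rolling_mean = sum(self.values) / len(self.values)
        return rolling_mean > self.threshold

# Example: alert when the rolling error rate of a deployed model drifts above 5%.
monitor = MetricMonitor(threshold=0.05)
for error_rate in [0.02, 0.03, 0.04, 0.09, 0.11]:
    if monitor.record(error_rate):
        print(f"Governance alert: rolling error rate breached the threshold at {error_rate}")
```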

The solution provider’s perspective

Now let’s consider this from the perspective of a solution provider, specifically the Hexagon Manufacturing Intelligence unit of Hexagon Metrology GmbH.

AI Governance “provides the guardrails for deploying production-ready AI solutions. It’s not just about complying with regulations—it’s about proving to our customers that we build safe, reliable systems,” according to Dr. René Cabos, Hexagon Senior Product Manager for AI.

The biggest challenge? According to Cabos, it is “a lack of clear legal definitions of what is legally considered to be AI. Whether it’s a linear regression model or the now widely used Generative AI [genAI], we need traceability, explainability, and structured monitoring.”

Explainability lets users look inside AI algorithms and their underlying LLMs and renders decisions and outcomes visible, traceable, and comprehensible; explainability ensures that AI users and everyone who depends on their work can interpret and verify outcomes. This is vital for enhancing how AI users work and for establishing trust in AI; more on trust below.
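
As a loose illustration of what an auditable decision trail can look like, the sketch below serializes one AI decision together with its inputs, model version, and contributing factors. The field names and the example decision are assumptions made purely for illustration.

```python
import json
from datetime import datetime, timezone

def decision_record(model_version: str, inputs: dict, output: str,
                    factors: dict[str, float]) -> str:
    """Serialize one AI decision, its inputs, and its contributing factors for audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # factor -> relative weight, however the model's explanation method defines it
        "explanation": dict(sorted(factors.items(), key=lambda kv: -kv[1])),
    }
    return json.dumps(record, indent=2)

# Hypothetical example: a quality-inspection model routes a part for rework.
print(decision_record(
    model_version="inspection-model-0.3",
    inputs={"surface_defects": 3, "tolerance_deviation_mm": 0.12},
    output="route to rework",
    factors={"tolerance_deviation_mm": 0.7, "surface_defects": 0.3},
))
```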

Organizations are starting to make changes to generate future value from genAI, with large companies leading the way.

Industry data further supports our discussion on the necessity for robust AI governance, as seen in McKinsey & Company’s Global Survey on AI, titled The state of AI – How organizations are rewiring to capture value, published in March 2025.

The study by Alex Singla et al. found that “Organizations are beginning to create the structures and processes that lead to meaningful value from gen AI,” even though the technology is already in wide use; those structures include putting senior leaders in critical roles overseeing AI governance.

The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks. Overall, the use of AI—gen AI, as well as Analytical AI—continues to build momentum: more than three-quarters of respondents now say that their organizations use AI in at least one business function. The use of genAI in particular is rapidly increasing.

“Unfortunately, governance practices have not kept pace with this rewiring of work processes,” the McKinsey report noted. “This reinforces the critical need for structured, responsible AI governance. Concerns about bias, security breaches, and regulatory gaps are rising. This makes core governance principles like fairness and explainability non-negotiable.”

More recently, McKinsey observed that AI “implications are profound, especially Agentic AI. Agentic AI represents not just a new technology layer but also a new operating model,” Federico Burruti and four co-authors wrote in a June 4, 2025, report titled When can AI make good decisions? The rise of AI corporate citizens.

“And while the upside is massive, so is the risk. Without deliberate governance, transparency, and accountability, these systems could reinforce bias, obscure accountability, or trigger compliance failures,” the report says.

The McKinsey report points out that companies should “Treat AI agents as corporate citizens. That means more than building robust tech. It means rethinking how decisions are made from an end-to-end perspective. It means developing a new understanding of which decisions AI can make. And, most important, it means creating new management (and cost) structures to ensure that both AI and human agents thrive.”

In our webinar, we characterized this rewiring as a tipping point because the integration of AI into the product lifecycle is poised to dramatically reshape engineering and design practices. AI is expected to augment, not replace, human ingenuity in engineering and design; this means humans must assume the role of curator of content and decisions generated with the support of AI.

Why governance has lagged

With AI causing so much heartburn, one might assume that governance is well-established. But no, there are many challenges:

  • The difficulty of validating AI model outputs when systems evolve from advisor-based recommendations to fully autonomous agents.
  • The lack of rigorous model validation, ill-defined ownership of AI-generated intellectual property, and data privacy concerns.
  • Evolving regulatory guidance, certification, and approval of all the automated processes being advanced by AI tools…coupled with regulatory uncertainty in a changing global landscape of compliance challenges and a poor understanding of legal restrictions.
  • Bias, as shown in many unsettling case studies, and the impacts of biased AI systems on communities.
  • The lack of transparency (and “explainability”), with which to challenge black-box AI models.
  • Weak cybersecurity measures and iffy safety and security in the face of cyber threats and risks of adversarial attacks.
  • Public confidence in AI-enabled systems, not just “trust” by users.
  • Ethics and trust themes that reinforce ROI discussions.

Trust in AI is hindered by widespread skepticism, including fears of disinformation, instability, unknown unknowns, job losses, industry concentration, and regulatory conflicts/overreach.

James Markwalder, U.S. Federal Sales and Industry Manager at Prostep i.v.i.p.,  a product data governance association based in Germany, characterized AI development “as a runaway train—hundreds of models hatch every day—so policing the [AI] labs is a fool’s errand. In digital engineering, the smarter play is to govern use.”

AI’s fast evolution requires that we “set clear guardrails, mandate explainability and live monitoring, and anchor every decision to…values of safety, fairness, and accountability,” Markwalder added. “And if the urge to cut corners can be tamed, AI shifts from black-box risk to a trust engine that shields both ROI and reputation.”

AI is also driving a transformation in product development amid business compliance challenges, as explained by Dr. Henrik Weimer, Director of Digital Engineering at Airbus. In his presentation at CIMdata’s PLM Road Map & PDT North America in May 2025, Weimer spelled out four AI business compliance challenges:

Data Privacy, meaning the protection “of personal information collected, used, processed, and stored by AI systems,” which is a key issue “for ethical and responsible AI development and deployment.”

Intellectual Property, that is, “creations of the mind;” he listed “inventions, algorithms, data, patents and copyrights, trade secrets, data ownership, usage rights, and licensing agreements.”

Data Security, ensuring confidentiality, integrity, and availability, as well as protecting data in AI systems throughout the lifecycle.

Discrimination and Bias, addressing the unsettling fact that AI systems “can perpetuate and amplify biases present in the data on which they are trained,” leading to “unfair or discriminatory outcomes, disproportionately affecting certain groups or individuals.”
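
One widely used way to quantify the kind of bias Weimer describes is the disparate-impact (four-fifths rule) ratio. The sketch below is a generic illustration of that check with made-up numbers; it is not drawn from his presentation.

```python
def disparate_impact(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection-rate ratios relative to the most-selected group.

    outcomes maps group name -> (number selected, number of applicants);
    a ratio below roughly 0.8 is the conventional red flag for adverse impact.
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    reference = max(rates.values())
    return {group: rate / reference for group, rate in rates.items()}

# Made-up numbers: group B is selected at 60% of group A's rate, below the 0.8 rule of thumb.
print(disparate_impact({"group_a": (50, 100), "group_b": (30, 100)}))
# {'group_a': 1.0, 'group_b': 0.6}
```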

Add to these issues the environmental impact of AI’s tremendous power demands. In the April 2025 issue of the McKinsey Quarterly, the consulting firm calculated that “Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures by 2030…” (The article is titled The cost of compute: A $7 trillion race to scale data centers.)

Establishing governance

So, how is governance created amid this chaos? In our webinar, we pointed out that the answer is a governance framework that:

  • Establishes governance policies aligned with organizational goals, plus an AI ethics committee or oversight board.
  • Develops and implements risk assessment methodologies for AI projects that monitor AI processes and results for transparency and fairness.
  • Ensures continuous auditing and feedback loops for AI decision-making.

To show that this approach is effective, we offered case studies from Allied Irish Bank, IBM’s AI Ethics Governance framework, and Amazon’s AI recruiting tool (which was found to be biased against women).

Despite all these issues, AI governance across the lifecycle is cost-effective, and guidance was offered on measuring the ROI impact of responsible AI practices:

  • Quantifying AI governance value in cost savings, risk reduction, and reputation management.
  • Developing and implementing metrics for compliance adherence, bias reduction, and transparency (a rough scorecard sketch follows this list).
  • Justifying investment with business case examples and alignment with stakeholders’ priorities.
  • Focusing continuous improvement efforts on the many ways in which AI governance drives innovation and operational efficiency.
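
Purely as an illustration of how the metrics in the list above might be rolled up for reporting, the sketch below computes a weighted governance score. The metric names, weights, and figures are assumptions, not recommended values.

```python
def governance_scorecard(metrics: dict[str, float],
                         weights: dict[str, float]) -> float:
    """Weighted roll-up of governance metrics, each expressed on a 0-1 scale."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * weight for name, weight in weights.items()) / total_weight

# Hypothetical quarterly figures: share of audited decisions, bias checks passed, etc.
score = governance_scorecard(
    metrics={"compliance_adherence": 0.92, "bias_checks_passed": 0.80, "decisions_logged": 0.97},
    weights={"compliance_adherence": 0.5, "bias_checks_passed": 0.3, "decisions_logged": 0.2},
)
print(round(score, 3))  # 0.894
```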

These four points require establishing ownership and accountability through continuous monitoring and risk management, as well as prioritizing ethical design. Ethical design is the creation of products, systems, and services that prioritize benefits to society and the environment while minimizing the risks of harmful outcomes.

The meaning of ‘responsibility’ always seems obvious until one probes into it. Who is responsible? To whom? Responsible for what? Why? And when? Before the arrival of AI, the answers to these questions were usually self-evident. In AI, however, responsibility is unclear without comprehensive governance.

Also required is the implementation and fostering of a culture of responsible AI use through collaboration within the organization as well as with suppliers and field service. Effective collaboration, we pointed out, leads to diversity of expertise and cross-functional teams that strengthen accountability and reduce blind spots.

By broadening the responsibilities of AI users, collaboration adds foresight into potential problems and helps ensure practical, usable governance while building trust in AI processes and their outcomes. Governance succeeds when AI “becomes everyone’s responsibility.”

Our conclusion was summed up as: Govern Smart, Govern Early, and Govern Always.

In AI, human oversight is essential. In his concluding call to action, Bilello emphatically stated, “It’s not if we’re going to do this but when…and when is now.” Undoubtedly, professionals who proactively embrace AI and adapt to the changing landscape will be well-positioned to thrive in the years to come.

Peter Bilello, President and CEO, CIMdata and frequent Engineering.com contributor, contributed to this article.

The post AI governance—the unavoidable imperative of responsibility appeared first on Engineering.com.

]]>
How infrastructure teams are managing higher expectations on tighter budgets https://www.engineering.com/how-infrastructure-teams-are-managing-higher-expectations-on-tighter-budgets/ Tue, 08 Jul 2025 16:11:36 +0000 https://www.engineering.com/?p=140864 Three projects demonstrate how digital tools and smarter planning are helping teams manage complexity.

The post How infrastructure teams are managing higher expectations on tighter budgets appeared first on Engineering.com.

]]>
Bentley Systems has sponsored this post.

Image: Bentley.

Infrastructure teams today face a difficult balancing act. Urban populations are growing, climate targets are tightening, and there is rising pressure to deliver projects faster—often with constrained budgets and leaner workforces.

To meet these demands, organizations are turning to digital tools and collaborative workflows to help make better decisions earlier in the project lifecycle. From residential developments to transport hubs, recent projects offer a closer look at how this approach is working in practice.

Coordinating complex projects with limited time

One of the more challenging aspects of infrastructure delivery is coordinating multiple disciplines under tight schedules. With overlapping scopes, even small misalignments can lead to costly delays or rework. Under these pressures, many teams are using federated modeling and 4D planning tools to visualize logistics, flag conflicts early, and keep project delivery on track from the start.

Take John Sisk & Son, who used Bentley’s digital tools to manage construction of a two-tower, 463-unit residential project in Leeds. The team created a federated model early in the bid phase, then layered in construction sequencing to develop a living 4D plan. The model helped teams visualize temporary works, delivery flows, hoarding access and more. In one case, they identified a conflict between pod delivery paths and mast climber locations, and resolved it digitally before it created problems onsite. Across the project, the team tracked nearly 800 risks and opportunities, estimating £4.6 million in cost avoidance as a result of the modeling effort.
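
At its core, the kind of conflict flagged between the delivery paths and the mast climbers comes down to geometric clash detection within the federated model. The sketch below is a deliberately simplified illustration using axis-aligned bounding boxes and invented element names; it is not how Bentley’s tools implement the check.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Box:
    """Axis-aligned bounding box for one model element, in meters."""
    name: str
    min_xyz: tuple[float, float, float]
    max_xyz: tuple[float, float, float]

def clashes(a: Box, b: Box) -> bool:
    """Two boxes clash if their extents overlap on every axis."""
    return all(a.min_xyz[i] <= b.max_xyz[i] and b.min_xyz[i] <= a.max_xyz[i]
               for i in range(3))

def find_clashes(elements: list[Box]) -> list[tuple[str, str]]:
    """Brute-force pairwise check; production tools use spatial indexing to scale."""
    return [(a.name, b.name) for a, b in combinations(elements, 2) if clashes(a, b)]

# Invented elements: a delivery path that crosses a mast climber zone.
model = [
    Box("pod_delivery_path", (0, 0, 0), (10, 2, 3)),
    Box("mast_climber_zone", (8, 1, 0), (12, 4, 20)),
    Box("tower_core", (20, 20, 0), (25, 25, 60)),
]
print(find_clashes(model))  # [('pod_delivery_path', 'mast_climber_zone')]
```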

Designing for real-world behavior

Urban spaces are shaped by more than just building codes; factors around how people move through a city must be integrated into design. For example, consideration should be given to whether people are alone or in a group, in a rush or carrying luggage. Traditional design approaches often miss these nuances. To better account for real-world behavior, planners are turning to performance-based design and simulation.

In Madrid, Buchanan Consultores used Bentley’s LEGION software to model pedestrian and traffic flows at a stadium, a multimodal transport hub, and one of Europe’s largest redevelopment initiatives. The agent-based software allowed teams to simulate crowd flows using behavioral algorithms that account for speed, comfort, personal space, and direction of travel. The models also considered elements like stairs, signage, and narrow corridors.

For example, at the stadium, Buchanan validated that ongoing renovations wouldn’t compromise crowd movement on match days. Elsewhere, they evaluated how new transit connections would affect pedestrian and cycling flows, and proposed design changes to improve walkability. In one case, Buchanan’s team even created a digital twin of a metro station to assess how a larger station would accommodate more people and how far they would need to walk. By grounding their work in behavior-based simulation, the team helped ensure that the public spaces would function well once built.
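
To give a flavor of what agent-based simulation involves, the sketch below advances a single-file line of pedestrians by one time step, with each agent slowing to preserve its personal space. It is a toy illustration of the behavioral idea only, not LEGION’s actual algorithms.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    position: float        # distance along a corridor, in meters (1-D for simplicity)
    desired_speed: float   # preferred walking speed, in m/s
    personal_space: float  # minimum comfortable gap to the agent ahead, in meters

def step(agents: list[Agent], dt: float = 0.5) -> None:
    """Advance every agent one time step along a single-file corridor.

    Followers slow down in proportion to how far the gap ahead has closed
    below their personal space; the front agent walks at its desired speed.
    """
    ordered = sorted(agents, key=lambda a: a.position, reverse=True)
    ordered[0].position += ordered[0].desired_speed * dt  # the leader is unconstrained
    for ahead, follower in zip(ordered, ordered[1:]):
        gap = ahead.position - follower.position
        if gap > follower.personal_space:
            speed = follower.desired_speed
        else:
            speed = follower.desired_speed * max(gap / follower.personal_space, 0.0)
        follower.position += speed * dt

crowd = [Agent(0.0, 1.4, 1.0), Agent(0.8, 1.2, 1.0), Agent(2.5, 1.3, 1.0)]
step(crowd)
print([round(a.position, 2) for a in crowd])  # [0.7, 1.4, 3.15]
```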

Building flexibility and sustainability into design

Perhaps no expectation has grown more rapidly than the demand for sustainable infrastructure. Project teams face mounting pressure to incorporate smarter and greener designs, while keeping buildability and long-term flexibility in mind.

That balance was a key focus for the Arcadis team working on the new Cambridge South rail station in the UK, designed to support Cambridge’s growing biomedical campus and serve over two million passengers per year. The station is part of a broader three-mile infrastructure enhancement project, which also includes track, signaling, drainage, and overhead line work. The project is aiming for BREEAM Excellent certification and includes measures to achieve carbon net zero within three years and contribute to 10% biodiversity gain.

To deliver these goals efficiently, Arcadis used an array of Bentley tools including MicroStation, OpenRoads, SYNCHRO, and ProjectWise. Clash detection, performed monthly using federated models, identified and resolved over 26,000 issues before they reached the construction site. Tools like iTwin made it easier for those without technical knowledge—or even installed software—to interact with the federated model. Parametric modeling allowed for rapid iteration of earthworks and walkways. The team also used construction sequencing tools to plan complex activities—such as culvert replacement under live rail lines—in a way that minimized disruption. Arcadis’ design extended to include EV charging infrastructure and a scalable layout that would accommodate future expansion.

Conclusion

From residential towers to stadiums and rail stations, digital workflows are enabling infrastructure teams to work smarter under tighter constraints. Across the industry, organizations are rethinking their processes to integrate data and simulation so that teams can better understand their projects earlier, coordinate more consistently, and reduce avoidable mistakes.

To learn more, watch Bentley’s Going Digital webinar series.

The post How infrastructure teams are managing higher expectations on tighter budgets appeared first on Engineering.com.

]]>