Artificial Intelligence - Engineering.com https://www.engineering.com/category/technology/artificial-intelligence/ Tue, 24 Jun 2025 14:53:08 +0000

PTC adds supply chain intelligence to Arena PLM https://www.engineering.com/ptc-adds-supply-chain-intelligence-to-arena-plm/ Tue, 24 Jun 2025 14:53:06 +0000 The cloud-native supply chain module syncs with Onshape to make a CAD-PDM-PLM hybrid for product development.

The post PTC adds supply chain intelligence to Arena PLM appeared first on Engineering.com.

PTC has released Supply Chain Intelligence (SCI), a new suite for its Arena product lifecycle management (PLM) and quality management system (QMS) solution.

Arena SCI continuously checks for emerging risks from evolving supply chain conditions, embedding real-time AI-driven component monitoring and risk mitigation insight directly into product development workflows. The goal is to manage component risks throughout the entire product lifecycle within an existing PLM environment.

Product development and introduction teams use Arena SCI to continuously monitor electronic components across bills of materials (BoMs) to identify emerging risks from changing supply chain conditions. Arena SCI then suggests alternative components based on technical compatibility to prevent sourcing interruptions before they impact production.
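PTC has not published Arena SCI's internals, but the general pattern it describes (scan each BoM line against a component risk feed, then propose still-active compatible alternates) can be sketched in a few lines of Python. All part numbers, lifecycle statuses, and the `flag_bom_risks` helper below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Component:
    part_number: str
    lifecycle_status: str   # e.g. "active", "nrnd", "eol" (hypothetical statuses)
    alternates: list        # technically compatible substitute part numbers

def flag_bom_risks(bom, catalog):
    """Return (part_number, suggested_alternates) for every at-risk BoM line."""
    risky = {"nrnd", "eol"}  # not-recommended-for-new-designs / end-of-life
    findings = []
    for part_number in bom:
        comp = catalog[part_number]
        if comp.lifecycle_status in risky:
            # Only suggest alternates that are themselves still active.
            ok = [p for p in comp.alternates
                  if catalog[p].lifecycle_status == "active"]
            findings.append((part_number, ok))
    return findings

# Invented example catalog and BoM.
catalog = {
    "CAP-100": Component("CAP-100", "eol", ["CAP-200"]),
    "CAP-200": Component("CAP-200", "active", []),
    "RES-010": Component("RES-010", "active", []),
}
print(flag_bom_risks(["CAP-100", "RES-010"], catalog))
# [('CAP-100', ['CAP-200'])]
```

A production system would, of course, pull lifecycle and risk data from a live feed (Arena SCI uses Accuris data) rather than a static dictionary.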

“By delivering supply chain intelligence directly where design decisions are made in a cloud-native environment, Arena Supply Chain Intelligence simplifies collaboration between design teams and suppliers and supports more proactive component sourcing decisions to help offset supply chain disruptions,” said David Katzman, General Manager of Arena and Onshape, PTC. Katzman said PTC’s investment in SCI adds a new dimension that prioritizes resiliency, and hinted at more AI-driven functionality in future Arena releases.

Arena SCI works by using electronic component data from information services provider Accuris to outline comprehensive electronic risk details and suggest alternative parts.

“Our teams face constant pressure to move faster, even as supply chain challenges become increasingly unpredictable. We are seeking ways to help us stay ahead by identifying risks early and avoiding costly last-minute changes, so we can keep projects on track and deliver on time. We see Arena SCI as an opportunity to help achieve this,” said Dan Freeman, Director of Hardware Engineering, Universal Audio.

Since its acquisition by PTC in 2021, Arena has expanded into new international markets and introduced over 16 product releases. It integrates with PTC’s Onshape cloud-native computer-aided design (CAD) and product data management (PDM) platform, resulting in a cloud-native CAD-PDM-PLM offering that supports cross-functional product development.

Innatera introduces new neuromorphic microcontroller for sensors https://www.engineering.com/innatera-introduces-new-neuromorphic-microcontroller-for-sensors/ Mon, 16 Jun 2025 16:17:05 +0000 https://www.engineering.com/?p=140632 The new Pulsar chip brings brain-inspired intelligence to battery-powered devices for real-time, ultra-low power AI at the edge.

The post Innatera introduces new neuromorphic microcontroller for sensors appeared first on Engineering.com.

Innatera, a developer of neuromorphic processors, recently launched Pulsar, its first commercially available microcontroller designed to bring brain-like intelligence into edge devices. Neuromorphic processors are computing chips designed to mimic the structure and function of biological neural networks, particularly the human brain. Unlike traditional digital processors that use the von Neumann architecture (separate memory and processing units), neuromorphic chips integrate memory and computation in the same physical locations, similar to how neurons and synapses work together in the brain.

Born from more than a decade of research, Pulsar processes data locally at the sensor level, eliminating the need to rely on brute-force compute in power-hungry edge processors or data centers to make sense of sensor data. Innatera claims that it also delivers up to 100 times lower latency and 500 times lower energy consumption than conventional AI processors.

The new microcontroller introduces a compute architecture based on Spiking Neural Networks (SNNs), a generational leap in AI hardware that processes data the way the brain does, focusing only on changes in input. This event-driven model reduces energy use and latency while delivering precise, real-time decision-making. Pulsar also combines neuromorphic compute with traditional signal processing and provides versatility by integrating a high-performance RISC-V CPU and dedicated accelerators for Convolutional Neural Networks (CNNs) and Fast Fourier Transform (FFT) in a single chip.
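Pulsar's silicon cannot be reproduced in software, but the event-driven idea behind SNNs can be illustrated with a textbook leaky integrate-and-fire neuron: the membrane potential leaks over time, input spikes add charge, and the neuron only does work (fires) when a threshold is crossed. The parameters below are illustrative, not Innatera's:

```python
def lif_neuron(input_spikes, threshold=1.0, decay=0.9, weight=0.4):
    """Leaky integrate-and-fire: return the output spike train (1 = spike)."""
    potential = 0.0
    out = []
    for s in input_spikes:
        potential = potential * decay + weight * s  # leak, then integrate
        if potential >= threshold:                  # event: fire and reset
            out.append(1)
            potential = 0.0
        else:
            out.append(0)
    return out

# A burst of input spikes accumulates to a firing event; during silence the
# potential simply leaks away and no computation-triggering events occur.
print(lif_neuron([1, 1, 1, 0, 0, 1]))
# [0, 0, 1, 0, 0, 0]
```

The key efficiency property is visible even in this toy: output activity (and hence downstream work) happens only on threshold crossings, not on every input sample.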

“By using brain-inspired Spiking Neural Networks, it brings real-time processing to ultra-low-power devices without leaning on the cloud. That means sensors that can think for themselves — faster responses, lower energy use, and smarter performance across everything from wearables to industrial systems,” said David Harold, senior analyst at Jon Peddie Research, in a press release.

Shown here is a size comparison between a Pulsar chip and various coins. (Image: Innatera.)

With Pulsar, Innatera aims to give product teams a shortcut to smarter features that were previously off-limits due to size, power, or complexity. Filtering and interpreting sensor data locally keeps the main application processor asleep until it is truly needed (in some cases eliminating the need for a main application processor or cloud computing altogether), extending battery life by orders of magnitude. With sub-milliwatt power consumption, Pulsar makes always-on intelligence viable, enabling everything from sub-millisecond gesture recognition in wearables to energy-efficient object detection in smart home systems. For example, it can achieve real-time responsiveness with power budgets as low as 600 µW for radar-based presence detection and 400 µW for audio scene classification.

Innatera also aims to transform traditional sensors into self-contained intelligent systems. With its small memory footprint and efficient neural models, it fits into tight form factors while eliminating the need for heavy external compute and reducing reliance on complex, custom DSP pipelines. The idea is that sensor manufacturers can deliver plug-and-play smart sensor modules that accelerate development and time to market.

Using the company’s Talamo SDK, developers can build spiking models from scratch in a PyTorch-based environment and simulate, optimize, and deploy. Innatera is also launching a developer program, now open to early adopters, to provide a foundation for a growing community that accelerates innovation, shares knowledge, and empowers members to build the next generation of intelligent edge applications together. An upcoming open-source PyTorch frontend and marketplace will create an even more collaborative ecosystem for neuromorphic AI.

For more information, visit innatera.com/pulsar.

How engineers can mitigate AI risks in digital transformation – part 2 https://www.engineering.com/how-engineers-can-mitigate-ai-risks-in-digital-transformation-part-2/ Wed, 04 Jun 2025 18:03:14 +0000 https://www.engineering.com/?p=140282 Exploring five more of the most common AI risks and how to mitigate them.

The post How engineers can mitigate AI risks in digital transformation – part 2 appeared first on Engineering.com.

AI functionality is increasingly a component of digital transformation projects. Delivering AI functionality adds business value to digital transformation. However, engineers will encounter multiple AI risks in these projects. Engineers can use these risk topics as a helpful starter list for their digital transformation project risk register.

Let’s explore the last five of the ten most common AI risks and how to mitigate them. To read about the first five, click here.

Inadequate AI algorithm

The AI algorithms available to build AI models vary widely in scope, quality and complexity. Also, project teams often revise the algorithms they’ve acquired. These two facts create a risk of using an inadequate or inappropriate AI algorithm for the digital transformation problem.

Business teams can reduce their risk of using an inadequate AI algorithm by testing algorithms from multiple sources for:

  • Desired outputs using well-understood training data.
  • Software defects.
  • Computational efficiency.
  • Ability to work with lower quality or lower volume of data.
  • Tendency to drift when new training data is added.
  • Explainability.

AI algorithms are a family of mathematical procedures that read the training data to create an AI model.

Inadequate AI model

The risk of an inadequate AI model can result from many factors. The principal ones are an inadequate AI algorithm, problematic rules and insufficient training data.

Business teams can reduce their risk of using an inadequate AI model by testing the model repeatedly using the following techniques:

  • Fine-tuning model parameters.
  • Functionality testing.
  • Integration testing.
  • Bias and fairness testing.
  • Adversarial testing using malicious or inadvertently harmful input.

The AI model is the object saved after running the AI algorithm by reading the supplied training data. The model consists of the rules, numbers, and any other algorithm-specific data structures required to make predictions when the model uses real-world data for production use.
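To make the bias and fairness bullet concrete, here is a minimal demographic-parity check of the kind such testing might include; the predictions, group labels, and tolerance are all invented for illustration:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Invented model outputs for two demographic groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(round(gap, 2))  # 0.5: group "a" gets a positive outcome 3/4 of the time, group "b" only 1/4
```

A real fairness assessment would use several complementary metrics and a tolerance agreed during the bias review, but the mechanics are this simple: group the outputs, compare the rates.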

Insufficient understanding of the data elements

Some data elements or features always impact AI model results more than others. When a project team does not sufficiently understand which data elements influence model results more than others, the situation creates the risk of:

  • Inaccurate tuning of the AI algorithm.
  • Disappointing or misleading model outputs.

Business teams can reduce their risk of misunderstanding data elements by:

  • Testing how dramatically model results change in response to small changes in value or distribution of values of specific data elements.
  • Confirming whether similarly named data elements across the data sources are in fact identical, to avoid misinterpreting the data element meanings.
  • Ensuring that the data quality of the most critical data elements is the highest.

Data elements are columns in a relational database.
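The first bullet, probing how strongly model results react to a small change in one data element, can be sketched generically. The "model" here is a stand-in linear function, not a real AI model:

```python
def sensitivity(model, row, element, delta=0.01):
    """Relative change in model output when one data element is nudged by delta."""
    base = model(row)
    perturbed = dict(row)
    perturbed[element] = row[element] * (1 + delta)
    return abs(model(perturbed) - base) / abs(base)

# Stand-in model: output dominated by 'load', barely affected by 'temp'.
model = lambda r: 100 * r["load"] + 0.1 * r["temp"]
row = {"load": 2.0, "temp": 30.0}

# Ranking elements by sensitivity tells the team where data quality matters most.
print(sensitivity(model, row, "load") > sensitivity(model, row, "temp"))  # True
```

Running this over every data element produces exactly the ranking the bullet asks for: the elements whose small changes move the output most are the ones whose data quality must be highest.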

Inadequate team competencies

Given the high demand for AI and data science talent, it’s common for digital transformation project teams not to have all the technical competencies they’d like. Inadequate team competencies create the risk that the quality of AI model results is insufficient, and no one will recognize the problem.

Business teams can reduce the risk of inadequate team competencies by:

  • Proactively training team members to boost competencies.
  • Assigning enough subject-matter expertise for the various data sources to the project team.
  • Engaging external consultants to fill some gaps.

The required project team roles and related competencies are likely to include:

  • Business analysts.
  • Data scientists.
  • Subject-matter experts.
  • Machine learning engineers.
  • Data engineers and analysts.
  • AI architects.
  • AI ethicists.
  • Software developers.

Insufficient attention to responsible AI

In their enthusiasm for digital transformation project work, teams often neglect responsible AI even when no one is acting unethically. Responsible AI is about ethics, and ethics is an awkward, abstract topic for project teams.

Business teams can reduce the risk of insufficient attention to responsible AI by:

  • Scoping your fairness and bias assessment work based on the sensitivity of the data you will use.
  • Investigating the provenance of external data sources.
  • Evaluating the compliance and bias of external data.
  • Engaging with AI ethicists during design and testing.
  • Conducting a fairness and bias assessment of AI model results.
  • Designing a process to monitor AI model results regularly for compliance and bias once the AI application is in routine production use.

If you come to believe that the team is consciously acting in an unethical way, it’s time to fire people.

The OECD principles for responsible stewardship of trustworthy AI are:

  • Inclusive growth, sustainable development and well-being.
  • Human-centered values and fairness.
  • Transparency and explainability.
  • Robustness, security and safety.
  • Accountability.

When engineers proactively identify and mitigate AI risks in their digital transformation projects, they will deliver the planned business benefits.

How Chain of Thought drives competitive advantage https://www.engineering.com/how-chain-of-thought-drives-competitive-advantage/ Tue, 03 Jun 2025 13:37:01 +0000 https://www.engineering.com/?p=140218 Moving beyond prompt engineering and towards AI-driven structured reasoning...for better or worse.

The post How Chain of Thought drives competitive advantage appeared first on Engineering.com.

Building on AI prompt literacy, engineers are discovering that knowing what to ask AI is only half the equation. The breakthrough comes from structuring how to think through complex problems with AI as a reasoning partner. Chain of Thought (CoT) methodology transforms this collaboration from text generation into dynamic co-engineering systems thinking, amplifying competent engineers into super-engineers who solve problems with greater clarity and at greater scale.

CoT as structured engineering reasoning

Chain of Thought formalizes what expert engineers intuitively do: breaking complex problems into logical, sequential steps that can be examined, validated, and improved. Enhanced with AI partnership, this structured reasoning becomes scalable organizational intelligence rather than individual expertise.

At its core, leveraging AI is about mastering the art of questioning. The transformation occurs when engineers move from asking AI “What is the solution?” to guiding AI through “How do we systematically analyze this problem?” This creates transparent reasoning pathways that preserve knowledge, enable collaboration, and generate solutions teams can understand and build upon.

As such, here is a reusable CoT template for technical decision-making:

“To solve [engineering challenge], break this down systematically:

  1. Identify core constraints: [performance/cost/regulatory requirements],
  2. Analyze trade-offs between [options] considering [specific criteria],
  3. Evaluate effects on [downstream systems/processes],
  4. Assess implementation risks and mitigation strategies.”

This template works across domains—thermal management, software architecture, regulatory compliance—because it mirrors the structured thinking that defines engineering excellence.
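Because the template is fixed, it is easy to mechanize. A small helper (the function name and all example values are illustrative) turns it into a reusable prompt builder:

```python
def cot_prompt(challenge, constraints, options, criteria, downstream):
    """Fill the four-step Chain-of-Thought template for a technical decision."""
    return (
        f"To solve {challenge}, break this down systematically:\n"
        f"1. Identify core constraints: {constraints},\n"
        f"2. Analyze trade-offs between {options} considering {criteria},\n"
        f"3. Evaluate effects on {downstream},\n"
        f"4. Assess implementation risks and mitigation strategies."
    )

# Invented thermal-management example.
prompt = cot_prompt(
    challenge="enclosure thermal management",
    constraints="85 C max junction temp, passive cooling only",
    options="aluminum fins vs. heat pipe",
    criteria="mass, cost, assembly complexity",
    downstream="PCB layout and EMI shielding",
)
print(prompt)
```

Keeping templates like this in a shared module is one lightweight way to make the structured reasoning reusable across a team rather than re-typed per engineer.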

Practical applications in product innovation

CoT methodology proves most powerful in early-stage ideation, complex trade-off analysis, and compliance reasoning, where traditional approaches miss critical interdependencies. Depending on the target persona, this can translate into various use cases, such as:

Early-stage product ideation:

“To develop [product concept], systematically explore: 1) User pain points and current solutions, 2) Technical feasibility and core challenges, 3) Market positioning and competitive advantage, 4) Minimum viable approach to validate assumptions.”

Engineering trade-off analysis:

“When choosing between [options], evaluate: 1) Performance implications on [key metrics], 2) Cost analysis including lifecycle expenses, 3) Risk assessment and failure mode mitigation, 4) Integration requirements and future modification impacts.”

Compliance and regulatory reasoning:

“To ensure [system] meets [requirements], structure analysis: 1) Requirement mapping to measurable criteria, 2) Design constraint implications, 3) Verification strategy and documentation needs, 4) Change management for ongoing compliance.”

These frameworks transform AI from answer-generator to reasoning partner, helping engineers think systematically while preserving logic for team collaboration and future reference.

PLM integration—CoT as a digital thread enabler

CoT becomes particularly powerful when integrated into Product Lifecycle Management (PLM) and related enterprise resource systems—creating data threads that preserve not just what was decided, but why decisions were made and how they connect across the development lifecycle. Just imagine these scenarios:

Design intent preservation:

“For [design decision], document reasoning: 1) Requirements analysis driving this choice, 2) Alternative evaluation and rejection rationale, 3) Implementation factors influencing approach, 4) Future assumptions that might affect this decision.”

Cross-functional integration:

“When [engineering decision] affects multiple disciplines, analyze: 1) Mechanical implications for structure/thermal/manufacturing, 2) Software considerations for control/interface/processing, 3) Regulatory impact and verification needs, 4) Supply chain effects on sourcing/cost/scalability.”

Digital thread connection points:

  • Link design decisions to original requirements and customer needs.
  • Connect material choices to performance targets and compliance requirements.
  • Trace software architecture to system-level performance goals.
  • Map manufacturing choices to cost targets and quality requirements.

This ensures that when teams change or requirements evolve, critical decision reasoning remains accessible and actionable rather than locked in individual expertise. From a business outcome perspective, this can contribute to continuity across product generations and reduce time spent retracing design decisions during audits, updates, or supplier transitions.
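A digital-thread decision record of the kind described above can be as simple as a structured object linking a decision to its requirements, rejected alternatives, and assumptions. The field names, requirement IDs, and part data below are invented; real PLM systems define their own schemas:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    decision: str
    requirements: list            # upstream requirement IDs this decision satisfies
    rejected_alternatives: dict   # alternative -> why it was rejected
    assumptions: list = field(default_factory=list)  # future-facing caveats

# Invented example: a material choice with its rationale preserved.
rec = DecisionRecord(
    decision="Use 6061-T6 aluminum for the housing",
    requirements=["REQ-012 mass < 1.2 kg", "REQ-031 corrosion resistance"],
    rejected_alternatives={"PA66-GF30": "fails REQ-044 thermal conductivity"},
    assumptions=["supplier lead time stays under 6 weeks"],
)

# During an audit or supplier transition, the "why" is queryable, not tribal.
print(rec.rejected_alternatives["PA66-GF30"])
```

Even this minimal shape supports the traceability the digital thread needs: given a requirement ID, every decision that cites it can be found, along with the alternatives that were already considered and rejected.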

Strategic reality: revolution or evolution?

While CoT methodology delivers measurable improvements, the strategic question remains whether this represents fundamental transformation or sophisticated evolution.

Evidence for transformation: Though evidence remains scarce, early adopters of structured CoT approaches report measurable improvements in knowledge transfer efficiency, design review effectiveness, and decision consistency. Organizations consistently cite enhanced team collaboration, reduced rework cycles, and improved knowledge retention when engineering reasoning becomes explicit and traceable. These patterns suggest systematic capability enhancement rather than marginal improvement.

Case for evolution: Critics argue CoT merely formalizes what competent engineers have always done. Revolutionary breakthroughs—the transistor, World Wide Web, breakthrough materials—often emerge from intuitive leaps that defy structured frameworks, suggesting excessive systematization might constrain innovation. Regardless, the accelerating sophistication of AI demands that engineers critically assess not just what they build, but how they think.

Strategic balance: Successful engineering organizations are not choosing between structured reasoning and creative innovation—they are developing meta-skills for knowing when each approach adds value. CoT excels in complex, multi-constraint problems where systematic analysis prevents costly oversights. Pure creativity dominates breakthrough innovation where paradigm shifts matter more than optimization.

Future-proofing perspective: As AI capabilities accelerate from text generation to multimodal reasoning to autonomous design, organizations building frameworks for continuous methodology evaluation—rather than optimizing current techniques—will maintain competitive advantages through technological transitions.

Chain of Thought may represent the beginning of engineering’s AI integration rather than its culmination. The methodology’s emphasis on explicit reasoning provides tools for navigating technological uncertainty itself, perhaps its most valuable contribution to engineering’s digital future. CoT may be the missing link between today’s prompt-based AI assistants and tomorrow’s agentic co-engineers—moving from reactive support to proactive design collaboration.

Whether revolution or evolution, CoT offers engineers systematic approaches for amplifying problem-solving capabilities in an increasingly AI-integrated technical landscape.

The prompt frontier—how engineers are learning to speak AI https://www.engineering.com/the-prompt-frontier-how-engineers-are-learning-to-speak-ai/ Wed, 14 May 2025 17:03:09 +0000 https://www.engineering.com/?p=139717 Will engineers shape AI, or will AI shape them?

The post The prompt frontier—how engineers are learning to speak AI appeared first on Engineering.com.

Microsoft defines prompt engineering as the process of creating and refining the prompt used by an artificial intelligence (AI) model. “A prompt is a natural language instruction that tells a large language model (LLM) to perform a task. The process is also known as instruction tuning. The model follows the prompt to determine the structure and content of the text it needs to generate.”

For engineers, this means understanding how to structure prompts to solve technical problems, automate tasks, and enhance decision-making. This particularly applies when working with Generative AI—referring to AI models that can create new content, such as text, images, or code, based on the input they receive.

An article from McKinsey suggests that “Prompt engineering is likely to become a larger hiring category in the next few years.” Furthermore, it highlights that “Getting good outputs is not rocket science, but it can take patience and iteration. Just like when you are asking a human for something, providing specific, clear instructions with examples is more likely to result in good outputs than vague ones.”

Why engineers should care about prompt engineering

AI is quickly becoming an integral part of engineering workflows. Whether it is for generating reports, optimizing designs, analyzing large datasets, or even automating repetitive tasks, engineers are interacting with AI tools more frequently. However, the effectiveness of these tools depends heavily on how well they are instructed.

Unlike traditional programming, where logic is explicitly defined, AI models require well-structured prompts to perform optimally. A poorly phrased question or vague instructions can lead to suboptimal or misleading outputs. Engineers must develop prompt engineering skills to maximize AI’s potential, just as they would with any other technical tool.

Interestingly, some experts argue that prompt engineering might become less critical as AI systems evolve. A recent Lifewire article suggests that AI tools are becoming more intuitive, reducing the need for users to craft highly specific prompts. Instead, AI interactions could become as seamless as using a search engine, making advanced prompt techniques less of a necessity over time.

Key prompt skills engineers need

Engineers do not need to be AI researchers, but a foundational understanding of machine learning models, natural language processing, and AI biases can help them craft better prompts. Recognizing how models interpret data and respond to inputs is crucial.

AI tools perform best when given clear, well-defined instructions. Techniques such as specifying the format of the response, using constraints, and breaking down requests into smaller components can improve output quality. For example, instead of asking, “Explain this system,” an engineer could say, “Summarize this system in three bullet points and provide an example of its application.”

Engineers must develop an experimental mindset, continuously refining prompts to get more precise and useful outputs. Testing different wordings, constraints, and levels of detail can significantly improve AI responses. Applying Chain-of-Thought Prompting encourages AI to think step-by-step, improving reasoning and accuracy. Rather than asking, “What is the best material for this component?” an engineer could use: “Consider mechanical strength, cost, and sustainability. Compare three material options and justify the best choice.”

Examples of prompt engineering in action

To illustrate how effective prompt engineering works, consider these examples using your favorite Gen-AI engine:

  • Manufacturing Improvement: Instead of asking an AI tool, “How can I improve my factory efficiency?” an engineer could prompt: “Analyze this production data and suggest three changes to reduce waste by at least 10% while maintaining throughput.”
  • Material Selection: Instead of a generic prompt like “Recommend a good material,” an engineer could use: “Compare aluminum and stainless steel for a structural component, considering weight, durability, and cost.”
  • Software Debugging: Instead of “Fix this code,” a structured prompt could be: “Analyze this Python script for performance issues and suggest optimizations for reducing execution time by 20%.”
  • Compliance Checks: Engineers working with sustainability standards could ask: “Review this product lifecycle report and identify areas where it fails to meet ISO 14001 environmental standards.”
  • System Design Optimization: Instead of asking, “How can I improve this mechanical system?” a structured prompt could be: “Given the following design constraints (weight limit: 50kg, max dimensions: 1m x 1m x 1m, operational temperature range: -20°C to 80°C), suggest three alternative system configurations that maximize efficiency while minimizing cost. Provide a trade-off analysis and justify the best choice.”

Such structured prompts help AI generate more useful, targeted responses, demonstrating the value of thoughtful prompt engineering.

Applications of prompt engineering in actual engineering

Prompt engineering is not just for software developers—it has real-world applications across multiple engineering disciplines:

  • Manufacturing & Design: AI can assist in generating CAD models, optimizing designs for manufacturability, and analyzing production data for efficiency improvements.
  • Electrical & Software Engineering: Engineers can use AI for debugging code, generating test cases, and even predicting circuit failures.
  • Product Development: AI-driven tools can help in ideation, simulating product performance, and accelerating R&D workflows.
  • Sustainability & Compliance: Engineers working in sustainability can leverage AI to assess material lifecycle impacts, optimize energy usage, and ensure compliance with environmental regulations.

The future of prompt engineering in manufacturing

As AI models continue to evolve, the demand for engineers who can effectively interact with them will only grow. Mastering prompt engineering today will give professionals an edge in leveraging AI to drive innovation and efficiency.

However, the trajectory of prompt engineering is uncertain. Some predict that as AI becomes more advanced, it will require less intervention from users, shifting the focus from crafting prompts to verifying AI-generated results. This means engineers may not need to spend as much time iterating on prompts, but instead will focus on critically assessing AI outputs, filtering misinformation, and ensuring AI-driven decisions align with engineering standards and ethics.

Despite this, for the foreseeable future, engineers who master the art of prompt engineering will have a competitive advantage. Just as early adopters of CAD and simulation tools gained an edge, those who learn to effectively communicate with AI will be better positioned to innovate, optimize, and automate their workflows.

A new skill for a new era

Prompt engineering is more than just a buzzword—it is a fundamental skill for the AI-driven future of engineering. As AI tools become more embedded in daily workflows, knowing how to communicate with them effectively will set apart those who use AI passively from those who actively shape its outputs. One thing is for sure: “AI will not replace engineers, but engineers who know AI will”—a quote often attributed to Mark Zuckerberg.

The engineering industry is entering a transformative era, where AI-driven tools are no longer just supplementary but central to problem-solving and innovation. This shift is not merely about learning how to phrase a question effectively—it is about rethinking how engineers interact with intelligent systems. The ability to refine, adapt, and critically assess AI-generated insights will be just as important as the ability to craft precise prompts.

This raises a key question: As AI continues to advance, will prompt engineering remain a specialized skill, or will it become an intuitive part of every engineer’s workflow? Either way, those who proactively develop their AI literacy today will be best prepared to lead in the next evolution of engineering practice.

Model context protocol: the next big step in generating value from AI https://www.engineering.com/model-context-protocol-the-next-big-step-in-generating-value-from-ai/ Fri, 09 May 2025 18:24:25 +0000 https://www.engineering.com/?p=139597 You are going to start hearing a lot more about model context protocol (MCP) in the coming months. Here’s why.

The post Model context protocol: the next big step in generating value from AI appeared first on Engineering.com.

The Model Context Protocol (MCP) is an open-source, application-layer communication standard originally developed by Anthropic to facilitate seamless interaction between large language models (LLMs) and various data sources, tools, and applications. It aims to provide a standardized method for integrating AI systems with external resources, enabling more efficient and context-aware AI-driven workflows.​

With this kind of potential, it’s no surprise that MCP is starting to get a lot of attention. In a recent blog post, Colin Masson, Director of Research for Industrial AI at ARC Advisory Group, calls MCP a “universal translator” that replaces the need for custom-built connections between AI models and industrial systems. Last month, Jim Zemlin, Executive Director of the Linux Foundation, said in a LinkedIn post that MCP is “emerging as a foundational communications layer for AI systems” and compared its potential impact to what HTTP did for the Internet.

Key features of model context protocol

MCP serves as a bridge between AI models and the environments they operate in, allowing models to access and interact with external data sources, APIs, and tools in a structured and secure manner. By standardizing the way AI systems communicate with external resources, MCP simplifies the integration process and enhances the capabilities of AI applications.​ Here are some of the reasons it is expected to improve AI functionality:

Modular and Message-Based Architecture: MCP follows a client-server model over a persistent stream, typically mediated by a host AI system. It uses JSON-RPC 2.0 for communication, supporting requests, responses, and notifications.

Transport Protocols: Supports standard input/output (stdio) and HTTP with Server-Sent Events (SSE), optionally extended via WebSockets or custom transports.

Data Format: Uses UTF-8 encoded JSON, with alternative binary encodings such as MessagePack supported by custom implementations.

Security and Authentication: Employs a host-mediated security model, process sandboxing, HTTPS for remote connections, and optional token-based authentication (e.g., OAuth, API keys).

Developer SDKs: Provides SDKs in Python, TypeScript/JavaScript, Rust, Java, C#, and Swift, maintained under the Model Context Protocol GitHub organization.
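The message framing described above can be sketched in a few lines of Python. The JSON-RPC 2.0 envelope and the `tools/call` method follow the public MCP specification, but the tool name and arguments below are invented for illustration, not taken from any real server:

```python
import json

def make_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request, the envelope MCP uses on every transport."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# A client asking an MCP server to invoke one of its tools.
# The tool name and arguments are invented for the example.
request = make_request(1, "tools/call", {
    "name": "query_database",
    "arguments": {"sql": "SELECT count(*) FROM orders"},
})

# A matching response carries the same id, with "result" on success
# (or "error" on failure).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "42"}]},
}

# Messages travel as UTF-8 encoded JSON, e.g. newline-delimited over stdio.
wire = json.dumps(request, ensure_ascii=False).encode("utf-8")
print(wire.decode("utf-8"))
```

The same envelope rides unchanged over stdio, SSE, or any custom transport, which is what makes the protocol transport-agnostic.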

MCP has been applied across various domains. In software development, it's integrated into IDEs like Zed, platforms like Replit, and code intelligence tools such as Sourcegraph to provide coding assistants with real-time code context. Companies in many industries use it to help internal assistants retrieve information from proprietary documents, CRM systems, and company knowledge bases. Applications like AI2SQL leverage MCP to connect models with SQL databases, enabling plain-language queries. In manufacturing, it supports agentic AI workflows involving multiple tools (e.g., document lookup and messaging APIs), enabling chain-of-thought reasoning over distributed resources.

MCP adoption and ecosystem

  • OpenAI announced support for MCP across its Agents SDK and ChatGPT desktop applications on March 26, 2025.
  • Google DeepMind confirmed MCP support in the upcoming Gemini models and related infrastructure.
  • Dozens of MCP server implementations have been released, including community-maintained connectors for Slack, GitHub, PostgreSQL, Google Drive, and Stripe.
  • Platforms like Replit and Zed have integrated MCP into their environments, providing developers with enhanced AI capabilities.

Comparing MCP to other systems

MCP differs from other AI integration frameworks in several ways:

OpenAI Function Calling: While function calling lets LLMs invoke user-defined functions, MCP offers a broader, model-agnostic infrastructure for tool discovery, access control, and streaming interactions.

OpenAI Plugins and "Work with Apps": These rely on curated partner integrations, whereas MCP supports decentralized, user-defined tool servers.

Google Bard Extensions: These are limited to Google's own products, whereas MCP allows arbitrary third-party integrations.

LangChain / LlamaIndex: While these libraries orchestrate tool-use workflows, MCP provides the underlying communication protocol they can build upon.

MCP represents a significant step forward in AI integration, offering a standardized and secure method for connecting AI systems with external tools and data sources. Its growing adoption across major AI platforms and developer tools underscores its potential to transform AI-driven workflows.

The post Model context protocol: the next big step in generating value from AI appeared first on Engineering.com.

]]>
Trumpf AI assistant uses camera to improve laser cutting edges https://www.engineering.com/trumpf-ai-assistant-uses-camera-to-improve-laser-cutting-edges/ Tue, 06 May 2025 14:31:11 +0000 https://www.engineering.com/?p=139471 The company’s researchers cut thousands of parts to train its new AI assistant.

The post Trumpf AI assistant uses camera to improve laser cutting edges appeared first on Engineering.com.

]]>
Farmington, Conn.-based manufacturing technology company Trumpf is introducing a new "Cutting Assistant" application that uses artificial intelligence to help users improve the quality of laser-cut edges.

Production employees just take a picture of their component’s cut edge with a hand scanner. Then, the AI assesses the edge quality, evaluating it using objective criteria such as burr formation. With this information, the Cutting Assistant’s optimization algorithm suggests improved parameters for the cutting process. Then the machine cuts the sheet metal once more. If the part quality still does not meet expectations, the user has the option to repeat the process.
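Trumpf has not published the internals of the Cutting Assistant, but the closed-loop pattern described above (cut, scan, assess, suggest, repeat) can be sketched generically. Every function, parameter, and threshold below is hypothetical, with toy logic standing in for the camera, the AI model, and the machine:

```python
def cut_and_scan(params):
    """Hypothetical machine + hand-scanner step: in this toy model,
    a lower feed rate leaves less burr on the cut edge."""
    burr = max(0.0, 0.3 - (1500.0 - params["feed_rate"]) / 1000.0)
    return {"burr_level": round(burr, 3)}

def assess_edge_quality(image):
    """Stand-in for the AI model that scores the scanned edge (1.0 = perfect)."""
    return 1.0 - image["burr_level"]

def suggest_parameters(params):
    """Stand-in for the optimization algorithm: nudge the cutting parameters."""
    return {**params,
            "feed_rate": params["feed_rate"] * 0.95,
            "gas_pressure": params["gas_pressure"] * 1.05}

params = {"feed_rate": 1500.0, "gas_pressure": 10.0}
score = 0.0
for attempt in range(5):            # the user may repeat the process
    image = cut_and_scan(params)    # cut the part, then photograph the edge
    score = assess_edge_quality(image)
    print(f"attempt {attempt}: quality score {score:.3f}")
    if score >= 0.9:                # quality meets expectations: stop
        break
    params = suggest_parameters(params)
```

The point of the pattern is that each pass feeds measured quality, not operator intuition, back into the parameter choice.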

This solution is available for all TruLaser series laser cutting machines purchased as of May 2025, which feature a power output of 6 kW or higher.

“The Cutting Assistant is a great example of how AI-enabled tools can help overcome problems related to the skilled worker shortage and also saves time and money. When it comes to productivity, this application creates a competitive edge for fabricators,” says Grant Fergusson, Trumpf Inc. TruLaser 2D laser cutting product manager.

AI makes optimization suggestions

Materials that are not optimized for laser cutting often produce edges with wide variations in cut quality, forcing production employees to constantly change the technology parameters. This means adjusting each individual parameter one by one, a process that demands a lot of time and employee experience. Because the Cutting Assistant is integrated into the machine software, optimized parameters can be transferred seamlessly without programming.

While developing the Cutting Assistant, Trumpf experts cut thousands of parts and drew upon many years of expertise, using their extensive knowledge to train the software’s algorithm. This work on the Cutting Assistant did not stop on its release—data from applications in the field will also be incorporated into the solution to enable faster and more reliable results.

The post Trumpf AI assistant uses camera to improve laser cutting edges appeared first on Engineering.com.

]]>
Path and ALM strike up AI welding partnership https://www.engineering.com/path-and-alm-strike-up-ai-welding-partnership/ Mon, 05 May 2025 17:20:34 +0000 https://www.engineering.com/?p=139433 Path Robotics and ALM Positioners are combining forces to deliver AI-powered welding automation

The post Path and ALM strike up AI welding partnership appeared first on Engineering.com.

]]>
Path Robotics and ALM Positioners have announced a multi-year strategic partnership to transform industrial positioning systems into fully autonomous, AI-powered welding solutions.

The partners stated in a press release that the collaboration addresses urgent manufacturing challenges, including a shortage of skilled welders, increasing part variability, and demand for faster lead times. The solution is built for complex, high mix welding environments, enabling manufacturers to automate without the need for traditional programming.

The companies said this partnership expands their long-standing relationship, ensuring that AI-powered robotics and intelligent positioning technology work seamlessly together to improve accuracy and accelerate throughput in industrial automation.

“ALM is the perfect hardware partner for Path as we expand across North America,” said Andy Lonsberry, CEO and Co-Founder of Path Robotics. “ALM’s teams and products are best in class and known across the industries we serve.”

Industries including heavy equipment, trailer manufacturing, energy, aerospace, and agriculture face increasing pressure to deliver high-quality, customized products at scale.

These sectors face common hurdles: shortages of skilled welders, high part variability, and demand for faster lead times. Traditional automation solutions often fall short in these complex, variable environments.

The ALM-Path partnership offers a solution that addresses these pain points with intelligent automation designed for high-mix, multi-pass welding with extreme part variability. The combined system, based on Path Robotics AW3 and ALM Positioners, intelligently adapts to each part and weld path without reprogramming, making automation viable where it previously wasn’t.

“Path’s technology is changing the way manufacturers view automation,” said Pat Pollock, President and CEO of ALM Positioners, Inc. “Their AI-driven solutions allow manufacturers to take advantage of the quality, throughput, and consistency of robotic welding, without all the programming and application challenges associated with traditional robotic automation.”

The post Path and ALM strike up AI welding partnership appeared first on Engineering.com.

]]>
A crystal ball for the future of innovation https://www.engineering.com/a-crystal-ball-for-the-future-of-innovation/ Mon, 05 May 2025 10:00:00 +0000 https://www.engineering.com/?p=139355 GetFocus co-founder and CEO Jard van Ingen on using AI to predict who will succeed in the innovation race.

The post A crystal ball for the future of innovation appeared first on Engineering.com.

]]>

This episode of Designing the Future is brought to you by GetFocus.

Technology has been the critical driver of social and economic development worldwide since the dawn of the Industrial Revolution, but advances in key enablers such as quantum computing and AI suggest that we may be living, right now, in an age more profound than the age of steam, or electricity, or even nuclear technology.

But there is one critical difference between past generations and today: inventors, developers, and investors in new technologies could not predict success or failure, either in the engineering art or in market acceptance of an innovation. What if that could be different?

Jard van Ingen thinks so, and as CEO and co-founder of the world’s first technology forecasting platform, GetFocus, he intends to turn AI into a digital crystal ball to look into the future of innovation. It’s an intriguing idea, which he describes in conversation with engineering.com’s Jim Anderton. 

* * * 
GetFocus is the world’s first AI-powered technology forecasting platform. We empower R&D leaders to eliminate guesswork and predict which innovations will dominate long before it is obvious by measuring improvement rates from global innovation data. With GetFocus, you get actionable insights in days, helping you move faster, reduce investment risk, and outpace competitors.

Learn more about GetFocus.

The post A crystal ball for the future of innovation appeared first on Engineering.com.

]]>
Turning unstructured data into action with strategic AI deployment https://www.engineering.com/turning-unstructured-data-into-action-with-strategic-ai-deployment/ Fri, 02 May 2025 13:12:31 +0000 https://www.engineering.com/?p=139379 Transform industrial data from disconnected and fragmented to a more unified, actionable strategic resource.

The post Turning unstructured data into action with strategic AI deployment appeared first on Engineering.com.

]]>
Artificial Intelligence (AI) is driving profound change across the industrial sector, but its true value lies in overcoming the challenge of transforming fragmented, siloed data into actionable insights. As AI technologies reshape industries, they offer powerful capabilities to predict outcomes, optimize processes, and enhance decision-making. However, the real potential of AI is unlocked when it is applied to the complex task of integrating unstructured, "freshly harvested" data from both IT and OT systems into a cohesive, strategic resource.

This article explores the strategic application of AI within industrial environments, where the convergence of IT and OT systems plays a critical role. From predictive maintenance to real-time process optimization, AI brings new opportunities to unify disparate data sources through an intelligent digital thread—driving smarter decisions that lead to both immediate operational improvements and long-term innovation. Insights are drawn from industry frameworks to illustrate how businesses can effectively leverage AI to transform data into a competitive advantage.

From raw data to ready insights

In an ideal world, industrial data flows seamlessly through systems and is immediately ready for AI algorithms to digest and act upon. Yet the reality is far different. Much of the data that businesses generate is fragmented, siloed, unstructured, and sometimes not available in a timely manner, making it difficult to extract real-time actionable insights. To realize the full potential of AI, organizations must confront this data challenge head-on.

The first hurdle is understanding the true nature of “freshly harvested” data—the new, often unrefined information generated through sensors, machines, and human input. This raw data is often incomplete, noisy, or inconsistent, making it unreliable for decision-making. The key question is: How can organizations transform this raw data into structured, meaningful insights that AI systems can leverage to drive innovation?

The role of industrial-grade data solutions

According to industrial thought leaders, the solution lies in the deployment of "industrial-grade" AI solutions that can manage the complexities of industrial data. These solutions must be tailored to meet the specific requirements of industrial environments, where data quality and consistency are non-negotiable. Seamless enterprise-wide data integration is key—whether for predictive maintenance that connects sensor data with enterprise asset management, real-time process optimization that synchronizes factory operations with ERP and MRP platforms, or supply chain resilience that links production planning with logistics and inventory.

The first step in this process is data integration—the practice of bringing together disparate data sources into a unified ecosystem. This is where many organizations fail, as they continue to operate in data silos, making it nearly impossible to get a holistic view of operations. By leveraging industrial-grade data fabrics, companies can create a single, cohesive data environment where data from multiple sources, whether from edge devices or cloud systems, can be processed together in real time.

Data structuring—the secret to actionable insights

Once raw data is integrated, it must be structured in a way that makes it interpretable and useful for AI models. Raw data points need to be cleaned, categorized, and tagged with relevant metadata to create a foundation for analysis. This is a critical step in the data preparation lifecycle and requires both human expertise and sophisticated algorithms.
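As a concrete illustration of this clean/categorize/tag step, here is a minimal sketch. The sensor IDs, field names, and metadata registry are invented for the example; a real pipeline would draw this context from historians, asset registries, or a data fabric:

```python
from datetime import datetime, timezone

# Raw, "freshly harvested" readings: string values, gaps, no asset context.
raw = [
    {"sensor": "TT-101", "value": "78.2", "unit": "C"},
    {"sensor": "TT-101", "value": None,   "unit": "C"},   # incomplete sample
    {"sensor": "PT-204", "value": "5.01", "unit": "bar"},
]

# Illustrative metadata registry mapping sensor IDs to asset context.
registry = {
    "TT-101": {"asset": "press-3", "measurement": "temperature"},
    "PT-204": {"asset": "press-3", "measurement": "pressure"},
}

def structure(records):
    """Clean (drop nulls, coerce types), categorize, and tag with metadata."""
    out = []
    for r in records:
        if r["value"] is None:           # cleaning: discard incomplete points
            continue
        meta = registry.get(r["sensor"], {})
        out.append({
            "sensor": r["sensor"],
            "value": float(r["value"]),  # cleaning: string -> number
            "unit": r["unit"],
            "asset": meta.get("asset", "unknown"),              # tagging
            "measurement": meta.get("measurement", "unknown"),  # categorizing
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
    return out

structured = structure(raw)
print(structured[0]["asset"], structured[0]["measurement"])
```

Only after this step do the records share enough structure for a model to consume them reliably.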

The structuring of data enables the development of reliable AI models. These models are trained on historical data, but the real power lies in their ability to make predictions and provide insights from new, incoming data—what we might call “freshly harvested” data. For example, predictive maintenance models can alert manufacturers to potential equipment failures before they occur, while quality control models can detect deviations in production in real time, allowing for immediate intervention.

The importance of explainability cannot be overstated. For industrial AI applications to be truly valuable, stakeholders must be able to trust the insights generated. Clear, transparent, explainable AI models ensure that human operators can understand and act upon AI recommendations with confidence.

Operationalizing AI for real results

Having structured data and trained models is only part of the equation. The real test is turning AI-generated insights into actionable outcomes. This is where real-time decision-making comes into play.

Organizations need to operationalize AI by embedding it within their decision-making frameworks. Real-time AI systems need to communicate directly with production systems, supply chains, and maintenance teams to drive immediate action. For example, an AI system might detect an anomaly in production quality and automatically adjust parameters, triggering alerts to the relevant personnel. The ability to act on AI insights immediately is what separates a theoretical AI application from one that delivers real-world value.
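One minimal, hypothetical sketch of such a loop: a rolling statistical check flags an out-of-range quality reading, then acts immediately by adjusting a setpoint and recording an alert for the relevant personnel. The three-sigma rule and the setpoint tweak are illustrative stand-ins for a real model and control interface:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 20
history = deque(maxlen=WINDOW)   # rolling window of recent measurements
alerts = []

def on_measurement(value, machine):
    """Flag values far from the recent mean, then act on the insight:
    adjust the process and alert the team (both steps illustrative)."""
    if len(history) >= WINDOW:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) > 3 * sigma:
            machine["setpoint"] *= 0.98        # immediate automatic adjustment
            alerts.append(f"anomaly: {value:.2f} (recent mean {mu:.2f})")
    history.append(value)

machine = {"setpoint": 100.0}
for v in [10.0, 10.1, 9.9] * 7:   # 21 normal readings fill the window
    on_measurement(v, machine)
on_measurement(14.0, machine)      # a clear outlier triggers the loop
print(alerts)
```

The feedback-loop point from the paragraph above applies here too: the window keeps sliding, so the detector adapts as operating conditions drift rather than judging against a fixed baseline.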

Moreover, feedback loops are essential. The AI models should not be static but should continuously learn and adapt based on new data and operational changes. This iterative approach ensures that AI doesn’t just solve problems for today but continues to improve and optimize processes over time.

Generative AI: A catalyst for innovation and workforce augmentation

While AI’s predictive capabilities are often the focal point, generative AI holds particular promise for transforming industrial workflows. By augmenting human creativity and problem-solving, generative AI helps address the skill gap in the workforce. For example, AI-assisted design can produce innovative solutions that human engineers may not have considered.

However, the integration of generative AI into industrial settings requires careful consideration. As powerful as it is, generative AI can be more costly than traditional AI models. Its inclusion in industrial applications must be strategic, ensuring that the value it brings—such as faster prototyping or more efficient design—justifies the investment.

How to build a sustainable AI strategy for data insights

Turning fragmented data into actionable insights requires a strategic approach. Based on industry frameworks from ABB and ARC Advisory Group, here’s a blueprint for effective AI adoption in industrial settings:

  1. Begin by understanding what is to be achieved through AI—whether it is optimizing efficiency, reducing downtime, or improving quality control. Align AI initiatives with these objectives to ensure focused efforts.
  2. Assess the existing data infrastructure and invest in solutions that integrate and standardize data across your systems. A unified data environment is crucial for enabling AI-driven insights.
  3. Avoid generic AI solutions. Instead, select AI tools that address specific use cases—whether it is predictive maintenance or process optimization. Tailored solutions are far more likely to provide valuable, actionable insights.
  4. In highly regulated industries, transparent and explainable AI models are essential for building trust and compliance. Make sure AI systems provide insights that are understandable and auditable.
  5. AI adoption is not a one-time implementation. Begin with pilot projects, learn from the results, and scale up gradually. This approach allows businesses to optimize AI systems while minimizing risk.

Scaling AI for broader impact

Collaboration is key to successful AI adoption. Partnering with experienced software providers, AI developers, and industry experts can help organizations navigate the challenges of scaling AI across their operations. Moreover, integrating generative AI alongside traditional AI approaches allows companies to strike a balance between innovation and cost-effectiveness.

The promise of AI in transforming industries is undeniable, but to truly realize its value, organizations must overcome the data fragmentation challenges that hinder effective AI deployment. By integrating, structuring, and operationalizing data, companies can convert raw information into actionable insights that drive measurable results. The future of industrial AI is not just about predictions and optimization—it’s about continuous learning, innovation, and the strategic use of AI to create sustainable, long-term growth.

The post Turning unstructured data into action with strategic AI deployment appeared first on Engineering.com.

]]>