1 Introduction

The present era of manufacturing is characterized by intense market competition, intricate and dynamic production processes, uncertain conditions, and a volatile market. To thrive in this globally competitive landscape, manufacturing companies are compelled to offer customized products while simultaneously reducing costs, shortening time-to-market, maintaining product quality, and ensuring customer satisfaction [1, 2]. To achieve these manufacturing objectives, numerous developed countries are actively promoting the use of advanced technologies as part of their national manufacturing strategies and initiatives. Examples include China's Made in China 2025 strategy, Germany's Industry 4.0, the UK's Industry 2050 strategy, the USA's Industrial Internet, South Korea's Manufacturing Innovation 3.0, and Japan's Society 5.0 [3]. At the heart of these strategies lies the concept of intelligent manufacturing, which emphasizes the seamless integration of artificial intelligence (AI) and advanced manufacturing techniques [4].

In an era driven by smart manufacturing strategies and advanced sensor technologies, manufacturing operations generate an immense amount of data, reaching around 1000 Exabytes yearly. This data contains valuable insights about various aspects of the manufacturing process, from critical events and alerts to production line activities [5]. Unfortunately, many manufacturing companies are not fully utilizing this valuable resource. Within this vast data pool lies the untapped potential to uncover profound insights about machinery, processes, and systems. Advanced data processing and analysis techniques can turn this data into transformative insights that change how manufacturing methods are understood and optimized.

The advent of Industry 4.0 introduces a transformative era powered by groundbreaking technologies such as Blockchain (BC), Big Data Analysis (BDA), Artificial Intelligence (AI), Digital Twin (DT), Digital Twin Triplet (DT3), Internet of Things (IoT), Additive Manufacturing (AM), and Cyber-Physical Systems (CPS) [6,7,8,9].

Throughout the lifecycle of a product or piece of industrial equipment, spanning design, manufacturing, maintenance, and the cycles of recycling, reuse, and retrofitting, a diverse array of data emerges that captures the footprint of the entire lifecycle. In the context of Industry 4.0, the concept of Product Lifecycle Management (PLM) has gained prominence. PLM orchestrates a product's existence, guiding it from inception through maturity to retirement [10]. Within Industry 4.0, technologies such as AI, IoT, Big Data (BD), and cloud computing are transforming advanced manufacturing. Their convergence yields deeper insights, higher efficiency, and a redefinition of how products are designed and managed over their lives [11]. The main contribution of this comprehensive literature review is that it provides valuable insights into the extensive applications of AI techniques throughout the lifecycle of industrial equipment.

Figure 1 visually depicts the paper's organizational framework. The paper's structure is as follows: Sect. 2 outlines the research methodology, including the literature review protocol and search query strings. Section 3 provides concise insights into modern production and the product lifecycle. Section 4 offers a brief overview of AI. Section 5 delves into popular AI techniques and their applications across industrial equipment lifecycles. Section 6 lists the benefits and challenges of integrating AI in industrial equipment lifecycles. Finally, Sect. 7 presents the research conclusion.

Fig. 1 Organization of the paper

It is important to clarify our understanding of generic products and industrial equipment/components before commencing the literature review. Industrial equipment is a specific kind of generic product, customized for industrial applications. A generic product encompasses a wide range of items serving a function without industry-specific tailoring. These products are versatile and commonly mass-produced for various uses, like chairs providing seating in homes, offices, and restaurants. In contrast, industrial equipment includes machinery, tools, and systems created for industrial settings like factories and construction sites. Industrial equipment can be used standalone as a self-contained component or incorporated into other devices or machinery. These items are optimized for efficiency, durability, and performance in demanding environments. While both serve a function, industrial equipment is specialized for industrial needs, forming a focused subset of generic products designed for industry requirements.

The commonly used terms in this paper, such as AI (artificial intelligence), AM (additive manufacturing), PdM (predictive maintenance), etc. are abbreviated in Table 1, which furnishes a complete list of abbreviations and their meanings.

Table 1 Abbreviations used in article and their meanings

2 Research methodology

The objective of this literature review is to provide a thorough examination of the utilization of AI techniques throughout the various phases of the product/industrial equipment lifecycle. The review aims to identify the predominant AI techniques employed to address production challenges at each phase and determine their popularity. Additionally, it aims to explore how the application of AI at different lifecycle stages enhances collaboration along the manufacturing chain and the overall product lifecycle. Each publication included in this review has been meticulously analyzed and compiled to offer a comprehensive overview of the current state of the art and its potential for future advancements.

2.1 Review protocol

A protocol was established to guide the article selection process for this review paper. The protocol encompasses the identification of appropriate sources for literature selection, the formulation of search queries, and the establishment of inclusion and exclusion criteria for the chosen publications. The details of this protocol are outlined in the following subsection.

2.2 Selection of search sources

Various databases and search engines, such as Scopus, Web of Science, and Google Scholar, offer extensive collections of publications for researchers to explore. For this review, the Web of Science and Scopus databases were chosen based on several factors. Firstly, these databases enjoy popularity within the scientific community. Secondly, researchers and students have free access to Scopus and Web of Science through institutional agreements. Lastly, both databases provide comprehensive and reliable search results that can be utilized consistently.

2.3 Search query

The selection of appropriate query strings is a crucial aspect of this review and plays a key role in achieving its objectives. It is essential to use relevant and popular keywords that resonate with the research community to retrieve high-quality literature from scientific databases. This subsection focuses on analyzing the keywords employed in this review. Instead of using a complex single query string to capture articles related to all lifecycle phases of industrial equipment, separate query strings were formulated for the design, manufacturing, maintenance, and reuse-recycle-retrofit phases. Table 2 presents the query strings utilized for each phase.

Table 2 Query strings used to find publications from selected literature database

Each query string comprises two parts, each containing specific keywords. The first part focuses on newer and trending keywords in the literature that are relevant to the core subject of this paper, such as "machine learning," "deep learning," "industrial intelligence," "artificial intelligence," and "big data," amongst others. The second part consists of keywords that are specific to each phase of the product life cycle. For example, for the design phase, keywords like "generative design," "design optimization," "computer aided design," and "sustainable product design" are used.
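The two-part structure described above can be sketched in a few lines of code. The keyword sets and the Scopus-style `TITLE-ABS-KEY` wrapper below are illustrative assumptions for demonstration, not the authors' verbatim queries from Table 2:

```python
# Sketch: composing a two-part boolean query string. Part one holds the
# AI-related keywords; part two holds the phase-specific keywords (the
# design phase is shown here). Keywords within a part are OR-ed, and the
# two parts are AND-ed together.

ai_keywords = ["machine learning", "deep learning", "industrial intelligence",
               "artificial intelligence", "big data"]
design_keywords = ["generative design", "design optimization",
                   "computer aided design", "sustainable product design"]

def build_query(part_one, part_two):
    """Join each part with OR, then combine the two parts with AND."""
    left = " OR ".join(f'"{kw}"' for kw in part_one)
    right = " OR ".join(f'"{kw}"' for kw in part_two)
    return f"TITLE-ABS-KEY(({left}) AND ({right}))"

query = build_query(ai_keywords, design_keywords)
print(query)
```

Separate queries for the manufacturing, maintenance, and reuse-recycle-retrofit phases would reuse the same first part and swap in their own phase-specific keyword list.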

It is important to note that terms like "Data Analytics," "Data Mining," or "Stochastic Learning" are sometimes used interchangeably with "Machine Learning" and "Deep Learning," as they are all related to extracting knowledge from data in the field of data science. However, using these terms in the query string could potentially introduce a bias and affect the focus of this study. Moreover, specific keywords related to Machine Learning (ML) or Deep Learning (DL) techniques, such as "convolutional neural network (CNN)," "generative adversarial networks (GAN)," "Random Forest (RF)," or "k-means," are not included as search keywords to avoid artificially boosting the results of these specific techniques when addressing the second research question regarding the most used AI techniques throughout the product lifecycle.

To evaluate the selected keywords for the query strings, a brief bibliometric analysis was conducted using VOSviewer. VOSviewer is a popular software tool developed by researchers at the Centre for Science and Technology Studies at Leiden University. It is designed for bibliometric analysis and visualization of scientific literature. VOSviewer enables users to analyze and visualize patterns, networks, and relationships among publications based on various bibliographic data, such as keywords, authors, journals, and citations. It provides valuable insights into research trends, collaborations, and the structure of scientific knowledge domains. The purpose of the bibliometric analysis was to gain a preliminary understanding of the impact of various keywords and combinations on the results of the queries across all phases of the product life cycle. The queries were executed on the selected literature databases on May 20, 2023.

After conducting the database searches for each phase of the product life cycle, the publications obtained were subjected to bibliometric analysis. The analysis focused on the keywords specified by the authors in these papers. The VOSviewer network visualization tool was used to present the analysis results. In the network, nodes represented by circles or rectangles correspond to keywords or items, with the size of each node indicating its importance based on the frequency of occurrence. The connections between nodes represent their co-occurrence, and the spatial distance between keywords reflects their relationship. For better visibility, a filter was applied to display a maximum of 45 items per graph. The "overlay visualization" feature was utilized to show the average publication year for each keyword through a color scale. The queried keywords within the first parenthesis were highlighted with a red rectangle for easier identification. The resulting networks are presented in Fig. 2.
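The keyword co-occurrence counting that underlies a network of this kind can be sketched directly: node size corresponds to how often a keyword appears, and edge weight to how often two keywords appear on the same paper. The toy records below are invented stand-ins for the actual Scopus/Web of Science exports:

```python
from collections import Counter
from itertools import combinations

# Toy author-keyword lists standing in for retrieved records
# (the real corpus comes from the database exports).
records = [
    ["machine learning", "predictive maintenance", "deep learning"],
    ["deep learning", "fault diagnosis", "predictive maintenance"],
    ["machine learning", "design optimization"],
]

occurrence = Counter(kw for rec in records for kw in rec)   # node size
cooccurrence = Counter()                                    # edge weight
for rec in records:
    for a, b in combinations(sorted(set(rec)), 2):          # each unordered pair once
        cooccurrence[(a, b)] += 1

print(occurrence["predictive maintenance"])                        # 2
print(cooccurrence[("deep learning", "predictive maintenance")])   # 2
```

Tools like VOSviewer compute essentially these counts at scale and then lay the resulting weighted graph out spatially, with the overlay color scale added from each keyword's average publication year.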

Fig. 2 Network visualization of query results with publications per year

2.4 Literature collection

Following the query of scientific databases using the chosen keywords, a significant number of search results were obtained, covering the period from 1961 to 20 May 2023. The distribution of publications by year for each phase of the product lifecycle is depicted in Fig. 3, where the bars highlighted in green indicate the most recent publications included in this study (from 1 January 2017 to 20 May 2023). The acceptance and rejection criteria are further elucidated in Sect. 2.5.

Fig. 3 Distribution of publications by year for each phase of the product lifecycle

2.5 Acceptance and rejection rules

This study aims to investigate the application of AI techniques, including machine learning, deep learning, reinforcement learning, and natural language processing (NLP), in optimizing industrial processes throughout the various stages of a product's lifecycle within the context of smart manufacturing. This literature review focuses on recent publications from 1 January 2017 to 20 May 2023 written in English. To ensure the selection of the most relevant literature for evaluation, specific acceptance and rejection criteria were established for this review. The acceptance and rejection criteria are summarized in Table 3 and further explained in the next paragraph.

Table 3 Acceptance/Rejection rules for publications

2.6 Literature collection summary

The literature search process is summarized in Fig. 4, outlining the comprehensive approach followed to identify relevant publications. After an extensive search, articles that met the predetermined acceptance criteria were selected. A total of 43 articles were deemed suitable for further analysis.

Fig. 4 Literature search and screening process

3 Modern production and product lifecycle

The integration of artificial intelligence into the manufacturing industry has become increasingly crucial in optimizing complex production processes. In pursuit of greater efficiency and sustainability, research efforts have focused on leveraging Industry 4.0 technologies such as AI, IoT, big data, cloud computing and many more. These technologies aim to enhance the resilience and sustainability of production systems [12]. Smart factories, as a prime example of this integration, utilize context-aware applications and self-regulating mechanisms to optimize production processes [13].

The growing importance of innovation and digitalization in products, services, and processes has underscored the necessity of adopting advanced manufacturing technologies such as AI and ML. These algorithms have emerged as crucial tools for addressing complex problems and handling vast amounts of data, which are inherent challenges in supply chain networks. With a specific focus on computer science and engineering, AI and ML offer numerous advantages across industrial sectors, including enhanced innovation, process optimization, resource utilization, and improved quality [14]. Notably, these benefits have revolutionized the product lifecycle, enabling optimization of processes, resource utilization, and product and process quality at every stage, ranging from design and manufacturing to removal and disposal. Moreover, these advancements extend to all levels of supply chain stakeholders, facilitating collaboration and efficiency throughout the entire supply chain.

A generic product’s lifespan does not start at the factory where it is made and end at the store where it is sold. A product’s life progresses from its conception as an idea through design, testing, manufacture, usage, and eventually retirement and disposal [15]. The product lifecycle encompasses the different phases that a product experiences from its creation to its ultimate disposal. Generally, the product lifecycle can be categorized into three primary phases: the beginning of life (BOL), the middle of life (MOL), and the end of life (EOL), as presented in Fig. 5. During the BOL phase, the product concept is developed, designed, and physically realized through production. The MOL phase involves the distribution and maintenance of the product by customers and distributors. Finally, the EOL phase requires manufacturers to recycle or dispose of products or their components that are no longer repairable or reusable [15, 16].

Fig. 5 Phases in the lifecycle of a generic product

The phases of the lifecycle are connected in a loop (Fig. 5), mainly to show the interconnections and relationships between the phases. For instance, the EOL and BOL periods are connected because products are remanufactured, reused, retrofitted, or recycled. However, depending on the condition of the product or its components, it may instead be disposed of and become part of landfill.

From the manufacturer's perspective, a product lifecycle encompasses the entire process starting from the conceptual design stage to the acquisition of raw materials, production, distribution, utilization, after-sale service, and ultimately, recycling and disposal. This holistic view of the product lifecycle takes into account every step involved in the journey of a product, ensuring a comprehensive understanding of its progression from creation to its eventual end [15].

Every stage of the product lifecycle entails distinct activities, involves specific personnel and departments, and generates substantial amounts of data. Understanding the objectives and requirements associated with each stage is crucial for all supply chain partners to effectively manage the product and achieve their respective goals. By comprehending the unique characteristics and demands of each stage, stakeholders can work collaboratively to ensure efficient product management throughout its lifecycle [13]. Figure 5, adapted from the original source [17] and modified by the authors, provides a visual representation of the different stages involved in the lifecycle of a product. It illustrates the progression of a product, starting from its initial conceptualization or idea generation phase, followed by the design and manufacturing processes. Subsequently, the product flows through the supply chain, involving various actors and stakeholders who contribute to its distribution and usage. Ultimately, at the end of its lifecycle, the product is either reclaimed by recycling companies or disassemblers, or it may be discarded and end up in landfills as waste.

As illustrated in Fig. 6, a substantial amount of high-dimensional and diverse data is generated throughout the lifecycle of a product. This data can be of numerical nature, originating from sensors, cameras, or other vision-related devices, or textual data arising from customer services, market analysis reports, and end-user reviews. For instance, the BOL period encompasses the design and manufacturing stages. The design phase takes input data such as customer demands, product functions, and product quality derived from market analysis. It produces product design specifications as output. These include computer-aided design (CAD) files, computer program code, and a variety of configuration parameters like tolerance and location parameters.

Fig. 6 Related data and actors involved in product lifecycle

Similarly, the production stage focuses on procurement, production planning, maintenance, and logistics. Here, real-time sensory data is collected from plant processes, along with historical data from databases pertaining to machinery and its components. Information regarding component or machinery failure frequency, failure rate, supplier details, and outsourcing companies is also captured. Consequently, the production stage provides vital outputs like production scheduling and specifications, production plans, instructions for operators and assemblers, and inventory planning and status updates. The evolution of a product through these various stages results in a wealth of data that encompasses both numerical and textual aspects, all of which contribute to optimizing and streamlining the product lifecycle.

During the MOL period, a product goes through the maintenance, logistics, and utilization stages. At this stage, the product has reached its final form, and issues related to logistics, onsite maintenance, and services become increasingly significant and require careful attention. The MOL phase involves various businesses like maintenance, repair, and work processes carried out by different end-users of the product. After production, the products are delivered to distributors and sale points based on customer and market demand. Furthermore, users receive delivery services following the purchase of the product. In this context, logistics planning must be optimized using inventory data, order data, location data, and similar sources to ensure accurate and timely delivery of products [16, 18, 19].

During the EOL period, the maintenance history, product status information, and working environment data gathered during the MOL period are utilized to calculate the product's health and remaining useful life. It is crucial to consider the product's status to make informed decisions about the best EOL recovery options, such as recycling, reuse, remanufacturing, disassembling, or disposal. By maximizing the value of EOL products, the most suitable recovery option can be chosen. Advanced analytics techniques play a significant role in obtaining a well-optimized project schedule, determining when, how, where, and what to recycle, thus ensuring efficient and effective EOL management [16,17,18].
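A minimal sketch of the kind of remaining-useful-life (RUL) estimate described above is to fit a linear trend to a degradation indicator and extrapolate to a failure threshold. The readings and the threshold below are invented for illustration and are not taken from the reviewed studies:

```python
# RUL sketch: ordinary least-squares fit of a degradation signal over
# operating hours, then extrapolation to a failure threshold.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

hours = [0, 100, 200, 300, 400]          # operating hours at each inspection
wear  = [0.10, 0.18, 0.26, 0.34, 0.42]   # monotone degradation indicator
FAILURE_THRESHOLD = 0.90                 # assumed failure level of the indicator

slope, intercept = linear_fit(hours, wear)
hours_at_failure = (FAILURE_THRESHOLD - intercept) / slope
rul = hours_at_failure - hours[-1]
print(round(rul))   # estimated operating hours remaining (600 here)
```

Real EOL decision-making would replace this linear model with richer health indicators and the AI techniques surveyed later in this paper, but the extrapolate-to-threshold structure of the estimate is the same.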

4 AI overview, methods, and techniques

In this section, we offer a concise overview of AI and emphasize the subtle yet significant difference between AI methods and techniques. The terms “AI technique” and “AI method” are often used interchangeably, leading to confusion in some studies [19]. However, it is crucial to distinguish between these terms, as doing so enriches our understanding of the diverse approaches employed in AI. By clarifying these concepts, we aim to provide a more accurate and comprehensive grasp of the broad spectrum of methodologies utilized throughout the life cycle of a product. For those eager to delve deeper into the realm of AI, we recommend the study conducted by Mukhamediev et al. [20]. In this comprehensive review, the authors provide invaluable insights into AI technologies, their adoption in industry and society, as well as the advantages, challenges, and concerns surrounding their implementation.

4.1 An introductory overview of AI

AI is an intriguing and transformative field in computer science that seeks to replicate human-like intelligence in machines. It encompasses a vast array of techniques and algorithms that empower computers to perform tasks that would typically require human intelligence, such as learning, reasoning, problem-solving, and decision-making. The evolution of AI has been rapid, driven by advancements in computing power, the availability of vast datasets, and continuous algorithmic innovations [21,22,23].

A fundamental distinction within AI is the division between weak AI and strong AI. Weak AI, often referred to as Narrow AI, pertains to AI systems designed for specific tasks and applications. These systems excel in their designated domains and can outperform humans in their specialized functions. Examples of weak AI include virtual assistants like Siri and Alexa, language translation tools, recommendation systems, and facial recognition software. Despite their impressive performance in their respective areas, these AI systems lack the broader capabilities associated with human general intelligence. Conversely, strong AI, also known as Artificial General Intelligence (AGI), represents a higher aspiration in AI research. AGI aims to create AI systems that possess a level of general intelligence comparable to that of humans, allowing them to understand, learn, and adapt across diverse domains. Achieving strong AI remains an ambitious goal and is yet to be fully realized. Developing AGI requires an in-depth understanding of human cognition, consciousness, and the ability to handle complex and unstructured tasks [24, 25].

AI is an interdisciplinary field that draws upon various scientific areas and domains such as computer science, deep learning, machine learning, evolutionary biology (evolutionary algorithms), expert systems, natural language processing, computer vision, robotics, and planning. As AI continues to advance, its impact spans across numerous industries, including healthcare, finance, manufacturing, transportation, and entertainment. It has led to remarkable applications such as autonomous vehicles, medical image analysis, financial fraud detection, and natural language processing. The potential of AI is vast, and it has the capability to revolutionize various aspects of society and improve human lives significantly. However, with its widespread adoption, AI also raises critical questions about ethics, bias, data privacy, and the responsible deployment of AI technologies. Addressing these challenges will be crucial in harnessing the full potential of AI for the betterment of humanity.

4.2 AI methods and techniques

The terms “AI technique” and “AI method” are often used interchangeably, but there is a subtle difference between the two. The following subsections clarify this distinction.

4.2.1 AI methods

AI methods represent the fundamental approaches and paradigms that guide the development of intelligent systems. These overarching methodologies serve as blueprints for problem-solving and knowledge acquisition. Some prominent AI methods include:

1. Machine learning (ML): ML is a core method in AI that empowers machines to learn from data without being explicitly programmed. It involves creating algorithms and models that can automatically identify patterns and relationships within the data and make predictions or decisions based on those patterns. ML can be broadly categorized into three main branches: Supervised Learning (SL), Unsupervised Learning (UL), and Reinforcement Learning (RL).

2. Deep learning (DL): DL is a specialized subset of machine learning that utilizes artificial neural networks, inspired by the structure and functioning of the human brain. These deep neural networks consist of multiple layers of interconnected nodes (neurons) that process data hierarchically. DL has shown remarkable success in tasks such as image recognition, natural language processing, and speech synthesis. Its ability to process vast amounts of unstructured data, like images and texts, makes it particularly effective in complex and high-dimensional problem domains.

3. Natural language processing (NLP): NLP is an AI method that enables machines to understand, interpret, and generate human language. NLP algorithms analyze and process text and speech data, transforming it into a format that machines can work with. This technology has revolutionized applications like language translation, sentiment analysis, chatbots, and voice assistants. By bridging the gap between human language and machine understanding, NLP has facilitated more natural and intuitive human–machine interaction.

4. Expert systems: Expert systems are AI programs designed to mimic the decision-making capabilities of human experts in specific domains. These systems rely on a rule-based system and knowledge representation to provide recommendations and solutions based on their domain-specific expertise. Expert systems excel in areas where human expertise is crucial, such as medical diagnosis, financial analysis, and troubleshooting complex technical issues.

5. Evolutionary algorithms: Inspired by the principles of Darwinian evolution, Evolutionary Algorithms optimize solutions through an iterative process of candidate solution evolution. By using selection, crossover, and mutation operators, evolutionary algorithms explore and refine potential solutions to complex optimization problems. These algorithms have proven useful in various applications, including optimization, engineering design, and financial modeling.

6. Reinforcement learning (RL): Reinforcement Learning introduces the concept of agents interacting with an environment and learning through trial and error. Agents take actions in the environment and receive feedback in the form of rewards or penalties, guiding their learning process. RL is well-suited for tasks where there is no readily available labeled data, and the agent must learn from its actions to achieve long-term goals. It has been successfully applied in areas such as game-playing, robotics, and autonomous systems.

7. Probabilistic graphical models: Combining probability theory and graph theory, probabilistic graphical models offer a structured framework for reasoning under uncertainty and analyzing complex dependencies.

8. Transfer learning: By leveraging knowledge from one task to improve performance on related tasks, transfer learning addresses data scarcity and facilitates efficient learning in diverse domains.

9. Adversarial learning: Adversarial learning encompasses both defending AI models against adversarial attacks and crafting adversarial examples to evaluate model robustness.

10. Swarm intelligence: Inspired by collective behavior in nature, swarm intelligence algorithms enable decentralized, self-organizing systems to collaboratively solve problems.
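As a concrete instance of the supervised-learning branch of ML described in item 1, the sketch below implements a 1-nearest-neighbour classifier: a new observation receives the label of its closest labeled training example. The sensor-style readings and labels are invented for illustration only:

```python
import math

# Minimal supervised-learning sketch: 1-nearest-neighbour classification.
# Each training example is a (feature vector, label) pair; prediction
# returns the label of the closest example by Euclidean distance.

train = [
    ((0.2, 0.1), "healthy"),
    ((0.3, 0.2), "healthy"),
    ((0.9, 0.8), "faulty"),
    ((1.0, 0.9), "faulty"),
]

def predict(point):
    # pick the training example with the smallest Euclidean distance
    _, label = min(train, key=lambda ex: math.dist(ex[0], point))
    return label

print(predict((0.25, 0.15)))   # healthy
print(predict((0.95, 0.85)))   # faulty
```

The same learn-from-labeled-data pattern underlies the far more elaborate ML and DL models discussed in the remainder of this review; only the model family and the scale of the data change.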

4.2.2 AI techniques

Complementing these overarching AI methods, various AI techniques serve as specialized tools and algorithms employed within the broader methodologies. These techniques play pivotal roles in addressing specific challenges and achieving superior performance in focused tasks. Some key AI techniques include:

1. Convolutional neural networks (CNN): CNN is a deep learning technique designed specifically for image and video analysis. It uses convolutional layers to automatically extract meaningful patterns and features from images, enabling accurate object detection, image recognition, and image segmentation.

2. Generative adversarial networks (GAN): GAN is a deep learning technique that consists of two neural networks, a generator and a discriminator, which are trained adversarially. The generator generates synthetic data, while the discriminator tries to differentiate between real and synthetic data. This process leads to the creation of realistic synthetic data, benefiting applications like image synthesis and data augmentation.

3. Language modeling: Language Modeling is an NLP technique used to predict the probability of a sequence of words in a sentence. It plays a crucial role in language understanding and generation tasks, helping machines generate coherent and contextually relevant sentences in tasks like machine translation and text generation.

4. Rule-based systems: Rule-based Systems are techniques within Expert Systems that rely on a set of if–then rules to make decisions. These rules represent human expertise and domain knowledge, allowing machines to provide recommendations and solutions in specific domains, such as medical diagnosis and fault troubleshooting.

5. Genetic algorithms: Genetic Algorithms are techniques under Evolutionary Algorithms that draw inspiration from the process of biological evolution. By iteratively evolving and refining candidate solutions through selection, crossover, and mutation, genetic algorithms optimize solutions for complex problems in engineering, optimization, and finance.

6. Q-learning: Q-learning is a technique under Reinforcement Learning used for training agents to make decisions in dynamic environments. The agent learns through trial and error and adjusts its actions based on feedback (Q-values) to maximize long-term rewards, making it suitable for tasks such as game-playing and autonomous navigation.

7. First-order logic: First-order Logic is a technique under Knowledge Representation and Reasoning that formalizes logical relationships between entities and facts. It provides a logical framework for representing knowledge and reasoning, essential for applications requiring complex logical inferences and decision-making.

  8.

    Bayesian networks: Bayesian Networks are techniques under Probabilistic Graphical Models that capture probabilistic relationships between variables, enabling reasoning under uncertainty in tasks such as diagnosis and risk assessment.

  9.

    Pretrained models: Pretrained Models are techniques under Transfer Learning that use models pre-learned on vast datasets to initialize and fine-tune models for specific tasks. This approach saves time and computational resources and improves performance on target tasks, making it beneficial for applications with limited training data.

  10.

    Ant colony optimization: Ant Colony Optimization is a technique under Swarm Intelligence that solves optimization problems, such as routing and scheduling, by mimicking the foraging behavior of ants.

  11.

    Support vector machines (SVM): Support Vector Machines is a powerful supervised learning technique used for classification and regression tasks. SVM finds the optimal hyperplane that best separates different classes in the data space. It works by mapping data into a high-dimensional feature space and identifying the hyperplane with the maximum margin between classes. SVM has been widely applied in various domains, including image recognition, text classification, and bioinformatics.

  12.

    Random forest (RF): Random Forest is an ensemble learning technique that utilizes multiple decision trees to make predictions. It constructs a multitude of decision trees during the training process and combines their outputs to produce a more robust and accurate prediction. Random Forest is known for its ability to handle high-dimensional data and mitigate overfitting. It is commonly used in tasks such as classification, regression, and feature selection.
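
Several of the techniques listed above are compact enough to illustrate directly. The following is a minimal tabular Q-learning sketch on a toy five-state corridor, where the agent must learn to walk right to reach a rewarded goal state; the environment, reward values, and hyperparameters are illustrative assumptions, not drawn from any cited work:

```python
import random

random.seed(0)

N_STATES = 5          # states 0..4; state 4 is the rewarded goal
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: Q[state][action_index]
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy corridor dynamics: reward 1.0 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy selection: explore on ties or with probability EPSILON
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt, r, done = step(state, ACTIONS[a])
        # Q-learning update: move Q toward reward plus discounted best future value
        Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# The learned greedy policy should move right in every non-goal state
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

The learned Q-values approximate the discounted return of each action; the same mechanism, combined with function approximation, scales to the game-playing and navigation tasks mentioned above.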

In summary, an AI technique refers to a specific algorithm or approach used to address a particular AI problem, while an AI method represents a more comprehensive set of techniques and strategies used to solve broader classes of AI challenges. It is important to note that the classification of AI techniques into specific AI methods may not always be straightforward, as many real-world applications, such as vision, speech, planning, and robotics, often involve a combination of multiple AI methods and techniques to achieve the desired outcomes. The selection of AI methods depends on the nature of the problem, the availability of labeled data, and the complexity of the tasks involved. It is essential to recognize that the distinction between AI methods and techniques can sometimes be blurred, and authors may use different terminology in the literature. Nonetheless, understanding the various methods and techniques available in the realm of AI is crucial for comprehensively reviewing the applications of AI techniques throughout the lifecycle of industrial equipment.

5 Application of AI techniques through lifecycle of industrial equipment

Creating effective solutions for establishing collaborative networks across different phases of a product’s lifecycle presents a significant challenge. These solutions need to assess, analyze, and make informed decisions regarding how the product design impacts every stage of the lifecycle. This requires engineers involved in various phases to have a comprehensive understanding of the entire process and relevant information or data. As a result, managing substantial amounts of reliable information becomes crucial, necessitating the utilization of diverse technological solutions. By integrating artificial intelligence tools in a well-planned manner, synergies can be achieved across all factory functions, resulting in improved productivity, quality, cost-effectiveness, sustainability, and more. To fully realize these benefits, it is important to carefully select suitable artificial intelligence techniques and technologies for each specific stage of the product lifecycle.

To classify the methods and objectives of AI techniques across the product lifecycle, this paper divides the lifecycle into four phases: the product design phase, the product manufacturing phase, the product maintenance phase, and the recycle/re-use/retrofit phase. Collaboration between these phases is crucial to exchange information among smart production units, smart logistics, smart products, smart organizational and engineering units, as well as individuals, to achieve agile and resilient processes [10, 19, 26].

5.1 AI at design phase

The human design process plays a crucial role in creating technologies and environments that impact various aspects of our lives, including food, household products, and machinery. Product design engineering involves iterative processes and decision-making, starting with requirement identification and concluding with a detailed product description. Conventional product and engineering design processes have traditionally revolved around human-centric approaches, relying on the expert knowledge of individuals with scientific, intuitive, experiential, and creative methods [19]. With the adoption of AI into the different stages of the product and engineering design process, conventional design approaches are changing dramatically. AI-supported design techniques streamline complex design operations, enabling designers to concentrate on innovative and creative aspects while AI handles repetitive tasks such as design comparison, evaluation, and parameter estimation. This reduces design time, delivers accurate results, and lowers overall design costs. Moreover, AI's high computational power, big-data processing capabilities, and objective decision-making abilities make it superior to humans in executing these tasks [26,27,28].

The design process can be divided into major phases: product design specification, conceptual design, embodiment design, and detailed design (design synthesis and optimization) [29]. Conceptual design is a critical stage where decisions can greatly impact the complexity of operations. It involves analyzing customer and design requirements, identifying the primary function, and seeking principles for solving the fundamental design problem. Evaluation and selection of a feasible concept are also crucial [30, 31]. Embodiment design focuses on clarifying, confirming, or optimizing details in primary design functions, considering aspects such as form, material, manufacturing process, assembly, and cost [32, 33]. Detailed design determines specifications, overall cost, and key factors in detail, aiming to finalize a manufacturable solution for production [33, 34].

The design engineering spectrum encompasses a wide variety of applications, from machines and components such as electric motors, lock nuts, thrust washers, and spur gears, through aircraft models and ship propellers, to small-scale metamaterials [35]. The next subsections list the various AI techniques used at different phases of product design.

5.1.1 Unleashing creativity: AI's role in design inspiration and concept generation

Idea generation is a crucial stage in product design. However, designers often encounter difficulties in generating innovative ideas due to psychological inertia, commonly known as design fixation [36]. This cognitive barrier hampers the creative thinking process and obstructs the exploration of new design concepts during the conceptual design phase, posing a significant challenge to achieving the ideal design outcome with respect to user requirements, expenditure, visual appeal, user comfort, operational capabilities, manufacturing techniques, and sustainability. Recognizing the importance of developing popular and novel products, the conceptual design phase is regarded as a critical aspect of the overall product design process. To overcome this obstacle and foster creativity, it is essential to integrate thorough market research, customer acceptance, domain expertise, intuition, intellectual acuity, and creative skills into the conceptual design process [31, 37, 38]. Furthermore, the decisions made during the conceptual design stage have a profound impact on various aspects of a product, including costs, performance, reliability, safety, and environmental impact. However, it is important to acknowledge that the design requirements and constraints at this early phase are often imprecise, approximate, and sometimes impractical [16]. Delays in this phase can lead to increased production costs and decreased market share [29].

During the stage of inspiration and concept generation in product design, a wealth of insights is derived from market analysis and customer feedback, providing valuable guidance for creating innovative and user-centric products. Additionally, the exploration of patent databases unveils a treasure trove of technical documents brimming with cutting-edge technology and design information. These documents shed light on intricate details, encompassing the realms of technical functionalities, material compositions, working principles, and visionary conceptual aspects, fueling the imaginative process of design and product development [39].

However, the analysis of data obtained from market and patent databases presents considerable challenges due to the inherent unstructured nature of the data, potential biases that may arise, and the susceptibility to human errors. These factors interplay to create considerable complexity during data analysis, demanding meticulous attention and robust methodologies to derive meaningful insights [40]. Hence, AI has emerged as a valuable tool for overcoming design fixation by providing verbal, written, and visual inspiration, stimulating innovative thinking, and facilitating the generation of fresh design ideas [38, 41].

During the concept generation stage, researchers have used evolutionary algorithms as well as ML and DL methods to explore the design space. In [42], a genetic algorithm (GA) was used to explore various configurations for a powertrain system comprising an engine, transmission, and drive shaft. With 3000 possibilities, the algorithm outperformed human experts, indicating its potential for automated product design. In [43], researchers conducted a study where an Artificial Neural Network (ANN) was utilized for product configuration, proving its effectiveness in determining optimal design configurations based on customer preferences; the ANN was also used to determine the ideal form of a perfume bottle by generating combinations of product forms. Similarly, in [44], researchers employed ANN and crowdsourcing to identify new design concepts, utilizing datasets that included spoken language. This approach facilitated the quick scanning of large datasets, such as surveys, competitions, and patents, to identify innovative ideas. In the same study [44], a combination of ANN and GA was used to formulate the design concept of an aesthetic product: the ANN utilizes survey data to determine product form features, which are subsequently employed by the GA to generate alternative design concepts for washing machines, coffee makers, and mixers with aesthetic appeal.
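
As a generic illustration of how a GA explores a configuration space such as the powertrain example in [42], the sketch below evolves a three-parameter integer "design"; the encoding and the fitness function (a stand-in for an expensive simulation or expert evaluation) are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical design configuration: three integer parameters in 0..9.
# The fitness peaks at a target configuration the GA does not know in advance.
TARGET = (3, 7, 5)

def fitness(cfg):
    return -sum((c - t) ** 2 for c, t in zip(cfg, TARGET))

def crossover(a, b):
    point = random.randrange(1, len(a))      # single-point crossover
    return a[:point] + b[point:]

def mutate(cfg, rate=0.1):
    return tuple(random.randrange(10) if random.random() < rate else c for c in cfg)

population = [tuple(random.randrange(10) for _ in range(3)) for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)   # rank by fitness
    parents = population[:15]                    # elitist selection: keep best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```

Selection, crossover, and mutation iteratively refine the candidate pool, which is the same loop that scales to the thousands of powertrain configurations described above.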

Other machine learning and data mining techniques have demonstrated successful applications in idea generation, particularly in extracting both explicit and hidden ideas from textual and visual sources. Extensive datasets, including patent databases, academic journals, web pages, survey data, and speech data, can be systematically searched and analyzed to extract and categorize ideas. These methods provide a valuable means to harness the wealth of information available in these sources for innovative idea generation [45, 46]. For example, in a relevant study [40], the authors employed a combination of supervised and unsupervised machine learning algorithms to analyze customer needs using a large dataset sourced from customer reviews, complaints, and online surveys. The supervised algorithm, fastText, was utilized to extract relevant data, while the unsupervised algorithm, Valence Aware Dictionary and Sentiment Reasoner (VADER), was employed for the identification and classification of customer needs. In [47], a Decision Tree (DT) algorithm was employed to generate innovative ideas in product design by identifying customer needs. This approach aims to achieve multiple objectives, including cost reduction, enhanced product quality, accelerated new product development, and improved competitiveness.
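
To make the lexicon-based side of such pipelines concrete, here is a toy valence scorer in the spirit of VADER; the miniature lexicon, negation rule, and example reviews are all illustrative assumptions, and the real VADER lexicon and heuristics are far richer:

```python
# Toy valence lexicon (illustrative; the real VADER lexicon has thousands of entries)
LEXICON = {"love": 2.0, "great": 1.5, "easy": 1.0,
           "broken": -2.0, "slow": -1.0, "disappointed": -1.5}
NEGATIONS = {"not", "never", "no"}

def sentiment(review):
    """Sum lexicon valences over the tokens; a preceding negation flips a valence."""
    words = review.lower().split()
    score = 0.0
    for i, w in enumerate(words):
        if w in LEXICON:
            valence = LEXICON[w]
            if i > 0 and words[i - 1] in NEGATIONS:
                valence = -valence
            score += valence
    return score

reviews = ["The handle is broken and shipping was slow",
           "I love the new design and assembly was easy",
           "Not easy to clean"]
scores = [sentiment(r) for r in reviews]
print(scores)   # negative, positive, and negated-positive reviews respectively
```

Scores like these can then be aggregated per product feature to flag unmet customer needs, which is the role VADER plays in the pipeline described above.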

Computer vision, coupled with DL algorithms, offers the potential to generate novel product options. By leveraging DL techniques, such as image recognition and analysis, visual data in the form of photos and videos can be effectively interpreted and analyzed. This opens up opportunities for the generation of innovative product alternatives [19], through Generative Adversarial Networks (GANs) [48] and Variational Autoencoders (VAEs) [49]. In studies [50, 51], the authors employed GANs and Conditional Variational Autoencoders (cVAEs) to generate a wide range of design options for mechanical structures, such as airfoils and wheels, even with limited input design data. Researchers in [52] proposed an approach based on Performance Augmented Diverse GANs (PaDGAN), a modified version of GANs specifically designed for industrial design applications. The modification of the loss function of the general GAN aims to enhance performance and enable the generation of high-quality designs beyond the boundaries of the training data. In another study [53], a combination of a Convolutional Neural Network (CNN) and a Deep Convolutional Generative Adversarial Network (DCGAN) was utilized to generate unique 3D-shaped concepts for mechanical components, such as springs. The authors trained the DCGAN model using perspective views (2D shapes) of 130 3D spring models, generating new 2D spring models. The CNN model was then employed to match the geometric and structural properties of the 3D models based on the 2D spring images. Utilizing this approach, a distinct 3D spring model was created by estimating its geometrical and structural properties. Notably, achieving such results with a relatively small dataset is remarkable, and further improvements can be made by incorporating the generated models into the training set.

Additionally, there have been efforts by researchers to integrate human input into AI-assisted design generation frameworks. In [54], electroencephalography (EEG) signals were utilized to capture the brain's electrical activity when exposed to physical products or images. These signals were then used to create a dataset that represented the recorded voltage fluctuations resulting from neuronal activity. Subsequently, a Long Short-Term Memory (LSTM) algorithm, a type of Recurrent Neural Network (RNN), was trained to correlate these signals with the desirability of specific design features. To further enhance the process, a GAN model conditioned on these design features was trained, allowing the generation of product concept images solely based on EEG signals. This approach holds significance as it enables AI methods to learn and interpret design requirements that may not be easily expressed verbally.

5.1.2 Revolutionizing design synthesis through artificial intelligence techniques

Design synthesis has always been a crucial aspect of engineering, as it involves creating innovative and efficient solutions to complex design problems having crucial geometric considerations such as automobile and aerospace design. In the past, design synthesis heavily relied on manual iterations and human expertise, which could be time-consuming and limited in exploring the design space. However, with the integration of AI techniques, the process of shape synthesis has been revolutionized. One area of focus is 2D and 3D shape synthesis within the aerospace domain, which aims to generate optimal shapes that meet specific design requirements and constraints [19].

In the realm of aerospace engineering, researchers have devoted considerable attention to the design of airfoils, which are the cross-sectional shapes of wings. This area of study focuses specifically on 2D shape synthesis. Airfoils play a pivotal role in numerous engineering applications, ranging from propeller and rotor design to turbine blade development. Due to the crucial importance of airfoil performance parameters, a significant amount of research is focused on the conditional generation of airfoils tailored to fulfill specific performance criteria [26]. Hence, in [55], researchers employed conditional GANs (CGAN) and Deep Convolutional Neural Networks (DCNN) to generate new airfoil shapes based on desired performance. The DCNN structures address challenges associated with shape parametrization in traditional methods, allowing for pattern detection and utilization at a lower level of abstraction. The framework is demonstrated through the training of generator and discriminator networks using a database of airfoil shapes and conditional information. Once trained, the CGAN generator can produce customized airfoil shapes based on specified stall conditions or drag polar information. Some researchers have also employed deep generative models to acquire knowledge of shape parameters through spline interpolation. For example, the study conducted in [56] proposed a Reinforcement Learning (RL) approach, in which the agent learns optimal policies by learning the spline equation coefficients.

The field of 3D object generation through DL has garnered significant attention in computer science. Researchers have been actively exploring methods to generate realistic shapes and objects in three-dimensional space. Various representations such as voxels, point clouds, and meshes are commonly used to represent 3D shapes and objects. In this area, advancements in 3D shape synthesis have heavily relied on techniques such as GANs and AEs, as well as RNNs, Transformers, and GNNs. In the realm of engineering design, most of the research on 3D shape synthesis has emphasized the inclusion of design performance considerations. In [57], researchers have proposed a hybrid approach for data-driven 3D shape synthesis. The approach utilizes VAE, an unsupervised DL technique to extract a compact latent design representation from a corpus of 3D designs. This latent representation captures important design information and allows for the generation of new designs through sampling, interpolation, and extrapolation in the latent space. Also, a simple latent space design crossover technique is used that enables a genetic optimizer to produce a diverse set of new designs for 3D aircraft models through stochastic interpolation and extrapolation of latent vectors.
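
The latent-space crossover just described can be sketched as linear interpolation and extrapolation between latent vectors; the four-dimensional vectors below are stand-ins for the encodings a trained VAE would actually produce:

```python
import random

random.seed(2)

def latent_crossover(z1, z2, alpha=None):
    """Blend two latent design vectors. An alpha in (0, 1) interpolates between
    the parents; an alpha outside that range extrapolates beyond them, which can
    yield designs unlike either parent."""
    if alpha is None:
        alpha = random.uniform(-0.5, 1.5)    # stochastic inter-/extrapolation
    return [(1 - alpha) * a + alpha * b for a, b in zip(z1, z2)], alpha

# Stand-in latent codes for two designs (a trained VAE encoder would produce these)
z_a = [0.2, -1.0, 0.5, 0.0]
z_b = [1.0, 0.4, -0.3, 0.8]

child, used_alpha = latent_crossover(z_a, z_b, alpha=0.5)   # midpoint of the parents
offspring, rand_alpha = latent_crossover(z_a, z_b)          # stochastic variant
print(child)
```

Decoding such blended vectors back through the VAE decoder is what yields the new 3D aircraft designs; a genetic optimizer simply applies this operator repeatedly to promising parents.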

In [58], the authors proposed an architecture for conditionally generating 3D objects in the form of point clouds. This architecture consists of AEs and a CGAN: 3D objects are converted to a latent vector space through the AE, a CGAN is trained on this latent space, and new objects are then created in the latent space by the trained CGAN. However, in order to make the proposed architecture suitable for industrial applications, there is a need to enhance the precision of the dimension specifications addressed in this study. To improve the quality of generated data, researchers in [59] employ an iterative retraining strategy, wherein a GAN is retrained on high-performance models assessed using computational fluid dynamics (CFD) evaluation. The method is applied to point cloud aircraft models sourced from ShapeNet, with the objective of minimizing aerodynamic drag as the chosen performance metric. The authors utilize a conventional GAN loss and adopt a discriminator architecture from [60]. To address the challenge of sparse conditioning in data-driven inverse design tasks, researchers in [61] have proposed a range-constrained deep generative model, called Range-GAN, with a label-aware self-augmentation technique for automatic design synthesis with range constraints for GAN training. Additionally, they incorporate a "range loss" mechanism to enforce adherence to design constraints associated with parameter limits and apply this methodology to the generation of 3D aircraft models.

5.1.3 Advancing Topology optimization through artificial intelligence techniques

In the field of engineering design, topology optimization plays a crucial role in determining the ideal product configuration, improving its features, maximizing functionality, minimizing costs, and achieving lightweight components, amongst other objectives. The goal is to generate or identify the most favorable design while considering a multitude of constraints. By utilizing design optimization techniques, companies can achieve significant improvements in product performance, efficiency, and overall competitiveness. Over the years, various techniques have been developed to tackle topology optimization problems, ranging from classical mathematical programming approaches to more recent data-driven methods. In recent years, the emergence of deep generative models has brought about a paradigm shift in topology optimization. Deep generative models, such as generative adversarial networks (GANs) [48] and variational autoencoders (VAEs) [49], have shown tremendous potential in capturing complex patterns and generating novel designs. By leveraging the power of Deep Learning (DL), these models enable engineers to explore and optimize design spaces in ways that were previously unattainable.

Deep generative models offer significant advantages in topology optimization compared to traditional approaches. These models provide a flexible framework for representing complex design spaces, allowing for the exploration of unconventional solutions. They also facilitate the incorporation of design constraints, ensuring the generation of feasible and manufacturable designs. Furthermore, deep generative models enable engineers to explore design alternatives and trade-offs, considering multiple conflicting objectives. This helps in making informed decisions and gaining valuable insights into the design space. Additionally, these models allow for the integration of domain knowledge and expert guidance, combining data-driven learning with engineering expertise. Although the field of utilizing deep generative models in topology optimization is still evolving, ongoing research aims to address challenges such as model interpretability, computational efficiency, and handling high-dimensional design spaces. Despite these challenges, the potential for deep generative models to revolutionize topology optimization is undeniable, offering the possibility of more efficient, functional, and sustainable product designs.

The authors in [62] conducted an experiment where they trained a Wasserstein Generative Adversarial Network (WGAN) using a dataset generated through topology optimization (TO). In addition to training the WGAN, they also trained another network called an auxiliary network. The purpose of the auxiliary network was to predict performance metrics related to the generated designs. By implementing the proposed method, it has been demonstrated that the wheel design for automobiles can be automatically generated without the need for human intervention. The results have revealed that this process produces designs that are not only aesthetically superior but also possess significant technical value. The authors in [63] conducted a study where they trained VAEs using a dataset generated through TO for heat transfer. In addition to training the VAEs, they incorporated an additional loss function based on style transfer [64], which is a technique used in image processing tasks. The purpose of this additional loss was to enhance the quality and style of the generated designs. Furthermore, the authors proposed iterative strategies that leverage the latent space of the VAEs for targeted design optimization.

In [51, 65], the authors presented an iterative method for generative-network fitting that incorporates TO and filtering of similar designs. They employ a modified version of the Boundary Equilibrium Generative Adversarial Network (BEGAN), which is an extension of the WGAN, trained on pre-existing TO-generated design topologies. Unlike previous generative-network-fitting approaches [62, 63, 66], this method includes retraining and re-optimization steps, enabling the exploration of new design regions. The focus of their application is on wheel design, aiming to achieve a balance between aesthetics and structural performance. Through empirical evidence, the authors demonstrate the effectiveness of their approach in the context of wheel design.

The study conducted in [67] presents a method that tackles the problem of gaps, or unexplored regions, within the design space of topologies. The authors propose the use of a Variational Deep Embedding (VaDE) [68] approach. Initially, a dataset is generated using TO. The proposed method then identifies voids or unexplored areas in the design space by utilizing the VaDE. Designs are decoded from these voids, optimized using TO, and subsequently added to the training dataset. This iterative process helps to fill the gaps in the design space and expand the range of feasible designs. This work focused on automated retraining methods for topology design generation; however, human input can also be incorporated into the training phase. Researchers in [69] introduce a framework for topology design generation that combines human input with a Conditional Generative Adversarial Network (CGAN). The designer actively participates by selecting design clusters iteratively, refining the process towards preferred designs. This collaborative approach merges the designer's expertise and preferences with the generative capabilities of the CGAN model, facilitating an iterative design process. The aim is to harness the synergy of human creativity and machine generative power to achieve optimal topology designs.

The performance of deep generative models (DGMs) in topology generation has been enhanced in various studies by incorporating the physical properties of the design domain, resulting in improved baseline results. Researchers in [70] proposed an approach for generating synthetic topologies using the CGAN architecture called "Pix2Pix". In their method, spatial fields of different physical parameters, such as von Mises stress, displacement, and strain energy density, are used as input to the generator. The ground truth for training the generator is obtained from the topologies generated by TO. Additionally, the authors introduce a novel generator architecture that combines the squeeze-and-excitation blocks of ResNet with U-Net, aiming to improve the performance of the generator in generating accurate and realistic topologies. Similarly, in [71], researchers utilize a Neural Network (NN) to generate optimized topologies based on the given loading conditions. They propose an iterative approach to evaluate and improve the deviation of proposed solutions from the optimal conditions imposed by the problem. During the training process, they gradually expand their dataset by recalculating optimal solutions using TO for the proposed solutions that exhibit the most significant violation of the provided optimality conditions. This iterative process allows them to improve the performance of the NN by incorporating additional training examples that focus on challenging cases where the proposed solutions deviate the most from the desired optimality conditions.
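
The iterative dataset-expansion loop of [71] (propose solutions, find the worst violation of the optimality conditions, recompute the ground truth, retrain) can be caricatured with placeholder functions; the one-parameter linear "surrogate" and the stand-in `to_solver` below are illustrative assumptions, not the actual NN or TO solver:

```python
import random

random.seed(3)

def to_solver(condition):
    """Placeholder for an expensive topology-optimization run (ground truth)."""
    return 2.0 * condition

def surrogate(condition, weight):
    """Placeholder NN surrogate: a one-parameter linear model."""
    return weight * condition

def violation(condition, weight):
    """Deviation of the surrogate's proposal from the TO optimum."""
    return abs(surrogate(condition, weight) - to_solver(condition))

weight = 0.5
dataset = [(c, to_solver(c)) for c in (0.1, 0.5)]   # small initial dataset
for _ in range(5):
    # Probe candidate loading conditions and pick the worst violator
    candidates = [random.uniform(0.0, 1.0) for _ in range(20)]
    worst = max(candidates, key=lambda c: violation(c, weight))
    # Recompute the ground truth for it and add it to the training set
    dataset.append((worst, to_solver(worst)))
    # "Retrain": closed-form least-squares fit of y = weight * c
    weight = sum(c * y for c, y in dataset) / sum(c * c for c, y in dataset)

print(round(weight, 3))
```

In the real setting the retraining step is a full NN fit and `to_solver` is a TO run, but the control flow (sample, rank by violation, label, retrain) is the same.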

The field of electric machine design also benefits from the application of AI techniques to optimize design topologies, such as motor design. Optimizing the design of electric machines poses a multi-objective and nonlinear challenge [72]. For example, during the design of the Switched Reluctance Motor (SRM), numerous geometric parameters come into play, including the number of stator and rotor poles, bore diameter, pole arc angles, taper angles, air gap length, and more. These parameters must be carefully selected to meet the specific requirements of the application in an efficient manner. To improve the torque profile of a 3-phase 12/8 SRM, the researchers in [73] employed a Generalized Regression Neural Network (GRNN) to determine the optimal stator pole arc angle and rotor pole arc angle. They utilized Finite Element Analysis (FEA) to obtain the static torque characteristics of the specific motor being studied, and the FEA results were used to train the network to approximate the objective function. Another study [74] introduced an enhanced variant of the GRNN to model a 4-kW 12/8 SRM, using the Fruit Fly Optimization Algorithm (FFOA) to optimize the spread parameter of the GRNN. Their model effectively captured the nonlinear correlation between the torque ripple, operational efficiency, and three geometric variables: stator pole arc angle, rotor pole arc angle, and rotor yoke thickness.
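
A GRNN is, at its core, a Gaussian-kernel weighted average of the training targets, with the spread parameter (sigma) controlling the kernel width (the quantity FFOA tunes in [74]). It can therefore be sketched very compactly; the sample "torque" data and spread value below are illustrative assumptions, not values from [73, 74]:

```python
import math

def grnn_predict(x, train_x, train_y, spread=0.5):
    """GRNN prediction: a Gaussian-kernel weighted average of the training
    targets, where `spread` (sigma) controls the kernel width."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * spread ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy "FEA samples": torque versus a single geometric parameter (illustrative)
train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [0.0, 0.8, 0.9, 0.1, -0.7]

prediction = grnn_predict(1.5, train_x, train_y)
print(prediction)   # a smooth blend dominated by the nearby samples 0.8 and 0.9
```

Because the prediction is a closed-form weighted average, a GRNN needs no iterative training, only a well-chosen spread, which is why metaheuristics such as FFOA are used to tune it.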

In addition to SRMs, ML algorithms have also been employed to design other types of electric motors. For example, for the design optimization of Permanent Magnet Synchronous Motors (PMSMs), [75] proposes the use of Extreme Learning Machines (ELM); the goal is to achieve optimal performance in terms of high average thrust, low thrust ripple, and low total harmonic distortion at various operating speeds. The ELM is used to map the complex relationship between input factors and motor performance using data obtained from FEA. A Gray Wolf Optimizer Algorithm (GWOA) is then employed for iterative optimization of the multi-objective functions, leading to the identification of optimal performances and structural parameters. In [77], researchers adopted the Support Vector Regression (SVR) approach for design optimization of a 3-kW 6-phase concentrated-winding direct-drive PMSM to meet electric vehicle performance requirements. The primary objective of using SVR was to expand the range of design solutions within the design space. A Pareto front method was then applied to obtain the optimal design models with maximum torque density and minimum torque ripple.
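
Extracting the Pareto-optimal designs from a candidate set can be sketched as a non-dominated filter; the candidate designs and objective values below are invented for illustration, with torque density to be maximized and torque ripple to be minimized:

```python
def pareto_front(designs):
    """Return the non-dominated designs. Each design is a tuple
    (torque_density, torque_ripple): higher density and lower ripple are better."""
    front = []
    for d in designs:
        # d is dominated if some other design is at least as good on both
        # objectives (and differs from d)
        dominated = any(o != d and o[0] >= d[0] and o[1] <= d[1] for o in designs)
        if not dominated:
            front.append(d)
    return front

# Hypothetical candidates: (torque density [Nm/kg], torque ripple [%])
candidates = [(12.0, 8.0), (11.0, 5.0), (13.0, 9.5), (10.0, 6.0), (12.5, 7.0)]
print(pareto_front(candidates))
```

Each surviving design represents a different trade-off between the two objectives; the final choice among them is left to the designer or to application-specific weighting.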

5.1.4 Summary

In concluding this section, it is important to highlight that the upcoming tables and figures distinguish between "Standard GAN" and "Modified GAN" techniques: a Modified GAN entails upgrades and adjustments to the loss function of the standard GAN, resulting in enhanced performance and capabilities. Throughout our research, we have extensively explored the application and utilization of AI techniques during the product design phase. This investigation encompasses three major stages within this phase: Inspiration and concept generation, Shape synthesis, and Topology optimization. Our analysis has revealed the use of 18 different AI techniques in these stages, with Generative Adversarial Networks (GAN) and Deep Learning (DL) being the most prevalent techniques.

Table 4 provides an exhaustive compilation of AI techniques commonly applied throughout various design stages. In the "Inspiration and concept generation" phase, Artificial Neural Networks (ANN) emerge as the most prevalent AI technique, owing to their prowess in learning and mapping complex functions. Following closely behind are Genetic Algorithms (GA) and both modified and standard Generative Adversarial Networks (GAN), which share equal significance and extensive utilization in this creative stage. Conversely, Autoencoders (AE) and their variants, Decision Trees (DT), and Convolutional Neural Networks (CNN) find relatively lesser usage during this phase.

Table 4 AI techniques for design phase

In the realm of shape synthesis, Autoencoders (AE/VAE), Deep Convolutional Networks (DCN), and both modified and standard GAN techniques play equally critical roles, providing diverse avenues for generating innovative designs. The well-balanced application of these techniques showcases their adaptability and effectiveness in shaping the artistic aspect of the design process. In the realm of Topology optimization, there is a considerable research focus on utilizing AI algorithms from the Generative Adversarial Networks (GAN) family, particularly emphasizing the modified GAN variant. Additionally, Autoencoders (AE) and style transfer techniques are also utilized, while ANN and SVR techniques are the least utilized for Topology optimization tasks.

In Fig. 7, the bar chart depicts the prevalence of AI techniques across the product design stages of Inspiration and concept generation, Shape synthesis, and Topology optimization. The vertical axis lists the AI techniques, while the horizontal axis gives the number of publications linked to each technique. The chart highlights the strong influence of algorithms from the Generative Adversarial Networks (GAN) family, with the modified GAN variant emerging as a widely favored option among practitioners. Other influential techniques also stand out: Autoencoders (AE), valued for their ability to create latent space representations, hold considerable importance, alongside Artificial Neural Networks (ANN) and standard GAN methods. Overall, the chart illustrates how specialized GAN variants, Autoencoders, and ANN techniques have become integrated into the various phases of product design and collectively drive its progression.

Fig. 7

Popular AI techniques at design phase

The pie chart in Fig. 8 shows how AI technique adoption is distributed across the distinct stages of product design. Topology optimization (TO) commands the largest share at 39%, underscoring its pivotal role in refining and enhancing the design process and steering designs toward their most efficient forms. Inspiration and concept generation follows closely at 37%, reflecting the recognition of AI's creative potential in sparking novel ideas and conceptual frameworks at the start of the design journey. Shape synthesis accounts for the remaining 24%; while smaller than the other two stages, this share by no means diminishes the importance of the stage, in which AI guides the evolution of design shapes toward a product's final form.

Fig. 8

AI techniques and their prevalence in different design stages

In essence, Fig. 8 shows AI techniques woven throughout product design: the strategic use of TO, the fertile ground of Inspiration and concept generation, and the smaller yet impactful role of Shape synthesis. Collectively, these insights underscore the interplay between AI and design. Figure 9 presents a comprehensive overview of the popularity of various AI techniques in the design phase of a product. The major pie chart, which accounts for 79% of the total area, shows the most frequently utilized AI techniques during this phase.

Fig. 9

AI techniques used at product design phase

Moreover, Genetic Algorithms (GA) and Style transfer techniques contribute 8% and 5%, respectively, to the design phase. In addition, various other techniques such as Decision Trees (DT), Deep Convolutional Networks (DCN), FOA-GRNN, FastText, VADER, Gated Recurrent Neural Networks (GRNN), GWOA-ELM, Long Short-Term Memory (LSTM), ResNet with U-Net, and Reinforcement Learning (RL) each maintain a moderate presence of 3%.

On the other hand, the minor pie chart, formed from an arc of the major pie chart and representing 21% of the total area, highlights AI techniques that belong to the Generative Adversarial Networks (GAN) family but use a modified loss function that enhances the standard GAN. In this paper they are grouped under the term "Modified GAN," and they play a significant role in the design phase. CGANs contribute 8% to the overall utilization, making them the most prominent in this group, while other GAN variants, such as PaDGAN, DCGAN, BEGAN, WGAN, and Range-GAN, each hold a 3% share.

Overall, the pie of pie chart provides valuable insights into the distribution of AI techniques across the stages of the product design phase. AI's role is evident throughout: from sparking creativity during inspiration and concept generation to navigating the intricacies of shape synthesis and optimizing topology, AI techniques leverage data analysis, generative models, and optimization algorithms to enhance creativity, efficiency, and overall design quality. By incorporating AI into these stages, designers gain opportunities for innovation, accelerated iteration cycles, and finely tuned product designs, combining human ingenuity with AI's computational capabilities.

5.2 AI at manufacturing phase

The manufacturing phase in the product lifecycle encompasses all the activities involved in transforming raw materials or components into finished products. This phase involves various processes and tasks that contribute to the production and assembly of the product. Some key aspects and activities typically included in the manufacturing phase are:

1. Production planning: This involves determining the production requirements, creating production schedules, and establishing the necessary resources and facilities for manufacturing.

2. Procurement: The procurement process involves sourcing and acquiring the required raw materials, components, and equipment needed for production.

3. Production operations: This includes the actual manufacturing processes, such as machining, assembly, welding, molding, or any other specific operations involved in converting raw materials into finished products.

4. Quality control: Quality control activities ensure that the manufacturing processes meet the required quality standards. This includes inspections, testing, and monitoring of the production processes and the finished products to ensure they meet the specified criteria.

5. Inventory management: Managing inventory levels and ensuring the availability of materials and components throughout the manufacturing process is crucial to avoiding delays or disruptions.

6. Equipment maintenance: Regular maintenance and servicing of manufacturing equipment are essential to ensure optimal performance, minimize downtime, and extend equipment lifespan.

7. Supply chain management: Coordinating with suppliers, managing logistics, and overseeing the flow of materials and components from suppliers to the manufacturing facility is critical for a smooth manufacturing process.

8. Process optimization: Continuously improving manufacturing processes, identifying inefficiencies, and implementing lean manufacturing principles to enhance productivity and reduce waste.

9. Environmental and safety compliance: Adhering to environmental regulations and ensuring a safe working environment for employees during the manufacturing process is essential.

Overall, the manufacturing phase focuses on efficiently and effectively producing the desired quantity and quality of products while meeting cost, time, and quality objectives. The manufacturing industry is witnessing a transformative shift with the integration of artificial intelligence (AI) and machine learning (ML) technologies into various stages of the production process. AI is revolutionizing manufacturing by enabling intelligent decision-making, optimizing processes, and improving overall process efficiency and product quality [76]. One of the major advancements facilitated by AI in manufacturing is the intelligent integration of subtractive manufacturing (SM) and additive manufacturing (AM) processes. By combining traditional subtractive manufacturing techniques, such as machining, with additive manufacturing methods like 3D printing, manufacturers can benefit from the strengths of both approaches. AI algorithms are used to analyze design requirements, material properties, and production constraints to determine the optimal combination of processes, resulting in improved product quality, reduced waste, and increased manufacturing speed [77,78,79].

AI also finds applications in procurement, supply chain management, supplier selection, and warehousing/logistics management. By leveraging AI-powered systems, manufacturers can streamline their procurement processes, optimize inventory management, and enhance supplier selection based on factors such as pricing, quality, delivery time, and reliability. AI algorithms can analyze supply chain data, forecast demand, and optimize distribution and logistics networks to ensure efficient material flow and timely delivery, leading to improved operational efficiency and cost savings. Furthermore, ML (a subset of AI) and DL (a subset of ML) have found applications in quality assessment [20, 24]. Quality assessment based on DL is a prominent application of AI in the manufacturing phase. DL algorithms can analyze large volumes of data, including sensor readings, images, and process parameters, to assess product quality in real time. By training DL models on historical data and known quality standards, manufacturers can detect anomalies, classify defects, and ensure consistent and high-quality production outcomes. AI-driven quality assessment minimizes the need for manual inspection, improves defect detection rates, and enhances overall product reliability [76, 79, 80].
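The anomaly-detection idea behind such quality assessment can be illustrated, in a much-simplified form, with a statistical baseline rather than a deep model: learn the range of in-spec sensor readings from historical data and flag readings that fall outside it. The sensor values and thresholds below are entirely synthetic and only illustrate the principle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical in-spec sensor readings (synthetic), e.g. melt-pool
# temperature samples collected from known-good production runs.
historical = rng.normal(loc=1650.0, scale=15.0, size=5000)

mu, sigma = historical.mean(), historical.std()

def is_anomalous(reading, k=3.0):
    """Flag a reading more than k standard deviations from the historical mean."""
    return abs(reading - mu) > k * sigma
```

A trained DL model would replace the simple 3-sigma rule with a learned decision function over many sensor channels, but the monitoring loop around it is the same.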

In the next sections, we will explore the various AI techniques utilized in the manufacturing (AM and SM) phase. We will discuss real-world examples, case studies, and emerging trends that highlight the transformative potential of AI in intelligent integration, process optimization, supply chain management, human–robot collaboration, and quality assessment. By embracing AI in the manufacturing phase, organizations can achieve enhanced productivity, improved quality control, optimized supply chain operations, and a competitive edge in the rapidly evolving manufacturing landscape.

5.2.1 AI application in additive manufacturing

In recent years, additive manufacturing (AM), commonly referred to as 3D printing, has made significant advancements and has gained prominence across various industries [81]. AM involves the layer-by-layer fabrication of products or components based on the design specifications derived from a 3D model. It offers numerous advantages over traditional manufacturing methods, such as rapid prototyping and swift design iteration, the ability to create highly customized products, components with intricate geometries, and tailored material properties, while minimizing material waste [82]. As AM technology continues to evolve, the ASTM F42 standards categorize AM processes into seven different categories, with several of them capable of producing metallic parts for applications in sectors like automotive and aerospace [80, 83].

However, despite its potential, the industrial adoption of AM faces challenges related to production speed, issues with surface quality and dimensional accuracy of the parts [84], and microstructural deviations that can impact the mechanical properties and overall product quality [85]. Moreover, AM is typically not suitable for manufacturing large-sized products due to limitations in build volume capacity and constraints on the range of materials that can be used during the printing process [86, 87]. The primary cause of these shortcomings is that a part's shape and material properties form simultaneously during the AM process. The production of an AM part entails intricate interactions among design, material, and process elements throughout a multi-stage process that comprises five key steps: design, process planning, building, postprocessing, and testing and validation [88]. The successful fabrication of a qualified part relies on the meticulous and precise execution of each of these steps.

The fusion of AI and ML methodologies has ushered in a transformative era, reshaping the approaches and optimizations applied to various stages of the additive manufacturing (AM) process. In Sect. 5.1 (AI at design phase), we explored the significance of AI techniques in the design phase of a product. In this section, our focus shifts to the utilization of AI techniques in AM for process and parameter optimization, in-process (in-situ) monitoring and control, and defect detection. Before delving further into the topic, it is essential to clarify the distinction between additive manufacturing (AM) techniques and AM processes. During our research, we encountered instances where these terms were used interchangeably, but in certain contexts they hold a subtle difference. In general, "additive manufacturing techniques" encompass a broader category of methods utilized to construct three-dimensional objects layer by layer. The following are the commonly used AM techniques:

1. Fused deposition modeling (FDM): In FDM, a thermoplastic filament is heated and extruded through a nozzle. The nozzle moves along a predefined path, depositing material layer by layer to build the object.

2. Stereolithography (SLA): SLA uses a liquid photopolymer resin that is cured by a UV laser. The laser selectively solidifies the resin layer by layer, creating the final 3D object.

3. Selective laser sintering (SLS): In SLS, a high-powered laser is used to selectively fuse powdered material, typically polymers or metals, layer by layer to create the object.

4. Selective laser melting (SLM): SLM is like SLS, but it is used specifically for metal materials. In SLM, a layer of metal powder is spread across the build platform, and a high-powered laser is used to selectively melt and fuse the powder particles together. As with SLS, the build platform is lowered after each layer is completed, and a new layer of powder is spread on top to continue the process.

5. Digital light processing (DLP): Like SLA, DLP uses a digital light projector to cure a vat of liquid photopolymer resin layer by layer.

6. Binder jetting (BJ): Binder jetting involves spreading a layer of powdered material and selectively applying a liquid binding agent to fuse the particles together.

7. Material jetting (MJ): Material jetting uses inkjet print heads to deposit droplets of photopolymer material onto the build platform, which are then cured by UV light.

8. Directed energy deposition (DED): DED involves feeding a feedstock material, such as metal powder or wire, into a high-energy heat source, which melts the material and deposits it layer by layer.

9. Electron beam melting (EBM): EBM uses an electron beam to selectively melt metal powder, creating fully dense metal parts with excellent mechanical properties.

On the other hand, “additive manufacturing processes” specifically refer to the steps and procedures involved in each AM technique. For example, in the FDM technique, a thermoplastic filament is fed into a heated nozzle, where it is extruded onto the build platform, gradually forming the object layer by layer. Conversely, in SLA, a liquid photopolymer resin is exposed to a UV laser, causing the material to solidify and create the object.

In summary, additive manufacturing techniques encompass a wide range of methods used to build objects layer by layer, while additive manufacturing processes pertain to the specific procedures and steps involved in each technique. Although the terms are sometimes used interchangeably, it is important to acknowledge the potential nuances between them for precise communication in the field of additive manufacturing.

5.2.2 AI assisted process and parameter optimization

An emerging area of research involves utilizing data-driven approaches to establish the intricate connections among process parameters (P), final material structure (S), properties (P), and performance (P) of additive manufacturing (AM) parts, commonly referred to as PSPP [89]. Traditionally, the development and optimization of process parameters are achieved through a series of experiments or simulation methods in the realm of new-material additive manufacturing. However, the experiment-based approach often involves time-consuming and costly trial-and-error processes, particularly in metallic part fabrication [84, 90,91,92]. Although Finite Element Modeling (FEM) has demonstrated some success in understanding the complex relationships among process parameters, material structure, properties, and performance (PSPP) in AM, it remains a challenge to achieve precise representations of these relationships through high-fidelity modeling.

In additive manufacturing, models based on physical modeling and simulation are highly complex, demanding a thorough understanding of material properties and the physical principles that govern the AM process, including factors such as melt pool geometry and the formation of microstructures. Moreover, macro-scale simulations, such as FEM, may deviate from experimental results due to simplified assumptions [93]. Low-fidelity models lack information about physical properties, particularly regarding variations between different machines and materials [94]. Furthermore, sophisticated techniques like computational fluid dynamics often focus on individual tracks or a limited number of tracks and layers, posing challenges in predicting the macro-scale or continuum mechanical properties of the parts [95, 96]. The potential of AI techniques lies in their ability to unravel the intricate relationships between process parameters, material structure, properties, and performance (PSPP), surpassing the limitations of traditional approaches. Table 5 presents the AI techniques employed to optimize processes and parameters, aiming to minimize adverse effects in fabricated additive manufacturing parts.

Table 5 AI techniques used for AM process and parameter optimization
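As a toy illustration of this data-driven idea (not any specific method from Table 5), one can fit a cheap surrogate, here an ordinary least-squares quadratic, to a few process observations and then search a parameter grid for the predicted optimum. The (power, speed) → porosity data, parameter ranges, and functional form below are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic experiments: porosity (%) as an unknown function of
# laser power P (W) and scan speed v (mm/s). Illustrative only.
P = rng.uniform(150, 350, 40)
v = rng.uniform(500, 1500, 40)
porosity = (0.5 + 1e-4 * (P - 250) ** 2 + 1e-6 * (v - 900) ** 2
            + rng.normal(0, 0.02, 40))

def basis(P, v):
    # Quadratic polynomial basis in the two process parameters.
    return np.column_stack([np.ones_like(P), P, v, P**2, v**2, P * v])

# Fit the surrogate by least squares.
coef, *_ = np.linalg.lstsq(basis(P, v), porosity, rcond=None)

# Grid-search the process window for minimum predicted porosity.
Pg, vg = np.meshgrid(np.linspace(150, 350, 101), np.linspace(500, 1500, 101))
pred = basis(Pg.ravel(), vg.ravel()) @ coef
best = np.argmin(pred)
best_P, best_v = Pg.ravel()[best], vg.ravel()[best]
```

The ML models surveyed below play exactly this surrogate role, replacing the quadratic with a learned nonlinear mapping and, often, the grid search with a dedicated optimizer.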

In high-energy AM processes, the individual tracks at the mesoscale serve as the foundational blocks. The morphology of the melt pool, including its geometry, continuity, and uniformity, significantly impacts the quality of the final product. To predict the dimensions of the melt pool, such as width, depth, and height, in powder-based [97] and wire-based [98, 99] Directed Energy Deposition (DED) processes, a multi-layer perceptron (MLP) model was employed using a limited set of experimental data. This allowed for a close association between the melt pool geometry and the process parameters. In a similar vein, researchers in [100] conducted a study aimed at monitoring and comprehending the intricate correlation between process parameters such as laser power, scanning speed, and feed rate in DED and the precision of the resulting part, by assessing the deposition height. Their approach involved adapting a Backpropagation Neural Network (BPNN) by incorporating both a momentum coefficient algorithm and an adaptive learning rate, leading to notable enhancements in training efficiency and overall outcomes.
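The regression task these MLP studies address can be sketched from scratch. The data below are synthetic, and the mapping from scaled (power, speed, feed rate) inputs to melt-pool width is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic DED data: scaled (power, speed, feed rate) -> melt-pool width (mm).
X = rng.uniform(-1, 1, size=(200, 3))
y = 0.8 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2]
y = (y + rng.normal(0, 0.01, 200)).reshape(-1, 1)

# One-hidden-layer MLP trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

loss0 = np.mean((forward(X)[1] - y) ** 2)   # loss before training

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)               # gradient of 0.5*MSE w.r.t. pred
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    W2 -= lr * h.T @ err; b2 -= lr * err.sum(0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(0)

loss = np.mean((forward(X)[1] - y) ** 2)
```

The cited studies work the same way but with experimentally measured parameter–geometry pairs and tuned network sizes.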

In contrast, another investigation in [101], focused on the optimization of process parameters in Laser Powder bed Fusion (L-PBF) through the utilization of a data-driven framework. To establish relationships between the process parameters, melt-pool depth, and layer height, the researchers employed the mutable smart bee algorithm, a bio-inspired optimization technique, in conjunction with a fuzzy inference system. The identified relationships were subsequently combined with a non-dominated sorting genetic algorithm to further optimize the process parameters. Furthermore, the researchers suggested the application of Self-Organizing Maps, an unsupervised machine-learning technique, for post-optimization of the process.

In [102], researchers employed a Gaussian process-based (GP) model to visually represent the relationship between melt pool depth and process parameters. This approach enabled the creation of 3D response maps, aiding in the effective determination of the process window to avoid keyholes. In metal AM, achieving full density is crucial due to the significant impact of porosity on the mechanical performance of parts, especially their fatigue properties [103]. Multi-Layer Perceptron (MLP) and Gaussian Process (GP) models with Bayesian methods have been employed to predict porosity from combinations of process parameters in selective laser melting (SLM), with MLP offering the advantage of modeling complex nonlinear relationships and GP providing estimates of prediction uncertainties [104, 105]. In specific scenarios, intentional open porosity is desired, for example in applications such as auxetic structures for energy absorption and porous structures for medical implants. For instance, in the case of selective laser sintering (SLS) processing of PLA material, the prediction of open porosity was explored using SVM and MLP techniques [106].
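A minimal numpy sketch of GP regression shows the uncertainty estimates that motivate its use here. The energy-input-to-porosity data are synthetic, and the kernel hyperparameters are arbitrary; the posterior equations are the standard GP ones.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, length=0.3):
    # Squared-exponential kernel on 1-D inputs.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

# Synthetic observations: scaled energy input -> porosity (%),
# deliberately covering only [0, 0.5] so extrapolation is visible.
X = np.linspace(0, 0.5, 8)
y = 1.0 + 2.0 * (X - 0.6) ** 2 + rng.normal(0, 0.02, 8)

noise = 1e-3
K = rbf(X, X) + noise * np.eye(len(X))
Xs = np.linspace(0, 1, 101)                    # prediction grid
Ks = rbf(Xs, X)

# GP posterior mean and pointwise standard deviation.
alpha = np.linalg.solve(K, y - y.mean())
mean = y.mean() + Ks @ alpha
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))
```

The key behavior is that `std` stays near the noise level inside the sampled process window and grows sharply outside it, which is exactly the information a data-fitting MLP does not provide.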

Process parameters and properties of AM builds can also be studied at the macro-scale level. Researchers in [107] aimed to analyze the high-cycle fatigue life of SS316L parts produced using SLM. They collected 139 fatigue data points from parts fabricated under 18 different processing conditions on the same SLM machine. The authors successfully employed the Adaptive-Network-Based Fuzzy Inference System (ANFIS) to predict fatigue life. However, the models' performance decreased when predicting fatigue life using 66 data points from published literature, mainly due to machine-to-machine variability. To improve generalization capability, the authors suggested incorporating both experimental and literature data in model training. On the other hand, researchers in [91] focused on narrowing down the process window for electron beam melting (EBM) by observing the top-build surface condition. They utilized SVM to correlate process parameters (beam current and scan speed) with surface conditions. However, their study primarily employed SVM for data fitting and plotting decision boundaries. Due to the small size of their dataset, it was challenging to allocate a separate test set to evaluate the accuracy of the overall model.
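The process-window classification in [91] can be sketched with a from-scratch linear SVM (hinge loss plus L2 penalty, trained by subgradient descent) on synthetic (beam current, scan speed) data; the labeling rule below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic EBM settings: (beam current, scan speed), scaled to [0, 1].
# Label +1 = acceptable top-surface condition, -1 = defective; the
# linear rule is a made-up stand-in for the real process window.
X = rng.uniform(0, 1, size=(200, 2))
y = np.where(X[:, 1] - 0.8 * X[:, 0] < 0.2, 1.0, -1.0)

# Linear SVM (hinge loss + L2) trained by subgradient descent.
lam, lr = 0.01, 0.1
w, b = np.zeros(2), 0.0
for _ in range(2000):
    margins = y * (X @ w + b)
    viol = margins < 1                              # samples inside the margin
    gw = lam * w - (y[viol, None] * X[viol]).sum(0) / len(X)
    gb = -y[viol].sum() / len(X)
    w -= lr * gw
    b -= lr * gb

acc = np.mean(np.sign(X @ w + b) == y)              # training accuracy
```

As the original study notes, with such small datasets the resulting boundary is a data-fitting aid, not a validated model: a held-out test set would be needed to quote a real accuracy.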

Researchers in [93] utilized Recurrent Neural Networks (RNN) for time series forecasting in the context of complex parts produced through DED. They employed RNN to train on Finite Element Modeling (FEM) data and predict the high-dimensional thermal history of the parts during the DED process. In [100], the authors attempted to predict the deposited height of thin walls in DED, comparing the performance of MLP and SVM techniques for this task, although detailed outcomes were not reported. In the context of material extrusion-based AM processes, researchers primarily focus on investigating the mechanical properties at the macro-scale. Specifically, within the Fused Deposition Modeling (FDM) process, extensive research has been conducted on various process parameters, including layer thickness, print temperature, orientation, and raster angle during the build process. Among the ML approaches commonly employed in this domain, the MLP has gained significant popularity. MLP has demonstrated its effectiveness in capturing complex, non-linear relationships within the system, making it well-suited for tasks such as data fitting and estimation. Consequently, MLP has been widely utilized to predict various mechanical properties of materials, such as tensile properties [108], compressive strength [109], wear rate [110], dynamic modulus of elasticity [111], as well as creep and recovery properties [112], particularly for polylactic acid (PLA) [113] and polycarbonate–acrylonitrile butadiene styrene (PC-ABS) materials. Additionally, in [114], the authors applied MLP to predict the maximum printable build size, although further details of their study were not reported.

The study conducted in [115] centered around the prediction of powder-spread parameters utilizing ML techniques. Their approach involved combining a Discrete Element Method (DEM) simulation with a BPNN. The main objective was to create a comprehensive process map for powder spreading that could assist workers in producing parts with the desired surface roughness. During this research, it was observed that researchers showed a keen interest in employing a hybrid approach for optimizing processes and parameters. In [116], the authors presented a holistic methodology to predict the mechanical toughness of parts fabricated through AM. Their approach involved the integration of different models, including physics-based models, process models, material models, and data-mining techniques, to comprehensively capture the interplay between process response, material properties, and the final design structure. The researchers employed self-consistent clustering analysis and reduced-order modeling to establish a mapping between microstructural descriptors and roughness. Importantly, they found that ML techniques such as kriging and neural networks (NN) are particularly effective when evaluating larger databases, enhancing the accuracy and applicability of the predictive models. The research conducted in [117] aimed to predict performance responses, including fraction porosity, median pore diameter, and median pore spacing, for AM-built parts. Their focus was on combining information and models related to the AM process, part structure, and material properties (specifically Inconel 718). They used a Random Forest Network (RFN) due to its ability to handle both classification and regression tasks while being insensitive to irrelevant features.

5.2.3 AI assisted in process (in-situ) monitoring and control

Presently, AM encounters a range of defects linked to the manufacturing process itself. These defects encompass geometric inaccuracies, surface imperfections, porosity in printed parts, incomplete fusion, cracks, separation or splitting of layers (delamination), distortion, inclusions of foreign particles or contaminants, and process instability issues like keyhole formation and balling. These defects are primarily attributed to the layer-by-layer deposition process, with some defects propagating across multiple layers, potentially resulting in the failure of the entire build. Consequently, the significance of in-process monitoring becomes evident in identifying and addressing these defects in a timely manner [118]. In-situ monitoring technology has witnessed significant advancements, with the rapid evolution of various sensing technologies. These technologies now encompass fast optical cameras, acoustic sensors, thermocouples, vibration sensors, pyrometers, and photodetectors, along with other sensors [119]. This progression has caught the attention of researchers, who have directed substantial efforts toward monitoring defects during the additive manufacturing (AM) process. Traditionally, defect detection during printing has been associated with tedious and time-consuming procedures, often yielding low-fidelity results.

However, with the integration of advanced in-situ monitoring techniques, researchers aim to enhance both process control and part quality. Current efforts in AM focus on using ML to achieve real-time control and improve part quality. The primary emphasis is on monitoring the state of the built part and the AM machine to detect defects and optimize process parameters. By integrating advanced sensor technologies and ML algorithms, real-time defect detection and process optimization are possible. This intersection of in-situ monitoring technology and ML presents a promising avenue for advancing the field of additive manufacturing, enabling greater control, efficiency, and reliability in the production of AM components. AM parts can exhibit various defects, such as porosity, surface imperfections, delamination between layers, cracks, and geometric distortions [120]. The detection of these defects plays a crucial role in identifying failed builds and accurately predicting the final properties of the part.

In-situ monitoring using image data

Optics-based techniques are extensively employed for in-situ monitoring in the field of AM. These techniques involve the use of cameras including infrared (IR) thermal, high-speed cameras, and digital cameras, to capture optical signals. Additional information can be obtained from pyrometers and photodiodes in certain cases. The collected data typically includes indicators related to the shape and temperature profiles of melt pools as well as plume and spatter. During the inspection, images of the top surface were captured in a layer-by-layer sequence. Due to the high-dimensional nature of these images, there is a growing trend towards utilizing ML techniques to enhance the effectiveness of in-process monitoring in AM.

Table 6 presents the AI techniques employed for in-situ monitoring using image data. During the high-temperature melting process of laser metal Powder Bed Fusion (PBF), two important by-products, plume and spatter, are generated. These by-products provide valuable insights into the interaction between the laser and the material, as well as the overall stability of the process. It is worth noting, however, that the presence of plume and spatter can potentially affect the stability of the melt pools. As a result, researchers have dedicated significant efforts to understanding the impact of plume and spatter on the behavior of melt pools.

Table 6 AI techniques used for image-based in-process monitoring

In [121], plume images were acquired using an infrared (IR) camera with a specific sampling frequency and spatial resolution. The researchers employed an unsupervised ML technique to automatically detect unstable states of melt pools during the Selective Laser Melting (SLM) process using zinc powder. The dataset comprised 14 layers obtained from streams of IR images, with a portion dedicated to training and the remainder used for monitoring through a control chart. By employing image thresholding and segmentation, relevant regions of interest were extracted, resulting in reduced processing time and computational requirements. Similarly, in another study [122], researchers aimed to classify different melting states based on indicators derived from plume and spatter captured by an IR camera. They utilized MLP, CNN, and DBN models for the classification task. The DBN model achieved high accuracy while necessitating minimal signal pre-processing, parameter selection, and feature extraction.
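The control-chart step of such layer-wise monitoring can be sketched as follows, with a synthetic scalar feature (e.g. mean plume intensity inside the segmented region of interest) per layer standing in for the IR-image statistics.

```python
import numpy as np

rng = np.random.default_rng(5)

# One scalar feature per layer; values are synthetic stand-ins for
# plume-intensity statistics extracted from thresholded IR images.
train_layers = rng.normal(100.0, 4.0, 10)              # stable training layers
monitor_layers = np.array([100.5, 99.0, 130.0, 100.0])  # third layer unstable

mu = train_layers.mean()
sigma = train_layers.std(ddof=1)
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma              # Shewhart 3-sigma limits

# Flag monitored layers whose feature falls outside the control limits.
out_of_control = (monitor_layers > ucl) | (monitor_layers < lcl)
```

In practice the training layers come from a known-stable portion of the build, and a flagged layer triggers closer inspection or parameter adjustment.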

In a separate study [123], researchers employed a high-speed camera to capture plumes, spatters, and melt pool images, leading to improved classification accuracy for melt pool anomalies. CNN outperformed SVM in automatically extracting features from raw data, enhancing the classification performance. Similarly, in another investigation [124], a high-speed camera was used to capture melt pool images and location information during the SLM process with different laser powers. DNN accurately classified the melt pool images based on laser powers, which correlated with varying levels of porosity. Additionally, MLP and SVM were applied to differentiate the thermal signatures of melt pools in overhang sections from those in bulk sections [125]. By applying k-means clustering, defects in overheated regions were successfully detected and located by analyzing the intensity profiles of melt pool image pixels [126].
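The k-means approach to locating overheated regions can be sketched on synthetic 1-D pixel intensities; the intensity levels and cluster sizes below are illustrative, not taken from [126].

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic pixel intensities from a melt-pool image: background,
# normal melt pool, and an overheated region (three populations).
pixels = np.concatenate([
    rng.normal(30, 5, 400),    # background
    rng.normal(120, 10, 150),  # normal melt pool
    rng.normal(230, 8, 50),    # overheated defect region
])

# Plain k-means (Lloyd's algorithm) on the 1-D intensities.
k = 3
centers = np.array([0.0, 128.0, 255.0])   # spread initial guesses
for _ in range(50):
    labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
    centers = np.array([pixels[labels == j].mean() for j in range(k)])

hot = int(np.argmax(centers))             # cluster of overheated pixels
n_hot = int(np.sum(labels == hot))        # how many pixels are overheated
```

On 2-D images the same clustering runs over the pixel intensity values, and the spatial coordinates of the hot-cluster pixels then localize the defect.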

A direct approach to defect identification in additive manufacturing involves the layer-by-layer examination of top-build surfaces. This method detects various defects such as pores, lack of fusion, cracks, balling, warpage, and curling. Complementary information is obtained by combining layer-wise images before and after powder recoating in laser powder bed fusion, particularly for warpage detection. Different machine learning algorithms (CNN, SVM, RF) are explored for defect detection in selective laser melting. Layer-wise images can be integrated with post-built CT scan data to precisely locate defects [127, 128]. Authors in [129] achieved significant classification accuracy improvement (62% to 85%) by using SVM with eight images captured under different lighting conditions. In fused deposition modeling (FDM), SVM is used to classify parts based on images captured at specific checkpoints, while unsupervised ML methods yield the highest accuracy for malicious infill detection [130, 131].

In-situ monitoring using acoustic data

Compared to optical signals, acoustic signals offer certain advantages in terms of sensor sensitivity and cost-effectiveness [132]. They also provide higher temporal precision, enabling more accurate tracking of defect locations. Additionally, processing 1D acoustic data is faster than analyzing image data, which usually consists of 2D or 3D tomography data. However, for some AM processes, such as laser metal AM in an inert gas environment, background noise can significantly affect the acoustic signal. Therefore, intelligent monitoring based on ML techniques holds promise as a superior solution to mitigate these challenges. Table 7 presents the AI techniques employed for in-situ monitoring using acoustic data.

Table 7 AI techniques used for in-process monitoring using acoustic data

The study conducted in [133] focuses on inspection of the Laser Powder Bed Fusion (L-PBF) process using acoustic signals and DBN. The authors observed that variations in acoustic signals occur due to temperature changes during the transition from melting to solidification. They trained a DBN, a type of NN, to recognize and categorize defects such as balling, keyholing, and cracking. This classification was based on analyzing the sparking sound spectrum in the time domain and the signal power spectral density in the frequency domain. By leveraging DBN and acoustic signal analysis, the researchers aimed to develop a method for accurate defect detection in the L-PBF process. On the other hand, researchers in [132] explored the combination of Acoustic Emission (AE) and CNN for defect detection in the L-PBF process. They employed a highly sensitive fiber Bragg grating acoustic sensor to capture airborne AE signals sampled at 1 MHz. These signals are generated during melting, sparking, spattering, and solidification. The collected signals were transformed from the time domain to the frequency domain using the wavelet packet transform. To recognize defect-related features, they utilized a spectral CNN (SCNN), an enhancement of the plain CNN that is optimized for working with frequency-domain data and is mainly used for classification and regression tasks. The authors reported a confidence level ranging from 83 to 89% for classifying different porosity levels using the SCNN.
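The core of such an acoustic pipeline, time-domain signal to frequency-domain features to a classifier, can be sketched as follows. Note the simplifications: [132] uses the wavelet packet transform and a spectral CNN, whereas here FFT band energies and logistic regression stand in for both, and the synthetic signals and defect frequency are assumptions.

```python
# Simplified stand-in for the acoustic pipeline in [132]:
# FFT band energies replace the wavelet packet transform, and
# logistic regression replaces the spectral CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
fs = 1_000_000  # 1 MHz sampling rate, as reported in [132]

def band_energies(signal, n_bands=8):
    """Relative energy per frequency band (a simplified spectral feature)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    energies = np.array([b.sum() for b in bands])
    return energies / energies.sum()

def make_signal(defect):
    t = np.arange(2048) / fs
    f = 150_000 if defect else 50_000  # assumed: defects shift energy upward
    return np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)

X = np.array([band_energies(make_signal(d)) for d in [0] * 50 + [1] * 50])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The same feature matrix could feed any of the classifiers discussed in this section; the band-energy step is what moves the problem into the frequency domain.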

In the study conducted in [134], researchers introduced a model that combines ML algorithms and real-time sensor data to detect process conditions that contribute to porosity in L-PBF. The researchers examined how parameters such as hatch spacing, velocity, and laser power correlate with pore characteristics in the final parts. They extracted statistical features from in-situ images obtained layer by layer and employed SVM, k-NN, and NN to categorize these features. This approach facilitates the identification of process conditions that are highly likely to cause porosity during the L-PBF process. For metal L-PBF processes, an online inspection system based on computer vision and Bayesian inference has also been proposed. The researchers generated a dataset of labeled features by analyzing in-situ camera images of each layer, distinguishing between defective and non-defective characteristics. Frequency-domain features were extracted from these images, and a Bayesian classifier was implemented to differentiate between defective and non-defective parts based on these features.
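The statistical-features-plus-classifier approach of [134] can be sketched compactly: compute per-layer image statistics and feed them to SVM and k-NN. The synthetic layer images and the dark-spot porosity model below are assumptions for illustration only.

```python
# Sketch: statistical features from layer-wise images classified with
# SVM and k-NN, in the spirit of [134]. Images and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def layer_features(img):
    # simple per-layer statistics standing in for the paper's features
    return np.array([img.mean(), img.std(), img.max() - img.min()])

def make_layer(porous):
    img = rng.normal(100, 5, size=(64, 64))
    if porous:  # porosity modeled as scattered dark spots (an assumption)
        idx = rng.integers(0, 64, size=(40, 2))
        img[idx[:, 0], idx[:, 1]] -= 60
    return img

labels = [0] * 80 + [1] * 80
X = np.array([layer_features(make_layer(p)) for p in labels])
y = np.array(labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
results = {type(m).__name__: m.fit(Xtr, ytr).score(Xte, yte)
           for m in (SVC(), KNeighborsClassifier(3))}
print(results)
```

Real deployments would replace the synthetic statistics with features extracted from actual in-situ camera frames, but the train/evaluate structure is the same.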

Acoustic signals obtained from the plasma generated at the surface of the powder bed have recently been employed for in-process monitoring in the Powder Bed Fusion (PBF) process. Changes in the surface temperature of the parts caused by overheating or underheating of the metal powder can alter the plasma density. This variation, along with fluctuations in atmospheric pressure within the enclosed chamber, affects the acoustic intensity. Taking advantage of this principle, researchers in [133] utilized a microphone to capture acoustic signals and then employed DBN to classify the melt track conditions (balling, normal, overheating) in the selective laser melting process. Unlike traditional machine learning methods that involve multiple sequential steps for data processing, de-noising, and feature extraction, DBN simplifies and expedites the process through generative pre-training and discriminative fine-tuning techniques.

In the research in [135], investigators explored the use of acoustic waves in the FDM process to identify anomalies. Based on changes in acoustic feature patterns, which served as indicators of faulty processes, they employed ML techniques such as K-means clustering to differentiate between valid and defective printing processes. Furthermore, they successfully achieved the same objective by implementing the hidden semi-Markov model with reduced feature dimensions, facilitating faster data processing [136].

In-situ monitoring using data fusion

As the need for fault detection continues to grow, there is an emerging field of research that focuses on the fusion of data from multiple sensors to monitor and control the manufacturing process. This approach, known as multi-sensor data fusion, is gaining traction as a means to enhance the accuracy and effectiveness of process monitoring and control systems. By combining information from various sensors, researchers aim to improve the overall reliability and performance of fault detection mechanisms. This growing area of research holds promise for advancing the capabilities of process monitoring and control in various industries, including additive manufacturing. Table 8 presents the AI techniques employed for in-situ monitoring using data fusion.

Table 8 AI techniques used for in-process monitoring using data fusion

In [137], researchers have explored the utilization of data fusion techniques in electron beam powder bed fusion systems for process monitoring. The objective was to integrate data from multiple sensors embedded within the system. They employed the Support Vector Data Description (SVDD) ML technique to classify signals as either in-control or out-of-control, facilitating the automated identification of faults and process errors. The study emphasized the significance of stable signals obtained from multiple sensors in obtaining valuable insights into process performance. However, it should be noted that the approach’s applicability may be limited to the sequential production of the same product, potentially restricting its use in other scenarios.

In the studies in [129, 138], researchers have explored the use of multi-sensor data fusion in L-PBF to detect defects related to discontinuities. They combined data from sensors with homogeneous characteristics, such as layer-wise images of the powder bed captured under various lighting conditions, as well as data from sensors with heterogeneous properties, including post-build Computed Tomography (CT) scans. Ground-truth labels indicating normal or anomalous conditions were extracted from the CT scans. The researchers trained NN and SVM models [129], as well as SVM ensemble classifiers [138], to detect defects directly from the images. The ensemble classifiers achieved an impressive classification accuracy of 85% by analyzing multiple images taken under different lighting conditions, outperforming the accuracy of 65% achieved when using images from a single lighting condition.

Researchers in [139], developed an online-monitoring system for fused deposition modeling (FDM) that integrates data from a diverse range of sensors. These sensors include thermocouples, accelerometers, an infrared temperature sensor, and a real-time miniature video borescope. The researchers employed the non-parametric Bayesian Dirichlet process mixture model and evidence theory to analyze the combined sensor data and detect process failures, specifically nozzle clogs. The system achieved a high prediction accuracy of up to 85%, surpassing the performance of other existing methods such as probabilistic NN, Naive Bayes clustering, and SVM in detecting process failures.

5.2.4 AI/ML assisted AM production planning and quality control

AM production planning

To overcome the challenges of costly AM and improve productivity, a comprehensive pre-manufacturing plan is crucial across the entire production chain, from CAD design to final product quality control. Researchers have turned to ML techniques to aid in AM planning, optimizing critical aspects such as material selection, part orientation, support structure generation, and parameter optimization. By leveraging ML algorithms, the goal is to enhance the efficiency and effectiveness of AM planning, resulting in higher yields and improved overall production outcomes. This integration of ML in AM planning holds promise for reducing costs, increasing productivity, and delivering higher-quality AM products.

Researchers in [140] applied ML to assess the manufacturability of parts based on design, material, and process parameters [141]. In a similar vein, researchers in [142] utilized SVM to improve the accuracy of a 3D printability checker software that assesses the suitability of a design for additive manufacturing. Additionally, an MLP model was developed to enhance build time estimation, achieving a significant reduction in error rates compared to existing estimators used in SLS machine software [143]. These studies highlight the diverse applications of ML in pre-manufacturing, encompassing manufacturability assessment, print success prediction, design suitability evaluation, and time estimation enhancement.

Quality control for AM products

Ensuring consistent product quality in AM is challenging due to variations across machines and builds. These variations impact factors like accuracy, density, stability, and mechanical properties. Researchers are investigating ML methods to address this challenge, with the goal of developing models and techniques for quality control in AM that enable products to reliably meet desired quality standards. There are three strategies available for minimizing geometric errors in AM: modifying the overall size of the part, adjusting the original computer-aided design (CAD) file, and implementing process control. ML algorithms such as MLP or CNN can be employed to predict the scaling ratio and facilitate the resizing of the part prior to fabrication [103]. Additionally, ML algorithms can be utilized to address shape-dependent geometric deviations caused by thermal stress in AM. By modeling these deviations, ML techniques enable the necessary geometric modifications to be made in CAD files.
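The scaling-ratio prediction mentioned around [103] can be sketched with a small regressor mapping nominal part dimensions to a compensation factor. The training data, the linear shrinkage model, and the example part dimensions below are all assumptions for illustration.

```python
# Sketch: an MLP regressor predicting a shrinkage-compensation scaling
# ratio from nominal part dimensions, in the spirit of [103].
# The shrinkage model and data are synthetic assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
dims = rng.uniform(5.0, 100.0, size=(200, 3))   # nominal L, W, H in mm
# assumed: shrinkage compensation grows mildly with part size, plus noise
ratio = 1.002 + 0.0004 * dims.mean(axis=1) / 100 + rng.normal(0, 1e-4, 200)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(dims, ratio)
scale = mlp.predict(np.array([[40.0, 20.0, 10.0]]))[0]
print(f"apply scale factor {scale:.4f} to the CAD model before printing")
```

In a real workflow, the predicted factor would be applied to the CAD geometry before slicing, implementing the "resize before fabrication" strategy described above.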

Researchers in [144] utilized ANN to compensate for geometric deformations and mitigate the thermal effects during the SLM process, aiming to enhance the accuracy and quality of the final printed parts. Similarly, in a study conducted in [145], researchers used experimental data instead of simulation data. The experimental data was collected from FDM printing, and a model was trained based on this data to predict the deformed locations. By modifying the CAD geometry based on these predictions, the researchers aimed to enhance the overall quality and precision of the additive manufacturing process.

During Directed Energy Deposition (DED) additive manufacturing, controlling the process parameters can allow for manipulation of the shape of individual tracks. This manipulation is aimed at reducing geometric errors on a larger scale, such as at the macro level. By adjusting the process parameters, such as laser power, scanning speed, or material feed rate, it is possible to optimize the deposition process and improve the overall geometric accuracy of the printed part. This highlights the importance of process control in DED to achieve the desired shape and minimize geometric errors during manufacturing [97, 98, 100]. To establish a mapping between irregularities in part geometry and process conditions, researchers have used self-organizing maps (SOM) [146]. SOM can reduce the amount of 3D point cloud data required for assessing the geometric accuracy of AM parts. Compared to traditional supervised machine learning approaches, the SOM can provide accurate results with fewer data points. This reduction in data requirements can improve efficiency and streamline the quality assessment process in AM [147].
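A self-organizing map like the one referenced in [146, 147] is simple enough to implement directly. The following minimal NumPy sketch compresses a dense 3-D point cloud into a small grid of prototype nodes; the grid size, learning rate, and decay schedule are assumptions, and the point cloud is a random stand-in for real scan data.

```python
# Minimal self-organizing map (SOM) in NumPy: a 5x5 grid of prototype
# nodes summarizes a dense 3-D point cloud, in the spirit of [146, 147].
import numpy as np

rng = np.random.default_rng(5)
cloud = rng.uniform(0, 1, size=(2000, 3))       # stand-in 3-D scan points

grid_w, grid_h, epochs = 5, 5, 10
nodes = rng.uniform(0, 1, size=(grid_w * grid_h, 3))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)             # decaying learning rate
    radius = 2.0 * (1 - epoch / epochs) + 0.5   # shrinking neighborhood
    for p in cloud[rng.permutation(len(cloud))[:500]]:
        bmu = np.argmin(((nodes - p) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # grid distance
        h = np.exp(-d2 / (2 * radius ** 2))               # neighborhood kernel
        nodes += lr * h[:, None] * (p - nodes)

print(f"{len(cloud)} points summarized by {len(nodes)} SOM prototypes")
```

The 25 prototypes act as a compressed representation of the 2000-point cloud, which is the data-reduction property exploited for geometric accuracy assessment.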

To enhance the mechanical performance, relative build density, and process stability of AM-produced parts, in-process monitoring is employed. This monitoring involves the integration of a variety of sensors and cameras, as discussed earlier. Visual and acoustic signals emitted during the printing process are collected and processed. These signals are then used to train multiple machine learning (ML) algorithms, enabling them to monitor the printing process. In this context, ML techniques can be applied to autonomously diagnose and classify failure modes [130, 131, 135, 148], assess melting conditions [121, 122, 124, 126, 129, 133, 149,150,151], determine printing status [136, 137, 152, 153], identify porosity [132, 154, 155], predict tensile properties [156, 157], and estimate surface roughness [158]. Various studies have investigated these applications.

5.2.5 AI/ML application in subtractive manufacturing

The conventional methods of subtractive manufacturing, including both conventional and non-conventional machining processes, have often faced challenges in terms of inefficiencies, limited process control, and difficulty in optimizing machining parameters. These issues can lead to suboptimal machining outcomes, longer production times, and higher costs. However, the integration of AI and ML has transformed machining into smart machining: a process that can autonomously adapt its parameters throughout the machining operation to achieve specific objectives.

Through the application of AI/ML algorithms, machining processes can now be optimized and automated, leading to improved accuracy, productivity, and cost-effectiveness. AI/ML enables real-time monitoring and analysis of machining parameters, allowing for predictive maintenance and proactive error detection. With AI/ML, the machining process can be optimized for specific materials, geometries, and cutting conditions, resulting in reduced material waste, improved surface finish, and enhanced overall machining performance. The incorporation of AI/ML into machining processes brings advantages such as increased efficiency, higher precision, reduced downtime, and the ability to adapt to complex and dynamic machining requirements, ultimately revolutionizing the subtractive manufacturing industry [159, 160].

Subsequently, we will enumerate the applications of AI techniques in both conventional and non-conventional machining processes. However, it is important to note that the AI techniques employed for monitoring tool condition, tool health, and tool prognosis (estimating remaining useful life) will not be included in this list. These aspects are more closely associated with the maintenance phase and will be discussed in Sect. 5.3 dedicated to maintenance.

5.2.6 Conventional machining processes and parameter optimization

In several studies [161,162,163,164,165,166,167,168,169,170,171], AI/ML algorithms have been extensively explored in the context of conventional machining processes. These studies serve various purposes, including optimizing process parameters, monitoring machine health, and enhancing product quality. Among the conventional machining processes, turning and milling have received significant attention and have been extensively studied. Table 9 lists the AI/ML techniques used for the smart conventional machining process.

Table 9 AI techniques used for smart conventional subtractive machining process

Turning

AI techniques have become indispensable in the turning process, offering solutions for the prediction of machining parameters such as cutting forces and surface roughness, tool condition monitoring, grain size and hardness prediction, and more. Researchers have employed various algorithms, such as ANNs, SVR, and polynomial regression, to address these challenges. Attaining the desired surface roughness is crucial both for the functional requirements and the aesthetic appeal of a product. The research presented in [161] centers on the prediction of surface roughness through Multiple Linear Regression (MLR). The analysis incorporates cutting parameters, tool wear, and statistical parameters derived from vibration signals captured during machining on a turning center. To enhance the accuracy and computational efficiency of the predictions, Principal Component Analysis (PCA) is used. This technique effectively reduces the number of input features while preserving essential information, improving the model's ability to predict surface roughness accurately.
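The PCA-then-regression pipeline of [161] reduces to a few lines with scikit-learn. The synthetic dataset below (twelve raw inputs driven by five latent factors, standing in for cutting parameters and vibration statistics) and the component count are assumptions.

```python
# Sketch of the pipeline in [161]: PCA compresses the inputs, then
# multiple linear regression predicts surface roughness Ra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# 12 raw inputs (cutting parameters + vibration statistics) driven by
# 5 underlying factors, so PCA can compress them with little loss
Z = rng.normal(size=(150, 5))
X = Z @ rng.normal(size=(5, 12)) + 0.05 * rng.normal(size=(150, 12))
Ra = Z @ rng.normal(size=5) + rng.normal(0, 0.1, 150)   # synthetic roughness

model = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X, Ra)
print("R^2 with 5 principal components:", round(model.score(X, Ra), 3))
```

Because the twelve inputs are correlated, five principal components retain nearly all the predictive information, which is exactly the efficiency gain the study reports.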

In [162], the performances of three AI techniques, SVR, Polynomial Regression (PolyReg), and ANN, are compared for the prediction of cutting parameters independently for the high-speed turning process. The parameters considered are surface roughness (Ra), cutting force (Fc), and tool lifetime (T). The results reveal that polynomial regression yields superior performance in predicting “Fc” and “Ra” compared to SVR and ANN. However, ANN performs best in predicting “T” while showing the lowest performance for “Fc” and “Ra”. The study also demonstrates that the polynomial kernel in SVR outperforms the linear and RBF kernels. Furthermore, there is no significant difference in performance between SVR and polynomial regression when predicting all three machining parameters.
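A model comparison of the kind performed in [162] can be sketched as follows. The response surface, noise level, and hyperparameters below are illustrative assumptions, not the study's data; a default RBF kernel stands in for the kernel comparison discussed above.

```python
# Sketch comparing the three model families from [162] (polynomial
# regression, SVR, ANN) on a synthetic high-speed-turning dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(10)
X = rng.uniform(0, 1, size=(200, 3))      # normalized speed, feed, depth
Ra = (0.5 + 2 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]
      + rng.normal(0, 0.05, 200))         # assumed roughness response

models = {
    "PolyReg": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "SVR": make_pipeline(StandardScaler(), SVR()),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor((32,), max_iter=5000, random_state=0)),
}
scores = {name: m.fit(X, Ra).score(X, Ra) for name, m in models.items()}
print(scores)
```

On this quadratic response, polynomial regression fits essentially exactly, consistent with the study's finding that it excels when the underlying relation is low-order.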

The study in [163] proposes a direct method to quantify carbon emissions in machining processes, specifically in turning operations. By analyzing experimental data using MATLAB, the study determines coefficients for the quantitative method. Additionally, a multi-objective teaching–learning-based optimization algorithm is introduced to minimize carbon emissions and operation time simultaneously by optimizing cutting parameters. Notably, researchers in [164] utilized a combination of Random Forest (RF) and Genetic Algorithms (GA) [172, 173] to investigate the effects of cutting parameters and tool characteristics on surface properties, such as microhardness and grain size.

Milling

Numerous studies have extensively investigated the application of AI algorithms in milling processes, some of which are presented in this paper. AI algorithms have facilitated process monitoring, optimization, and prediction of various parameters and factors, as well as tasks that were traditionally challenging using conventional methods. Monitoring tool condition, including wear tracking and failure prediction, has been the most common objective, and classification algorithms have frequently been employed for this purpose. For instance, researchers used SVM with a radial basis kernel function to predict surface roughness for the micro-milling process [165] as well as to predict chatter lobe stability with dynamic modeling of cutting force [166]. Moreover, statistical AI techniques such as Gaussian Process Regression (GPR) [167, 168] have found application in optimizing process parameters to reduce costs through energy consumption predictions. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [169] have been used for optimizing tool paths and selecting and evaluating cutting parameters. In addition to NSGA-II, a Back Propagation Neural Network (BPNN) has also been used for selecting optimal values of cutting forces for a 2.5D milling process [174].

Grinding

Although the research on smart grinding processes is relatively limited, significant progress has been made in predicting the quality of the finished surface of the product. Researchers in [170] employed interpolation-factor SVM to monitor and control surface roughness and shape characteristics, such as peak-valley measurements. They utilized input parameters such as acoustic emission, grinding force, and vibration data to achieve accurate predictions. These findings emphasize the potential of machine learning in enhancing grinding processes by enabling real-time monitoring and precise prediction of surface quality. Further exploration in this field holds promise for improving process efficiency and elevating the overall product quality in grinding operations.

Drilling

Accurate prediction of product quality in the drilling process has been achieved through the monitoring of key process parameters such as torque, cutting force, and thrust force. In the case of machined carbon-fiber-reinforced polymer plates, the surface quality and dimensional characteristics were assessed using a machine learning and pattern recognition method called “logical analysis of data” [171]. The presented approach showcases the potential of AI in evaluating and optimizing drilling operations by providing real-time insights into product quality. By incorporating machine learning algorithms, manufacturers can enhance process control, improve efficiency, and meet the desired quality standards for drilled components. The integration of machine learning in drilling processes has the potential to revolutionize the manufacturing industry and drive advancements in product quality and performance.

Boring

To enhance the surface finish quality during the boring process, the prevention of chatter is of utmost importance. In [175], researchers conducted a comprehensive study aimed at identifying the parameters that contribute to chatter, which include spindle speed, depth of cut, and feed rate. Vibration signals were collected during the process, and features were extracted using the discrete wavelet transform. These features were then classified into three classes (stable, transition, and chatter) by utilizing SVM. This research serves as compelling evidence for the efficacy of machine learning techniques in analyzing vibration data and accurately predicting chatter occurrence in the boring process. By employing SVM and waveform analysis, manufacturers can take proactive measures to mitigate chatter-related issues and significantly improve the surface finish quality of bored components.
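The three-class chatter classification in [175] can be sketched as follows. One simplification to note: the study extracts features with the discrete wavelet transform, whereas this sketch uses FFT band energies as a stand-in, and the synthetic vibration signals (a 900 Hz tone whose amplitude grows with chatter severity) are an assumption.

```python
# Sketch of chatter classification as in [175]: features from vibration
# signals feed a three-class SVM (stable / transition / chatter).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
fs, n = 10_000, 1024

def vibration(state):
    # assumed: approaching chatter amplifies a tone near 900 Hz
    amp = {0: 0.2, 1: 0.8, 2: 2.0}[state]   # stable, transition, chatter
    t = np.arange(n) / fs
    return amp * np.sin(2 * np.pi * 900 * t) + rng.standard_normal(n)

def features(sig):
    # relative band energies; the paper uses the discrete wavelet transform
    spec = np.abs(np.fft.rfft(sig)) ** 2
    e = np.array([b.sum() for b in np.array_split(spec, 6)])
    return e / e.sum()

labels = np.repeat([0, 1, 2], 60)
X = np.array([features(vibration(s)) for s in labels])
clf = SVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In operation, a signal classified as "transition" would trigger a spindle-speed or feed-rate adjustment before full chatter develops.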

5.2.7 Non-conventional machining processes

Although non-conventional machining processes have received less attention, there have been notable efforts to utilize learning algorithms for improving finish quality by predicting surface roughness. However, the main challenge in these processes is the low productivity, which has led to a significant emphasis on optimizing process parameters to maximize the Material Removal Rate (MRR). Through the application of machine learning techniques, manufacturers can optimize process parameters to achieve a balance between surface quality and productivity in non-conventional machining. Continued research in this field has the potential to enhance the capabilities of non-conventional machining processes and drive improvements in manufacturing efficiency. Table 10 lists AI techniques used for smart non-conventional subtractive machining processes.

Table 10 AI techniques used for smart non-conventional subtractive machining process

Electric discharge machining (EDM)

Despite efforts to predict surface roughness in EDM, the primary focus of implementing machine learning methods has been on predicting and maximizing the material removal rate. This emphasis stems from the inherent challenge of low productivity associated with the EDM process. A common approach is presented in [172], where the goal is to optimize the process parameters to achieve the highest material removal rate and the lowest wear ratio. A NN model is used to establish the relationship between the process parameters and the performance of the machining process. The study employs three different evolutionary algorithms (Simulated Annealing, Genetic Algorithm, and Particle Swarm Optimization) along with the NN model to predict the optimal process parameters, and compares their effectiveness in optimizing the machining process. Overall, the study highlights the potential of neural networks and evolutionary algorithms in optimizing process parameters for EDM. Similarly, researchers in [177] used a feedforward BPNN (FF-BPNN) with GA. These evolutionary algorithms play a crucial role in optimizing EDM process parameters, leading to improved MRR. By integrating machine learning techniques into EDM, manufacturers can enhance productivity levels while maintaining the desired surface roughness quality.

Electrochemical machining (ECM)

Machine learning algorithms have been successfully utilized in Electrochemical Machining (ECM) to predict and optimize the Material Removal Rate (MRR), leveraging its process similarities to EDM. In the study conducted in [176], the Teaching–Learning Based Optimization (TLBO) algorithm proved effective in enhancing the MRR in ECM, surpassing the performance of the Artificial Bee Colony (ABC) algorithm by requiring fewer iterations. Additionally, TLBO was implemented in the hybrid process of EDM, resulting in a remarkable 18% increase in MRR compared to the ABC algorithm. These findings show the potential of machine learning techniques, particularly TLBO, in maximizing the MRR and enhancing the efficiency of ECM and related hybrid machining processes.

Laser machining

The application of laser processes in industrial settings is becoming increasingly popular; however, the search for optimized process parameters, particularly in delicate tasks like micromachining, remains a challenge. The study conducted in [178] aims to investigate the influence of process parameters in micro-laser milling on the resulting micro shape features. Experimental trials were conducted using a pulsed Nd:YAG laser on hardened steel, where scanning speeds, pulse intensities, and frequencies varied. The collected data was analyzed in terms of dimensional accuracy, surface roughness, and material removal rate. Machine learning techniques, including Decision Trees (DT), Linear Regression (LR), NN, and K-NN, were implemented and compared. The results showed that neural networks were effective in modeling channel depth, while DT performed well in predicting material removal rate. Both techniques exhibited similar accuracy for width and surface roughness. The study suggests utilizing DT to understand the relationship between input parameters and using neural networks to achieve dimensional accuracy. Moreover, it underscores the importance of comprehensive datasets for developing reliable AI models, especially given the presence of noise in surface roughness measurements.

Abrasive water jet

In abrasive water jet machining, the prediction of surface roughness is a crucial area of focus. Researchers have predominantly employed various types of NNs [179, 180], including feedforward networks, backpropagation networks, and extreme learning machines, to address this objective. Notably, researchers in [181] achieved an outstanding prediction accuracy of 99% by utilizing a hybrid algorithm that combines grey relational analysis for feature selection with SVM. This innovative approach highlights the effectiveness of ML techniques in accurately predicting surface roughness in abrasive water jet machining processes. By harnessing the potential of hybrid algorithms, manufacturers can optimize their machining operations and achieve the desired surface finish with exceptional precision and reliability.
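The hybrid idea in [181] can be sketched in two stages: grey relational analysis (Deng's formulation, with the common distinguishing coefficient 0.5) ranks candidate inputs by their relational grade with measured roughness, and the top-ranked feature then feeds an SVM regressor. The data, the pressure-dominated response model, and the single-feature selection are illustrative assumptions.

```python
# Sketch of the hybrid approach in [181]: grey relational analysis for
# feature selection, followed by an SVM regressor for surface roughness.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(11)
n = 120
X = rng.uniform(0, 1, size=(n, 4))   # e.g. pressure, abrasive flow, standoff, noise
Ra = 0.2 + 0.8 * X[:, 0] + rng.normal(0, 0.02, n)   # pressure-dominated (assumed)

def grey_relational_grades(X, y, zeta=0.5):
    norm = lambda v: (v - v.min()) / (v.max() - v.min())
    deltas = np.abs(np.stack([norm(X[:, j]) for j in range(X.shape[1])])
                    - norm(y))
    dmin, dmax = deltas.min(), deltas.max()          # global extrema
    return ((dmin + zeta * dmax) / (deltas + zeta * dmax)).mean(axis=1)

grades = grey_relational_grades(X, Ra)
best = int(np.argmax(grades))                        # top-ranked feature
model = SVR().fit(X[:, [best]], Ra)
print("selected feature:", best, "relational grade:", round(grades[best], 3))
print("R^2 on selected feature:", round(model.score(X[:, [best]], Ra), 3))
```

Discarding weakly related inputs before training is what keeps the downstream SVM compact and is the core of the hybrid scheme the study reports.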

5.2.8 AI/ML applications in supply chain management (procurement, inventory management, and logistics)

AI applications have revolutionized various aspects of business operations, and procurement, logistics, and warehousing are areas where they have made a significant impact. The integration of AI technologies in these domains has brought numerous benefits, including improved efficiency, enhanced decision-making, and optimized resource allocation. In logistics, AI facilitates intelligent route planning, real-time tracking, and predictive analytics, leading to optimized delivery schedules and reduced costs. Additionally, in warehousing, AI-powered systems enable automated inventory management, efficient order fulfillment, and advanced demand forecasting. These AI applications not only enhance operational efficiency but also provide valuable insights for strategic decision-making, ultimately driving business growth and customer satisfaction. Table 11 lists AI applications in procurement, inventory management, and logistics (supply chain).

Table 11 AI applications in procurement, inventory management, and logistics

Procurement

Procurement and Sourcing (P&S) are vital components of the supplier selection process, considering factors like the Bill of Materials (BOM) and resource allocation across different manufacturing units [182]. Evaluating, comparing, and selecting suppliers involves complex decision-making based on multiple criteria, presenting a challenge in multi-criteria decision-making. AI applications in procurement provide industries with the capability to automate and streamline the purchasing process, spanning from supplier selection to contract management. P&S employs a systematic approach to evaluate suppliers, enabling informed decisions that optimize resource allocation for efficient and effective manufacturing operations. In the context of supplier selection, researchers in [183] proposed a fuzzy Bayesian model which aims to assist managers in supplier selection and comprehensively analyze the advantages and disadvantages of each supplier. On the other hand, a neuro-fuzzy supplier selection model was proposed in [184]. This model combines neural networks and fuzzy logic to assess and rank potential suppliers.

Inventory management

Effective inventory management and maintaining a steady supply of materials and components throughout the manufacturing process are critical to prevent delays and disruptions. However, managing order policies is a complex task due to various stochastic factors. Traditional analytical methods have limitations in determining an optimal policy for minimizing overall inventory costs. To tackle this challenge, the authors in [185] propose the adoption of Monte Carlo Tree Search (MCTS), an AI heuristic approach. They develop both offline and online models that utilize real-time data to make informed decisions. To illustrate their approach, they utilize a supply chain structure like the classical beer game, involving four actors and accounting for stochastic demand and lead times. Moreover, researchers have also used fuzzy logic [186], reinforcement learning [187,188,189], and evolutionary algorithms, specifically GA [190], to determine an optimal policy for minimizing overall inventory costs.

Logistics

In the field of logistics, several articles explore various aspects such as inbound logistics planning, container loading management, and incorporating industrial robotics for collaborative logistics. The study in [191] introduces an intelligent system for industrial robotics in logistics. Researchers in [192] adopted a predictive approach toward inbound logistics planning, while a conceptual framework is developed to differentiate between different performance levels of human-artificial collaboration systems in logistics [193].
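To make the flavor of such predictive logistics planning concrete, the sketch below fits a least-squares model that predicts inbound lead time from a few shipment attributes. The feature names and synthetic data are purely illustrative assumptions, not taken from [192].

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic shipment history: [distance_km, customs_delay_h, carrier_load]
X = rng.uniform([100, 0, 0.1], [2000, 48, 1.0], size=(200, 3))
true_w = np.array([0.05, 1.2, 20.0])                 # hypothetical ground-truth effects
y = X @ true_w + 5.0 + rng.normal(0, 2.0, size=200)  # lead time in hours

# Least-squares fit with an intercept column
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted lead time for a new inbound shipment (800 km, 12 h customs, half-loaded carrier)
x_new = np.array([800.0, 12.0, 0.5])
pred = float(np.append(x_new, 1.0) @ w)
```

In practice, the cited studies use richer models and real operational data; the point here is only the fit-then-predict structure of a predictive planning pipeline.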

Researchers also focus on container terminal operations and management. Hence in [194], the authors utilize heuristics and a Decision Support System (DSS) to determine the number of reshuffles required for container assignment, whereas in [195], the authors propose an automated planning system for container-loading problems. Researchers in [196], explore the application of Radio Frequency Identification (RFID) and AI techniques to enhance the responsiveness of the logistics workflow. Finally, inter-organizational lot-sizing problems are addressed in [197] using self-interested and autonomous agents.

5.2.9 Summary

In this section, we have explored the diverse applications of AI techniques in the manufacturing domain, with a specific focus on additive manufacturing, subtractive manufacturing, and supply chain processes. Our review encompasses a wide variety of AI techniques that contribute to enhancing various aspects of the manufacturing industry.

In the realm of additive manufacturing, we have investigated the optimization of processes and parameters (Table 5) for AM techniques such as Selective Laser Sintering (SLS), Selective Laser Melting (SLM), Electron Beam Melting (EBM), and Fused Deposition Modeling (FDM), amongst others. Additionally, we delved into in-process monitoring techniques using image and acoustic data as well as data fusion techniques (Tables 6, 7, 8), which aid in ensuring the quality and precision of AM products. The study also covered AM production planning and quality control strategies, where AI techniques play a pivotal role in streamlining manufacturing operations and ensuring optimal outcomes.

Subtractive manufacturing received significant attention, encompassing both conventional machining processes (Table 9), such as turning, grinding, milling, drilling, and boring, and non-conventional machining processes (Table 10) including Electrochemical Machining (ECM), Abrasive Water Jet, Laser Machining, and Electric Discharge Machining (EDM) amongst others. AI techniques in this domain contribute to improving the efficiency, accuracy, and cost-effectiveness of subtractive manufacturing processes.

Our review also highlighted the relevance of AI techniques in supply chain management (Table 11), focusing on procurement and supplier selection, inventory management, and logistics. AI-powered approaches enable streamlined decision-making, efficient inventory control, and optimized logistical operations, leading to improved supply chain performance.

Throughout the review, we identified and analyzed a total of 29 distinct AI techniques that are actively used in manufacturing applications. The bar chart in Fig. 10 shows the distribution of these AI techniques in the manufacturing phase, with the y-axis listing the AI techniques and the x-axis displaying the number of publications associated with each technique. ANN was observed in 15 publications, SVM in 18, and MLP in 6. Gaussian Process (GP) was identified in 5 publications, while GA with NSGA-II and K-NN were each found in 4. Convolutional Neural Networks (CNN), regression models, and fuzzy logic-based techniques, amongst others, appeared less frequently but still make valuable contributions to the progressive evolution of manufacturing. This distribution illustrates the diverse array of AI techniques employed in manufacturing, their varying degrees of utilization, and their significant impact on the industry.

Fig. 10

Popular AI Techniques in Manufacturing Phase

The pie of pie chart in Fig. 11 provides a comprehensive overview of the AI techniques leveraged in various stages of the manufacturing process. The larger pie represents the main stages in manufacturing, and each segment within it represents a specific stage and its corresponding percentage of AI technique utilization. Among the manufacturing stages, “Process Parameter Optimization” occupies the largest portion of the pie, indicating its prominent reliance on AI techniques. This stage, encompassing 28% of the chart, involves optimizing process parameters such as laser power, scanning speed, and feed rate to achieve precise and efficient manufacturing outcomes. The next crucial stage in the manufacturing process is “In Process Monitoring”, which involves the utilization of various data types, including “Image Data”, “Acoustic Data”, and “Data Fusion”. These sub-stages account for 18%, 8%, and 9% of the pie, respectively. In-process monitoring involves real-time data analysis using AI techniques to monitor and control the manufacturing process. “Image Data” utilizes image-based techniques, “Acoustic Data” utilizes sound-based techniques, and “Data Fusion” combines data from multiple sources for more comprehensive insights. Another essential aspect of the manufacturing process is “Subtractive Manufacturing”, which encompasses both conventional and non-conventional methods. Each of these methods holds a 14% share in the chart. “Conventional Subtractive Machining” involves traditional processes like turning, grinding, milling, drilling, and boring. On the other hand, “Non-Conventional Subtractive Machining” includes advanced techniques such as Electrochemical Machining (ECM), Abrasive Water Jet, Laser Machining, and Electric Discharge Machining (EDM). Both conventional and non-conventional manufacturing processes benefit from AI techniques to optimize and enhance their machining processes.

Fig. 11

Distribution of AI techniques among manufacturing stages

Moving on to the smaller pie, it represents the sub-stages of the “Supply Chain” phase in manufacturing, consisting of “Procurement”, “Inventory Management”, and “Logistics”. These sub-stages occupy 2%, 5%, and 2% of the pie, respectively. AI techniques play a vital role in improving supply chain management, streamlining procurement processes, optimizing inventory levels, and enhancing logistics operations. Overall, the pie of pie chart (Fig. 11) offers a visual representation of the critical roles AI techniques play in the various stages of manufacturing. It illustrates how AI-driven data analysis, process optimization, and monitoring contribute significantly to the advancement and efficiency of manufacturing processes, resulting in improved product quality and enhanced overall productivity.

The pie chart in Fig. 12 presents the findings on the popularity of AI techniques across the various stages of the manufacturing phase, encompassing Additive Manufacturing (AM) techniques and processes, Subtractive Manufacturing (SM) techniques and processes, as well as supply chain management. The results indicate that Support Vector Machines (SVM) stand out as the most widely adopted AI technique, with a percentage share of 22%. SVM techniques have proven to be highly effective in various manufacturing applications, contributing to improved efficiency and accuracy. Artificial Neural Networks (ANN) hold the second-highest percentage share at 18%, showcasing their prominence in the manufacturing domain. ANN techniques excel in pattern recognition, optimization, and decision-making tasks, making them indispensable tools for product design and process improvement.

Fig. 12

Share of AI techniques in the manufacturing phase

Gaussian Process (GP) and GA (NSGA-II) account for 6% and 5%, respectively. These AI techniques offer valuable capabilities in optimization, adaptive control, and evolutionary search, empowering manufacturers to achieve optimal results in complex manufacturing processes. Other prominent AI techniques in the manufacturing phase include CNN, KNN, and Multi-Layer Perceptron (MLP), at 5%, 4%, and 7%, respectively. These techniques are renowned for their applications in image and pattern recognition, enabling better quality control and defect detection in manufacturing. Among the AI techniques with a lesser share of utilization, each below 3%, are several that make valuable contributions to the manufacturing phase depending on the application. For instance, the FF-BPNN (Feedforward Backpropagation Neural Network), a variation of the Backpropagation Neural Network (BPNN), plays a role in solving complex optimization and prediction problems in manufacturing.

Similarly, the ELM (Extreme Learning Machine) can efficiently handle high-dimensional data, making it a useful tool in manufacturing tasks, and the MCTS (Monte Carlo Tree Search) algorithm aids in optimization tasks. The DT (Decision Tree) technique serves as a powerful tool for classification and regression tasks, aiding in data analysis and decision-making in manufacturing. The LR (Linear Regression) technique, despite occupying a smaller portion of the pie, remains essential for predicting continuous outcomes in various manufacturing scenarios. The SOM (Self-Organizing Map), a neural network technique used for clustering and visualization tasks, contributes to pattern recognition and process optimization in manufacturing. The SVR (Support Vector Regression) assists in predicting continuous variables, making it valuable in various manufacturing applications, while the SVDD (Support Vector Data Description) technique is adept at anomaly detection and fault diagnosis in manufacturing processes. The RBFN (Radial Basis Function Network), a specialized neural network used for function approximation, enhances various manufacturing tasks.

Additionally, the RNN (Recurrent Neural Network), despite its lower share, is effective in handling sequential data, making it valuable for tasks involving time-series analysis and predictions in manufacturing. Though these AI techniques may have a relatively smaller percentage in the pie chart, their significance in the manufacturing domain should not be underestimated. Each technique offers unique capabilities and applications, contributing to the overall advancement of manufacturing processes and decision-making. Figures 13 and 14 illustrate pie-of-pie charts presenting the prevalent AI techniques used in Additive Manufacturing (AM) and Subtractive Manufacturing (SM), and in Supply Chain Management, respectively. The pie of pie chart presented in Fig. 13 clearly demonstrates that SVM and ANN are widely applied in AM processes, whereas ANN is the dominant AI technique employed in SM processes.

Fig. 13

Popular AI techniques in AM (bigger pie) and SM (smaller pie) processes

Fig. 14

Popular AI techniques in supply chain management

The pie of pie chart presented in Fig. 14 demonstrates that RL is the most widely used AI technique for inventory management, ahead of MCTS and NSGA-II. For procurement, fuzzy logic-based models are utilized for supplier selection, while linear programming and heuristics are employed for logistics. These pie charts highlight the diverse and extensive utilization of AI techniques in the manufacturing phase, providing valuable insights and data-driven decision-making capabilities to improve the efficiency, precision, and overall performance of manufacturing processes.

Overall, the comprehensive review sheds light on the critical role played by AI techniques in revolutionizing the manufacturing industry, enabling advanced optimization, increased productivity, and better decision-making across various manufacturing processes and supply chain management. As AI continues to advance, its transformative impact on the manufacturing landscape will continue to evolve, propelling the industry towards higher levels of efficiency and innovation.

5.3 AI at maintenance phase

In modern manufacturing systems, a multitude of machines are typically employed to meet the demand for producing high-quality products with intricate functionality. As the number of machines within a system increases, so does the cumulative risk of machine failures. In any industry, a sudden breakdown can result in significant economic losses stemming from both machine and production downtime. To illustrate, consider a conventional automobile assembly line, where every minute of downtime translates to a staggering loss of $20,000 [198].

Machine downtime can give rise to various direct and indirect consequences, which can be broadly categorized into two groups: (a) tangible and (b) intangible costs. Tangible costs are relatively straightforward to quantify and encompass expenses related to labor, materials, and other resources essential for machine repair. In contrast, intangible costs cannot be precisely estimated and encompass factors like labor idleness, overtime payments to compensate for lost time, penalties incurred due to delayed product deliveries caused by machine downtime, and so forth.

During the fourth industrial revolution, various concepts emerged, one of which is predictive maintenance, also known as e-maintenance, smart maintenance, or maintenance 4.0. This concept has become crucial in sustainable manufacturing and production systems as it introduces a digital approach to machine or component maintenance [199]. Maintenance strategies have experienced a progressive evolution throughout the industrial revolutions, as illustrated in Fig. 15, adopted from [200]. Currently, maintenance is viewed as an ongoing and continuous process, adapting to changing needs and advancements in technology. This evolution reflects the shifting paradigms of maintenance practices over time.

Fig. 15

Paradigms of maintenance practices over time [202]

5.3.1 Classification of maintenance practices

The classification of maintenance practices is a framework that categorizes various strategies used to ensure the optimal functioning of physical assets, machinery, and equipment. Maintenance practices are vital in preserving the efficiency, reliability, and longevity of assets, ultimately contributing to the overall productivity and safety of operations within industrial, commercial, and residential settings. Figure 16 illustrates the classification of maintenance practices based on their progression.

Fig. 16

Classification of maintenance practices

Generally, maintenance practices can be broadly classified into several categories:

1. Traditional/reactive maintenance: It often involves reactive approaches, leading to unexpected breakdowns, costly repairs, and production downtime [201].

2. Preventive maintenance (PvM): It is a maintenance strategy that involves performing regular maintenance at scheduled intervals or process iterations to proactively anticipate potential process or equipment failures. It is an effective method for preventing failures from occurring. However, it can sometimes result in unnecessary corrective actions being taken, which increases operating costs [202].

3. Predictive maintenance (PdM): It employs sophisticated tools to evaluate maintenance requirements by continuously monitoring the integrity of machines or processes. This approach ensures that maintenance activities are performed only when essential. Furthermore, PdM enables early detection of failures through the utilization of predictive tools, such as analyzing historical data using machine learning techniques [199, 202].

4. Proactive maintenance: This is used for root cause failure analysis, which involves identifying the underlying mechanisms and factors responsible for machine faults. By addressing the fundamental causes of machine failures, it becomes possible to gradually eliminate these failure mechanisms from each machinery installation. This comprehensive approach encompasses routine preventive and predictive maintenance activities, as well as the corresponding work tasks derived from them [199].

5. Prescriptive maintenance: Here the focus shifts to answering questions such as “What actions can we take to make it happen?” or “How can we exercise control over the occurrence of a specific event?” by providing valuable insights to improve and optimize future maintenance processes and decision-making [203].

Maintenance plays a vital role in the era of intelligent manufacturing as it ensures the continuous operation of equipment. In recent years, there has been a growing interest in Predictive Maintenance (PdM) among researchers due to its ability to facilitate efficient fault diagnosis, equipment prognosis, and Prognostic Health Management (PHM). This interest is fueled by two key factors: the emergence of intelligent manufacturing and the abundance of data available throughout the equipment’s lifecycle.

Figure 17 provides an illustration of the key objectives of Predictive Maintenance (PdM) and the various approaches used to achieve them. Fault diagnosis is concerned with detecting a fault that has already occurred and identifying its nature from the equipment's condition data. Prognosis, on the other hand, focuses on predicting the future behavior, health, or performance of the equipment based on its current condition and historical data. It involves estimating the remaining useful life (RUL) of the equipment and predicting when a failure is likely to occur. Prognosis helps organizations plan maintenance activities in advance, schedule downtime, and optimize resource allocation.
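The degradation-trend flavor of RUL estimation can be sketched in a few lines: fit a trend to a health indicator and extrapolate it to a failure threshold. The indicator, the threshold of 0.2, and the linear-degradation assumption below are hypothetical simplifications, not a method from the cited literature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical health indicator sampled once per operating hour,
# degrading from 1.0 toward a failure threshold of 0.2
hours = np.arange(100.0)
health = 1.0 - 0.004 * hours + rng.normal(0, 0.01, size=100)
FAILURE_THRESHOLD = 0.2

# Fit a linear degradation trend and extrapolate to the threshold
slope, intercept = np.polyfit(hours, health, 1)
t_fail = (FAILURE_THRESHOLD - intercept) / slope  # hour at which the trend crosses the threshold
rul = t_fail - hours[-1]                          # remaining useful life from "now"
```

Real prognostic models replace the linear trend with learned, often nonlinear, degradation dynamics, but the extrapolate-to-threshold logic is the same.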

Fig. 17

PdM aims and its approaches

Prognostic Health Management (PHM) is an integrated approach that combines fault diagnosis and prognosis to manage the health and performance of equipment. It involves continuously monitoring the condition and performance of the equipment, analyzing data, and using predictive models to diagnose faults, predict future behavior, and make informed maintenance decisions. PHM aims to optimize maintenance strategies, minimize downtime, and maximize the availability and reliability of equipment. Therefore, fault diagnosis, prognosis, and PHM are key aims of predictive maintenance, working together to ensure the timely detection, diagnosis, and prediction of faults or potential failures in equipment, enabling proactive maintenance actions to be taken.

5.3.2 PdM approaches

PdM approaches are the broad methodologies, and the specific techniques within them, employed to achieve PdM goals. The following are the commonly used PdM approaches [199, 203, 204]:

1. Physical model-based: Physical model-based methods in PdM utilize mathematical models or simulations of the equipment or system to predict its behavior and health. These models are often based on physics-based principles and incorporate knowledge of the underlying system dynamics, such as mechanical, electrical, or thermal behavior. Physical model-based methods require a deep understanding of the system's physics and typically involve complex mathematical calculations.

2. Knowledge-based: Knowledge-based methods rely on domain expertise and prior knowledge about the equipment or system to develop prognostic algorithms. These methods often employ expert systems, rule-based systems, or logical reasoning to assess the condition of the equipment and predict future failures. Knowledge-based methods use heuristics, rules, and established guidelines to make predictions based on known failure patterns or observed symptoms. They can be effective in situations where historical data may be limited, but expert knowledge is available to guide the prognostic process.

3. Data-driven: Data-driven methods in predictive maintenance leverage historical and real-time data collected from the equipment or system to identify patterns, trends, and anomalies. These methods employ statistical analysis, machine learning algorithms, and data mining techniques to uncover hidden patterns and correlations within the data. By training models on the available data, data-driven methods can make predictions about future failures or degradation. These methods are particularly useful when sufficient data is available, and the relationships between the data and the health of the equipment are complex or not easily discernible through traditional analytical methods.
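As a minimal sketch of the data-driven approach, the example below learns the statistics of healthy sensor readings and flags new readings that deviate strongly from them. The sensor, the synthetic drift, and the 3-sigma threshold are illustrative assumptions, far simpler than the ML models surveyed in the following sections.

```python
import numpy as np

rng = np.random.default_rng(2)

# Healthy baseline readings from one sensor (e.g. a spindle temperature, deg C)
healthy = rng.normal(60.0, 1.5, size=500)
mu, sigma = healthy.mean(), healthy.std()

# New stream of readings with a drift injected halfway through
stream = np.concatenate([rng.normal(60.0, 1.5, size=50),
                         rng.normal(68.0, 1.5, size=50)])

# Flag readings more than 3 standard deviations from the healthy mean
z = np.abs(stream - mu) / sigma
alarms = z > 3.0
```

The same train-on-healthy, score-deviation pattern underlies the far richer autoencoder- and deep-learning-based detectors discussed below.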

To effectively organize the extensive literature on the maintenance phase of the product lifecycle in engineering, our focus will be narrowed down to the electrical/electronic and mechanical machines and components. In the following sections, we will delve into the diverse AI techniques applied in predictive maintenance across these domains.

5.3.3 AI-driven predictive maintenance in the mechanical sector

In the automotive sector, balancing functional safety throughout the product lifecycle while managing maintenance costs has emerged as a significant challenge, and predictive maintenance (PdM) has become a key approach to achieving this goal. With the abundance of operational data available in modern vehicles, machine learning (ML) is an ideal solution for implementing PdM strategies. For effective predictive maintenance and timely replacement of electrical/electronic and mechanical components, the calculation of Remaining Useful Life (RUL), timely anomaly detection (AD), and fault prognosis are crucial. This includes fault diagnosis and RUL prediction for various components such as gearboxes [204], motor bearings [205,206,207], electric capacitors [208], vehicle engines [209], and the state-of-health monitoring of batteries in electric vehicles (EVs) [210], among many others.

5.3.4 Fault diagnosis in mechanical components

Engines

The studies in [211, 212] address the detection of faults in engines, specifically focusing on simultaneous faults, where multiple single faults occur concurrently. In [211], an ensemble of Bayesian Extreme Learning Machines (ELM) is used, with the base classifiers trained on different single faults. The experiments demonstrate that the ensemble can effectively detect both simultaneous and single faults. The work in [212] uses the same classifier but differs in its signal decomposition and feature extraction techniques, and it weights each base classifier according to its performance.
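The ensemble of Bayesian ELMs in [211] is more elaborate than can be shown here, but the core ELM idea, a fixed random hidden layer with a closed-form least-squares readout, can be sketched on toy two-class fault data (the data and layer sizes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class "fault" data: 2-D features, class determined by the first feature
X = rng.normal(0.0, 1.0, size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Extreme Learning Machine: fixed random hidden layer, closed-form readout
W = rng.normal(0.0, 1.0, size=(2, 50))  # random input weights (never trained)
b = rng.normal(0.0, 1.0, size=50)       # random hidden biases
H = np.tanh(X @ W + b)                  # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights

pred = (H @ beta > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Because only the output weights are solved for, training is a single linear solve, which is the speed advantage ELM-based diagnosis methods exploit.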

In [213], the researchers introduced a hybrid methodology to detect combustion faults in a 12-cylinder 588 kW diesel engine. Their approach combines the analysis of vibration signatures using the fast Fourier transform, discrete wavelet transforms, and Artificial Neural Networks (ANNs). Researchers in [214] conducted a study specifically targeting pre-ignition faults in turbocharged petrol engines. They utilized data obtained from the Electronic Control Units (ECU) of a fleet of vehicles. The researchers introduced a deep learning architecture consisting of seven layers: four CNN layers, two LSTM layers, and one softmax layer. They also employed different subsets of features in their analysis. The proposed approach achieved an F1-score of 0.9 and outperformed stand-alone CNNs, LSTMs, linear SVMs, logistic regression, and random forests in fault identification.

In a research study conducted in [209], a classification method for engine faults based on sound measurements was developed. The researchers employed wavelet analysis to extract distinctive features from the sound data, which were then utilized to train an Artificial Neural Network (ANN). Their approach successfully classified engine data obtained from a test bed as either fault-free or belonging to one of eight different fault types. This study highlights the effectiveness of combining sound measurements with wavelet analysis and an ANN for accurate fault classification in engines.

Rotating machinery: bearings and gears

Failures in gears and bearings can result in significant system damage, financial losses, and safety-related issues. The research community places a strong emphasis on employing condition monitoring and predictive maintenance techniques for gearboxes in order to proactively avert catastrophic failures. In the following section, we will outline the relevant studies in this field.

Fault diagnosis in bearings

The research in [215] introduces a new approach for recognizing the condition of bearings using a combination of Deep Neural Networks (DNN) and multi-feature extraction. The method involves extracting various features from vibration signals in the time and frequency domain to capture their unique characteristics. To reduce redundant information, a nonlinear dimension reduction algorithm based on DL is applied. The top-layer classifier of the DNN is then utilized to determine the condition of the bearings. The effectiveness of the proposed method is evaluated using experimental data from a testbed with vibrating bearings, and a comparative analysis is conducted. The results demonstrate that the proposed method outperforms other techniques in terms of adaptive feature selection and achieving higher accuracy in bearing condition recognition.
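Multi-feature extraction from vibration signals, as used in [215], typically mixes time-domain statistics with spectral energies. Below is a minimal sketch on a simulated signal; the sampling rate, tone frequency, and band edges are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000  # sampling rate in Hz (arbitrary choice)

# Simulated vibration: a 50 Hz shaft tone buried in noise
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=fs)

# Time-domain features
rms = np.sqrt(np.mean(signal ** 2))
kurtosis = np.mean((signal - signal.mean()) ** 4) / signal.var() ** 2
crest = np.max(np.abs(signal)) / rms

# Frequency-domain features: energy in three coarse spectral bands
spectrum = np.abs(np.fft.rfft(signal)) ** 2
band_energy = [spectrum[lo:hi].sum() for lo, hi in [(0, 100), (100, 300), (300, 501)]]

features = np.array([rms, kurtosis, crest, *band_energy])
```

Feature vectors of this kind are what the DNN in [215] then compresses and classifies; the deep model additionally learns which combinations of such features are discriminative.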

Similar to the work in [215], researchers in [216] propose a method for enhancing the reliability of fault diagnosis in rotating machinery through multi-sensor data fusion, but they use Autoencoders (AEs) with Deep Belief Networks (DBNs). Their approach involves extracting features from sensor signals using time and frequency domain analysis. These extracted features are then input into Sparse Autoencoder (SAE) layers to fuse them, generating fused feature vectors as indicators of machine health. A Deep Belief Network (DBN) is trained using these indicators for classification purposes. Experimental results from bearing fault experiments on a test platform validate the effectiveness of the SAE-DBN approach, demonstrating superior performance compared to other fusion methods. The proposed method successfully identifies the operating conditions of machinery and improves fault diagnosis in rotating systems. In a similar vein, researchers in [217] employed a Deep Belief Network (DBN) for fault diagnosis using multi-source vibrational data. Their approach was compared against SVM, KNN, and a Back-propagation Neural Network (BPNN). The comparative results demonstrated that the DBN-based method not only effectively fused multisensory data but also achieved superior identification accuracy compared to the other methods.

Researchers in [218] present a DL-based approach for bearing fault diagnosis using Acoustic Emission (AE) signals. These signals are preprocessed using the short-time Fourier transform (STFT) to generate a spectrum matrix, which is then used as input to the Large Memory Storage and Retrieval (LAMSTAR) neural network for bearing fault diagnosis. Researchers in [219] present a novel approach for intelligent fault diagnosis using an unsupervised Stacked Denoising Autoencoder (SDA) and a Deep Neural Network (DNN). The approach is designed to extract meaningful information from a vast amount of unlabeled condition data collected from vibration signals of various components, such as bearings, gearboxes, and induction motors, exhibiting diverse conditions. By harnessing the power of the SDA and DNN, this approach effectively learns from the unlabeled data and achieves precise classification of machine conditions with minimal reliance on labeled data. Consequently, it offers a versatile and adaptable framework for fault diagnosis, capable of accurately classifying both familiar and novel conditions.

In the study [220], a dynamic DL algorithm incorporating incremental compensation is presented. The algorithm utilizes deep learning to extract feature modes from newly generated data and compares them with fault modes obtained from historical data. A similarity computing model is employed to dynamically adjust the weights of the merged modes. The weighted modes are then classified using the supervised SVM algorithm, and the model is fine-tuned using the BP algorithm to achieve dynamic and compensatory adjustment. Experimental results using bearing running data confirm that the proposed approach effectively enhances diagnosis accuracy and reduces time costs, thereby fulfilling the real-time requirements of equipment fault diagnosis. In another study [221], researchers proposed a stacked denoising autoencoder (SDAE), a DL method, for fault diagnosis in bearings. Similarly, researchers in [222] present an Autoencoder (AE) and Extreme Learning Machine (ELM) based diagnosis method for fault detection in bearings. By leveraging the feature extraction capability of the AE and the fast training speed of ELMs, the proposed method overcomes existing deficiencies. Comparative analysis and experimental results using a rolling element bearings dataset highlight the effectiveness of the method. It demonstrates adaptive mining of discriminative fault characteristics and achieves high-speed diagnosis, showcasing its potential for improving fault diagnosis in bearings.
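The reconstruction-error idea behind these autoencoder methods can be illustrated with a linear stand-in: project samples onto a subspace learned from healthy data and score them by how poorly they reconstruct. PCA here deliberately replaces the nonlinear autoencoders of [219, 221, 222], and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Healthy feature vectors lying near a 2-D subspace of a 10-D feature space
basis = rng.normal(size=(2, 10))
healthy = rng.normal(size=(300, 2)) @ basis + 0.05 * rng.normal(size=(300, 10))

# "Encoder": the top principal components of the healthy data
mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = Vt[:2]  # keep two components

def recon_error(x):
    """Reconstruction error after projecting onto the healthy subspace."""
    z = (x - mean) @ components.T  # encode
    x_hat = z @ components + mean  # decode
    return float(np.linalg.norm(x - x_hat))

healthy_err = recon_error(rng.normal(size=2) @ basis)  # in-subspace sample
faulty_err = recon_error(3.0 * rng.normal(size=10))    # off-subspace sample
```

An autoencoder generalizes this by making the encode and decode maps nonlinear neural networks, which is what lets it capture the curved manifolds of real vibration features.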

Fault diagnosis in gears

In their work, the authors of [223] introduce a Multimodal Deep Support Vector Classification (MDSVC) approach for fault diagnosis in gearboxes based on vibration measurements. The approach involves separating the multimodal homologous features into different modalities such as time, frequency, and wavelet. Pattern representations for each modality are learned using a Gaussian-Bernoulli Deep Boltzmann Machine (GDBM). The MDSVC model is then constructed by fusing the GDBMs using a support vector classifier. This model aims to improve fault diagnosis in gearboxes by leveraging deep learning techniques and integrating information from multiple modalities.

The study described in [224] focuses on developing a systematic approach for diagnosing defects in automobile gearboxes using sound data. ML techniques are employed to analyze the acoustic features extracted from the gearbox to detect potential faults. The research involved extracting acoustic features from the gearbox by calculating statistical coefficients such as mean and correlation coefficients. Mel-frequency cepstrum coefficients (MFCC) and their derivatives (∆MFCC and ∆∆MFCC) were used as the feature extraction technique. Several classifiers, including support vector machine (SVM), decision tree (DT), linear discriminant analysis (LDA), Naive Bayes (NB), and logistic regression (LR), were utilized to identify the most effective statistical features. The SVM and NB classifiers demonstrated the highest classification accuracy. Overall, the feature-based classifiers successfully detected the defective component within the gearbox. This research offers valuable insights into the application of machine learning techniques for gearbox defect diagnosis using sound data.

In [225], a supervised machine learning approach is employed to estimate gearbox health and classify three types of faults: chip, worn 10%, and worn 5%. The authors utilize a continuous wavelet transform for signal analysis and extract statistical features. The energy and Shannon entropy are then utilized for feature reduction, and an Artificial Neural Network (ANN) is used for classification. Their proposed feature reduction technique demonstrates superior accuracy and training time compared to using the non-reduced feature space. In a similar problem setting [226], researchers compare different feature extraction methods for classifying gearbox vibration data. They apply a continuous wavelet transform to the signals and PCA and FDA for feature extraction. The resulting feature vectors are classified using k-NN and Gaussian mixture models. In their experiments conducted on a gearbox operating at constant speed, feature extraction with FDA outperforms PCA in terms of classification performance.
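The energy and Shannon-entropy feature reduction of wavelet coefficients can be sketched with a hand-rolled Haar wavelet decomposition. The two-level depth, the Haar basis, and the simulated signal are illustrative choices, not those of [225], which uses a continuous wavelet transform.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated gearbox vibration frame (length a power of two for the Haar steps)
x = np.sin(2 * np.pi * 30 * np.arange(256) / 256) + 0.2 * rng.normal(size=256)

def haar_step(sig):
    """One level of the orthonormal Haar wavelet transform."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

# Two-level decomposition yields three sub-bands: a2, d2, d1
a1, d1 = haar_step(x)
a2, d2 = haar_step(a1)

def energy_entropy(coeffs):
    """Sub-band energy and Shannon entropy of its normalized energy distribution."""
    e = coeffs ** 2
    p = e / e.sum()
    return e.sum(), float(-np.sum(p * np.log2(p + 1e-12)))

# Reduced feature vector: (energy, entropy) per sub-band
reduced_features = np.array([v for band in (a2, d2, d1) for v in energy_entropy(band)])
```

Instead of feeding hundreds of raw wavelet coefficients to a classifier, each sub-band is summarized by two numbers, which is the reduction that sped up the ANN training reported in [225].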

In [227], researchers focused on automotive gearboxes and aimed to classify common faults, particularly partially broken teeth in planetary gearboxes. They constructed an experimental setup consisting of both healthy gearboxes and gearboxes with injected faults. Various sensors, including those for motor current, voltage, torque, vibration, and rotational speed, were utilized in the setup. The collected sensor data underwent preprocessing and segmentation. To perform the classification, the authors introduced a hybrid Deep Belief Network (DBN) architecture and compared its performance against several other methods such as classic DBN, CNN, SVM, autoencoder, and LSTM. The experimental results clearly indicated the superior performance of the hybrid DBN, showcasing its effectiveness in detecting and classifying faults in automotive gearboxes.

Authors in [228] demonstrated the effectiveness of deep learning, specifically CNN, in analyzing thermal imaging data for fault diagnosis in worm gears (WGs). The thermal image-based CNN model showed superior performance compared to traditional vibration- and sound-based models, highlighting the potential of thermal imaging in WG condition monitoring. The study [229] focuses on intelligent fault diagnosis of rotating machinery by addressing the deficiencies in existing methods. The authors proposed an approach that uses DNNs to automatically extract relevant information from raw data and approximate complex non-linear functions. The effectiveness of the method is demonstrated using datasets from rolling element bearings and planetary gearboxes, showcasing superior diagnosis accuracy compared to existing methods. By overcoming the limitations of manual feature extraction and shallow ANN architectures, the proposed method offers a promising approach for prompt and accurate fault diagnosis.

Fault diagnosis in aircraft

In the study [3], a framework based on LSTM-AE is proposed for fault detection and classification in complex systems, with a focus on aircraft systems. The framework leverages the power of deep learning to learn from raw time series data collected from heterogeneous sensors, without the need for manually engineered features. The proposed approach utilizes a reconstruction model trained on nominal time series samples representing healthy system behavior. By analyzing the reconstruction errors across sensors, faults and anomalies can be effectively detected and distinguished. The health index based on reconstruction errors serves as a measure of the system's health state. The framework is evaluated using real-world data from a commercial aircraft system, and the results demonstrate its effectiveness in accurately identifying faults and capturing important characteristics for fault classification. This approach has the potential to assist technicians in fault isolation during line troubleshooting, leading to more efficient maintenance and improved system reliability.
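The reconstruction-error health index described above can be sketched as follows (numpy only, with made-up baseline statistics and error values; the real framework derives these from an LSTM-AE trained on nominal data):

```python
import numpy as np

def health_index(errors, baseline_mean, baseline_std):
    """Aggregate per-sensor reconstruction errors into one health index:
    z-score each sensor against its error statistics on healthy data,
    then average. Higher values indicate a less healthy system."""
    z = (errors - baseline_mean) / (baseline_std + 1e-12)
    return z.mean(axis=-1)

# Hypothetical error statistics from training on nominal (healthy) data
baseline_mean = np.array([0.10, 0.05, 0.20])
baseline_std = np.array([0.02, 0.01, 0.05])

healthy = np.array([0.11, 0.05, 0.21])   # errors close to the baseline
faulty = np.array([0.30, 0.12, 0.55])    # one flight with large errors
print(health_index(healthy, baseline_mean, baseline_std))  # ≈ 0.23
print(health_index(faulty, baseline_mean, baseline_std))   # ≈ 8.0
```

Looking at which sensors contribute the largest z-scores is what enables the fault isolation step the authors describe.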

The research in [230] focuses on monitoring the cooling unit system in wide-body aircraft, utilizing a semi-supervised anomaly detection framework. Despite the challenges posed by uncertain and noisy data, the proposed approach, which relies on auto-encoder architectures, shows promise in condition monitoring. The generated anomaly scores effectively detect periods of faults, while the health indicator metric accurately identifies faulty flights that were missed by the onboard detection systems. This approach has the potential to complement existing fault detection systems and provide efficient diagnostics for maintenance teams. However, the study acknowledges the difficulties in training and evaluating the models due to uncertain ground truth and the need for further refinement. Future research directions include fine-tuning hyperparameters, exploring alternative architectures, and enhancing the understanding of anomaly trends based on the system's underlying physics. Despite its limitations, the study highlights the potential and value of the proposed approach in monitoring the health of aircraft fleets.

5.3.5 Prognosis in mechanical components

Prognosis, or Remaining Useful Life (RUL) estimation, is a critical concept in automotive engineering, particularly in the context of predictive maintenance (PdM). RUL estimation is crucial for the maintenance of automotive components such as electric vehicle batteries, air compressors, and motor rotary bearings. Various data-driven techniques, such as machine learning and predictive modeling, are employed to analyze historical data, sensor readings, and operating conditions to predict the RUL of these components. By continuously monitoring and updating RUL estimates, automotive manufacturers and maintenance teams can make informed decisions regarding component replacement or maintenance actions, ensuring optimal performance, reliability, and safety of vehicle systems [231]. The following section outlines the relevant studies in this field.

Air compressor

The study [232] focused on forecasting the RUL of air compressor systems in trucks and buses. The primary goal was to improve the scheduling of repair shop visits by comparing the predicted RUL against planned service appointments. Data were drawn from two principal sources: vehicle usage patterns documented during repair shop visits and maintenance records kept by the repair establishments. Employing a supervised learning technique with a random forest classifier, the researchers successfully identified four occurrences of air compressor malfunction. Notably, their methodology emphasized the importance of incorporating feature selection methods.

Rotary machinery: bearings and gears

Prognosis in bearings

In [233], a prediction framework for estimating the RUL of bearings is proposed, leveraging deep autoencoders and DNN. The research findings demonstrate that the proposed method, which integrates a unique eigenvector derived from joint time–frequency–wavelet features and a deep autoencoder for efficient feature compression, significantly improves the efficiency of bearing RUL prediction. By harnessing the power of deep learning techniques, the method effectively captures the degradation patterns of bearings and preserves crucial information without adding complexity to the DNN model.

In [205], the focus was on predicting the RUL of an electric motor's rotary bearing using vibration data. The researchers compared the performance of ordinary least squares, feasible generalized least squares, and Support Vector Regression (SVR) algorithms. Despite being more computationally expensive, the SVR algorithm outperformed the other methods in accurately predicting the RUL of the rotary bearing. Also working with the vibrational response of bearings, a data-driven approach is proposed in [234] that utilizes LSTM, a type of RNN, to assess the degradation of bearings in rotating machinery. The aim is to effectively exploit fault propagation information and accurately predict the RUL of the bearings.
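The regression-style RUL formulations above can be illustrated with the simple baseline these ML models generalize: fit a degradation trend to a condition indicator and extrapolate to a failure threshold. A minimal sketch with a linear fit on synthetic wear data (the threshold and values are illustrative):

```python
import numpy as np

def rul_linear(times, indicator, failure_threshold):
    """Fit a linear degradation trend to a condition indicator and
    extrapolate to the failure threshold; returns the estimated time
    remaining after the last sample."""
    slope, intercept = np.polyfit(times, indicator, 1)
    if slope <= 0:
        return float('inf')  # no upward degradation trend detected
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Hypothetical wear indicator rising linearly toward a threshold of 1.0
t = np.arange(0.0, 50.0, 5.0)
wear = 0.01 * t + 0.1
rul = rul_linear(t, wear, 1.0)
print(rul)  # 45.0: threshold reached at t = 90, last sample at t = 45
```

SVR, LSTM, and the other models discussed here replace the linear trend with learned, non-linear degradation dynamics.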

In [235], a Deep Belief Network (DBN) combined with the Weibull distribution was utilized to evaluate the health condition of bearings. Vibration measurements were used as features, and the bearings were classified into five degradation states. In another study [239], researchers introduced a technique combining a Restricted Boltzmann Machine (RBM) with PCA for feature extraction, aimed at accurate RUL prediction for bearings. After extracting the features, a similarity-based method was employed for RUL prediction. In [233], the authors also utilized vibration data and RBMs for predicting the RUL of bearings, proposing stacked RBMs to enhance prediction accuracy.
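For context, the Weibull distribution used in [235] models a component's survival probability over time; its reliability function is straightforward to evaluate (the scale and shape values below are illustrative, not from the study):

```python
import numpy as np

def weibull_reliability(t, scale, shape):
    """Weibull reliability function R(t) = exp(-(t/scale)**shape):
    the probability that a component survives beyond time t."""
    t = np.asarray(t, dtype=float)
    return np.exp(-(t / scale) ** shape)

t = np.array([0.0, 50.0, 100.0, 200.0])
r = weibull_reliability(t, scale=100.0, shape=2.0)
print(r)  # [1.     0.7788 0.3679 0.0183] (rounded)
```

A shape parameter above 1 models wear-out behavior (failure rate increasing with age), which is why the Weibull family pairs naturally with bearing degradation states.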

Prognosis in gears

In [204], researchers proposed a general vehicle remote diagnosis platform and demonstrated its application on gearbox data. They used engine speed, wheel speed, and gearbox temperature as input features and employed the least squares-support vector machine (LS-SVM) for classification. The LS-SVM was trained to classify the gearbox into different states such as “NOK”, “10% RUL”, “40% RUL”, and “OK” based on expert knowledge. They achieved a classification accuracy of 93%, surpassing the accuracy of the k-nearest neighbors (k-NN) algorithm at 82%. This study falls between condition-based predictive maintenance (PdM) and RUL prediction, as the RUL estimation is based on crisp classification labels.

Turbofan engine

The study presented in [236] focuses on accurate fault localization and estimation of the RUL of airplane engines. The authors propose the utilization of LSTM, a neural network variant known for its effectiveness in handling complex operations, hybrid faults, and high levels of noise. The proposed approach is evaluated using a dataset of aircraft turbofan engines provided by NASA for health monitoring. Different modifications of the LSTM network are compared, and the experimental results demonstrate the superiority of the standard LSTM model over other variations. These findings highlight the efficacy of LSTM in improving fault diagnosis and predicting the remaining lifespan of airplane engines. Similarly, in [237] the focus is on estimating the RUL of turbofan engines without making assumptions about the degradation pattern. The authors propose an unsupervised machine learning technique called LSTM-ED (Long Short-Term Memory based Encoder-Decoder). This approach leverages multi-sensor time-series data to derive a Health Index (HI) for the system. By training LSTM-ED to reconstruct the time series associated with a healthy state, the reconstruction error is used to calculate the HI, which is then employed for RUL estimation. The effectiveness of this approach is evaluated using publicly available datasets for turbofan engines, as well as a real-world industry dataset from a pulverizer mill. The results demonstrate a significant correlation between the LSTM-ED based HI and maintenance costs, showcasing the potential of this approach for accurate RUL estimation. Researchers in [238] introduced a novel approach for estimating RUL by combining evolutionary algorithms with DBNs. The effectiveness of the proposed method was evaluated using NASA's C-MAPSS aeroengine dataset, which consists of four sub-datasets.

5.3.6 AI-driven predictive maintenance in the electrical/electronic (EE) sector

AI-driven predictive maintenance plays a crucial role in ensuring the reliability and efficiency of electrical and electronic components. Various components such as photovoltaic systems [239], microchips [240], transformers [241], automatic washing machines [234], and DC link capacitors [242] are susceptible to anomalies and failures that can result in significant maintenance costs and product quality issues. To mitigate these risks, predictive maintenance systems are employed to monitor and detect potential failures in advance. For instance, in electric transformers, early detection of failures caused by overheating and overloading is essential; monitoring the deterioration of oil and tracking the insulation systems are key aspects of these systems. Additionally, the ageing process of DC link capacitors [242] and failure detection in battery energy storage systems [243] are critical areas of focus. Anomaly warning systems and remote monitoring of household devices further contribute to minimizing losses caused by failure. By leveraging AI technologies, predictive maintenance in electrical and electronic components offers proactive measures for identifying and addressing potential issues, resulting in improved performance, reduced costs, and enhanced reliability.

5.3.7 Fault diagnosis in electrical/electronic (EE) machines or components

Electric vehicle batteries

Several research studies have focused on assessing and predicting the State of Health (SoH) of electric vehicle batteries. To estimate the SoH of EV batteries, Pan et al. [244] used an Extreme Learning Machine (ELM), focusing on capacity degradation. The main aspect of their study was the identification of battery health indicators derived from the internal resistance of the battery; identifying these indicators is analogous to feature extraction and selection for training an ML model. They then trained an ELM on the selected features. To assess the effectiveness of their approach, they compiled a dataset of batteries exhibiting different levels of capacity loss. Comparing the ELM with a standard ANN, the researchers observed that the ELM offered advantages in ease of use, training speed, and accuracy for their intended application.
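Since ELM recurs throughout this section, a minimal sketch of its core idea may help: the hidden-layer weights are random and fixed, and only the output weights are solved, in closed form, by least squares. This toy regressor (numpy only, on synthetic data, not the battery features of [244]) illustrates why ELM training is fast:

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine regressor: the hidden layer is
    random and fixed; only the output weights are trained, in closed
    form, by least squares."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)                      # random feature map
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic regression task standing in for battery-health features
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
model = ELM(n_hidden=100).fit(X, y)
err = np.mean((model.predict(X) - y) ** 2)
print(err)  # small training error; no iterative optimization was needed
```

The single least-squares solve, replacing backpropagation entirely, accounts for the training-speed advantage over a standard ANN reported in [244].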

The study conducted in [245] focused on diagnosing battery states under real-world driving patterns. The authors employed a data-driven approach using measurable data from electric vehicles and utilized an LSTM model to monitor the SoH of the batteries. To carry out their analysis, they simulated the operation of over 70 battery cells using a battery cycler, which facilitated charging and discharging with varying currents. In another study [246], researchers developed an approach for detecting external short circuits in batteries. They devised two physical models for this purpose, compared them through a genetic algorithm, and selected the better one by parameter estimation. Leveraging their domain knowledge, they identified specific features that could indicate the presence of the fault. Finally, they trained a Random Forest (RF) classifier and tested the approach on real batteries in a laboratory setting.

Turbines

The study [247] proposes a methodology for wind turbine maintenance using ML techniques, specifically ANN, and data from SCADA (Supervisory Control and Data Acquisition) systems. The aim is to develop models that characterize the behavior of key wind turbine components such as the gearbox and generator, to predict operating anomalies without the need for additional sensors, thereby reducing Operation and Maintenance (O&M) costs in wind farms. The authors of [248] employed an autoencoder (AE), a deep learning approach, to improve fault diagnosis accuracy in a tidal turbine's generator using vibration data under various loading conditions. They compared this approach with feature-based methods such as k-NN and SVM; the proposed AE approach demonstrated superior performance, enhancing the accuracy of fault diagnosis in the turbine's generator. For anomaly detection in gas turbines, researchers in [249] used extreme learning machines (ELMs) as a one-class classifier and showed that ELMs outperform SVM.

Motors and generators

The study in [250] proposes a novel architecture in which ultra-low power sensors serve as an effective platform for running compressed RNNs for condition monitoring of induction motors. In [251], an artificial feedforward backpropagation neural network approach is proposed for real-time monitoring and fault diagnosis of low-power hub motors. The approach utilizes measurements of seven main system parameters and is trained on a dataset of 4160 samples. The results demonstrate promising outcomes and the potential for application to other types of hub motors. To implement the developed model, an Arduino Due microcontroller board is used, and a prototype of a mobile real-time monitoring and fault diagnosis system for the hub motor is designed and manufactured.

The article [252] presents a novel approach using an adaptive neuro-fuzzy inference system (ANFIS) for monitoring the condition and diagnosing faults in automotive generators. Unlike traditional fault indication systems, which provide limited information about normal or faulty conditions, the proposed system aims to classify various fault conditions. It utilizes discrete wavelet analysis for feature extraction, reducing the complexity of the feature vectors, and employs artificial neural networks for classification. The ANFIS is specifically used for classifying and comparing synthetic fault types on an experimental engine platform operating under different conditions. Experimental results demonstrate the potential effectiveness of the proposed system in monitoring and diagnosing faults in automotive generators.
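The discrete wavelet feature extraction step can be illustrated with the simplest wavelet, the Haar basis: each level splits the signal into low-pass (approximation) and high-pass (detail) halves, and the detail energies form a compact feature vector. This is a sketch in the spirit of [252], not the study's exact features:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_features(x, levels=3):
    """Energy of the detail coefficients at each decomposition level --
    a compact feature vector in the spirit of the DWT features in [252]."""
    a, out = x, []
    for _ in range(levels):
        a, d = haar_dwt(a)
        out.append(np.sum(d ** 2))
    return np.array(out)

sig = np.sin(2 * np.pi * 8 * np.linspace(0.0, 1.0, 256, endpoint=False))
feats = wavelet_features(sig)
print(feats)  # most energy concentrates in the band matching the tone
```

Because the transform is orthonormal, energy is conserved across levels, which makes per-level energies well-behaved, low-dimensional classifier inputs.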

The authors of [253] presented a method utilizing a sparse autoencoder, an unsupervised DNN technique, for fault diagnosis in induction motors. They used vibration measurements, collected under six different conditions at a sampling rate of 20 kHz, as input data for training their model. The proposed approach was compared with other fault diagnostic machine learning models such as SVM, logistic regression (LogiReg), and NN. The experimental results demonstrated that the sparse autoencoder (SAE) outperformed the other models in fault diagnosis and classification accuracy for induction motors.

Some researchers have also used CNNs for fault diagnosis of induction motors, as in [254] and [255]. In [254], researchers introduced a method for autonomously learning discriminative features for induction motor fault diagnosis. Their approach combined a BPNN with a feed-forward convolutional pooling architecture. The results demonstrated notable performance improvements compared to other methods: the approach successfully acquired discriminative patterns and derived robust, invariant features, enhancing fault diagnosis accuracy. In [255], a 1-D CNN is used for fault diagnosis of induction motors. The authors introduced a motor condition monitoring and early fault-detection system utilizing 1-D convolutional neural networks. Their approach offers both speed and accuracy by integrating feature extraction and classification within a single learning framework. By directly processing the raw signal, the need for a separate feature extraction algorithm is eliminated, resulting in a system that is more efficient in terms of speed and hardware requirements.

Transformers

The authors in [241] explore a PdM strategy for power transformers using the ELM algorithm. Their focus was on fault diagnosis and predictive algorithms based on ELM, specifically for power transformers in a smart grid environment. Comparative studies of different fault pattern prediction algorithms were conducted to validate the effectiveness of the ELM-based fault prediction algorithm. The findings suggest that ELM can provide significant technical support for predictive maintenance strategies for power transformers. In [256], a variant of the autoencoder (AE) named the continuous sparse autoencoder (CSAE) is used for fault diagnosis in transformers.

Photovoltaic panels and transistors

To improve PdM planning for photovoltaic (PV) systems, a method for anomaly detection is introduced in [239]. The approach utilizes an ANN model to forecast AC power production by utilizing solar irradiance and PV panel temperature measurements. Training the model involves using a dataset from the specific PV system under monitoring. Real-time trend data is then compared to the model’s predictions, and the discrepancies are examined to identify anomalies and generate daily PdM alerts, thus mitigating the risk of future failures.
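The alert-generation step described above reduces to comparing measured output against the model's forecast; a minimal sketch with hypothetical daily values (the threshold and numbers are illustrative, not from [239]):

```python
import numpy as np

def pdm_alerts(predicted, measured, threshold=0.2):
    """Flag samples where measured AC power falls short of the model's
    forecast by more than `threshold` (relative shortfall)."""
    shortfall = (predicted - measured) / (np.abs(predicted) + 1e-12)
    return shortfall > threshold

predicted = np.array([5.0, 5.2, 4.8, 5.1])   # ANN forecast of AC power, kW
measured = np.array([4.9, 5.1, 3.2, 5.0])    # day 3 underperforms markedly
alerts = pdm_alerts(predicted, measured)
print(alerts)  # only day 3 raises a PdM alert
```

A relative (rather than absolute) threshold keeps the alert logic meaningful across days with very different irradiance levels.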

In the research conducted in [257], the authors introduced a CNN-based system known as SCDD (Segmentation of Cells and Defect Detection) for the visual inspection of defects in Electroluminescence (EL) images of single-crystalline silicon solar panels used in the photovoltaic industry. The system employed ResNet50 as a classifier and YOLOv4 as a detector specifically for identifying defects in the panels. By utilizing deep CNNs, this approach achieved remarkably accurate defect detection rates, even when training data was limited.

The study conducted by Pan et al. [258] focused on failure analysis in nanoscale field-effect transistors (FETs) within the semiconductor industry. They proposed two machine learning (ML) models, STR (Structure Transfer) and SER (Structure Expansion Reduction), along with an ensemble model called MIX. These models were employed for defect detection and failure analysis on two defect datasets, FinFET and GAA-FET. The ML models exhibited notable accuracy in identifying device failures, thus facilitating acceleration of the production process.

Machine tool wear

In the research work in [236], the focus was on estimating the RUL without making assumptions about the degradation pattern. The authors propose an unsupervised machine learning technique called LSTM-ED (Long Short-Term Memory based Encoder-Decoder). This approach leverages multi-sensor time-series data to derive a health index (HI) for the system. By training LSTM-ED to reconstruct the time series associated with a healthy state, the reconstruction error is used to calculate the HI, which is then employed for RUL estimation. The effectiveness of this approach is evaluated using publicly available datasets for milling machines, as well as a real-world industry dataset from a pulverizer mill.

The studies in [259] and [260] introduced LSTM-based approaches for monitoring high-speed milling machine cutters. In [260], a real-life tool wear test was conducted to predict actual tool wear using raw sensory data. Basic and deep LSTMs were employed, and the experimental results demonstrated that both models, particularly the deep LSTMs, outperformed several state-of-the-art baseline methods. On the other hand, in [259], researchers evaluated their proposed LSTM model using publicly available datasets, including C-MAPSS data set [261], PHM08 Challenge Data Set [262, 267], and Milling Data Set [263]. The experiments revealed that the LSTM model outperformed other approaches, exhibiting superior performance in remaining useful life (RUL) estimation.

The studies [264,265,266] aim to address the limitations of model-based prognostics in manufacturing systems by exploring data-driven methods, particularly RF. The focus of these studies is on predicting tool wear in milling operations and the development of prognostic models for mechanical failures and RUL estimation in manufacturing systems in general. The authors compare the performance of RFs with ANNs and SVR using an experimental dataset. The results highlight that RFs outperform ANNs and SVR in terms of prediction accuracy, suggesting the potential of RFs for machinery prognostics and maintenance management in smart manufacturing systems. Furthermore, the authors proposed a RFs-based prognostic method for predicting tool wear and compared it with feed-forward back propagation (FFBP) ANNs and SVR. They utilize an extensive dataset from 315 milling tests. The findings demonstrate that RFs exhibit superior prediction accuracy compared to FFBP ANNs with a single hidden layer and SVR. This research showcases the effectiveness of RFs as a data-driven approach for machinery prognostics in complex manufacturing systems.
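Random forests build many trees on bootstrap samples and average their predictions. The toy stand-in below (numpy only, depth-1 trees on a single feature, so deliberately not a full RF implementation) shows the bagging mechanics these studies rely on, applied to a hypothetical wear indicator:

```python
import numpy as np

class StumpBagger:
    """Toy bootstrap-aggregated regressor built from depth-1 trees
    (stumps) on a single feature -- a numpy-only stand-in illustrating
    the bagging mechanics behind random forest regressors."""

    def __init__(self, n_estimators=50, seed=0):
        self.n = n_estimators
        self.rng = np.random.default_rng(seed)

    def _fit_stump(self, x, y):
        # best single-threshold split minimizing the total squared error
        best = (np.inf, 0.0, y.mean(), y.mean())
        for thr in np.unique(x)[:-1]:
            left, right = y[x <= thr], y[x > thr]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, thr, left.mean(), right.mean())
        return best[1:]                          # (threshold, left mean, right mean)

    def fit(self, x, y):
        self.stumps = []
        for _ in range(self.n):
            idx = self.rng.integers(0, len(x), len(x))   # bootstrap sample
            self.stumps.append(self._fit_stump(x[idx], y[idx]))
        return self

    def predict(self, x):
        preds = [np.where(x <= thr, lo, hi) for thr, lo, hi in self.stumps]
        return np.mean(preds, axis=0)            # average over the ensemble

x = np.linspace(0.0, 1.0, 100)                   # e.g. normalized cutting time
y = 0.3 * x + 0.05 * np.sin(20 * x)              # hypothetical wear indicator
model = StumpBagger().fit(x, y)
pred = model.predict(np.array([0.1, 0.9]))
print(pred)  # lower predicted wear early in tool life, higher near the end
```

A real RF additionally grows deep trees and randomizes the features considered at each split, which is what gives it the accuracy advantage over FFBP ANNs and SVR reported in these studies.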

In [231], researchers developed an affordable cyber-physical system capable of measuring temperature and vibration in CNC turning center machining processes. The system utilizes predictive models to detect the rejection of machined parts based on quality thresholds, employing the Recursive Partitioning and Regression Tree (RPART) model as the AI technique.

In [267], the concept of Device Electrocardiogram (DECG) is presented, along with an algorithm that utilizes Deep Denoising Autoencoder (DDAE) and regression operation for predicting the remaining useful life of industrial equipment. Experimental validation and comparison with traditional factory information systems are conducted to demonstrate the feasibility and effectiveness of the algorithm. The integration of DDAE and the proposed algorithm shows promise for sophisticated industrial applications and has the potential to contribute to the realization of Industry 4.0.

5.3.8 Prognosis in electrical and electronic (EE) components

Belt drives

In [268], researchers propose a methodology for continuously diagnosing faults and detecting anomalies in belt drives using vibration analysis and an unsupervised deep learning algorithm, the autoencoder. The objective of the study is to enhance maintenance planning and minimize expenses by effectively identifying anomalies and estimating the remaining lifespan of belt drive components. Researchers in [269] introduced a data-driven approach aimed at the prognosis of belt tension and the monitoring of splice conditions in conveyor belts. Their approach utilizes an ANN for the prognostic task. The authors employed the power consumption of the belt drive and load information as input features for training the ANN model. By leveraging these features, the model learns to predict belt tension and effectively monitor the condition of the splices between belt pieces in conveyors.

EV batteries

In [270], in order to precisely identify battery capacity regeneration and forecast the RUL of a lithium-ion battery, researchers propose a novel expectation maximization-unscented particle filter-Wilcoxon rank sum test (EM-UPF-W) method. Leveraging the expectation-maximization (EM) algorithm, the article builds a dynamic deterioration model for a single battery based on the unscented particle filter (UPF), which adaptively estimates the noise variables. Furthermore, in [271] a method for forecasting the RUL of lithium-ion batteries is presented. The technique is based on the improved sparrow search algorithm (ISSA), which optimizes both the long- and short-term time-series network (LSTNet) and variational mode decomposition (VMD).

In [272], researchers applied LSTM networks to determine the SoH of lithium-ion batteries in electric vehicles. They emphasized the importance of extracting relevant features related to battery health before applying machine learning methods; this preprocessing step played a crucial role in achieving accurate SoH estimation with LSTM networks. In [210], a prediction method was developed for estimating the voltage and lifetime of lead-acid batteries in electric vehicles, employing a CNN and a standard ANN. Using data recorded from 10 lead-acid batteries over two years, different voltage measurements served as features. The CNN outperformed the ANN in predicting voltage and lifetime for the lead-acid batteries.
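The SoH quantity these studies estimate is commonly defined as measured capacity over rated capacity; a minimal coulomb-counting sketch (illustrative numbers, not data from the cited works):

```python
import numpy as np

def soh_from_capacity(discharge_current_a, dt_s, rated_capacity_ah):
    """State of Health as the ratio of measured discharge capacity,
    obtained by coulomb counting, to the cell's rated capacity."""
    capacity_ah = np.sum(discharge_current_a) * dt_s / 3600.0
    return capacity_ah / rated_capacity_ah

# Hypothetical full discharge: constant 2 A for 50 minutes on a 2.0 Ah cell
current = np.full(3000, 2.0)   # one current sample per second
soh = soh_from_capacity(current, dt_s=1.0, rated_capacity_ah=2.0)
print(soh)  # ≈ 0.833: the cell has lost about 17% of its rated capacity
```

Because full reference discharges are rarely available in the field, the LSTM and CNN models above learn to infer this ratio from partial charge/discharge measurements instead.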

5.3.9 Summary

In this section, we delved into the realm of AI techniques utilized in the maintenance of both mechanical and electrical/electronic sectors. Throughout our investigation, we identified a total of 36 distinct AI techniques employed either individually or in combination for fault diagnosis, root cause analysis, and prognosis of components and machines. The maintenance phase of mechanical and electrical machines and components plays a critical role in ensuring their optimal performance, with fault diagnosis and prognosis (RUL prediction) being a key aspect of this phase. Over the years, researchers have explored various AI techniques for fault diagnosis and prognosis. Prior to 2012, prominent AI techniques employed in this domain included principal component analysis [273], fuzzy logic [274,275,276,277,278,279,280], support vector machines (SVM) [281], support vector regressor (SVR) [282], genetic algorithms (GA) [212, 283], and neural networks (NN) [284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299].

However, recent advancements have witnessed a notable transition towards more advanced AI techniques in fault diagnosis. Researchers have increasingly embraced cutting-edge approaches such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Deep Neural Networks (DNN), Sparse Autoencoders (SAE), Deep Belief Networks (DBN), Large Memory Storage and Retrieval (LAMSTAR) neural networks, Stacked Denoising Autoencoders (SDA), and Extreme Learning Machines (ELM). These contemporary techniques leverage deep learning algorithms and feature extraction capabilities to enhance the accuracy and efficiency of fault diagnosis. Figure 18, presented as a pie chart, showcases the prevalence of AI techniques in these sectors. The mechanical sector emerges as the primary beneficiary, harnessing AI techniques extensively to enhance prognostic capabilities and predictive maintenance of mechanical equipment and machines.

Fig. 18
figure 18

AI techniques and their prevalence in mechanical and EE sectors

This widespread adoption of AI signifies its profound impact in ensuring optimal performance, reliability, and extended longevity of mechanical assets. In the electrical/electronic sector, AI techniques also play a significant role, contributing to the robustness and efficiency of maintenance practices. The bar chart in Fig. 19 offers valuable insights into the adoption of diverse AI techniques during the maintenance phase.

Fig. 19
figure 19

Popular AI techniques in maintenance phase

The y-axis provides a comprehensive list of AI techniques, while the x-axis quantifies the number of publications associated with each technique. Notably, Support Vector Machine (SVM), Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), modified Autoencoder (AE), Principal Component Analysis (PCA), Autoencoder (AE), Convolutional Neural Network (CNN), Genetic Algorithm (GA), Extreme Learning Machine (ELM), and Deep Belief Network (DBN) have garnered considerable attention, with 19, 17, 11, 8, 4, 7, 7, 6, and 5 publications, respectively. The modified AE category encompasses various AE variants, namely the Convolutional Autoencoder (CAE), Fully Connected Autoencoder (FAE), Sparse Autoencoder (SAE), Stacked Denoising Autoencoder (SDA), Sparse Denoising Autoencoder (SDAE), Deep Denoising Autoencoder (DDAE), and Continuous Sparse Autoencoder (CSAE). Each variant introduces specific modifications and constraints to the standard autoencoder architecture, rendering it better suited to particular tasks and data types in ML and DL. The active exploration and implementation of these AI techniques in the maintenance phase exemplify their significant impact on enhancing reliability, predictive maintenance, and fault diagnosis across engineering domains. Among the AI techniques explored, Multimodal Deep Support Vector Classification (MDSVC), Least Squares Support Vector Machine (LS-SVM), Naive Bayes (NB), Backpropagation Neural Network (BPNN), encoder-decoders, Feed Forward Backpropagation Neural Network (FFBPNN), transfer learning, and Feed Forward Backpropagation (FFBP) stand out as the less frequently utilized approaches, each with only one associated publication, except FFBP, which has two.

The bar chart in Fig. 19 underscores the diversity of techniques employed to address specific challenges and elevate diagnostic capabilities in both mechanical and electrical/electronic engineering domains. While certain AI techniques garner significant research attention, it is equally important to acknowledge the potential of less widely used methods, as they may offer unique solutions to particular diagnostic scenarios. Overall, the chart reinforces the profound impact of AI in advancing fault diagnosis and prognosis and in the continuous evolution of maintenance practices across engineering sectors. A wide range of mechanical and electrical components and machines, encompassing engines, bearings, gears and gearboxes, aircraft, pumps, motors, generators, shafts, alternators, transformers, EV batteries, and more, has been studied in the context of fault diagnosis and prognosis.

The integration of advanced AI techniques has yielded notable improvements in fault identification and classification, augmenting the overall effectiveness of the maintenance phase for mechanical machines and components.

The pie chart in Fig. 20 offers a comprehensive visual representation of the utilization of AI techniques in the fields of mechanical and electrical engineering for fault diagnosis and prognosis. It showcases the percentage distribution of AI's involvement in each sector, providing valuable insights into the significance of AI in enhancing maintenance practices and ensuring the reliable operation of mechanical components and electrical machinery. In the mechanical sector, AI techniques play a pivotal role in fault diagnosis, covering a substantial area of 48%. This demonstrates the widespread adoption of AI in identifying and rectifying issues within mechanical components and machinery. Fault diagnosis, facilitated by AI, enables engineers and maintenance experts to swiftly detect potential problems, thereby preventing breakdowns and optimizing the performance of mechanical systems.

Fig. 20

Incorporation of AI for fault diagnosis and prognosis in mechanical and EE sector

Additionally, the pie chart reveals that AI is also heavily leveraged for prognostics (remaining useful life prediction) in the mechanical sector, accounting for 15% of the sector area. Prognostics using AI techniques allow engineers to assess the health and predict the remaining lifespan of mechanical components, empowering them to undertake timely maintenance actions and avoid unexpected failures. This predictive capability enhances equipment reliability and reduces downtime, leading to improved productivity and cost-efficiency in the mechanical sector. In the electrical and electronic (EE) sector, AI's impact is equally noteworthy, with a significant 32% sector area attributed to fault diagnosis. The implementation of AI techniques in fault diagnosis for electrical components and machinery enables rapid identification of issues, aiding in prompt troubleshooting and maintenance interventions. This fosters optimal performance and safety in electrical systems, ensuring uninterrupted operations and safeguarding against potential hazards. Moreover, AI's role in prognostics in the EE sector is highlighted, constituting 5% of the sector area.

Overall, the pie chart illustrates the significant contribution of AI techniques in both mechanical and electrical sectors, addressing fault diagnosis and prognosis challenges in mechanical components and electrical machinery. This underscores AI's transformative impact on maintenance practices in engineering fields, enabling industries to embrace predictive maintenance strategies and maximize the efficiency, reliability, and safety of their mechanical and electrical assets. Table 12 provides a list of the AI techniques used by researchers for fault diagnosis and prognosis in mechanical and electrical/electronic machines and components.

Table 12 AI techniques used for machine/component fault diagnosis and prognosis

The pie of pie chart depicted in Fig. 21 offers a comprehensive overview of the AI techniques utilized in the mechanical sector for both fault diagnosis and prognosis. The larger pie chart focuses on fault diagnosis and shows the distribution of AI techniques employed for this purpose.

Fig. 21

Popular AI techniques used for fault diagnosis and prognosis in mechanical component

Among these techniques, Support Vector Machine (SVM) emerges as the most widely used, accounting for a significant 10% share of the larger pie. This finding is well-founded, as SVM is often preferred for classification tasks, which are prevalent in fault diagnosis applications. Following closely behind are Artificial Neural Networks (ANN) and modified Autoencoder (AE), each covering 6% of the pie, indicating their notable role in fault diagnosis tasks. The smaller pie chart, representing prognosis, reveals that Long Short-Term Memory (LSTM) is the prominent AI technique, occupying 4% of the pie. LSTM's widespread usage in prognosis can be attributed to its capability in handling sequential data and time-series analysis, making it suitable for predicting the remaining useful life and performance of mechanical components and machinery.
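SVM's suitability for such classification tasks can be illustrated with a minimal, hedged sketch. The two vibration features (RMS and kurtosis) and all values below are illustrative assumptions, not data from any of the surveyed studies:

```python
# Hedged sketch: SVM fault classification on synthetic vibration features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
healthy = rng.normal([1.0, 3.0], 0.2, size=(100, 2))   # low RMS, low kurtosis
faulty = rng.normal([2.5, 6.0], 0.4, size=(100, 2))    # elevated values
X = np.vstack([healthy, faulty])
y = np.array([0] * 100 + [1] * 100)                    # 0 = healthy, 1 = faulty

# Standardizing before the RBF kernel is the usual practice for SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
labels = clf.predict([[1.0, 3.1], [2.6, 5.9]])         # one sample per class
```

On such well-separated synthetic classes the classifier recovers the correct labels; real fault data are noisier and usually require feature engineering from raw vibration signals first.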

Overall, the pie of pie chart provides valuable insights into the AI techniques' distribution and significance in both fault diagnosis and prognosis within the mechanical sector. SVM, ANN, and modified AE shine as the key players in fault diagnosis, while LSTM takes center stage in the domain of prognosis. This comprehensive analysis of AI techniques' share in mechanical sector applications highlights their pivotal role in enhancing maintenance practices and ensuring the optimal performance and reliability of mechanical components and machinery.

The pie of pie chart in Fig. 22 provides valuable insights into the utilization of AI techniques in fault diagnosis and prognosis within the electrical and electronic (EE) sector. The larger pie chart is dedicated to fault diagnosis, while the smaller pie chart focuses on prognosis, illustrating the distribution of AI techniques employed for these distinct purposes.

Fig. 22

Popular AI techniques used for fault diagnosis and prognosis in EE component

Artificial Neural Networks (ANN) stand out as a widely studied and versatile technique, holding a significant 16% share in the larger pie chart for fault diagnosis. ANN, inspired by the human brain's neural networks, has proven to be highly effective in analyzing electrical data and detecting faults in electrical machines and components. It is evident that researchers find ANN to be a valuable tool for addressing both fault diagnosis and prognosis challenges.

Other prominent AI techniques in fault diagnosis, each constituting 6% of the larger pie, include Genetic Algorithms (GA), Long Short-Term Memory (LSTM), Extreme Learning Machine (ELM), Convolutional Neural Networks (CNN), and modified Autoencoder (AE). These techniques have garnered considerable attention and are widely explored by researchers in the EE sector. On the other hand, certain AI techniques such as Recurrent Neural Networks (RNN), K-Nearest Neighbors (K-NN), and Logistic Regression (LogiReg) are less popular in fault diagnosis, each holding an equal share of 2% in the larger pie chart. While they may not be as widely employed, their presence signifies their relevance and potential for specific fault diagnosis applications.

Turning to prognosis, the widely used AI techniques are ANN, Support Vector Machine (SVM), and Support Vector Regression (SVR), each constituting a share of 6% in the smaller pie chart. This highlights the significance of ANN, SVM, and SVR in predicting the remaining useful life and performance of electrical and electronic components and machinery. Other techniques, such as LSTM, CNN, GA, and AE, are also utilized for prognosis, though to a lesser extent. The diverse array of AI techniques employed for both fault diagnosis and prognosis in the EE sector showcases the dedication of researchers in continuously exploring and advancing AI's potential in enhancing maintenance practices and ensuring the optimal performance and reliability of electrical and electronic systems.
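Since SVR's role in prognosis is named in these papers but rarely demonstrated, a hedged sketch may help. The degradation curve, the failure behaviour, and all constants below are synthetic assumptions for illustration only:

```python
# Hedged sketch: SVR for remaining-useful-life (RUL) estimation from a
# scalar health indicator.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
cycles = np.arange(200)
# Assumed degradation profile: health decays quadratically with cycles
health = 1.0 - (cycles / 200.0) ** 2 + rng.normal(0, 0.01, 200)
rul = 200 - cycles                         # ground-truth RUL per cycle

model = SVR(kernel="rbf", C=100.0)
model.fit(health.reshape(-1, 1), rul)      # learn health index -> RUL
est = model.predict([[0.75]])              # asset currently at health 0.75
```

Under the assumed profile, a health index of 0.75 corresponds to roughly 100 remaining cycles, and the regressor's estimate lands near that value; in practice the health indicator itself would be derived from sensor data.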

5.4 AI at recycle/re-use/retrofit phase

As the world grapples with the increasing environmental challenges posed by industrial equipment, the concepts of retrofit, recycling, and reuse have emerged as vital strategies for sustainable resource management. These approaches focus on extending the lifespan of equipment, minimizing waste generation, and optimizing resource utilization. In this context, advancements in Artificial Intelligence (AI) have paved the way for innovative solutions to enhance the effectiveness and efficiency of retrofit, recycling, and reuse practices in the industrial sector [308].

When it comes to the recycling and reuse of industrial equipment or machines, the process often falls under the broader concept of retrofitting, which involves modifying or upgrading existing equipment to enhance its functionality, improve efficiency, or extend its lifespan, thereby reducing the need for complete replacement [309]. Recycling can be seen as a form of retrofitting because it recovers valuable materials from outdated or non-functional machinery, which can then be upgraded or modified to meet the desired specifications. Reuse is closely connected to retrofitting as well: it extends the life cycle of industrial machinery and equipment by refurbishing, repairing, or repurposing it for different applications, which aligns directly with retrofitting's objective of enhancing and extending the functionality of industrial machinery.

The recycling, reuse, and retrofitting of industrial equipment present significant challenges and complexities in achieving efficient and sustainable outcomes. These processes involve the transformation of existing equipment to extend its lifecycle, reduce waste, and enhance performance. To overcome these challenges, advanced technologies such as artificial intelligence (AI) have emerged as crucial tools [310]. As mentioned earlier, recycling and reuse of industrial machinery or equipment fall under the broader concept of retrofitting. For this review, only articles that are related to equipment retrofitting are included.

5.4.1 Retrofitting of industrial machinery or component

The evolution of retrofitting from traditional approaches to smart retrofitting can be attributed to the industrial evolution towards Industry 4.0. With the advent of Industry 4.0, there has been a paradigm shift in manufacturing, characterized by the digitalization, connectivity, and automation of industrial processes. This transformation has necessitated a more holistic and intelligent approach to equipment upgrade and optimization. Traditional retrofitting, which primarily focused on hardware integration, was insufficient to fully exploit the potential of this industrial evolution [311]. Hence the emergence of smart retrofitting, which aims to transform traditional systems (machines/components) into intelligent, networked entities through the incorporation of advanced technologies, in alignment with the principles of cyber-physical systems.

Smart retrofitting places greater emphasis on the software side and on advanced technologies such as AI and ML. The integration of digital technologies and connectivity enables retrofitted systems to become part of a larger networked ecosystem, facilitating real-time monitoring, data exchange, and intelligent control. In this way, smart retrofitting aligns with the principles of Industry 4.0 and cyber-physical systems and enables companies to achieve added value, sustainability, and improved performance by harnessing advanced software, combined with human expertise, to optimize industrial equipment and processes [312,313,314].

During the retrofit stage, existing industrial equipment is modified or upgraded to improve its performance, energy efficiency, or compliance with new standards. Retrofit actions may involve replacing outdated components with more efficient ones, implementing advanced control systems, or integrating smart technologies to optimize operations. This process allows companies to extend the lifespan of their equipment, reduce energy consumption, and minimize the need for new equipment manufacturing [6].

AI can facilitate the analysis of data related with equipment performance, enabling predictive maintenance and early detection of potential failures. AI-based systems can monitor equipment in real-time, identify patterns or anomalies, and provide insights that optimize operational efficiency, reduce downtime, and extend equipment lifespan. Furthermore, AI can assist in the selection and integration of energy-efficient components, control systems, and automation technologies during the retrofitting process, ensuring optimal resource utilization and reduced environmental impact [6].

The research on AI applications for smart retrofitting reveals an interesting trend in the available literature. While numerous articles discuss frameworks, procedures, and approaches that incorporate AI and ML assistance for smart retrofitting, there is a lack of clear information regarding the specific AI or ML techniques employed. The publications generally provide an overview of the benefits and potential of AI in smart retrofitting but often do not delve into the specifics of the techniques used. A few publications do mention the use of CNN, ANN, and their variants in smart retrofitting, particularly for monitoring and anomaly detection in predictive maintenance. These applications are often seen in the realm of the digital twin (DT) or its more advanced form, the digital triplet [6], where AI and ML techniques are integrated to enhance the effectiveness of monitoring and maintenance processes. The focus of these publications appears to be on integrating AI and ML techniques into the DT or digital triplet framework rather than detailing the specific techniques employed.
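Although the surveyed works rarely include implementations, the ANN-based anomaly detection they reference can be sketched in miniature: a shallow (linear) autoencoder learns the normal correlation between two sensor channels, and a high reconstruction error flags an anomaly. The sensor names, values, and the 3x threshold below are illustrative assumptions, not details from any cited study:

```python
# Hedged sketch: autoencoder-style anomaly detection for a retrofitted machine.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
temp = rng.normal(60.0, 2.0, 500)                 # spindle temperature (C)
vib = 0.05 * temp + rng.normal(0, 0.05, 500)      # correlated vibration (mm/s)
X = np.column_stack([temp, vib])
mu, sd = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sd                                # standardize both channels

# One-unit bottleneck: the network must learn the dominant correlation
ae = MLPRegressor(hidden_layer_sizes=(1,), activation="identity",
                  solver="lbfgs", max_iter=2000, random_state=0)
ae.fit(Xn, Xn)                                    # train to reconstruct input

def recon_error(sample):
    z = (np.asarray(sample) - mu) / sd
    return float(np.mean((ae.predict(z.reshape(1, -1)) - z) ** 2))

threshold = 3 * float(np.mean([recon_error(x) for x in X[:50]]))
normal_err = recon_error([61.0, 3.05])   # vibration consistent with temp
faulty_err = recon_error([61.0, 9.0])    # vibration far above expectation
```

A digital-twin monitoring loop would evaluate `recon_error` on streaming sensor data and raise an alert whenever the error exceeds the learned threshold.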

The following studies clearly state which AI/ML techniques were used and for what purpose. The study conducted in [315] introduces smart retrofitting as a transformative approach aligned with Industry 4.0. It presents a thermal design methodology that employs machine learning techniques to capture and estimate overall thermal characteristics and to facilitate precise selection of cooling components for embedded systems. The unique contribution of this study lies in the incorporation of a two-layer neural network (NN) to predict the heat transfer parameters associated with lumped conductance and capacitance and to support lumped-parameter simulations, providing fresh insights into thermal management within smart retrofitting of embedded components.

The study presented in [316] aims to tackle the challenges of retrofitting sensors in existing production and logistics systems by implementing an embedded vision high-bay shelf monitoring system. Its primary objective is to contribute to the knowledge base of sensor systems in the context of Industry 4.0 and establish design principles for retrofitting of visual sensors. The study successfully develops an embedded vision system that integrates computer vision, convolutional neural networks (CNN), k-means clustering, and OPC UA for a cost-effective retrofitting solution. It effectively demonstrates accurate workpiece detection and color recognition using fundamental computer vision algorithms and machine learning techniques. However, to validate the design principles and assess the system's performance, further evaluation in real-world industrial environments is necessary. The study's findings are valuable and offer practical insights that can be applied to a wide range of retrofitting projects involving visual sensors.

The authors in [317] introduce an innovative method for retrofitting traditional analog water meters to meet the requirements of Industry 4.0. The approach utilizes Deep Learning (DL) algorithms, specifically a convolutional neural network (CNN) model and transfer learning, to enhance digit detection in IoT-based analog water meters. The DL model is trained on a diverse dataset of images collected under various environmental conditions. In related work [318], the same authors employed random forest (RF), a machine learning (ML) algorithm, for water meter reading estimation and compared it comprehensively with the DL approach of [317]. The comparative analysis demonstrates that the DL model achieves superior accuracy in digit detection and in extracting generalized features. This research provides valuable insights into retrofitting analog water meters for Industry 4.0 by harnessing deep learning and transfer learning techniques.

The study in [6] presents a novel digital triplet hierarchy for retrofitting conventional drilling machines by integrating digital twin technology, artificial intelligence (AI), and human awareness. The hierarchy encompasses complex decision-making involving machine learning and human ingenuity, control of behavior predictions for the physical system, real-time observation of system behavior and critical parameters (including the status of the chuck safety guard, pullies, and belts), anomaly detection using artificial neural networks (ANN), and visualization and emulation of virtual features. The successful implementation of this hierarchy demonstrates its efficacy in replicating real-time functionality and reducing complexity through the synergistic interaction between digital twins, intelligent activities, and human awareness. The performance parameters of the digital triplet paradigm for retrofitting are validated through rigorous evaluation, including appraisal, anomaly analysis, and real-time monitoring. The study underscores the feasibility and effectiveness of the proposed hierarchy in achieving real-time functionality while streamlining complexity. It also highlights the potential for enhancing convergence, facilitating data interaction, and harnessing human expertise to advance the field of digital retrofitting. The findings underscore the significance of seamless human–machine integration and advocate for the adoption of user-centric intelligent solutions in future applications.

In [313], a framework is proposed for retrofitting old process plants to meet the requirements of Industry 4.0. The approach integrates digital twin techniques and deep learning algorithms, specifically neural networks (NN), to improve safety and maintainability. A real case study on a two-phase mixing plant demonstrates the practicality of the framework. The findings highlight the successful transformation of the old plant into a smart plant capable of effective communication with operators. However, the study acknowledges challenges associated with multidisciplinarity and the iterative process of defining new variables.

Some studies illustrate that researchers are utilizing smart retrofitting techniques to predict the Remaining Useful Life (RUL) of machinery components and enhance maintenance operations. For example, the researchers in [319] retrofit a legacy drilling machine with inexpensive electronic components and present a DT-based maintenance solution for the machine. They use statistical methods, namely an Exponential Degradation Model (EDM), for RUL prediction; their focus is on the prognosis of the machine. On the other hand, the studies in [320, 321] use ML algorithms for estimating tool wear in retrofitted CNC machines; both use an ANN model for predicting machine tool wear.
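The exponential-degradation route taken in [319] can be illustrated with a toy calculation: fit h(t) = a * exp(b * t) to a health indicator and extrapolate to a failure threshold. The trend, the noise-free data, and the threshold of 2.0 below are assumptions for demonstration, not values from the study:

```python
# Illustrative sketch of exponential-degradation RUL estimation.
import numpy as np

t = np.arange(50)                          # operating hours observed so far
h = 0.1 * np.exp(0.05 * t)                 # assumed degradation indicator h(t)

b, log_a = np.polyfit(t, np.log(h), 1)     # log-linear least-squares fit
a = np.exp(log_a)

threshold = 2.0                            # failure declared once h >= 2.0
t_fail = np.log(threshold / a) / b         # solve a * exp(b * t) = threshold
rul = t_fail - t[-1]                       # remaining useful life (hours)
```

With these noiseless inputs the fit recovers a = 0.1 and b = 0.05 exactly, giving t_fail = ln(20)/0.05, about 59.9 hours, and hence roughly 10.9 hours of remaining life; with real, noisy indicators the fit is typically updated recursively as new readings arrive.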

Several studies have focused on frameworks, procedures, and approaches for smart retrofitting, including the incorporation of AI and ML techniques and cyber-physical systems. However, there is a lack of explicit information regarding the specific implementation of AI or ML techniques and their integration at different layers. For example, the study in [311] focuses on improving operational efficiency and reducing costs through the retrofitting of a bending machine. The study in [327] demonstrates how adaptive manufacturing principles can be applied through retrofitting to enable faster reconfiguration of a roller conveyor. The authors in [322] showcase the sustainability benefits of retrofitting by improving energy efficiency in a desktop tool machine. The study in [330] explores the enhancement of energy efficiency in a mechanical arm through Cyber-Physical Production System (CPPS) based retrofitting. The study in [323] proposes an approach utilizing simulation tools and cloud technology to evaluate energy consumption in a manufacturing system. The study in [324] emphasizes the economic advantages and sustainability benefits of digital twin (DT) technology. The study in [325] addresses operator safety as a crucial aspect of retrofitting by implementing a Cyber-Physical System (CPS) in a steel mill plant.

Lastly, the study in [313] presents a general framework for retrofitting old process plants, focusing on the application of Industry 4.0 paradigms. The framework is applied to a two-phase mixing plant, resulting in improved safety and maintainability conditions. The study identifies challenges in the retrofitting process, including multidisciplinary aspects, recursion in defining new variables, and information flow management. It highlights the lack of official protocols for transitioning old plants to Industry 4.0 and emphasizes the need to address security concerns. The study utilizes supervised machine learning algorithms for anomaly detection, but the specific techniques are not specified. The findings contribute to the understanding of retrofitting in process industries and suggest future directions for implementing the proposed framework in manufacturing plants.

5.4.2 Summary

Through a thorough examination of the pertinent literature focusing on the recycling, reuse, and retrofitting phase, it becomes apparent that these three components are intricately interwoven when applied to industrial machinery and equipment. In essence, recycling and reuse are indispensable elements encompassed within the broader concept of smart retrofitting, forming a cohesive strategy to optimize the value, lifespan, and sustainability of such equipment. By incorporating recycling and reuse practices into retrofitting processes, industries aim to achieve resource efficiency, minimize waste generation, and significantly reduce the environmental impact associated with equipment disposal.

The synergistic integration of recycling and reuse strategies within retrofitting operations reflects a holistic and forward-thinking approach towards sustainable industrial practices. This comprehensive approach not only prolongs the life of machinery and equipment but also fosters a profound commitment to environmentally responsible operations. By recognizing the intrinsic relationship between recycling, reuse, and retrofitting, industries can embrace a paradigm shift towards a more circular economy, where valuable resources are conserved and utilized to their fullest potential. In doing so, businesses contribute to a greener and more sustainable future while optimizing their operational efficiency and minimizing their ecological footprint.

In our comprehensive analysis, we found that while the existing literature on retrofitting often discusses the potential of integrating AI and ML, it lacks detailed insights into the specific techniques employed. However, through meticulous research, we identified a subset of publications that shed light on the utilization of AI techniques in smart retrofitting, as showcased in the bar chart presented in Fig. 23.

Fig. 23

Popular AI techniques applied in smart retrofitting

This bar chart (Fig. 23) provides a visual representation of the adoption of various AI techniques in the context of smart retrofitting, with each technique listed on the y-axis and the number of associated publications listed on the x-axis. As depicted in the bar chart, it becomes evident that among the various AI techniques, Artificial Neural Networks (ANN) stand out with the highest adoption, featuring in 4 publications. This signifies that ANN is the most widely used technique in smart retrofitting, showcasing its effectiveness and popularity in this domain. On the other hand, other AI techniques, such as Convolutional Neural Networks (CNN), Random Forest (RF), transfer learning, and k-means clustering, have 2, 1, 1, and 1 publication, respectively. While these techniques may have relatively lower adoption rates compared to ANN, they still demonstrate their significance and are actively explored in the realm of smart retrofitting.

Overall, the bar chart provides valuable insights into the prominence of AI techniques in smart retrofitting and highlights the diversity of approaches utilized in this field, offering a clear and concise overview of the specific AI techniques being employed based on the number of associated publications. Moreover, the insightful pie chart depicted in Fig. 24 elegantly showcases the distribution of AI techniques within the realm of smart retrofitting. Each segment within the chart represents a distinct AI technique employed in smart retrofitting, with the size of each segment directly proportional to the adoption rate of that specific technique. This visually appealing representation offers a comprehensive and easily understandable overview of the popularity and prevalence of various AI techniques in the context of smart retrofitting.

Fig. 24

Distribution of AI techniques in smart retrofitting of machines and components

By presenting the data in a pie chart format, researchers and practitioners can quickly grasp which AI techniques enjoy higher adoption rates and are more widely used in this field. The chart serves as a valuable visual aid, making it effortless to discern the relative importance and influence of each AI technique based on the number of publications dedicated to its utilization in smart retrofitting. This concise and informative display empowers decision-makers to make well-informed choices when selecting AI techniques for their smart retrofitting endeavors, ultimately contributing to more effective and sustainable retrofitting practices in industrial settings.

The comprehensive range of AI techniques employed in the field of smart retrofitting is thoughtfully summarized in Table 13. These techniques find their applications within the context of digital twin (DT) or digital triplet approaches, where the seamless integration of AI and ML enhances the efficacy of monitoring and maintenance processes.

Table 13 AI techniques used in smart retrofitting of machine/equipment

Incorporating AI and ML in digital twin frameworks enables a more profound understanding of industrial equipment's behavior, leading to improved predictive maintenance capabilities. By leveraging the power of AI in conjunction with digital twins, real-time data analytics and predictive models can be utilized to detect potential faults and deviations early on, allowing for timely corrective actions and reducing downtime.

Furthermore, the fusion of AI and ML in digital triplets, which combine data from physical assets, digital twins, and external data sources, empowers a holistic approach to smart retrofitting. This innovative approach enhances decision-making processes and supports the development of sophisticated maintenance strategies, ensuring optimal performance and resource efficiency. The application of AI and ML techniques within digital twin and digital triplet frameworks exemplifies the ongoing advancement and transformative potential of smart retrofitting in industrial settings. As industries increasingly embrace AI-driven solutions, the future of smart retrofitting promises even greater efficiency, sustainability, and operational excellence. Additionally, certain studies [315, 327, 328] focus on the convergence of retrofitting with cyber-physical systems (CPS) and digital twin technologies. Retrofitting facilitates the integration of CPS and DT, with digital twins providing the necessary virtual representation and analysis capabilities for effective CPS implementation. This combination of concepts enables improved asset management, enhanced operational efficiency, and more informed decision-making across various industrial domains.

6 Harnessing the power of AI in the equipment lifecycle: Advantages and challenges

6.1 Advantages of incorporating AI at the design phase

AI techniques offer a range of significant advantages during the product design phase. One of the key benefits is enhanced design optimization, as these techniques can analyze large volumes of data and identify optimal design parameters. This leads to improved product performance and increased efficiency. Another advantage is the ability of AI/ML algorithms to facilitate faster iterations. By rapidly analyzing design iterations and providing real-time feedback, these techniques enable designers to accelerate the prototyping and iteration cycles. This ultimately reduces the time to market for new products.

In addition, AI techniques contribute to improved decision-making during the design phase. By analyzing complex design data and identifying patterns, these techniques provide valuable insights that support informed decision-making. Designers can leverage this information to make more accurate and effective design choices, leading to better overall product outcomes. Furthermore, AI enables design automation by automating repetitive design tasks. Tasks such as generating design alternatives or performing simulations can be streamlined, reducing the manual effort required by designers. This automation frees up designers' time and allows them to focus on more creative and complex aspects of the design process.

Moreover, AI techniques are effective in optimizing designs for multiple objectives. By considering factors such as performance, cost, and sustainability, these techniques help designers find the best trade-offs and achieve more balanced design solutions. This holistic optimization approach ensures that products meet various criteria and deliver value across different dimensions. Overall, the integration of AI in the product design phase offers enhanced design optimization, faster iterations, improved decision-making, design automation, and optimization for multiple objectives. These advantages empower designers to create innovative and high-quality products while optimizing the design process for efficiency and effectiveness.
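The multi-objective trade-off described above can be made concrete with a weighted-sum scoring sketch. The candidate designs, their attribute values, and the weights below are illustrative assumptions; real use would first normalize measured attributes to a common [0, 1] scale:

```python
# Hedged sketch: balancing performance, cost, and sustainability objectives.
candidates = {
    # name: (performance, cost, sustainability), each scaled to [0, 1]
    "design_a": (0.9, 0.8, 0.4),
    "design_b": (0.7, 0.4, 0.8),
    "design_c": (0.6, 0.3, 0.85),
}
weights = (0.5, 0.3, 0.2)  # relative importance of the three objectives

def score(perf, cost, sust):
    # cost is a "lower is better" objective, so it contributes as (1 - cost)
    return weights[0] * perf + weights[1] * (1 - cost) + weights[2] * sust

best = max(candidates, key=lambda name: score(*candidates[name]))
```

A weighted sum is the simplest scalarization; when trade-off curves are non-convex, evolutionary methods such as NSGA-II are commonly used instead to recover the full Pareto front.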

6.2 Advantages of incorporating AI at the manufacturing phase

AI techniques offer numerous advantages during the product manufacturing phase, encompassing various manufacturing processes such as additive, subtractive, and hybrid manufacturing, as well as supply chain management, logistics, and inventory management. In terms of manufacturing processes, AI techniques can enhance efficiency and productivity. By analyzing vast amounts of data, these techniques can optimize process parameters, reduce material waste, and improve overall manufacturing performance.

For additive manufacturing, AI algorithms can assist in optimizing printing parameters, improving print quality, and reducing defects. In subtractive manufacturing, AI can optimize tool paths, and machining parameters, and enhance surface finish. In hybrid manufacturing processes, AI can help in integrating and optimizing the combination of different manufacturing techniques. Furthermore, AI techniques support quality improvement by detecting defects, monitoring process variations, and performing real-time quality checks. By analyzing sensor data, image recognition, and other data sources, these techniques can identify deviations from quality standards and trigger immediate corrective actions, ensuring consistent product quality.

In terms of supply chain management, AI techniques provide valuable insights and optimization opportunities. By analyzing supply chain data, these techniques can optimize inventory levels, improve demand forecasting accuracy, and enable more efficient order fulfillment. AI/ML algorithms can identify patterns and correlations in complex supply chain data, enabling better decision-making and enhancing overall supply chain performance. Moreover, AI techniques assist in logistics and inventory management. These techniques can optimize warehouse operations, including inventory allocation, picking, and packing processes, and route optimization for transportation. By analyzing historical data, demand patterns, and real-time information, AI algorithms can improve logistics efficiency, reduce transportation costs, and enhance customer satisfaction.
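A minimal sketch of how a demand forecast feeds inventory management: a moving-average forecast plus safety stock yields a reorder point. The demand series, the 5-day lead time, and the 1.65 service factor (roughly a 95% service level) are illustrative assumptions:

```python
# Hedged sketch: reorder point from forecast demand plus safety stock.
import statistics

demand_history = [42, 38, 45, 40, 44, 39, 41, 43]   # units sold per day
lead_time_days = 5
z_service = 1.65                                     # ~95% service level

avg_daily = statistics.fmean(demand_history)         # simple forecast
sd_daily = statistics.stdev(demand_history)          # demand variability
safety_stock = z_service * sd_daily * lead_time_days ** 0.5
reorder_point = avg_daily * lead_time_days + safety_stock
```

An ML forecaster would replace the moving average with a model that captures trend and seasonality, but the downstream reorder logic stays the same.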

Overall, the advantages of AI techniques in the product manufacturing phase include enhanced efficiency, predictive maintenance, quality improvement, optimization of manufacturing processes, and improved supply chain management, logistics, and inventory management. These techniques have the potential to revolutionize the manufacturing industry by optimizing operations, reducing costs, and improving overall product quality and customer satisfaction.

6.3 Advantages of incorporating AI at maintenance phase

AI/ML techniques offer significant advantages during the maintenance phase, including the areas of predictive maintenance, prognosis (remaining useful life (RUL) estimation), and product health management. These techniques enable organizations to go beyond reactive maintenance and move towards a proactive approach. Predictive maintenance is enabled by analyzing vast amounts of data, including sensor readings and historical records, to identify patterns and trends that indicate potential equipment failures. By detecting issues in advance, maintenance teams can schedule proactive maintenance, reducing unplanned downtime and optimizing equipment performance. Additionally, AI facilitates condition monitoring by continuously analyzing real-time sensor data, allowing for early detection of anomalies or deviations from normal operating conditions. This early detection enables prompt intervention, preventing minor issues from developing into major failures and minimizing the impact on production.
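
A minimal condition-monitoring rule of the kind described above can be sketched as a rolling z-score check on a sensor stream. The vibration data, window size, and threshold here are illustrative assumptions rather than values from the literature; real deployments would use multivariate models over many channels.

```python
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings lying more than `threshold` standard deviations from the
    mean of the preceding `window` samples (a basic condition-monitoring rule)."""
    anomalies = []
    for i in range(window, len(readings)):
        past = readings[i - window:i]
        mean = statistics.mean(past)
        stdev = statistics.stdev(past) or 1e-9  # guard against a flat window
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Vibration amplitudes (mm/s) with an injected spike at index 8
vibration = [0.51, 0.49, 0.52, 0.50, 0.48, 0.51, 0.50, 0.49, 1.90, 0.50]
print(detect_anomalies(vibration))
```

Comparing each reading only to its recent past lets the rule adapt to slow drift in normal operating conditions instead of relying on one fixed limit.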

Fault diagnosis and root cause analysis are greatly enhanced through AI techniques. By leveraging complex algorithms and machine learning models, maintenance teams can analyze diverse data sources and identify the underlying causes of equipment malfunctions. This knowledge helps in prioritizing maintenance actions, determining the most effective repair strategies, and reducing the time required for troubleshooting. Furthermore, AI techniques contribute to optimizing maintenance planning by considering various factors such as equipment usage patterns, environmental conditions, and historical data. This optimization ensures that maintenance activities are conducted efficiently and at the most appropriate times, minimizing disruptions to production and reducing maintenance costs.
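
To sketch the fault-diagnosis step in its simplest form, the rules below map boolean symptom flags to a likely root cause. The symptoms, causes, and rules are invented stand-ins for what a learned model (for example a decision tree trained on labelled failure records) would encode.

```python
def diagnose(symptoms):
    """Return a likely root cause from boolean symptom flags. The rules are
    hand-written placeholders for a learned decision tree."""
    if symptoms["vibration_high"] and symptoms["temperature_high"]:
        return "bearing wear"
    if symptoms["vibration_high"]:
        return "shaft misalignment"
    if symptoms["current_spikes"]:
        return "winding insulation fault"
    return "no fault identified"

observed = {"vibration_high": True, "temperature_high": True, "current_spikes": False}
print(diagnose(observed))
```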

Furthermore, AI techniques contribute to product health management by continuously monitoring and analyzing data related to the performance, reliability, and degradation of equipment. Through advanced analytics and machine learning algorithms, organizations can gain insights into the overall health of their products. This enables them to identify potential issues, assess the impact of those issues on performance and reliability, and take proactive measures to prevent failures or minimize their consequences. By monitoring key indicators and utilizing AI algorithms, organizations can make informed decisions regarding the repair, replacement, or reconfiguration of components, ensuring the highest levels of product quality and customer satisfaction.

In addition to equipment/machine health management, AI techniques support remaining useful life estimation. By analyzing data from sensors, historical records, and other relevant sources, AI models can predict the expected lifespan of components or equipment. This information allows organizations to plan maintenance activities more efficiently, replacing or repairing components before they reach the end of their useful life. This proactive approach minimizes unexpected failures, reduces downtime, and optimizes maintenance costs.
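
One simple way to sketch RUL estimation, assuming a roughly linear degradation trend, is to fit a least-squares line to a health indicator and extrapolate it to a failure threshold. The bearing data and threshold below are hypothetical; practical prognostics models (LSTM, particle filters) handle nonlinear degradation and uncertainty.

```python
def estimate_rul(times, health, failure_threshold):
    """Fit a least-squares line to a degrading health indicator and
    extrapolate to the failure threshold to estimate remaining useful life."""
    n = len(times)
    mean_t = sum(times) / n
    mean_h = sum(health) / n
    slope = sum((t - mean_t) * (h - mean_h) for t, h in zip(times, health)) / \
            sum((t - mean_t) ** 2 for t in times)
    intercept = mean_h - slope * mean_t
    t_fail = (failure_threshold - intercept) / slope  # time when line hits threshold
    return t_fail - times[-1]                         # hours left from last observation

# Hypothetical bearing health index degrading from 1.0 toward failure at 0.2
hours = [0, 100, 200, 300, 400]
health = [1.00, 0.92, 0.85, 0.76, 0.68]
print(round(estimate_rul(hours, health, 0.2), 1))
```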

Remote monitoring and assistance capabilities offered by AI/ML play a crucial role in modern maintenance practices. With AI-powered systems, maintenance teams can remotely monitor equipment performance, diagnose issues, and provide guidance to on-site technicians. This remote support reduces the need for physical presence, enhances response times, and enables faster problem resolution. Moreover, AI techniques contribute to knowledge capture and transfer within maintenance teams. By analyzing historical maintenance data and expert knowledge, AI models can extract valuable insights that are used to build knowledge repositories. These repositories enhance troubleshooting processes, support decision-making, and facilitate training programs, ensuring that maintenance teams have access to comprehensive and up-to-date information.

Overall, AI techniques revolutionize the maintenance phase by enabling predictive maintenance; facilitating remaining useful life estimation, prognostic health management, and condition monitoring; enhancing fault diagnosis; optimizing maintenance planning; providing remote monitoring and assistance; and promoting knowledge capture and transfer. By harnessing the power of AI, organizations can achieve higher equipment reliability, reduce downtime, extend the lifespan of equipment, enhance maintenance efficiency, and ultimately improve overall operational performance.

6.4 Advantages of incorporating AI at recycle/reuse/retrofit phase

AI techniques offer several advantages during the recycle/reuse/retrofit phase of equipment or machines. These techniques can contribute to the sustainable management of assets, optimize resource utilization, and support decision-making processes. One key advantage is the ability of AI techniques to facilitate the identification and sorting of recyclable materials. By analyzing sensor data, images, or other inputs, AI models can accurately classify and separate different types of materials, making the recycling process more efficient and reducing waste. This improves the overall sustainability of the recycling phase by ensuring that valuable resources are properly recovered and reused.
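
The material-sorting idea can be illustrated with a tiny k-nearest-neighbours classifier over hand-picked features. The feature values, labels, and query sample are invented for the example; real sorting systems classify spectral or image data with deep networks.

```python
import math

def knn_classify(samples, labels, query, k=3):
    """Classify a sample by majority vote of its k nearest neighbours
    in feature space (Euclidean distance)."""
    dists = sorted(
        (math.dist(s, query), lbl) for s, lbl in zip(samples, labels)
    )
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)

# Illustrative features per item: (density g/cm^3, reflectance 0-1)
samples = [(0.95, 0.30), (0.91, 0.35), (2.70, 0.85), (2.65, 0.80), (7.85, 0.60)]
labels = ["plastic", "plastic", "aluminium", "aluminium", "steel"]
print(knn_classify(samples, labels, (2.60, 0.82)))
```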

Additionally, AI techniques can assist in the identification of components or parts that are suitable for reuse. By analyzing historical data, performance metrics, and condition monitoring data, AI models can assess the quality and reliability of components, determining if they can be reused in other equipment or machines. This enables organizations to extend the lifespan of components, reduce waste, and optimize resource utilization. Moreover, AI techniques can support retrofitting processes by analyzing data from various sources to identify opportunities for equipment or machine upgrades. By considering factors such as performance metrics, energy efficiency, and compatibility, AI models can recommend specific retrofits or modifications that can enhance the performance, functionality, or sustainability of equipment or machines. This leads to improved productivity, reduced environmental impact, and enhanced cost-effectiveness.
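
A retrofit-recommendation step could be sketched as multi-criteria weighted scoring. The candidate upgrades, criteria, and weights below are entirely hypothetical placeholders for expert- or model-derived scores.

```python
def rank_retrofits(options, weights):
    """Score each retrofit option as a weighted sum of normalised criteria
    and return the options sorted best-first."""
    def score(opt):
        return sum(weights[c] * opt[c] for c in weights)
    return sorted(options, key=score, reverse=True)

# Hypothetical candidate upgrades, each criterion scored 0-1
options = [
    {"name": "VFD motor drive", "energy_saving": 0.8, "compatibility": 0.9, "cost_benefit": 0.7},
    {"name": "IoT sensor kit",  "energy_saving": 0.3, "compatibility": 0.95, "cost_benefit": 0.9},
    {"name": "New spindle",     "energy_saving": 0.5, "compatibility": 0.6, "cost_benefit": 0.4},
]
weights = {"energy_saving": 0.5, "compatibility": 0.2, "cost_benefit": 0.3}
print([o["name"] for o in rank_retrofits(options, weights)])
```

In a data-driven setting the per-criterion scores would come from the performance, energy-efficiency, and compatibility analyses described above rather than from fixed constants.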

Furthermore, AI techniques can support decision-making processes during the recycle/reuse/retrofit phase. By analyzing vast amounts of data, including historical records, performance metrics, and environmental impact assessments, AI models can provide insights and recommendations for optimizing recycling, reuse, or retrofitting processes. This enables organizations to make informed decisions about resource allocation, prioritization of actions, and selection of sustainable practices. Overall, AI techniques contribute to the efficient and sustainable management of equipment or machine assets during the recycle/reuse/retrofit phase. They enable accurate sorting of recyclable materials, identification of components suitable for reuse, recommendation of retrofitting opportunities, and support for decision-making processes. By leveraging these techniques, organizations can achieve higher levels of resource efficiency, reduce waste, and enhance the sustainability of their operations.

6.5 Challenges of incorporating AI through industrial equipment lifecycle

Integrating AI into the industrial equipment lifecycle offers substantial benefits. However, it is vital to recognize and address the potential challenges and drawbacks that may arise. The following are some of the most prevalent challenges faced when incorporating AI throughout the industrial equipment lifecycle:

  1. Initial investment and financial barriers: One of the main challenges is the initial investment required. Implementing AI technologies involves substantial upfront costs for acquiring infrastructure, hardware, and software, and for providing employee training. This financial commitment can pose a barrier, particularly for smaller organizations with limited resources [76].

  2. Integration challenges: Integrating AI systems with existing equipment and infrastructure can involve technical complexities and compatibility issues, necessitating careful planning and expertise to ensure seamless integration. Organizations must also consider potential disruptions and downtime during the integration process [326].

  3. Data privacy and security risks: Handling sensitive equipment data and ensuring its security is another important consideration when implementing AI in the industrial equipment lifecycle. Robust cybersecurity measures must be implemented to safeguard the data from unauthorized access, breaches, or misuse. Compliance with data protection regulations and industry standards is vital to maintain data privacy and integrity [327, 328].

  4. Skilled workforce requirement: Operating and maintaining AI systems requires a skilled workforce with expertise in AI technologies. Organizations need to invest in training their employees or hire specialized personnel to operate and maintain AI systems effectively. Meeting these skill and expertise requirements can be challenging, particularly in industries facing a shortage of AI talent [4].

  5. Ethical considerations: Ethical considerations also come into play when using AI in the industrial equipment lifecycle. Job displacement is a concern, as AI technologies may automate tasks previously performed by humans, potentially leading to workforce reductions. Moreover, ethical concerns arise regarding potential biases in decision-making that may emerge from AI algorithms. It is essential for organizations to address these considerations, ensuring fairness and transparency in the utilization of AI in equipment operations [13].

In summary, while implementing AI in the industrial equipment lifecycle offers significant advantages, it is essential to carefully assess the challenges and potential drawbacks. The initial investment, integration challenges, data privacy and security risks, skills and expertise requirements, and ethical considerations must all be carefully addressed to ensure successful implementation and mitigate potential drawbacks. With proper planning and management, organizations can leverage the benefits of AI while effectively managing the associated challenges.

7 Conclusion

This comprehensive literature review provides valuable insights into the extensive applications of AI techniques throughout the life cycle of industrial equipment. The impact of AI is far-reaching, transforming traditional approaches and unlocking new possibilities for manufacturers, maintenance teams, and sustainability efforts, from the initial design phase to the recycle/reuse/retrofit phase.

Within the design phase of industrial equipment, the integration of AI techniques has led to a profound transformation. Our research delved into three pivotal stages: design inspiration and concept generation, shape synthesis, and topology optimization. Among the 18 identified AI techniques, Generative Adversarial Networks (GAN) and Deep Learning (DL) emerged as the most prevalent. AI methods such as Artificial Neural Networks (ANN), Genetic Algorithms (GA), and various GAN models played key roles in design inspiration and concept generation. Additionally, the shape synthesis stage showcased the adaptability of Autoencoders (AE/VAE), Deep Convolutional Networks (DCN), and GAN techniques in generating innovative designs. Meanwhile, the topology optimization stage relied heavily on algorithms from the GAN family, along with style transfer and AEs. The incorporation of AI techniques empowered designers, providing them with inspiration, diverse concepts, and the automation of complex tasks like shape synthesis and topology optimization. By harnessing the capabilities of AI, designers can push the boundaries of creativity and innovation, leading to more efficient and groundbreaking designs throughout the industrial equipment life cycle.

The exploration of AI techniques in the manufacturing phase has uncovered a myriad of diverse and impactful applications, leading to a revolution in the industry. Our study delved into additive manufacturing, subtractive manufacturing, and supply chain processes, providing a comprehensive understanding of how AI techniques contribute to various aspects of manufacturing. Through meticulous research, we identified and analyzed a total of 29 distinct AI techniques that are actively employed in manufacturing applications. AI's significant role in additive manufacturing is evident, particularly in process parameter control, optimization, and in-process monitoring. Additionally, AI has made remarkable contributions to subtractive manufacturing by optimizing machining parameters and ensuring superior surface quality, even in non-conventional processes. Furthermore, the implementation of AI-powered decision-making, inventory control, and logistics optimization has brought substantial enhancements to overall supply chain performance. Among the array of widely used techniques for addressing manufacturing challenges, Support Vector Machine (SVM), ANN, and Genetic Algorithm (GA) emerged as standout performers. Their collective impact has been instrumental in streamlining manufacturing operations, elevating product quality, and driving the industry towards unprecedented levels of efficiency and innovation. By harnessing the transformative power of AI, manufacturers can optimize their processes, attain higher levels of productivity and performance, and position themselves at the forefront of modern manufacturing practices. AI's role as an indispensable tool in shaping the future of manufacturing is undeniable, offering limitless possibilities and boundless potential for continued advancement in the industry.

In the maintenance phase, we have identified 36 distinct AI techniques utilized for fault diagnosis, root cause analysis, and prognosis of both electrical and mechanical components and machines. The mechanical sector heavily adopts AI techniques to enhance prognostic capabilities and enable predictive maintenance, ensuring optimal performance, reliability, and extended longevity of mechanical assets. Our findings demonstrate that SVM, ANN, Long Short-Term Memory (LSTM), autoencoders (AEs), and their variants (modified AEs) are widely used for fault diagnosis and prognosis of mechanical components and machines. In the electrical/electronic sectors, AI techniques also play a significant role in improving maintenance practices. ANN stands out as a widely studied and versatile technique in fault diagnosis. Other prominent AI techniques in fault diagnosis include GA, LSTM, Extreme Learning Machines (ELM), Convolutional Neural Networks (CNN), and modified Autoencoder (AE), while techniques such as Recurrent Neural Networks (RNN), K-Nearest Neighbors (K-NN), and Logistic Regression (LogiReg) are less popular. For prognosis, ANN, SVM, and Support Vector Regression (SVR) are widely used techniques, while LSTM, CNN, GA, and AE also play important roles. The diverse array of AI techniques employed showcases the ongoing exploration of AI's potential to enhance maintenance practices and ensure optimal performance and reliability in the electrical and electronic (EE) sector. Overall, SVM, ANN, LSTM, AE, CNN, GA, ELM, and Deep Belief Networks (DBN) are some of the AI techniques garnering considerable attention in predictive maintenance. AI's transformative impact on maintenance practices empowers industries to embrace predictive maintenance strategies and optimize the efficiency and reliability of their assets.

Furthermore, the interconnectedness of recycling and reuse with retrofitting forms a cohesive strategy to optimize the value, lifespan, and sustainability of industrial equipment. Incorporating recycling and reuse practices into retrofitting helps achieve resource efficiency, minimize waste, and reduce environmental impact. This approach fosters a commitment to environmentally responsible operations and contributes to a greener and more sustainable future for industries. Our review also explored the integration of AI and ML techniques in smart retrofitting. We found that while the existing literature on retrofitting often discusses the potential of integrating AI and ML, it lacks detailed insights into the specific techniques employed. However, through meticulous research, we identified a subset of publications that shed light on the utilization of AI techniques in smart retrofitting. ANN emerged as the most widely used technique in this area. Other AI techniques, such as Convolutional Neural Networks (CNN), Random Forest (RF), transfer learning, and k-means clustering, also proved significant. These findings indicate the growing interest in applying AI/ML in smart retrofitting to further enhance its effectiveness and potential for sustainable industrial practices.

The future work in the field of AI techniques for industrial equipment holds promising prospects for further advancements and applications. Researchers should focus on refining existing AI techniques and developing novel approaches to address specific challenges across the lifecycle of industrial equipment, such as the integration of AI with legacy systems (which could be facilitated by using APIs and middleware applications) and ethical concerns arising from potential biases in decision-making. Integrating AI with the Internet of Things (IoT) offers the potential for real-time data-driven decision-making and predictive maintenance. Collaboration and knowledge sharing between academia, industry, and AI practitioners are essential to gain comprehensive insights into AI techniques' successful implementation in different sectors. Addressing data privacy and security concerns is crucial to build trust in AI-powered systems. Developing user-friendly maintenance decision support systems with AI-generated insights will empower maintenance teams. Industries should prioritize sustainable AI implementation, focusing on practices that optimize efficiency and promote environmental sustainability. Standardized benchmarks and evaluation metrics will facilitate performance comparisons between AI techniques. AI education and training programs should be developed to upskill the workforce effectively. More case studies and real-world implementations are needed to validate AI techniques' effectiveness in diverse industrial scenarios. Interdisciplinary research and collaborations will foster innovative solutions to complex challenges in industrial equipment lifecycle management. Embracing these future recommendations will lead to a transformative impact of AI in industrial settings, fostering greater efficiency, reliability, and sustainability in manufacturing and maintenance practices.