1 Introduction

The present study aims to develop a tool able to estimate the increment in maintenance hours (and therefore costs) for a component/system when the clearances suggested by the supplier around it are not respected. This is a rather common situation, especially in the engine rooms of naval ships, research vessels and mega-yachts, since these are generally not large ships and they carry a significant amount of technology onboard (Celik 2009). To develop this approach and the related tool, the space available to operate around/on the component must be defined and quantified.

In considering the best approach for developing the tool, two different, yet interrelated, points of view can be defined:

  1. A component-centered design;

  2. A system-centered design.

The first aspect was analyzed in Gualeni et al. (2022). The authors defined a General Linear Model (GLM) that treats the cost increase and the clearance reduction as continuous parameters and is applied to one component at a time. A general linear relationship is assumed to exist between the clearance reduction and the cost increase. The main advantage of the general linear approach is that it yields a continuous generative model, trained with all the available evidence, that returns the cost increase. The observations are randomly sampled from the most suitable distribution, whose shape can be varied by acting on its input parameters, and are then perturbed with random Gaussian noise.

The present paper, instead, focuses on the second aspect, i.e., it considers the whole system made up of several components/items.

System-centered design is an important approach in ship design, especially in the design of complex marine systems such as power generation and distribution, propulsion, and navigation (Bosschers et al. 2012, Esmaeilpour et al. 2015). The design of a ship is a complex activity that involves the integration of various systems and components to achieve the desired emergent properties of the ship: a system-centered design approach can help ensure that the general features of the ship and the systems onboard are properly integrated and optimized for performance (e.g., commercial and operational objectives), reliability and safety (Konovessis et al. 2010).

In ship design, the system-centered design approach typically involves the following steps (Wee et al. 2016):

  1. Identifying the ship's operational requirements: The design process begins with an understanding of the ship's operational requirements, including its speed, range, payload, and environmental conditions. These requirements provide the foundation for the design of the ship's systems and components.

  2. Developing the ship's architecture: The ship's architecture is the overall layout and arrangement of its systems and components. The architecture must be designed to optimize the ship's performance, safety, and maintainability.

  3. Designing the ship's systems: The ship's systems, such as propulsion, electrical, and navigation, must be designed to meet the ship's operational requirements and be integrated into the ship's architecture.

  4. Validating the ship's design: The ship's design must be validated through testing and analysis to ensure that it meets the ship's operational requirements and safety standards.

The work is structured following the logical steps of design, realization, and testing of the above mentioned systemic model.

  • Sect. 2 introduces the probabilistic inferential approach adopted in the model development;

  • Sect. 3 describes the algorithms used by the model and how the learning phase takes place;

  • Sect. 4 details the architecture of the predictive model applied to the problem at hand;

  • Sect. 5 describes the data generation process for learning the model;

  • Sect. 6 deals with the formulation of predictions;

  • Sect. 7 contains the discussion of the results, followed by the conclusions.

2 Probabilistic inferential approach

Descriptive statistics involves analyzing data in a way that summarizes or describes the system without drawing conclusions beyond the analyzed data. The two main tools used in descriptive statistics are central tendency and dispersion, which describe the central position of a probability density distribution and the deviation from the most probable values, respectively. Descriptive statistics is limited to the items that have been measured and does not attempt to infer properties of a larger population. Inferential statistics, on the other hand, takes data from a sample and makes inferences about the larger population from which the sample was drawn (Trochim 2006). This requires that the sample accurately reflects the population, and a random sampling method is recommended for this purpose. However, there will always be some error between the properties of the global population and those of the sample. This error is accounted for in the results, and a confidence interval is reported.

Within inferential statistics, there are two main approaches: frequentist and Bayesian inference. Frequentist inference involves calibrating the plausibility of propositions by considering repeated sampling of a population distribution to produce datasets. Bayesian inference, on the other hand, preserves uncertainty and uses probability to quantify the degree of belief. It is based on Bayes' theorem and updates an initial guess on the probability based on evidence.
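For completeness, Bayes' theorem, on which this updating of belief is based, can be written as

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

where P(H) is the prior degree of belief in hypothesis H, P(E | H) is the likelihood of the evidence E under that hypothesis, and P(H | E) is the updated (posterior) belief.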

The present work relies on Bayesian inference, which is the core of the implemented model. This approach is widely used in predictive models (Vairo et al. 2019); it uses probability distributions to represent different degrees of belief. Accounting for uncertainty is critical in situations where data limitations exist, which can lead to imprecise inference about preferences, sensitivities, and other aspects of behavior. To overcome this limitation, Markov Chain Monte Carlo (MCMC) sampling is used (Neal 1993).

3 Hidden Markov Model (HMM)

A Hidden Markov Model (HMM) is a statistical model for sequences of observations. It is particularly useful for problems where the underlying state of a system is not directly observable but can only be inferred from the observations made (Rabiner 1989).

The HMM is a combination of two processes: a Markov Chain, which determines the state at time t, and a state-dependent process that generates an observation. This observation is called an emission and is denoted by E, while S stands for state. For each state S, more than one type of emission E can be obtained (Satish and Gururaj 1993).

Only the state-dependent process, i.e., the emission, can be observed, while the Markov Chain (the states) remains unknown and hidden. The goal is to learn about the hidden states by observing the emissions.

A model can be composed of n possible states and m possible emissions. Figure 1 shows the transitions between the different states. Each transition has a probability, and each state has an emission probability as well (Van den Bosh 2010).

Fig. 1 States and emissions of a Hidden Markov Model

These probabilities can be grouped into two matrices:

  • Transition matrix, representing the state transition probabilities.

  • Emission matrix, representing the probability of obtaining each emission given a certain state.

An MCMC simulation can be performed by generating several samples according to the transition matrix. Subsequently, the emission matrix constitutes the basis on which the emission associated with each state is determined. Inferential statistics can be applied to this type of model by simulating and deducing the transition and emission matrices through a forward–backward algorithm, given observations on either the states or the emissions.
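As a minimal sketch of such a simulation (the matrices below are illustrative placeholders, not values from the study), states can be drawn step by step from the transition matrix and emissions from the emission matrix:

```python
# Minimal HMM simulation sketch: 3 hidden states, 3 possible emissions.
# The matrices are illustrative placeholders, not values from the study.
import numpy as np

rng = np.random.default_rng(42)
transition = np.array([[0.7, 0.2, 0.1],   # row i: Pr[next state | current state i]
                       [0.3, 0.5, 0.2],
                       [0.2, 0.3, 0.5]])
emission = np.array([[0.6, 0.3, 0.1],     # row i: Pr[emission | state i]
                     [0.2, 0.5, 0.3],
                     [0.1, 0.3, 0.6]])

def simulate(n_steps, start_state=0):
    state = start_state
    states, emissions = [], []
    for _ in range(n_steps):
        states.append(state)
        emissions.append(rng.choice(emission.shape[1], p=emission[state]))
        state = rng.choice(transition.shape[0], p=transition[state])
    return states, emissions

states, obs = simulate(20)   # hidden states and the observable emission sequence
```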

3.1 The learning process

The learning process in an HMM is performed by the Baum-Welch algorithm (Baum et al. 1970).

In such a model, two parts must be trained: the Markov Chain and the observation (emission) model.

An HMM has two parts:

  • An underlying Markov Chain that describes how likely you are to transition between different states (or stay in the same state). This underlying state is the element of interest. If there are k states in the HMM, the Markov Chain consists of:

    ◦ a k × k matrix saying how likely you are to transition from a state S1 to a state S2;

    ◦ a k-length vector saying how likely you are to start off in each of the states.

  • A probability model that lets you compute Pr[O|S], the probability of seeing observation O if we assume that the underlying state is S. Unlike the Markov Chain, which has a fixed format, the model for Pr[O|S] can be arbitrarily complex.

To a large degree these two moving parts can be considered independently. You might even have external knowledge that tells you what one of them is, but not the other.

With a large amount of labeled data (the sequence of observations and a knowledge of what the underlying state is), training the HMM breaks down to two independent problems:

  • First: train the Markov Chain with the labels;

  • Then: partition the observations according to the state they were generated in and train Pr[O|S] for each state S.

If the state labels for our data are reliable, then training the HMM is straightforward.

But usually, we just have the sequence of observations, with little knowledge of what state the system was in. So, we can guess at what the state labels are and train an HMM using those guesses. Then we use the trained HMM to make better guesses at the states, and re-train the HMM on those better guesses. This process continues until the trained HMM stabilizes. This back-and-forth, between using an HMM to guess state labels and using those labels to fit a new HMM, is the core of the Baum-Welch algorithm.

The Baum-Welch algorithm is an iterative expectation–maximization (EM) method that estimates the parameters of an HMM given a set of observations. The learning process with the Baum-Welch algorithm can be described in the following steps (Murphy 2012); a compact code sketch is given after the list:

  1. Initialization: The algorithm starts with an initial estimate of the parameters of the HMM, such as the transition probabilities and the emission probabilities.

  2. Forward–Backward Pass: The algorithm then performs a forward–backward pass over the observations. The forward pass computes the probability of the observation sequence up to a particular time step, given the current estimate of the parameters. The backward pass computes the probability of the observation sequence from a particular time step to the end, given the current estimate of the parameters. These probabilities are used to estimate the expected number of times the model is in each state and the expected number of times each state emits each observation.

  3. Parameter Estimation: The expected counts obtained from the forward–backward pass are used to update the parameters of the HMM. Specifically, the transition probabilities are updated using the expected number of transitions between states, and the emission probabilities are updated using the expected number of times each state emits each observation.

  4. Repeat: Steps 2 and 3 are repeated until convergence. The convergence criteria may vary, but a common approach is to stop when the change in the log-likelihood of the observations falls below a certain threshold.

  5. Output: The algorithm outputs the estimated parameters of the HMM, which can be used for prediction or further analysis.

The learning process of an HMM is schematically depicted in Fig. 2 (adapted from Vairo et al. 2023).

The Baum-Welch algorithm is a powerful method for training HMMs, but it may require a large amount of data to obtain accurate estimates of the model parameters. When such data are not available, or are not reliable, which is often the case in the design phase of an innovative asset, it is possible to generate synthetic data for an HMM, but a certain knowledge of the process is required (Barbu et al. 2009). The synthetic data generation is detailed in Sect. 5.

Note that the quality of the synthetic data generated in this way depends on the accuracy of the available knowledge of the underlying process (Bishop 2006). If this knowledge is inaccurate or incomplete, the synthetic data may not be representative of the actual data. Therefore, it is important to validate the synthetic data using appropriate statistical methods before using them for any downstream analysis.

4 Designing the model

Different approaches to solving the proposed problem are possible, and two of them have been identified. As already mentioned, the first model considers one item at a time, while the second considers all the system's components simultaneously, including the inter-relations between them, in accordance with the systemic approach.

While the model may consider factors related to space or limitations in resources, the actual factors that impact maintenance time and costs can be numerous and complex. During the design phase, factors that can affect maintenance time and costs may include (Smith 2017):

  • Accessibility and ease of maintenance: Design features that make it easier to access and maintain equipment or infrastructure can help to reduce maintenance time and costs.

  • Modularity and standardization: Modular or standardized design can simplify maintenance by allowing for easy replacement of parts or components, reducing the need for specialized expertise or tools.

  • Material selection: The selection of materials during the design phase can have a significant impact on maintenance time and costs. Materials that are durable, corrosion-resistant, and easy to clean can help to reduce maintenance requirements.

  • Reliability and durability: Design features that enhance the reliability and durability of equipment or infrastructure can reduce the frequency and duration of maintenance.

  • Predictive maintenance capabilities: Design features that enable predictive maintenance, such as built-in sensors or automated monitoring systems, can help to reduce maintenance time and costs by allowing for early detection of potential problems and scheduling maintenance proactively.

  • Safety and environmental factors: Design features that ensure the safety of maintenance personnel and minimize the impact of maintenance activities on the environment can also affect maintenance time and costs.

However, the design phase is considered here with a particular focus on the definition of the general arrangement, i.e., on the best exploitation of space in the engine room. From this perspective, maintenance is an issue and the inability to comply with clearances affects maintenance activity in terms of time and cost, so these are the only influencing factors included in the model.

To consider n elements, a Hidden Markov Model is structured. The maintenance cost/time increase scenario is translated into a hidden state (hidden states are unobservable entities). Three states have therefore been defined, while the space reduction ranges are considered as the possible emissions (the observable entities) for each state.

A reference combination of the elements' states is randomly generated. According to this combination, a sequence of emissions is then generated as the test data set (Rabiner 1989). At this point, the true (hypothetical) state of each component is defined and a series of emissions, which constitutes the observations, is available. Three possible cases (sub-models) then arise, depending on the knowledge available about the transition matrix and the emission matrix:

  • Both the emission and the transition matrices are known.

  • Only the emission matrix is known.

  • No matrix is available.

In the first sub-model, assuming that both the transition and the emission matrices are known, for instance from experience, inference is performed only to obtain the probability density function of being in a specific state. As prior distribution for each state, a Categorical distribution (a generalization of the Bernoulli distribution to more than two possible outcomes, here with equal probability for each outcome) can be chosen. To avoid introducing an excessive amount of prior knowledge, which could trap the inferential process in local solutions, the elements of evidence can be implemented using a Categorical distribution as well.

A Categorical distribution is a natural choice for the prior distribution for each state in a Hidden Markov Model (HMM) because it represents a discrete probability distribution over a finite number of possible states. The Categorical distribution is used to model the prior probability of being in each state at the beginning of the sequence, and the transition probability matrix is used to model the probability of transitioning from one state to another. The emission probability distribution for each state is also typically modeled using a Categorical distribution, as the emissions are assumed to be generated from a discrete set of possible values (Murphy 2012).

Moreover, the Categorical distribution is a useful choice for the prior distribution; it allows for easy computation of the posterior distribution using Bayes' rule. Specifically, the posterior distribution over the states given the observed sequence of emissions can be computed using the forward–backward algorithm, which is based on the Categorical distribution (Barber 2012).

Subsequently, the MCMC sampling can be performed using Metropolis–Hastings (MH) sampling within the Gibbs algorithm, because it is more suitable for a multi-parameter case, treating each component as independent of the others. The state that is most likely to occur in the Markov Chain is defined as the “forecast state”.

The second sub-model can be used when less information on the system is available. It is assumed that only the emission matrix is available, while the transition matrix is unknown. However, this matrix can be inferred, using a Dirichlet distribution with equal concentration parameters (a flat prior) as prior, while the remaining variables are defined as in the previous case. During the MCMC sampling, not only the states but also the entries of the transition matrix are sampled and inferred.
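As a minimal sketch of such a prior (assuming three states and a flat Dirichlet, purely for illustration), each row of the unknown transition matrix can be drawn from a Dirichlet distribution with equal concentration parameters:

```python
# Flat Dirichlet prior over the rows of an unknown 3-state transition matrix.
# The number of states and the concentration values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_states = 3
alpha = np.ones(n_states)                                # Dirichlet(1, 1, 1): no preferred transition
prior_transition = rng.dirichlet(alpha, size=n_states)   # one row-stochastic row per state
```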

The third sub-model is the most generic one and, at the same time, the most common. In fact, no information is available, and all the inter-relations between the components are inferred from the observations. As in the previous case, the emission matrix also needs to be estimated by inference, using a Dirichlet distribution as prior. This approach relies on the above-described Baum-Welch algorithm and is the most interesting for the problem proposed in this work, because the probabilities that compose these matrices are most of the time difficult to obtain. If an adequately high number of samples is generated and the starting point of the chain is chosen so as to avoid local minima of the error function, this approach leads to good results. In particular, the MH within Gibbs algorithm seeks the absolute minimum of the error by following the evolution of the Markov chain. In this way the Markov chain converges to what is believed to be the most likely posterior value.

Gibbs sampling is a Markov Chain Monte Carlo (MCMC) method that is often used to approximate the joint distribution of a set of random variables. It works by iteratively sampling from the conditional distributions of each variable, given the current values of the other variables (Neal 1993). This process converges to the joint distribution after a sufficient number of iterations. However, in some cases, it may not be possible to sample from the conditional distributions directly, and alternative methods, such as MH sampling, may be required. MH sampling is a more general MCMC method that can be used to sample from any distribution, even if its form is unknown or intractable. MH sampling works by proposing a new state using a proposal distribution and then accepting or rejecting this proposal based on a probability ratio. If the ratio is greater than or equal to one, the proposed state is accepted. Otherwise, the proposed state is accepted with probability equal to the ratio. This acceptance probability is what allows the sampler to explore regions of low probability density (Liu 2008).
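A minimal sketch of a single MH step is given below, assuming a symmetric proposal so that the acceptance ratio reduces to the ratio of target densities; log_target and propose are hypothetical placeholders for the model at hand. Within a Gibbs sweep, such a step replaces the direct draw of one variable from its full conditional, as discussed in the next paragraph.

```python
# One Metropolis-Hastings step with a symmetric proposal (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def mh_step(current, log_target, propose):
    candidate = propose(current)                             # propose a new value
    log_ratio = log_target(candidate) - log_target(current)  # log of the density ratio
    if np.log(rng.uniform()) < log_ratio:                    # accept with prob min(1, ratio)
        return candidate
    return current                                           # otherwise keep the current value
```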

In the analyzed problem, the conditional distributions of each variable could in principle be sampled directly within Gibbs sampling, but it may be more efficient to use MH sampling for these conditional distributions. To use MH sampling within Gibbs sampling, it is sufficient to replace the direct sampling step for a given variable with an MH sampling step. This involves proposing a new value for the variable using a proposal distribution and then computing the probability ratio for accepting or rejecting the proposal (Roberts et al. 2006). The proposal distribution can be chosen based on the structure of the model and the properties of the variables being sampled. In summary, MH sampling can be used within Gibbs sampling when it is not efficient to sample directly from the conditional distributions of each variable. This allows the sampler to explore a wider range of values and can improve the convergence of the sampler (Andrieu et al. 2009) (Fig. 2).

Fig. 2 Graphical schematization of the HMM learning process

Figure 3 shows the flow chart of the described approach: the generation of evidence on the left side and the predictive model on the right side, whose outputs are then compared to check the accuracy of the prediction.

Fig. 3 HMM flow-chart

5 Generation of data

The training/test data are generated following the methodology described in Gualeni et al. (2022). A Beta distribution is used to represent the probability density function of the clearance reduction, while, for the purposes of the present paper, the maintenance cost/time is described by a uniform distribution, since no prior information is available, in order to subsequently generate the observations (Bernardo 2006). The Beta distribution’s parameters (a and b) can be modified as needed, varying the expected value and the variance of the probability density function. In summary, the Beta distribution describes the probability density of the clearance reduction, while costs are sampled from a uniform probability density function.
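For reference, the expected value and variance that the parameters a and b control are

\[
\mathrm{E}[X] = \frac{a}{a+b}, \qquad
\mathrm{Var}[X] = \frac{ab}{(a+b)^{2}(a+b+1)},
\]

so that a target mean and spread of the clearance reduction can be mapped back to a suitable pair (a, b).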

As described in Gualeni et al. (2022), the clearance reduction problem involves determining the optimal layout of equipment in a ship engine room. One way to approach this problem is to use simulation to generate synthetic data for different layouts and clearance values, and then use statistical analysis to identify the optimal layout. The Beta distribution is a continuous probability distribution that is commonly used to model proportions or probabilities between 0 and 1, which is a suitable range for clearance values. The Beta distribution parameters (a and b) control the shape and location of the distribution. These parameters can be estimated from available data or prior knowledge (Gelman 2013).

Once the Beta distribution parameters are determined, a Monte Carlo simulation is used to generate synthetic clearance values by sampling from the Beta distribution. This involves generating random values from a uniform distribution between 0 and 1, and then transforming these values into the corresponding clearance values using the inverse cumulative distribution function of the Beta distribution (Huang et al. 2018).
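A minimal sketch of this inverse-CDF step is shown below; the parameters a and b are placeholders, not the values used in the study.

```python
# Inverse-CDF (inverse transform) sampling of synthetic clearance reductions.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(7)
a, b = 2.0, 5.0                            # placeholder Beta shape parameters
u = rng.uniform(size=1000)                 # uniform draws on [0, 1]
clearance_reduction = beta.ppf(u, a, b)    # inverse CDF maps them to Beta-distributed values
```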

The application case regards the engine room layout of a research vessel: the focus is on the positioning of the three diesel generators (DG1, DG2, DG3), in charge of delivering electrical power on board. Unlike the diesel engines devoted to ship propulsion (bound to the geometrical constraints given by the propeller shaft lines), the positioning of the diesel generators for electrical power supply can be a topic for discussion during the design process of the engine room.

The first layout (Fig. 4) corresponds to the one adopted for the real vessel’s engine room. In this layout, two Diesel generators are located between frames 77 and 83, while the third one is between frames 60 and 66. The Diesel generator located between the two main engines is defined as number 1, while the other two, located between two pillars, are numbers 2 and 3. First, the ideal clearances needed to maintain each macro-group were defined; subsequently, the real clearances available around the items in both the x and y directions were measured on the top-view drawings of the engine room layout.

Fig. 4 Engine room first layout (original project)

The second layout (Fig. 5) is an alternative to the real one, developed to save some space in the engine room and also to test the method, since it could already be foreseen that the cost needed to maintain the Diesel engines would increase. In this layout, all three generators are located between the two pillars, leaving an empty space between the two main engines, with the idea of locating the other pumps and machinery in this area. Generator number 2 has been rotated by 180° to allow more mobility around the air suction and exhaust gas system. This arrangement, if compatible with the general arrangement of the ship, could lead to a shortening of the engine room. In this configuration, the clearance reduction values are equal for all the generators because they are equally spaced and the main obstacle is constituted by the neighbouring engine.

Fig. 5 Engine room second layout

The third layout (Fig. 6) is similar to the second one, but the engines are longitudinally staggered. This arrangement has been obtained by modifying the positions of the ladder, the cooling water pumps and the fire seawater pump. The Diesel generators’ locations have been chosen so as to misalign the engines and ensure the required maintenance clearance around the two considered macro-groups of items. Furthermore, the alternators have been transversely aligned because, being electrical machinery, they are easier to maintain, since their elements can be removed along the x-axis. In this layout, the DGs have the required clearance around them.

Fig. 6 Engine room third layout

With reference to the three layouts identified for the research activity and shown in Figs. 4, 5 and 6, the clearance reductions, representing the expected values of the generative (Beta) distributions of the system emissions, are reported in Table 1.

Table 1 Mean clearances reduction (%)

As mentioned above, starting from the expected clearance reductions related to each layout (derived from the layout definitions reported above), it is possible to create the generative Beta distributions, shown in Figs. 7 and 8.

Fig. 7 Generative distribution for the clearance reductions of the first layout (Beta distribution 1)

Fig. 8 Generative distribution for the clearance reductions of the second layout (Beta distribution 2)

The evidence on clearance reduction at each step is sampled from the Beta distributions in Figs. 7 and 8.

6 Determining the state-emission sequences

In the HMM, at each time step, a piece of evidence on clearance reduction is generated according to the emission probability (the Beta distributions of Figs. 7 and 8). The process of generating the observations on clearance reduction is detailed in the previous section, based on what is proposed in Gualeni et al. (2022). The Beta parameters a and b, which control the shape and location of the distribution, are related to the different considered layouts, and therefore to the state (which can be known or inferred from the generated observations).

The model λ depends on four elements, λ = (Q, O, A, B), where:

  • Q: hidden sequence of states (maintenance cost/time increase);

  • O: observed emission sequence {σ1, …, σk};

  • A: n × n transition probability matrix (probability for the system to change state), with A(i, j) = Pr[q_{t+1} = j | q_t = i];

  • B: emission probability matrix (probability of generating the visible clearance reduction in the current state), with B(i, j) = Pr[o_t = σ_j | q_t = i], where o_t is the t-th element of the generated sequence.

The problem to be solved with the HMM is a learning problem (the third sub-model mentioned above):

  • Given an observation sequence O (clearance reductions) and the set of states in the HMM, learn the HMM parameters A and B that generate the most probable sequence of maintenance time/cost increase.

7 Results and discussion

The input to such a learning algorithm would be an unlabeled sequence of observations O and a vocabulary of potential hidden states Q.

The standard algorithm for HMM training is the Baum-Welch algorithm, a specific application of the Expectation–Maximization (EM) algorithm (Bahl 1983). The algorithm trains both the transition probabilities A and the emission probabilities B, which are the parameters that determine the sequence of hidden states. It is an iterative algorithm: it computes an initial estimate of the probabilities and then uses those estimates to compute progressively better ones, until convergence.

In the context of maintenance time/cost prediction, the Hidden Markov Model (HMM) predicts the sequence of states that is most likely to be connected to a given sequence of observations (clearance reductions). In this case, the states represent the underlying classes or categories of the time/cost increments associated with the different layouts. In the evaluation of the prediction, two different layouts are considered, and for each layout a sequence of five possible predicted states is represented. These states correspond to the five different classes of time/cost increment that were obtained from the HMM. By predicting the most likely sequence of these states, it is possible to estimate the expected time/cost for each maintenance activity, for each layout.
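A minimal dynamic-programming (Viterbi-style) sketch of how the most likely state sequence could be decoded from estimated parameters pi, A and B is given below; the function name and the use of log probabilities are illustrative choices, not details taken from the paper's implementation.

```python
# Viterbi-style decoding of the most likely hidden-state sequence (sketch).
import numpy as np

def most_likely_states(obs, pi, A, B):
    obs = np.asarray(obs)
    T, n = len(obs), len(pi)
    log_delta = np.zeros((T, n))            # best log-probability ending in each state
    backptr = np.zeros((T, n), dtype=int)   # best predecessor for each state
    log_delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = log_delta[t - 1][:, None] + np.log(A)   # scores[i, j]: transition i -> j
        backptr[t] = scores.argmax(axis=0)
        log_delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack from the best final state to recover the full sequence.
    path = [int(log_delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```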

It is worth noting that the selection of the number and definition of the different classes or states used in the HMM can have a significant impact on the accuracy and usefulness of the predictions. The number of states in each sequence could be increased further, but this implies an exponentially higher computational effort. It is worth mentioning that each bar in the following figures represents the percentage of cost increase associated with a sample of clearance reduction percentage drawn from the Beta distributions. For each DGi, an evaluation over five states has been considered sufficient and representative.

Each bar in the following charts represents the probability that the increase in time/maintenance cost falls into one of the five classes described above. For each configuration, therefore, the maximum value (i.e., the increment value linked to the highest probability), the minimum value (i.e., the increment value linked to the lowest probability) and the average will be assessed. The results are reported in Table 3.

Considering the three DGs together, an assessment of the overall engine room solution can be derived. The predicted sequences for the first and second layouts are shown in Figs. 9 and 10, respectively.

Given the evidence on space reduction sampled from Beta distribution 1 for the first layout and from Beta distribution 2 for the second layout, a remarkable agreement can be observed with the results obtained using the item/element-centered evaluation methodology (summarized in Table 2).

Table 2 Cost increase prediction with the GLM in Gualeni et al. (2022)

To support this comparison, Table 3 reports the minimum, maximum and average values, as described above, for each configuration, as derived from Figs. 9 and 10.

Table 3 Cost increase (min, max, average) with HMM
Fig. 9 Predicted sequence of maintenance time/cost increase for the first layout

Fig. 10 Predicted sequence of maintenance time/cost increase for the second layout

8 Conclusion

The HMM approach has proven to be a reliable method for predicting the state of the different systems and observing their relation with the overall engine room layout, given the evidence on the space reduction for some components. The predictive capability of the method obviously depends on the representativeness of the observations, which, in this case, were generated by random sampling from appropriate distributions derived from field knowledge, as detailed in Sect. 5.

The process of time/cost prediction using an HMM involves modeling the relationship between the clearance compliance associated with each considered layout and the hidden classes of maintenance time/cost increments. The HMM assumes that the hidden states, which are Markovian, are related to the clearance compliance, and estimates the model parameters, including the initial state probabilities, the transition probabilities, and the emission probabilities. With these parameters, the prediction algorithm can be used to find the most likely sequence of hidden states connected with the considered layout and to make predictions about future time/cost increments.

The predictive algorithm, described in Sect. 4, is a dynamic programming algorithm that uses the estimated model parameters to efficiently compute the probabilities of all possible state sequences (depicted in Figs. 9 and 10) and select the most likely one.

The method described in Gualeni et al. (2022) gave very similar results, but its application inevitably required making assumptions also on the increases in maintenance costs; such assumptions are not necessary in the present approach, thanks to the characteristics of the Baum-Welch algorithm. In fact, the algorithm tests a multitude of samples (in this case, a uniform distribution was used for maintenance costs, so that no a priori knowledge is needed) and selects only those for which the model is likely to converge.

The whole process allows for accurate and efficient prediction of maintenance time/cost, but careful consideration and evaluation of the data and model assumptions is necessary to ensure the reliability and validity of the results.

The proposed model, therefore, can represent a useful instrument to define, in the design phase, the most appropriate layout, adequately balancing the engine room space requirements with the containment of maintenance costs.