1 Introduction

The job shop scheduling problem (JSP) concerns the allocation of shared resources to production tasks over time so that given performance indicators are optimized. The classical JSP model is idealized and difficult to map onto real applications. As manufacturing becomes increasingly market-driven, production resources are not unlimited, and the processing time of an operation varies with the machine that executes it; this gives the production process its flexibility. The flexible job shop scheduling problem (FJSP) arose to capture this situation: it matches actual production more closely, but it also increases the complexity of the problem [1]. The FJSP is a typical combinatorial optimization problem; it is NP-hard and has long been a key and difficult topic in manufacturing research [2].

A large number of scholars have studied the FJSP in depth, made progress, and proposed many solution algorithms [3]. Current FJSP algorithms fall roughly into three categories: exact algorithms, heuristic algorithms, and artificial intelligence algorithms. The branch-and-bound method is the classic exact algorithm; it is simple to implement and efficient for small-scale FJSP instances, but as manufacturing scale keeps growing and products become more numerous, branch and bound can no longer meet the solution requirements of modern production [4]. The most representative heuristic is the Lagrangian relaxation method; such heuristics are efficient on small-scale FJSP instances but share the limitations of exact algorithms, so their scope of application is limited [5]. Artificial intelligence algorithms, which mainly simulate the behavior of biological groups in nature, offer parallelism and fast search; they are the most widely used in FJSP solving and have become the main research direction [6]. Genetic algorithms, simulated annealing, particle swarm optimization, ant colony algorithms, and firefly algorithms have all been applied to the FJSP with good results [7,8,9]. In practice, however, each of these algorithms has its own deficiencies, such as slow convergence in the late search stage and a tendency to settle on local optima [10]. The artificial immune algorithm (AIA) is an evolutionary algorithm that simulates the biological immune system; it is a multi-point stochastic search algorithm that retains the strengths of the genetic algorithm while adding self-learning, self-organization, and memory. Compared with other evolutionary algorithms, it therefore has better global search capability; it has been applied successfully to cloud computing resource scheduling, robot path planning, and other fields, and it provides a new tool for solving the FJSP [11].

To obtain better FJSP schedules and reduce manufacturing production costs, this paper proposes an FJSP solution method based on a modified artificial immune algorithm (MAIA) and uses standard benchmark instances to test the algorithm's performance.

2 Mathematical model and improved artificial immune algorithm

2.1 Mathematical model of FJSP

The FJSP can be stated as follows: n workpieces are processed on m machines; the processing time and production cost of each operation are known; and the order of each workpiece's operations is constrained. Subject to these process constraints, the start time, completion time, and processing sequence of every operation on its compatible machines must be determined so that performance indices such as production cycle, production cost, and equipment utilization are optimal or near-optimal. The objective functions of the FJSP are

$$ \min {f}_1=\min \left(\max {C}_i\right) $$
(1)
$$ \min {f}_2=\min {F}_p=\sum \limits_{k=1}^m\sum \limits_{j=1}^{n_j}\sum \limits_{i=1}^{n_i}\left({F}_{ij k}{X}_{ij k}+\left({T}_{S_{ij k}}-{T}_{E_{i\left(j-1\right)k}}\right){F}_{S_{ij}}{X}_{ij k}\right) $$
(2)
$$ \min {f}_3=\min \sum \limits_{k=1}^m\sum \limits_{j=1}^{n_j}\sum \limits_{i=1}^{n_i}{p}_{ijk}{X}_{ijk} $$
(3)
$$ \min {f}_4=\min \left(\max \sum \limits_{j=1}^{n_j}\sum \limits_{i=1}^{n_i}{p}_{ijk}{X}_{ijk}\right) $$
(4)

The corresponding constraint condition is

$$ \left\{\begin{array}{l}\sum \limits_{k=1}^m{X}_{ijk}=1,{X}_{ijk}=1\;\mathrm{or}\;0\\ {}{T}_{s_{ijk}}=\max \left({T}_{E_{i\left(j-1\right)k}},{t}_{1k}\right)\\ {}\max \left({t}_{1k}\right)+{p}_{ijk}\le {t}_{2k};\\ {}{P}_{ijk}\ge 0\;\mathrm{and}\;{P}_{i0k}=0\\ {}i=1,2\cdots, {n}_i,j=1,2\cdots, {n}_j\\ {}k=1,2\cdots, m\end{array}\right. $$
(5)

In these functions, C i is the completion time of workpiece J i ; F p is the total processing cost; F ijk is the processing cost of the jth operation of workpiece i on machine k; \( {T}_{s_{ijk}} \) is the start time of the jth operation of workpiece i on machine k; \( {T}_{E_{i\left(j-1\right)k}} \) is the end time of the (j − 1)th operation of workpiece i on machine k; t1k and t2k are the start and end points of an idle period of machine k, respectively; \( {F}_{S_{ij}} \) is the unit-time storage cost between the (j − 1)th and jth operations of workpiece i; P ijk is the time required by the jth operation of workpiece i on machine k; and X ijk indicates whether the jth operation of workpiece i is assigned to machine k [12].
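As a concrete illustration of objectives (1) and (4), the following sketch computes the makespan f1 and the maximum machine workload f4 from a toy schedule. The data structures (dictionaries keyed by workpiece and by operation) are our own assumptions, not notation from the paper.

```python
def makespan(completion_times):
    """f1: the latest completion time C_i over all workpieces."""
    return max(completion_times.values())

def max_machine_load(assignment, p):
    """f4: the largest total processing time assigned to any machine.
    assignment maps operation (i, j) -> machine k (the X_ijk = 1 entries);
    p maps (i, j, k) -> processing time p_ijk."""
    load = {}
    for (i, j), k in assignment.items():
        load[k] = load.get(k, 0) + p[(i, j, k)]
    return max(load.values())

if __name__ == "__main__":
    completion = {"J1": 12, "J2": 9, "J3": 15}
    assignment = {(1, 1): 0, (1, 2): 1, (2, 1): 0}
    p = {(1, 1, 0): 3, (1, 2, 1): 4, (2, 1, 0): 5}
    print(makespan(completion))             # 15
    print(max_machine_load(assignment, p))  # machine 0 carries 3 + 5 = 8
```

Minimizing f1 shortens the production cycle, while minimizing f4 balances the load across machines; the two objectives generally conflict, which is what makes the FJSP multi-objective.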

2.2 Improved artificial immune algorithm

The artificial immune algorithm (AIA) is an intelligent algorithm that simulates the biological immune system, with good robustness and parallel search ability. Its working principle is as follows: the antigen encodes the objective function of the problem and the antibodies encode candidate solutions; through antibody cloning, mutation, selection, and related operations, the antibodies that best match the antigen, i.e., the best solutions, are obtained. The AIA is therefore well suited to multi-objective optimization, and the FJSP is in essence a multi-objective optimization problem. The steps of the artificial immune algorithm are as follows:

  1. Set the objective function and corresponding constraints of the multi-objective optimization problem.

  2. Generate the initial antibody population of the artificial immune algorithm, that is, the candidate or feasible solutions of the multi-objective optimization problem.

  3. Calculate the affinity between each antibody and the antigen; the affinity describes how well a solution matches the objective function.

  4. Save the antibodies with higher affinity into the next-generation antibody population and generate immune memory cells from them.

  5. Select, promote, and suppress antibodies to ensure the diversity of individuals in the antibody population.

  6. Cross and mutate individuals to create a new antibody population.

  7. Replace individuals with low fitness values by high-fitness individuals from the memory cells to produce the next-generation antibody population.

  8. If the termination condition is satisfied, stop; otherwise, return to step 3.
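Under our reading, the eight steps above can be sketched as a minimal Python loop. The affinity function, population shape, and mutation scale are illustrative assumptions; crossover and Gaussian mutation stand in for the cloning and suppression operators described later.

```python
import random

def artificial_immune_algorithm(affinity, init_pop, generations=50,
                                n_memory=5, sigma=0.1):
    """Minimal AIA sketch: evaluate affinity (step 3), keep memory cells
    (step 4), select/cross/mutate (steps 5-6), and reinsert memory cells
    into the next generation (step 7)."""
    pop = [list(ab) for ab in init_pop]
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)       # step 3: affinity evaluation
        memory = [ab[:] for ab in pop[:n_memory]]  # step 4: memory cells
        nxt = memory[:]                            # step 7: elitism via memory
        while len(nxt) < len(pop):
            # step 5: select parents from the better half of the population
            a, b = random.sample(pop[:max(2, len(pop) // 2)], 2)
            # step 6: uniform crossover followed by Gaussian mutation
            child = [(x if random.random() < 0.5 else y) for x, y in zip(a, b)]
            child = [x + random.gauss(0.0, sigma) for x in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=affinity)
```

Because the memory cells are copied unchanged into each new generation, the best antibody found so far is never lost, which is the elitist property the memory mechanism provides.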

The workflow of the standard artificial immune algorithm is shown in Fig. 1.

Fig. 1 The workflow of the standard artificial immune algorithm

2.3 Correlation operator of artificial immune algorithm

The artificial immune algorithm requires several operators during its execution, as follows:

  1. Affinity evaluation operator between antibodies. An antibody represents a feasible solution of the optimization problem. Let aff(ab i ) and aff(ab j ) denote the affinities of the ith and jth antibodies, respectively. The affinity evaluation operator describes the similarity of antibodies i and j:

    $$ aff\left({ab}_i,{ab}_j\right)=\left\{\begin{array}{ll}1&, aff\left({ab}_i\right)= aff\left({ab}_j\right)\\ {}\frac{1}{1+\mid aff\left({ab}_i\right)- aff\left({ab}_j\right)\mid },& else\end{array}\right. $$
    (6)

  2. Antibody concentration evaluation operator. Let N be the size of the antibody population. The concentration of antibody i is calculated as

    $$ den\left({ab}_i\right)=\frac{1}{N}\sum \limits_{j=0}^{N-1} aff\left({ab}_i,{ab}_j\right) $$
    (7)

  3. Mutation operator. The mutation operator is applied to newly generated antibodies to maintain the diversity of individuals in the antibody population:

    $$ \left\{\begin{array}{l}{G}_N^{\prime }={G}_M+\gamma \times N\left(0,1\right)\\ {}\gamma =\frac{1}{\eta}\times {e}^{-f}\end{array}\right. $$
    (8)

    In the formula, G′N is the gene of the new antibody; GM is the gene of the original antibody; f is the affinity value; η is a control parameter; and N(0,1) is a standard Gaussian random variable.
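A direct transcription of Eqs. (6)-(8) might look like the following sketch. The parameter names eta and f follow the text; treating f as the antibody's affinity value is our reading of the formula.

```python
import math
import random

def aff_pair(aff_i, aff_j):
    """Eq. (6): similarity of two antibodies from their affinity values."""
    if aff_i == aff_j:
        return 1.0
    return 1.0 / (1.0 + abs(aff_i - aff_j))

def concentration(affs, i):
    """Eq. (7): concentration of antibody i in a population whose affinity
    values are listed in affs (N = len(affs))."""
    n = len(affs)
    return sum(aff_pair(affs[i], affs[j]) for j in range(n)) / n

def mutate_gene(g, f, eta=1.0):
    """Eq. (8): Gaussian mutation with step size gamma = exp(-f) / eta,
    so high-affinity antibodies mutate less."""
    gamma = math.exp(-f) / eta
    return g + gamma * random.gauss(0.0, 1.0)
```

Note that a population of identical antibodies has concentration 1, and the mutation step shrinks exponentially as affinity grows, concentrating search near good solutions.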

2.4 Chaotic simulated annealing particle swarm parallel artificial immune optimization algorithm

Like other swarm intelligence algorithms, the standard artificial immune algorithm tends to fall into local optima and converge prematurely. For this reason, a chaotic simulated annealing particle swarm optimization algorithm is introduced into the artificial immune algorithm. Chaos theory is used to dynamically adjust the parameters of the particle swarm optimization algorithm; a dynamic inertia weight accelerates convergence; and a simulated annealing procedure with an automatic decay coefficient improves the probability and speed of finding the optimal solution. The optimized artificial immune algorithm thus overcomes slow convergence in the late search stage, preserves population diversity, and improves search speed and efficiency, yielding the improved artificial immune algorithm.

The execution process of chaotic simulated annealing particle swarm optimization algorithm consists of the following three parts.

  1. Use chaos theory to dynamically adjust the parameters r1 and r2 of the particle swarm optimization algorithm, producing a high-quality population.

  2. Use the particle swarm velocity and position update equations to search for the optimal solution during evolution.

  3. Use the simulated annealing algorithm to locally optimize the position of each particle and repeat the iteration process until the termination condition is satisfied. The improved algorithm is shown in Fig. 2.

Fig. 2 Chaotic simulated annealing particle swarm optimization algorithm

The steps of the chaotic simulated annealing particle swarm optimization algorithm are as follows:

  1. Randomly generate an initial population of m particles; initialize each particle's velocity and position; and set the inertia weight ω, the learning factors c1 and c2, and the initial acceptance probability P r .

  2. Calculate the fitness of each particle i and initialize the annealing temperature \( {T}_0=\frac{\left({f}_{\mathrm{min}}^0-{f}_{\mathrm{max}}^0\right)}{\ln {P}_r}=-\frac{\left|\varDelta f\right|}{\ln {P}_r} \).

  3. Take each particle's fitness as its individual extremum pbest and select the best individual extremum as the population extremum gbest.

  4. If the termination condition is reached, output the result; otherwise, loop k from 1 to M, where M is the maximum number of iterations.

  5. Calculate the fitness f i (k) of each particle and the average fitness f avg (k).

  6. If a particle's fitness is better than its individual extremum pbest, set pbest to the current fitness and choose the best individual extremum as the population extremum gbest.

  7. Update the velocity and position of each particle according to the particle swarm update equations.

  8. Calculate the fitness f i (k + 1) of each new particle and the average fitness f avg (k + 1).

  9. Calculate the fitness change between the two positions, Δf = f i (k + 1) − f i (k). If Δf < 0 or exp(− Δf/T) > rand, accept the new position; otherwise, keep the old position.

  10. Calculate the automatic temperature decay coefficient ζ from the individual fitness and the average fitness.

  11. Set T k + 1 = ζT k and k = k + 1, where ζ ∈ (0, 1), and return to step 4.
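Under our reading of the steps above, the chaotic SA-PSO core can be sketched as follows for a minimization problem. The search bounds and the constant fallback decay are illustrative assumptions; the adaptive decay of the coefficient ζ described below would replace the fixed factor in step 11.

```python
import math
import random

def casapso(f, dim=2, m=20, iters=100, w_max=0.9, w_min=0.4,
            c1=2.0, c2=2.0, p_r=0.8, lo=-5.0, hi=5.0):
    """Sketch of the chaotic SA-PSO steps 1-11 (minimization of f)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(m)]
    V = [[0.0] * dim for _ in range(m)]
    pbest = [x[:] for x in X]                          # step 3
    fits = [f(x) for x in X]
    T = -abs(max(fits) - min(fits)) / math.log(p_r)    # step 2: Eq. (12)
    T = T if T > 0 else 1.0                            # guard: zero spread
    r1, r2 = random.random(), random.random()
    for t in range(iters):                             # step 4: main loop
        w = w_max - t * (w_max - w_min) / iters        # Eq. (11)
        r1, r2 = 4 * r1 * (1 - r1), 4 * r2 * (1 - r2)  # Eq. (10): chaos
        gbest = min(pbest, key=f)
        for i in range(m):
            for d in range(dim):                       # step 7: PSO update
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
            new = [X[i][d] + V[i][d] for d in range(dim)]
            df = f(new) - f(X[i])                      # step 9: Metropolis
            if df < 0 or math.exp(-df / T) > random.random():
                X[i] = new
            if f(X[i]) < f(pbest[i]):                  # step 6
                pbest[i] = X[i][:]
        T *= 0.95   # step 11, with a constant decay in place of Eq. (13)
    return min(pbest, key=f)
```

The Metropolis test in step 9 occasionally accepts worse positions, which is what lets the hybrid escape local optima that plain PSO would settle into.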

2.4.1 Fitness parameter strategy

Chaos is used to adjust the parameters related to particle velocity updates [13]; the chaotic sequences are generated with a logistic map:

$$ {\lambda}_i^{t+1}=u{\lambda}_i^t\left(1-{\lambda}_i^t\right),i=1,2\dots, n $$
(9)

In the formula, \( {\lambda}_i^t \) is the value of the chaotic variable λ i at step t, λ i  ∈ [0, 1], 1 ≤ u ≤ 4. When u = 4 and λ i is not equal to 0.25, 0.5, or 0.75, the system is fully chaotic. The chaotic sequence is highly random, and the trajectory of the chaotic variable can traverse the entire search space without repetition. Chaos optimization is applied to r 1 and r 2 as follows:

$$ \left\{\begin{array}{c}{r}_i\left(t+1\right)=4{r}_i(t)\left(1-{r}_i(t)\right)\\ {}{r}_i(t)\in \left(0,1\right),i=1,2\end{array}\right. $$
(10)

The inertia weight ω balances the influence of a particle's historical velocity on its current velocity and thus balances the global and local search of the particle swarm optimization algorithm; an appropriate ω also reduces the time needed to find the optimal solution. A dynamic adjustment mechanism is used: a larger ω in the early stage of search lets particles explore a large space, giving the algorithm good global search capability; as the number of iterations increases, ω gradually decreases, particle speeds slow down in local regions, and search accuracy improves, so that the optimal solution is found quickly and accurately. The inertia weight of each particle is set as follows:

$$ \omega (t)={\omega}_{\mathrm{max}}-\frac{t\left({\omega}_{\mathrm{max}}-{\omega}_{\mathrm{min}}\right)}{t_{\mathrm{max}}} $$
(11)

In the formula, ωmax and ωmin are the maximum and minimum inertia weights, respectively; t is the current iteration; and tmax is the maximum number of iterations.

2.4.2 Simulated annealing algorithm parameter control

Initial temperature setting

The initial temperature is an important parameter for the global search performance of the simulated annealing algorithm. The higher the initial temperature, the stronger the global search ability and the greater the chance of finding the global optimum, but the longer the search time; conversely, a lower initial temperature reduces the search time but may miss the global optimum. Using a temperature initialization method based on fitness and acceptance probability, the initial temperature T0 is determined by the following equation:

$$ {T}_0=\frac{\left({f}_{\mathrm{min}}^0-{f}_{\mathrm{max}}^0\right)}{\ln {P}_r}=-\frac{\left|\varDelta f\right|}{\ln {P}_r} $$
(12)

In the formula, \( {f}_{\mathrm{min}}^0 \) and \( {f}_{\mathrm{max}}^0 \) are the minimum and maximum objective function fitness values computed over the initial particle population, Δf is their difference, and P r is the initial acceptance probability, generally taken in [0.7, 0.9].
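Eq. (12) translates directly into a one-line helper; since ln P r is negative for P r in (0, 1), T0 comes out positive:

```python
import math

def initial_temperature(fitnesses, p_r=0.8):
    """Eq. (12): derive T0 from the fitness spread of the initial population
    and the initial acceptance probability p_r (0 < p_r < 1)."""
    delta = abs(max(fitnesses) - min(fitnesses))
    return -delta / math.log(p_r)
```

For example, with a fitness spread of 2 and p_r = 0.5, T0 = 2 / ln 2 ≈ 2.885; a larger spread or a higher acceptance probability both raise the starting temperature.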

Annealing velocity

The global search ability of the simulated annealing algorithm depends on the annealing speed. In general, first set the initial temperature and then follow the temperature decay function to implement the cooling process. However, the fixed temperature decay function cannot perceive the current convergence condition and cannot dynamically adjust the local search depth according to the convergence condition. The dynamic temperature decay coefficient is introduced to enable the algorithm to perceive the local convergence according to the current fitness of the individual particles and the average fitness of the population. The temperature decay rate and the local search depth are dynamically adjusted according to the current conditions to ensure the diversity of the population in the search process. The formula for the adaptive temperature decay coefficient is:

$$ \zeta =\frac{\mu +N\left(0,1\right)\left[1-\exp \left({f}_{pi}-{f}_{avg}\right)\right]}{2{T}_k} $$
(13)

In the formula, f avg is the average fitness of the population; f pi is the current particle's fitness; μ is the initial temperature decay coefficient; N(0,1) is a Gaussian random number with mean 0 and variance 1; and T k is the temperature at the previous iteration.
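A literal implementation of Eq. (13) can produce values outside (0, 1), while step 11 of the algorithm requires ζ ∈ (0, 1); clamping the result into that interval is our own safeguard, not something stated in the text:

```python
import math
import random

def decay_coefficient(f_pi, f_avg, T_k, mu=0.9, eps=1e-6):
    """Eq. (13): adaptive temperature decay coefficient, clamped to (0, 1).
    When f_pi is close to f_avg, the bracketed term vanishes and the
    coefficient is driven by mu / (2 * T_k) alone."""
    zeta = (mu + random.gauss(0.0, 1.0)
            * (1.0 - math.exp(f_pi - f_avg))) / (2.0 * T_k)
    return min(max(zeta, eps), 1.0 - eps)
```

The intent is that particles far below the population average (signs of local convergence) perturb the decay rate, so the schedule cools faster or slower depending on the current search state.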

2.4.3 Design of artificial immune optimization algorithm based on chaotic simulated annealing particle swarm parallel algorithm

Cross-antibody acceptability of the algorithm

Let the antibodies be a i  = {φ0, ⋯, φn − 1} and a j  = {r0, ⋯, rn − 1}, where n is the number of genes in an antibody. With crossover point k, the resulting new antibodies are:

$$ {T}_c\left({a}_i,{a}_j\right)=\left\{\begin{array}{l}{a}_i^{\prime }=\left\{{\varphi}_0,\cdots, {\varphi}_k\in {a}_i,{r}_{k+1},\cdots, {r}_{n-1}\in {a}_j\right\}\\ {}{a}_j^{\prime }=\left\{{r}_0,\cdots, {r}_k\in {a}_j,{\varphi}_{k+1},\cdots, {\varphi}_{n-1}\in {a}_i,\right\}\end{array}\right. $$
(14)

The sub-antibodies \( S=\left\{{a}_i^{\prime },{a}_j^{\prime}\right\} \) are accepted according to the Metropolis criterion of the simulated annealing algorithm: accept when min{1, exp((aff(S) − aff(F))/T k )} > r c , where r c is a random number, F = {a i , a j } is the parent pair, and T k is the kth annealing temperature.
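The single-point crossover of Eq. (14) and the annealed acceptance test can be sketched as follows; reading the garbled criterion as "accept when min{1, exp((aff_child − aff_parent)/T k )} exceeds a random number" is our interpretation:

```python
import math
import random

def crossover(ai, aj, k):
    """Eq. (14): exchange the gene segments after position k."""
    return ai[:k + 1] + aj[k + 1:], aj[:k + 1] + ai[k + 1:]

def metropolis_accept(aff_child, aff_parent, T_k, r=None):
    """Accept offspring with probability min{1, exp(d_aff / T_k)}: a child
    with higher affinity is always accepted; a worse one may still pass."""
    if r is None:
        r = random.random()
    return min(1.0, math.exp((aff_child - aff_parent) / T_k)) > r
```

At high temperatures nearly any offspring is accepted, preserving diversity; as T k decays, only affinity-improving offspring survive, so the population gradually focuses on the best regions.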

Variant antibody acceptability of the algorithm

Let the mutation point of antibody a i  = {φ0, ⋯, φn − 1} be k; the new antibody generated, with the mutated gene \( {\varphi}_k^{\prime } \) at position k, is:

$$ {T}_m\left({a}_i\right)={a}_i^{\prime }=\left\{{\varphi}_0,\cdots, {\varphi}_k^{\prime },{\varphi}_{k+1},\cdots, {\varphi}_{n-1}\right\} $$
(15)

The sub-antibody \( {a}_i^{\prime } \) is accepted according to the Metropolis criterion of the simulated annealing algorithm: accept when \( \min \left\{1,\exp \left(\left( aff\left({a}_i^{\prime}\right)- aff\left({a}_i\right)\right)/{T}_k\right)\right\}>{r}_m \), where r m is a random number and T k is the kth annealing temperature.

Improvement in selecting factor calculation method

Let the annealing selection probability of antibody a i  = {φ0, ⋯, φn − 1} be P(a i ); its selection factor δ(a i ) is calculated as:

$$ \delta \left({a}_i\right)=p\left({a}_i\right)\times \mid A(t)\mid $$
(16)

In the formula, |A(t)| is the size of the population A(t). Str(a i ) denotes the enhancement degree of antibody a i ; it is calculated as

$$ Str\left({a}_i\right)=\sum \limits_{j=1}^n{d}_{ij},{d}_{ij}=\left\{\begin{array}{ll}1& aff\left({a}_i\right)\ge aff\left({a}_j\right)\\ {}0& aff\left({a}_i\right)< aff\left({a}_j\right)\\ {}0& i=j\end{array}\right. $$
(17)
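A small sketch of Eqs. (16) and (17) under our reading (the enhancement degree counts the antibodies whose affinity a given antibody matches or beats, with the i = j term excluded):

```python
def strength(affs, i):
    """Eq. (17): Str(a_i) = number of antibodies j != i with
    aff(a_i) >= aff(a_j)."""
    return sum(1 for j in range(len(affs)) if j != i and affs[i] >= affs[j])

def selection_factor(p_i, population_size):
    """Eq. (16): delta(a_i) = P(a_i) * |A(t)|."""
    return p_i * population_size
```

An antibody that dominates most of the population thus receives a high enhancement degree, while the selection factor rescales annealing selection probabilities by the population size.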


To compare the performance of the improved artificial immune algorithm (MAIA) with the standard artificial immune algorithm (AIA), typical nonlinear benchmark functions are selected. The test functions are defined as follows:

$$ {\displaystyle \begin{array}{r}{f}_1(x)=\exp \left(-{\left({x}_1-3\right)}^2-{\left({x}_2-5\right)}^2\right)\\ {}+\exp \left(-{x}_1^2-{x}_2^2\right),\left|x\right|\le 10\end{array}} $$
(18)
$$ {\displaystyle \begin{array}{r}{f}_2(x)=\exp \left(-{\left({x}_1-4\right)}^2-{\left({x}_2-4\right)}^2\right)+\exp \left(-{\left({x}_1+4\right)}^2-{\left({x}_2-4\right)}^2\right)\\ {}+2\exp \left(-{x}_1^2-{\left({x}_2+4\right)}^2\right)+2\exp \left(-{x}_1^2-{x}_2^2\right),\left|x\right|\le 5\end{array}} $$
(19)
$$ {f}_3(x)={\sum}_{i=1}^n\left({x}_i^2-10\cos \left(2\pi {x}_i\right)+10\right),\left|{x}_i\right|\le 5.12 $$
(20)
$$ {f}_4(x)=-{e}^{-{\left(x-4\right)}^2-{\left(y-4\right)}^2}-{e}^{-{\left(x+4\right)}^2-{\left(y-4\right)}^2}-2{e}^{-{x}^2-{\left(y+4\right)}^2}-2{e}^{-{x}^2-{y}^2} $$
(21)
$$ {f}_5(x)=-0.5+\frac{\sin \sqrt{x^2+{y}^2}-0.5}{{\left(1.0+0.001\left({x}^2+{y}^2\right)\right)}^2} $$
(22)
$$ {f}_6(x)=\sum \limits_{i=1}^N\left[100{\left({x}_{i+1}-{x_i}^2\right)}^2+{\left({x}_i-1\right)}^2\right] $$
(23)
$$ {f}_7(x)=\sum \limits_{i=1}^n\left({x_i}^2-10\cos \left(2\pi {x}_i\right)+10\right) $$
(24)

The above seven standard test functions have different shapes and can test the performance of the algorithm comprehensively. f 1 ~f 3 are usually used to measure the convergence speed of the algorithm. From the convergence behavior on f 1 ~f 3 , it can be concluded that, compared with the AIA, the MAIA avoids being trapped in local optima and has stronger global search capability, which demonstrates the feasibility of the improvement.

The functions f 4 ~f 7 are complex nonlinear multi-peak functions with a large number of local extrema and are usually used to measure the population diversity and global search performance of the algorithm. From the tests on f 4 ~f 7 , it can be concluded that, compared with the AIA, the MAIA needs fewer evolution iterations, better overcomes the AIA's tendency to fall into local optima and converge prematurely, and achieves higher convergence accuracy and faster convergence (Fig. 3).

Fig. 3 Comparison of performance before and after improvement of the artificial immune algorithm. a Convergence curves of f 1 . b Convergence curves of f 2 . c Convergence curves of f 3 . d Performance comparison of f 4 . e Performance comparison of f 5 . f Performance comparison of f 6 . g Performance comparison of f 7

3 Improved application of artificial immune algorithm in FJSP

3.1 Simulation parameters

To test the performance of the improved artificial immune algorithm on the FJSP, four FJSP instances are selected:

  (1) 4 workpieces processed on 6 machines.

  (2) 8 workpieces processed on 10 machines.

  (3) 12 workpieces processed on 9 machines.

  (4) 12 workpieces processed on 10 machines.

The application examples use mixed C/C++ and MATLAB programming: MATLAB calls MEX program files written in C to run the simulation experiments.

3.2 Results and analysis

3.2.1 Effectiveness analysis of the improved artificial immune algorithm

The results of the application examples solved with the improved artificial immune algorithm are shown in Fig. 4. As Fig. 4 shows, the improved artificial immune algorithm obtains good flexible job shop schedules, which demonstrates its effectiveness on flexible job shop scheduling problems.

Fig. 4 FJSP solutions of the improved artificial immune algorithm. a 4 workpieces, 6 machines (4 × 6). b 8 workpieces, 10 machines (8 × 10). c 12 workpieces, 9 machines (12 × 9). d 12 workpieces, 10 machines (12 × 10)

3.2.2 Analysis of the advantages of the improved artificial immune algorithm

To test the superiority of the improved artificial immune algorithm, the classical FJSP algorithms of the literature [14,15,16] were selected for comparison. Each experiment was run 100 times, and the number of runs in which each algorithm found the optimal solution of the flexible job shop scheduling problem, together with the number of iterations required, is shown in Table 1. Analysis of Table 1 shows that for the smaller instances, all algorithms have a relatively high success rate and a relatively small number of iterations. For the larger instances, such as 12 workpieces on 10 machines, the success rate of the improved artificial immune algorithm is much higher than that of the classical algorithms, and its number of iterations remains relatively small. The algorithm thus improves the efficiency of solving the flexible job shop scheduling problem and demonstrates its superiority on the FJSP.

Table 1 Performance comparison with current classical algorithms

4 Conclusions

The FJSP is a major topic in current production practice; it has many constraints, and traditional intelligent algorithms have their own defects. This study proposes an artificial immune algorithm improved with chaotic simulated annealing particle swarm optimization and applies it to the flexible job shop scheduling problem. The experimental results show that the improved algorithm's optimization ability and efficiency are better than those of the standard artificial immune algorithm, yielding better solutions to the flexible job shop scheduling problem; compared with the current typical FJSP solution algorithms, the improved artificial immune algorithm has obvious advantages. How to introduce still better intelligent algorithms to obtain better flexible job shop schedules is the subject of future work.