Bi-objective scheduling algorithm for scientific workflows on cloud computing platform with makespan and monetary cost minimization approach

Scheduling scientific workflows on a hybrid cloud architecture, which combines private and public clouds, is a challenging task because schedulers must account for task inter-dependencies, underlying heterogeneity, cost diversity, and variable virtual machine (VM) configurations during the scheduling process. On one side, minimizing the total execution time, or makespan, is desirable for users; on the other side, the cost of utilizing quicker VMs may conflict with their budget. Existing works in the literature mainly focus on makespan and scarcely consider VMs' monetary cost in the scheduling process. Therefore, in this paper, the problem of scheduling scientific workflows on a hybrid cloud architecture is formulated as a bi-objective optimization problem that minimizes both makespan and monetary cost. To address this combinatorial discrete problem, this paper presents a hybrid bi-objective optimization algorithm based on simulated annealing and task duplication (BOSA-TDA) that exploits two important heuristics, heterogeneous earliest finish time (HEFT) and duplication, to improve canonical SA. Extensive simulation results from running different well-known scientific workflows such as LIGO, SIPHT, CyberShake, Montage, and Epigenomics demonstrate that the proposed BOSA-TDA achieves average improvements of 12.5%, 14.5%, 17%, 13.5%, and 18.5% over existing approaches in terms of makespan, monetary cost, speedup, SLR, and efficiency, respectively.


Introduction
Recently, information technology (IT) has undergone a revolution. In this line, cloud computing has attracted great attention in both industry and research communities because of its pervasiveness, elasticity, and economy of scale [1]. Meanwhile, cloud computing is an attractive option for both individuals and organizations that do not have any exact resource usage pattern [2,3]. For instance, in the garment industry around Valentine's Day or Christmas, private cloud owners can exploit the public cloud to cover their sporadic bursts of resource demand instead of proactive resource procurement. The cloud has a wide range of applications, from business to academic projects. One of its abundant academic uses is scientific workflow scheduling. Workflows comprise a set of tasks with different sizes, characteristics, and data-dependency control flow between sub-tasks [4]; they are comprehensive and complicated computational tools. Workflows such as LIGO [5,6], SIPHT [5,6], Epigenomics [5,6], CyberShake [5,6], etc., which are modeled in the form of directed acyclic graphs (DAGs), are popular paradigms in both industry and science [7]. Suppose a university that has its own private datacenter intends to execute such scientific workflows and requires more storage and computing resources during this process. It can then engage the public cloud to form a hybrid cloud architecture. Note that the hybrid cloud is deemed a single entity from the user's perspective. One of the most important issues in the execution of workflows on cloud infrastructure is to schedule tasks and to allocate resources to these types of projects efficiently so that the maximum completion time of the last task, the so-called makespan, is minimized [8].
In this regard, schedulers encounter several challenges such as being aware of tasks inter-dependencies, underlying infrastructure heterogeneity, difference in VMs speed and pricing schemes, data transfer time on network channels, etc.
Several works have been published in the literature that solve workflow scheduling on cloud infrastructure from total-execution-time-reduction, energy-efficiency, or reliability-maximization viewpoints, but they pay less attention to monetary cost, which may conflict strongly with users' budgets. For instance, a load-aware heuristic strategy for dynamic workload and service scheduling in a cloud environment has been proposed by Lu et al. [9]. The main objective of the proposed algorithm was to improve execution performance; it was then validated by a series of experiments. A novel hybrid discrete particle swarm optimization (HDPSO) algorithm was proposed to reduce maximum workflow execution time on heterogeneous cloud platforms [1]. To do so, the problem was formulated as a single-objective optimization problem. Although it showed great improvement, it did not consider VMs' monetary costs. An energy-aware workflow scheduling algorithm has been presented with the aim of datacenter power management while honoring users' service level agreements (SLAs) in [10]. Utrera et al. [11] proposed an efficient algorithm that balances imbalanced parallelizable programs on spare nodes to maximize resource utilization. Although it improves infrastructure resource utilization by packing tasks on the same processor, it does not consider user requirements, even though users are among the most important stakeholders in the system. A multi-objective workflow scheduling optimization targeting execution time and energy efficiency has been propounded by Durilo et al. [12]. They applied the HEFT method as a list-scheduling algorithm, which has two important phases: first, it constructs an ordered list of tasks guaranteeing topological order and dependency constraints; second, it picks the task with the highest priority and maps it to the processor that finishes its execution in the quickest possible time.
The suggested work seems unsuitable for users with tight budget constraints. Since workflows are modeled as DAGs and dependencies exist between tasks, data transfer between tasks worsens execution time, network traffic, and monetary costs as well. The duplication technique may therefore decrease network bandwidth usage and can also improve parallel paths and the degree of parallelism [13]. Specifically, it is a promising technique for communication-intensive DAGs that have a high communication-to-computation ratio (CCR). Qi Tang et al. outlined task scheduling on a homogeneous platform using the duplication technique [14]. The outcome of their design was promising, but duplication incurs additional monetary cost because the scheduler must rent a couple of VMs instead of one. Moreover, their algorithm did not take into account any limitation on the number of allowable task duplications.
Reviews of published works on workflow scheduling on cloud platforms reveal a big gap in the literature regarding the user's monetary cost budget alongside the makespan minimization perspective. The most important innovation of the current paper is that it formulates the workflow scheduling problem on cloud platforms with both makespan and monetary cost viewpoints. This is a bi-objective optimization problem under constraints, which is NP-Hard. Since it is a discrete optimization problem, the simulated annealing (SA) algorithm is utilized, as it is well suited to discrete search spaces. However, canonical SA is a point-wise meta-heuristic; it cannot explore the search space efficiently. For this reason, a new population-based version of SA is presented; it is obtained by defining new operators and applying the crowding distance concept to form a bi-objective version of SA (BOSA). Also, to reach concrete results, the proposed algorithm is combined with the HEFT approach. In this way, it can explore the search space efficiently and produce a Pareto set for the potentially conflicting objectives. The main contributions of the paper are as follows:

1. To present a new duplication-based list scheduler.
2. To present a pricing model for VM deployment in a cloud computing environment.
3. To formalize the workflow scheduling problem on heterogeneous cloud platforms as a bi-objective optimization problem with makespan and monetary cost optimization viewpoints.
4. To present a bi-objective optimization based on simulated annealing task duplication scheduling algorithm (BOSA-TDA), along with new operators, to solve the stated discrete bi-objective optimization problem.

The rest of the paper is organized as follows. "Related works" classifies task scheduling algorithms in the form of related works. "Problem background" presents the problem background. Several proposed models are outlined in "System, application, scheduling, and pricing models". "Problem statement" states the problem formulation. Our proposed BOSA-TDA algorithm is elaborated in "Proposed bi-objective optimization based on simulated annealing task duplication scheduling algorithm (BOSA-TDA)". To validate the current work, "Simulation and evaluation" is dedicated to simulation and evaluation. Finally, "Conclusion and future work" concludes this article along with future work directions.

[Fig. 1: Categories of task scheduling algorithms: list-based, heuristic (clustering-based, duplication-based), and meta-heuristic]

Related works
Task scheduling is an important concept in all computational domains, especially when it is subject to resource constraints; this is the reason it has a long history. Figure 1 shows the categories of task scheduling algorithms in the literature. A review of the literature reveals that traditional task scheduling research has focused on list scheduling algorithms, chief among them heterogeneous earliest finish time (HEFT) [19] and critical path on a processor (CPOP) [19]. The basic idea of the HEFT version of list scheduling is to order a list of tasks by assigning a priority to each one. Tasks are selected according to the assigned priority, and the ready task with the highest priority is removed from the task list and assigned to the available virtual machine that guarantees the earliest finish time (EFT). In this category, another list scheduler, CPOP, maps each task on the critical path to the fastest VMs/processors in heterogeneous parallel platforms, whereas the other tasks are mapped to VMs based on the EFT concept. Although these two list schedulers were promising techniques in the early era of scheduling, several improvements on them have been published to enhance schedulers' performance. In this regard, extensions of list schedulers such as CCP, CEFT, RHEFT, DHEFT, and PEFT have been customized for cloud environments [54][55][56]. For instance, robust HEFT (RHEFT) and distributed HEFT (DHEFT) were developed to embed users' quality of service (QoS) requirements in the model apart from makespan [56]. A cost-effective fault-tolerance (CEFT) scheduling algorithm for real-time tasks in cloud environments was presented in [21]. This scheduling algorithm is applied in cloud environments with permanent or transient failures. The simulation results show that CEFT achieves a promising balance between low cost and deadline guarantees in real-time cloud systems.
In addition, a novel list scheduler algorithm that combines machine learning techniques and HEFT (known as QL-HEFT) was presented in [22]. The QL-HEFT scheduler utilizes the upward ranking values from HEFT as rewards in the Q-learning process. After an ordered list is produced by the Q-learning process, QL-HEFT applies the earliest-finish-time procedure to schedule the highest-priority task in the list on the fastest VM that returns the optimal result. QL-HEFT was compared with three classic list schedulers, the upward, downward, and CPOP approaches presented in [19]. The simulation results proved the superiority of QL-HEFT over its counterparts in terms of average response time. Also, Arabnejad and Barbosa [54] presented a novel list scheduler, predictable earliest finish time (PEFT), by introducing a look-ahead feature based on the computation of an optimistic cost table. It preserves time complexity compared with other existing approaches, but it fails to consider the cost of rented VMs. Some other famous list schedulers are Highest Level First with Estimated Times (HLFET) [15], Modified Critical Path (MCP) [16], Dynamic Critical Path (DCP) [17], Dynamic Level Scheduling (DLS) [18], and Longest Dynamic Critical Path (LDCP) [20], whose concentration is mostly on critical path management of given DAGs.
Recently, heuristic-based approaches have become popular besides list schedulers. A heuristic-based algorithm normally finds a near-optimal solution in polynomial time. It searches for a path in the solution space at the expense of ignoring some possible trajectories [34]. Clustering and duplication are two prominent heuristic-based task scheduling techniques in parallel systems [13,25,[28][29][30][57][58][59][60][61][62]. Heuristic-based scheduling algorithms can be classified into cluster-based schedulers such as [28,57,58] and task-duplication-based schedulers such as [13,25,29,30,[59][60][61][62]. In the former method, the scheduler reduces communication costs by grouping highly communication-intensive dependent tasks into a cluster and mapping the clustered tasks onto the same VM/processor, whereas in the latter approach, the duplication-based scheduler increases the degree of parallelism by executing a key subtask on more than one processor. Lin et al. [13] and Mishra et al. [28] utilized clustering and duplication techniques in task scheduling problems. In [13], the authors generate new graphs from the input DAG by utilizing clustering, duplication, and replication methods. The main objective is to minimize makespan subject to keeping throughput and utilization at appropriate levels; then, the newly generated graph that optimizes the objective function and meets the problem constraints is selected as the final solution. This work was validated on both real datasets and random task graphs, which proved its superiority over some comparative algorithms. However, these heuristics are not appropriate for platforms with a limited number of parallel VMs/processors [1,8]. In addition, the duplication method was applied to task scheduling on homogeneous platforms in [14].
The outcome of their design was promising in makespan reduction, but their algorithm did not limit the number of allowable task duplications, even though duplication burdens users with more monetary charges. Several heuristics have been devised to solve the scheduling of Bag-of-Tasks applications on hybrid clouds under due date constraints in [63]. That paper's aim was to optimize a total cost function comprising tardiness penalties and public cloud usage cost. Clustering and Scheduling System II (CASS II) has been presented to improve scheduling performance. To do so, CASS II groups the tasks on the critical path into a cluster. Then, it assigns this cluster to the fastest available processor without applying any duplication technique [23]. Another duplication scheduling heuristic is discussed in an extended report published by Oregon State University [27].
For instance, a shuffled genetic-based algorithm has been presented for task scheduling [8]. In its initial population, two individuals are filled by the upward and downward ranking algorithms and a third individual is filled by level ranking, which is drawn from the HEFT approach; the rest of the population is then created by shuffling these individuals to produce feasible chromosomes. A similar approach was taken by a hybrid discrete particle swarm optimization (HDPSO) algorithm that produces its initial swarm following two proved theorems [1]; it is then randomly combined with the hill-climbing method to strike a good balance between exploration and exploitation in the search space. Both models formulated the scheduling problem as single-objective optimization from a makespan-reduction viewpoint. A multi-objective workflow scheduling optimization targeting execution time and energy efficiency has been propounded by Durilo et al. [12]. Although it improves makespan and power consumption at the same time, it is not suitable for users with tight budget constraints. Another bi-objective task scheduling optimization, maximizing reliability and minimizing energy, has been propounded by Zhang et al. [77]. This bi-objective HEFT (BOHEFT) scheduler weights system reliability more than performance metrics and maps tasks on heterogeneous VMs until low energy consumption and high reliability are simultaneously achieved. However, this algorithm does not take makespan or the cost of utilized VMs into consideration. Since the task scheduling problem is a discrete optimization problem, the simulated annealing (SA) algorithm seems to be an efficient approach for reaching the global optimum in discrete spaces [78]. Several versions of SA have been developed to tackle different scheduling problems [72,73,79]. As the SA algorithm is a point-wise optimization approach, it has two basic drawbacks.
First, it cannot explore the search space as efficiently as population-based evolutionary algorithms such as GA because it cannot generate a handful of candidate solutions at once. Second, it is hard to customize point-wise SA for multi-objective optimization problems. In [37], a hybrid genetic and simulated annealing algorithm (GASA) was presented to solve the scheduling problem in a cloud environment. This work is based on a list scheduler, but to generate a handful of promising lists it utilizes the GA algorithm along with its strong crossover operator; it then uses SA operators to improve the obtained solutions. Also, in [38], another hybrid genetic and thermodynamic simulated annealing algorithm (GATSA) was proposed to solve workflow scheduling in a cloud environment from a makespan-minimization viewpoint. The proposed GATSA uses thermodynamic laws to decrease the temperature gradually and variably in the cooling phase. To this end, it applies a variable cooling amount based on the fitness discrepancy between each pair of consecutive solutions, which is neglected in canonical SA. Simulations conducted in different circumstances proved the dominance of GATSA over its counterparts in terms of scheduling evaluation metrics. In this line, a min-max ant colony optimization algorithm has been presented in the literature to solve job scheduling in grid computing systems [50].
An overall review of the literature on workflow scheduling reveals a clear lack of workflow scheduling algorithms that optimize the equally important makespan and monetary cost functions together. In this line, developing an intrinsically discrete-natured meta-heuristic algorithm such as SA can efficiently explore the discrete search space. To solve the discrete bi-objective workflow scheduling problem, this paper extends a hybrid population-based bi-objective optimization algorithm based on simulated annealing and task duplication scheduling techniques in such a way that it covers the aforementioned shortcomings.

Problem background
In this section, a few concepts from the multi-objective optimization theory and canonical SA for a better understanding of this work are succinctly introduced.

Multi-objective optimization and crowding distance concepts
A multi-objective optimization problem is an issue that has several conflicting objectives which need to be optimized simultaneously. Without loss of generality, Eq. (1) outlines a multi-objective optimization problem with minimization inclination [80].
An element x* = (x1, x2, …, xN) ∈ X is called an N-dimensional feasible solution or a feasible decision. A vector z* = F(x*) ∈ R^k for a feasible solution x* is called an objective vector or an outcome. In multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously. Therefore, attention is paid to non-dominated solutions (or Pareto optimal solutions), which cannot be improved in any objective without worsening at least one other objective. In mathematical terms, a feasible solution x1 is said to dominate another feasible solution x2 if fi(x1) ≤ fi(x2) for every objective i and fj(x1) < fj(x2) for at least one objective j. In the multi-objective domain, the two concepts of convergence and diversity are very important issues that differentiate multi-objective optimization algorithms in terms of performance. For instance, the famous NSGA-II algorithm introduces two effective selection criteria, namely Pareto non-dominated sorting and crowding distance, to guide the search toward the optimal front [81]. Pareto non-dominated sorting divides the individuals into several ranked non-dominated fronts according to their dominance relations. The crowding distance estimates the density of individuals in a population; it is beneficial when the algorithm encounters memory size limitations. A multi-objective algorithm prefers two kinds of individuals: (1) individuals with lower rank and (2) individuals with larger crowding distance when their rank is the same [81]. For the latter criterion, the crowding distance defined in [82] is employed. Our criterion is to prefer solutions with lower rank and higher crowding distance; a higher crowding distance value means the solution set is drawn from a broader area. Finally, the non-dominated solutions are returned.
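As a minimal sketch of these two selection criteria for a minimization problem (the function names are ours, not the paper's), Pareto dominance and the NSGA-II crowding distance can be written as:

```python
from typing import List

def dominates(a: List[float], b: List[float]) -> bool:
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front: List[List[float]]) -> List[float]:
    """NSGA-II crowding distance for objective vectors in one front.
    Boundary solutions get infinite distance so they are always kept."""
    n = len(front)
    if n == 0:
        return []
    dist = [0.0] * n
    for m in range(len(front[0])):          # one pass per objective
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][m] - front[order[0]][m]
        if span == 0:
            continue
        for j in range(1, n - 1):           # normalized gap to the neighbors
            dist[order[j]] += (front[order[j + 1]][m] - front[order[j - 1]][m]) / span
    return dist
```

Solutions on the edge of a front receive infinite distance, so diversity pressure never discards the extreme trade-offs between makespan and cost.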

Simulated annealing (SA)
Simulated annealing is one of the most popular meta-heuristics inspired by the natural world. In the case of simulated annealing, this inspiration comes from the behavior of fluids subjected to controlled cooling, as in the production of large crystals. Simulated annealing for combinatorial optimization was introduced by Ref. [78] and independently by Ref. [83]. Unlike other meta-heuristic approaches with an evolutionary trend, SA also examines worse solutions apart from good ones, because in this way it escapes getting stuck in a local optimum trap. During the annealing process, if a better solution is found, it is accepted; a worse solution is accepted with a certain probability [84,85]. This probability can be tuned so that acceptance of worse solutions happens mostly in the early phase of the algorithm, and as the algorithm approaches the end, the probability of accepting worse solutions is near 0. In other words, once SA has become familiar with the search space, it behaves like other evolutionary algorithms in the last epochs; namely, it accepts only better solutions. SA has one plus point and one negative point. The plus point is its flexibility for discrete optimization such as task scheduling problems, while the negative point relates to its nature as a point-wise algorithm. This is the reason a population-based version of SA is presented here to obtain better results; as a matter of fact, the final results support this idea.
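The acceptance rule described above can be sketched as follows; this is a minimal illustration of canonical SA's Metropolis criterion with a geometric cooling schedule, not the paper's exact implementation:

```python
import math
import random

def sa_accept(delta: float, temperature: float, rng: random.Random) -> bool:
    """Metropolis acceptance rule of canonical SA (minimization): always
    accept an improving move (delta <= 0); accept a worsening move with
    probability exp(-delta / T), which approaches 0 as T cools."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

# Geometric cooling: at high T worse moves pass often, at low T almost never.
T, alpha = 100.0, 0.95
for _ in range(10):
    T *= alpha  # T shrinks each epoch, so acceptance tightens over time
```

With this schedule, the same worsening move that is frequently accepted at T = 100 is almost surely rejected once T is near zero, which is exactly the "only better solutions at the last epochs" behavior described above.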

System, application, scheduling, and pricing models
To present the proposed algorithm, some models are introduced for a better understanding. This paper also presents mathematical optimization models. The nomenclature tabulated in Table 2 and applied throughout makes the paper easy to follow.

System model
In this section, the system model under study, which executes DAG applications, is introduced. The system contains a set of L different heterogeneous VMs interconnected with high-speed networks, namely VMset = {VM^Pr_1, VM^Pr_2, …, VM^Pr_k, VM^Pu_{k+1}, …, VM^Pu_{k+m}}, where k + m = L. In this model, k of the L VMs make up a private cloud whereas m of the L VMs make up a public cloud. Here, there is no difference between VMs of private and public clouds provided the underlying network is high speed. For the sake of simplicity, a heterogeneous cloud platform with L different integrated VMs is assumed, deemed a single entity by users, with heterogeneity in processor architecture, speed, and pricing schemes. For the pricing model, the private cloud is considered on-demand whereas the public cloud is charged on a charge-period basis (cf. Eqs. (17)-(21)). Moreover, the processing power in terms of MIPS and the monetary cost in terms of $/hour associated with each VM are variable and determined in advance. Such a system is depicted in Fig. 2.
In this model, the front-end layer is the user layer in which the user submits his/her request. The scheduling layer contains the resource manager, job scheduler, DAG maker, and task scheduler, which pays attention to the user's QoS request and cost budget. In the back end, both private and public clouds exist, making a hybrid architecture. The main concentration of the current paper is on task scheduling in a hybrid architecture. For simplicity, a uniform high-speed network is considered in which all VMs can communicate with each other with the same bandwidth BW. So,

Application model
Each workflow application is modeled as a directed acyclic graph (DAG) G = (T, E). Each node in the set of tasks T = {t1, t2, …, tn} of a DAG represents a task. Also, a DAG has two special nodes, t_entry and t_exit, that have no predecessor and no successor nodes, respectively. The set of edges E in the graph, where e(t_i, t_j) ∈ E, represents the dependency between tasks t_i and t_j. That is a precedence constraint indicating that task t_j can start its execution only after completion of task t_i. The set {t_j ∈ T : e(t_j, t_i) ∈ E} of all immediate predecessors of t_i is referred to as Pred(t_i), and the set {t_j ∈ T : e(t_i, t_j) ∈ E} of all immediate successors of t_i is referred to as Succ(t_i). A task without any predecessor is called an entry task, i.e., Pred(t_i) = ∅, and a task without any successor is called an exit task, i.e., Succ(t_i) = ∅. The size of task t_i and the weight assigned to edge e(t_i, t_j) for computation and communication are represented by Size(t_i) and e(t_i, t_j), respectively. The execution time of task t_i on VM_k is calculated via Eq. (3). Besides, the average execution time of task t_i on this heterogeneous platform is obtained via Eq. (4), where ES(VM_k) and nP are the execution speed of VM_k in terms of MIPS and the number of virtual processors in the system, respectively. Moreover, the data transfer time (TT) between each pair of VMs, where tasks t_i and t_j run on different virtual machines VM_k and VM_l connected via the common bandwidth BW, can be calculated via Eq. (5), in which the amount of data being transferred and the common bandwidth determine the TT parameter [86].
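Under these definitions, Eqs. (3)-(5) amount to the following sketch; the function and parameter names are ours, chosen for illustration:

```python
def execution_time(task_size: float, vm_speed_mips: float) -> float:
    """Eq. (3): execution time of a task of `task_size` (million
    instructions) on a VM with speed `vm_speed_mips` (MIPS)."""
    return task_size / vm_speed_mips

def average_execution_time(task_size: float, vm_speeds: list) -> float:
    """Eq. (4): average execution time of the task over all nP
    virtual processors of the heterogeneous platform."""
    return sum(task_size / s for s in vm_speeds) / len(vm_speeds)

def transfer_time(edge_weight: float, bandwidth: float, same_vm: bool) -> float:
    """Eq. (5): data transfer time over the uniform network; zero when
    both endpoint tasks run on the same VM."""
    return 0.0 if same_vm else edge_weight / bandwidth
```

For example, a 1000-MI task on a 500-MIPS VM takes 2 time units, and a 100-unit edge over a 50-unit bandwidth adds 2 time units of transfer unless both tasks share a VM.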
This algorithm is evaluated using synthetic data from five real-world scientific workflow applications: Montage (generation of image mosaics of the sky), Epigenomics (mapping of the epigenetic state of human cells), SIPHT (a bioinformatics project conducting a wide search for small untranslated RNAs in bacteria), CyberShake (generating seismic hazard maps for earthquake detection), and LIGO (detection of gravitational waves in the universe) [5].

Scheduling model and duplication technique
The scheduling model determines which task is assigned to which type of VM with regard to the objective functions. The scheduling model that this paper applies is a list scheduler with two important phases: in the first phase, it provides a list of tasks ordered by priority weight, whereas in the next phase it picks a high-priority task and assigns it to the available VM that guarantees the earliest finish time, not the earliest start time. For the first phase, three approaches, the upward, downward, and level rankings of the HEFT algorithm, are applied with the incorporation of the duplication method if necessary. For VM selection, two functions are engaged: Earliest Finish Time (EFT) and Earliest Start Time (EST). The first function indicates the earliest time at which a virtual machine VM^pr/pu_k can finish the execution of subtask t_i, whereas the second indicates the earliest time at which that execution can start. To do so, two well-known functions, downward ranking Rankd(.) and upward ranking Ranku(.), are applied. The former starts from the entry task and proceeds to the exit task, weighing each task with a priority, whereas the latter starts from the exit task and proceeds to the entry task. Both have recursive behavior. The downward rank of a task t_i is recursively calculated by Eq. (6), and its value is zero for the entry task, as Eq. (7) indicates.
Here, Pred(t_i) is the set of immediate predecessors of task t_i, e(t_j, t_i) is the average communication cost of edge e(t_j, t_i), and ET(t_j) is the average computation cost of task t_j. Rankd(t_i) is the longest distance from the entry task to the task t_i, excluding the computation cost of the task itself [19]. Similarly, the upward rank of a task t_i is recursively defined by Eq. (8) [19]. Also, for creating a sequence of tasks ordered by upward ranking, the upward ranking value of each task t_i, except for the exit task, is calculated via Eq. (8), whereas the upward value of the exit task is set to its average computation cost, as Eq. (9) shows.

[Fig. 4: An example of a DAG application with 10 tasks [97]]
[Table 3: Available VMs, their execution times, and monetary costs for the workflow depicted in Fig. 4]
The term Succ(t_i) is the set of immediate successors of task t_i. In addition, Ranku(t_i) is the length of the critical path from task t_i to the exit task t_exit, including the computation cost of the task t_i. The third heuristic is the level ranking approach, which assigns a level number to each task. The entry task has level 0, while for other tasks the level is recursively calculated by Eq. (10). The tasks are then sorted by their level ranking in increasing order.
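The two recursive rankings (Eqs. (6)-(9)) can be sketched on a DAG given as adjacency maps; the helper names and data layout below are ours, not the paper's:

```python
def compute_ranks(tasks, succ, pred, avg_et, avg_comm):
    """Upward rank (Eqs. 8-9): avg_et[t] plus the costliest successor path.
    Downward rank (Eqs. 6-7): longest distance from the entry task to t,
    excluding t's own computation cost (0 for the entry task)."""
    rank_u, rank_d = {}, {}

    def ranku(t):
        if t not in rank_u:
            rank_u[t] = avg_et[t] + max(
                (avg_comm[(t, s)] + ranku(s) for s in succ.get(t, [])),
                default=0.0)
        return rank_u[t]

    def rankd(t):
        if t not in rank_d:
            rank_d[t] = max(
                (rankd(p) + avg_et[p] + avg_comm[(p, t)] for p in pred.get(t, [])),
                default=0.0)
        return rank_d[t]

    for t in tasks:
        ranku(t)
        rankd(t)
    return rank_u, rank_d

# Toy diamond DAG: t1 -> {t2, t3} -> t4, with unit costs everywhere.
tasks = ["t1", "t2", "t3", "t4"]
succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"]}
pred = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
avg_et = {t: 1.0 for t in tasks}
avg_comm = {e: 1.0 for e in [("t1", "t2"), ("t1", "t3"), ("t2", "t4"), ("t3", "t4")]}
ru, rd = compute_ranks(tasks, succ, pred, avg_et, avg_comm)
```

On this toy DAG, Ranku(t1) = 5 (the critical-path length including t1's own cost) and Rankd(t4) = 4 (the longest distance from the entry task, excluding t4's own cost), matching the definitions above.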
In addition, in this paper, task scheduling performance is improved by utilizing the duplication technique which doubles critical tasks on different VMs because this technique enhances running parallelism. On the other hand, duplication may shorten the time interval in which VMs are at service; therefore, it can potentially decline monetary cost. Note that, duplication technique intrinsically leads to charge for a couple of VMs rent instead of one VM, so, there exists a clear conflict between makespan reduction and monetary costs. This is the reason this issue is formulated as a bi-objective optimization problem by applying the dominance concept for finding solutions that compromise between objectives. To apply the duplication technique to an ordered list, a random number is dedicated to each task t i which has the minimum value, min 1 and the maximum value, max min {count{VMs}, max{count(Pred{t i }), count(Succ{t i })}}; these numbers are determined based on the number of VMs, the number of predecessors, and the number of successors associated with each task. Then, it can be balanced in the proposed enhanced simulated annealing process. After an ordered list is prepared by any ranking heuristic; then, the duplication technique is applied. Hereafter, each duplicated task is treated the same as the original task in the list. To apply the VM selection phase, new EST (t i , VM k ) is defined which is used to show the last time the task t i whether original or duplicated task can wait for execution on VM k . If t i is an entry task in a DAG or it is the first task must be assigned in the VM k s task list that the V M k .List ∅ shows the case, the EST (t i , VM k ) is equal to the boot-up time of VM k , shown by BUT(VM k ); otherwise, the term EST (t i , VM k ) is calculated with Eq. (11). In this equation, t γ j is γ-th duplication of task t j that is a member of the predecessor of t i , otherwise task t j is an original predecessor of task t i . 
Note that both the fastest duplicated task and the slowest original task among the predecessors of task t_i should be taken into consideration. In addition, the function Avail(VM_k) indicates the time at which this VM's last task has finished and the VM becomes available for a new task.
When a new virtual machine VM_k must be started before task scheduling can proceed, the machine needs to be booted up in the system; the function BUT(VM_k) measures this boot-up time. The overhead is negligible for long-term scheduling, but it becomes a problem when a VM is launched unnecessarily: a scheduling algorithm may shut down a running VM to save cost and then have to restart it for a later task, and the overhead of launching the new instance may not justify the saving. The time given by BUT(VM_k) takes effect at the initial run of the virtual machine and also affects the monetary cost [87].
The Earliest Finish Time, EFT(t_i, VM_k), of each task t_i, whether duplicated or original, on virtual machine VM_k is calculated by adding EST(t_i, VM_k) and ET(t_i, VM_k), as Eq. (12) shows.
The function AFT(t_i) in Eq. (11) gives the actual finish time of task t_i, which is measured via Eq. (13) together with Eq. (15). The term VM_index, used in calculating AST(t_i, VM_index) according to Eq. (14), indicates the VM on which task t_i is actually started; AFT(.) is then measured via Eq. (15).
In this paper, the first objective function is minimization of the makespan, which is determined by Eq. (16).
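The two finish-time equations above reduce to simple arithmetic; a minimal sketch, where the schedule dict stands in for the paper's AFT(.) values:

```python
def eft(est, et):
    """Eq. (12): EFT(t_i, VM_k) = EST(t_i, VM_k) + ET(t_i, VM_k)."""
    return est + et

def makespan(aft):
    """Eq. (16)-style objective: the makespan is the largest actual
    finish time over all scheduled tasks."""
    return max(aft.values())
```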

Pricing model
Cloud computing follows the pay-per-use pricing model, which means users are charged for a whole time interval even if they use only a fraction of it. Thus, in the proposed model, each leased instance VM_k is charged per hourly interval [88-93]. Infrastructure-as-a-service (IaaS) providers offer virtual machine instances from the set of available VMs to their clients. Each VM has a different configuration, such as memory size, CPU type, and cost per time unit, and faster VMs are costlier than slower ones. The function EUnit(VM_k) gives the monetary price that must be paid per unit of execution time on VM_k, and EC(t_i, VM_k) gives the fee to be paid for executing t_i on VM_k according to Eq. (17), similar to the literature in [88,94,95]. In Eq. (17), EFT(t_i, VM_k) indicates the time interval needed to execute task t_i on VM_k, obtained via Eq. (12). The ceiling is used to round up the execution time because VMs are billed per unit of time in the cloud environment. For instance, an a1.medium instance from Amazon EC2, with the attribute vector a1.medium (1 vCPU, 2 GiB memory), has an on-demand hourly rate of $0.0255/h [89], so the EUnit(VM_k) value is $0.0255/hour. If one deploys such an on-demand VM for 20 h and 35 min, he/she must pay for 21 h; consequently, the bill is $0.5355.
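The charge computation above is a ceiling-then-multiply rule; a minimal sketch reproducing the paper's a1.medium example (the function name is ours):

```python
import math

def execution_cost(exec_time_hours, eunit_per_hour):
    """Eq. (17)-style charge: billable time is rounded up to whole
    charge units (hours here), then multiplied by the unit rate."""
    return math.ceil(exec_time_hours) * eunit_per_hour

# The paper's example: 20 h 35 min at $0.0255/h is billed as 21 h.
bill = execution_cost(20 + 35 / 60, 0.0255)
```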
The function TUnit(VM_k, VM_l) gives the monetary price to be paid per unit of data communicated between VM_k and VM_l. The term TC(t_i, t_j) is the total fee to be paid for the communication from task t_i to task t_j, provided the two tasks are assigned to different virtual machines VM_k and VM_l with k ≠ l, according to Eq. (18).
Since this article considers hybrid cloud architecture as the infrastructure for executing workflows, both the private and the public cloud must be taken into consideration; note that the pricing procedures of the two clouds differ. The private VM scheduler uses an on-demand provisioning approach; on-demand is the standard model considered by most existing scheduling techniques [88,96]. Metered services, also called pay-per-use models, are payment structures in which customers have access to potentially unlimited resources but only pay for what they use, and VM instances can be launched and terminated at any time. Because of this, a scheduling algorithm needs to estimate an optimal number of instances to allocate before execution begins (provided it is not specified by the users) and must acquire additional instances if they are required during execution; in the same vein, instances that no longer contribute to the workflow execution should be switched off to save cost. The public VM scheduler, on the other hand, uses the charge-period provisioning approach. Under this assumption, the scheduling algorithm tries to exploit all the leftover utilization of already-charged periods and decides whether to terminate machine instances at the end of their charge periods to minimize cost. With this fine-grained assumption, the scheduler should fit tasks within the charged periods to remain cost-effective, which complicates the scheduling process [96].
Therefore, the total monetary cost (ToC) comprises both the total execution cost and the total transfer cost, as given in Eq. (19); the second objective function is to minimize the ToC cost function.
The total execution cost (TEC) represents the monetary cost that will be paid for the execution of all tasks of a workload. The TEC value varies depending on whether a public virtual machine VM_k^pu ∈ {public VMs} or a private virtual machine VM_k^pr ∈ {private VMs} is used, because the private VM scheduler applies on-demand provisioning while the public VM scheduler applies charge-period provisioning. According to Eq. (20), TEC is measured for both cases: for private VM usage (∀VM_k^β | β ∈ private VMs), the cost is the sum of the costs of the tasks executed on the VM, while for public VM usage (∀VM_k^β | β ∈ public VMs), the cost covers the whole period between the start of the first task and the end of the last task on that VM. The decision variable x_k indicates whether VM_k is utilized. The terms t_Last and t_First refer to the finish time of the last task and the start time of the first task on VM_k. As with TEC, the total transfer cost (TTC) in a hybrid cloud environment takes different values depending on the types of the underlying public/private virtual machines; only when the communication is inside, or starts from, a public VM (VM_l^β | β ∈ public VMs) can the transfer monetary cost be ignored. Equation (21) demonstrates how to calculate the TTC variable.
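The branching logic of Eq. (20) can be sketched as follows. This is a hedged illustration under our own data layout: each VM record is a dict with hypothetical fields 'kind', 'rate', 'tasks' (list of (start, finish) intervals), and 'unit' (charge-period length); none of these names come from the paper.

```python
import math

def tec(vms):
    """Sketch of Eq. (20): private (on-demand) VMs pay only for the
    executed intervals; public (charge-period) VMs pay for whole
    charge periods spanning t_First to t_Last on that VM."""
    total = 0.0
    for vm in vms:
        if not vm['tasks']:            # x_k = 0: an unused VM costs nothing
            continue
        if vm['kind'] == 'private':    # on-demand: pay per executed time
            total += sum(f - s for s, f in vm['tasks']) * vm['rate']
        else:                          # charge-period: pay t_First..t_Last
            t_first = min(s for s, _ in vm['tasks'])
            t_last = max(f for _, f in vm['tasks'])
            periods = math.ceil((t_last - t_first) / vm['unit'])
            total += periods * vm['rate']
    return total
```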
Note that the proposed pricing model is general: it can be used both by private cloud owners and by individuals without any infrastructure, both of whom are on a tight budget and intend to utilize the public cloud. For the sake of simplicity, only public cloud adoption is considered in the simulation; to this end, the part of the pricing model associated with the private cloud is dropped, which has no side effect on the proposed model.

An illustrative example
To present the effectiveness of the proposed model, the sample DAG depicted in Fig. 4 is considered, and the results of executing the different approaches on it are reported. This sample graph consists of ten tasks, t_1 to t_10; the data transfer time between tasks is shown by the number above each arc. The set of VMs used to execute the tasks of this workflow is given in Table 3. Note that the lease time interval and the boot-up time of the VMs are taken to be zero in this illustrative example. Table 3 also gives each task t_i's execution time and monetary cost on each VM_j.
A candidate solution is encoded in two parts: the first part is the task list and the second part holds the number of duplicates of each task. Figure 5 depicts a valid chromosome for the DAG drawn in Fig. 4; in this figure, tasks t_1 and t_4 are duplicated twice. Note that the notation C_max is used for the workflow's maximum completion time, or makespan. Figure 6 compares the novel BOSA-TDA algorithm with several algorithms proposed in the literature.
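The two-part encoding can be sketched directly. This is a hedged illustration: the function name, dict keys, and the way the duplication bound from the earlier section is applied are our assumptions.

```python
import random

def random_chromosome(task_order, num_vms, pred, succ):
    """Two-part encoding sketch: <TaskList> is a dependency-respecting
    task order; <DuplicationList> holds each task's duplication count,
    drawn from [1, min(#VMs, max(|Pred|, |Succ|))] as in the text."""
    dup = []
    for t in task_order:
        bound = min(num_vms,
                    max(len(pred.get(t, [])), len(succ.get(t, []))))
        dup.append(random.randint(1, max(1, bound)))
    return {'TaskList': list(task_order), 'DuplicationList': dup}
```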
As Fig. 6 demonstrates, BOSA-TDA (depicted in Fig. 6e) outperforms the others in terms of makespan; NSGAII (Fig. 6a), single-objective GA (SOGA) (Fig. 6d), HEFT-TD (Fig. 6b), and Lookahead HEFT-TD (Fig. 6c) follow in that order. Note that SOGA only minimizes the makespan metric and neglects cost reduction; based on its solution, the cost of the utilized VMs is then calculated and reported, and it usually achieves a good result on the first objective function. In this regard, Fig. 7 depicts the effectiveness of BOSA-TDA in comparison with the other approaches in terms of the evaluation metrics monetary cost, SLR, speedup, and efficiency.
Regarding Fig. 7, it can be concluded that BOSA-TDA outperforms other approaches in terms of evaluation metrics.

Problem statement
The problem addressed in this paper is to map the set of tasks of a given workload onto a set of VM instances such that the total monetary cost (ToC) and the makespan of the workflow schedule are simultaneously minimized while certain constraints are preserved. The problem is formulated as a bi-objective optimization problem with service cost and service time minimization viewpoints; the formulation is given in Eqs. (22)-(26). Note that the first and the second objectives have been elaborated in Eq. (16) and Eq. (19), respectively.
makespan ≤ Deadline. (26) In this formal presentation, the two constraints (25) and (26) can be adjusted by users depending on their monetary budget and time sensitivity. Since the objectives of a bi-objective optimization problem usually conflict with each other, the Pareto dominance concept is commonly used to compare generated solutions [87].
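The Pareto dominance check used to compare solutions can be sketched for the two minimization objectives; solutions are represented here as plain (makespan, cost) tuples, an assumption of ours.

```python
def dominates(a, b):
    """a dominates b iff a is no worse in both objectives and
    strictly better in at least one (both minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the non-dominated (makespan, cost) pairs."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```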

Proposed bi-objective optimization based on simulated annealing task duplication scheduling algorithm (BOSA-TDA)
The problem statement of the previous section reveals some important points: the issue at hand is an NP-hard problem with a discrete search space, and it is a multi-objective optimization problem, which is why the search space must be explored deeply to find abundant Pareto solutions. Canonical SA, being a point-wise algorithm, struggles to provide more than a handful of candidate solutions. Therefore, a new population-based BOSA-TDA algorithm is devised that benefits from the HEFT approaches, i.e., upward, downward, and level rankings, and from duplicated task lists in its initial population. Algorithm 1 illustrates the novel proposed BOSA-TDA.
The pseudo-code of Algorithm 1 is the main algorithm: it receives a DAG and the specifications of the underlying hybrid platform and returns a set of non-dominated solutions, PS. Since diverse solutions are preferable to dense ones in multi-objective problems, before returning the final non-dominated solutions obtained from the first front ranking, the main algorithm calls the CrowdingDistance procedure to select diverse solutions, as in [81]. Like other population-based algorithms, the proposed algorithm starts from random initial individuals. Each individual in the population is a record of three fields, Chrom, Obj1, and Obj2, used for the solution encoding and the first and second objectives, respectively. The first three individuals of the population are created according to the upward, downward, and level ranking approaches; for the rest of the population, the CreateRandomSolution procedure (depicted in Algorithm 2) is called to produce the other individuals. Once the individuals are prepared, the objective functions are calculated via Eq. (16) and Eq. (19), respectively. The algorithm then enters the main loop of Algorithm 1, which repeats MaxIteration times (set to 50 in this paper). In each iteration, several instructions are run for every individual of the population: for each individual, taken as a candidate, SA is run to explore the neighborhood of the solution. To do so, a neighborhood operation is defined that randomly calls one of four algorithms, Algorithm 3 through Algorithm 6, by which the search space is efficiently permuted. If a new solution is better than the current one, it is accepted; in other words, if the new solution dominates the previous one it replaces it (line 43), otherwise the worse solution is probabilistically accepted (line 46). The SA uses the temperature concept, temp, to control the algorithm.
The temperature temp is set to a large value at the outset and gradually decreased until it reaches the freeze value. The exponential function exp(−Δ_newPop/temp) is used to calculate the probability of accepting a worse solution; when temp is near the freeze value, the acceptance probability is near 0. The parameter Δ_newPop brings the effect of the normalized objective functions into the decision, and the coefficient α weights the importance of the objective functions; for now, this article takes α = 0.5, giving both objectives the same importance. After that, a small cooling step happens to reach a stable point. Since the ranges of the two objective functions differ, the individuals' objective values are normalized using the minimum and maximum objective values of the whole population in each round.
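The acceptance rule can be sketched as follows. This is a hedged reconstruction: `delta` stands for the paper's normalized, α-weighted objective difference (Δ_newPop), and the injectable `rng` parameter is our addition for testability.

```python
import math
import random

def accept(delta, temp, rng=random.random):
    """SA acceptance sketch: a dominating neighbour (delta <= 0) is
    always taken; a worse one is taken with probability
    exp(-delta / temp), which tends to 0 as temp nears freeze."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temp)

def weighted_delta(d_makespan, d_cost, alpha=0.5):
    """alpha = 0.5 gives both normalized objectives equal weight."""
    return alpha * d_makespan + (1 - alpha) * d_cost
```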

Initial population
Like other meta-heuristics, the proposed BOSA-TDA starts from an initial population. As the main algorithm shows, the first three individuals of the population are produced by the upward, downward, and level ranking algorithms, which are HEFT-based approaches. For the remaining individuals, Algorithm 2 is called; its random behavior guarantees casual traversal of the search space, which is why it is used to create a population with more plentiful solutions. It generates a new chromosome in two steps: the first step creates a random order of tasks for the first part of the chromosome, <TaskList>, and the second creates a random number of duplications in <DuplicationList> for each task t_r. The initial value of <TaskList> is set to null. In this pseudo-code, the tasks that have no predecessor (Pred(t_i) = ∅), i.e., the input nodes, are put into the set {Avail_Set}; {Avail_Set} holds the tasks that can be selected at any time for insertion into <TaskList>. In the CreateRandomSolution algorithm, the while loop of lines 4-14 continues while {Avail_Set} still has members. The array Visited holds the tasks visited so far. In line 5, a random task t_r is selected from {Avail_Set}; if all predecessors of t_r have been visited, t_r is removed from {Avail_Set}, the tasks of Succ(t_r) are added to {Avail_Set}, and t_r is appended to <TaskList>; otherwise, t_r is simply removed from {Avail_Set}, since it will be re-added in a forthcoming round.
After all tasks of the set T have been added to <TaskList>, the second step, placed in lines 15-19, is performed: a random number representing the number of duplicates is assigned to each task. The values assigned must be at least one and at most Max_D, because any larger value would only cause additional duplication and burden redundant monetary cost without any performance improvement.
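The ordering step of CreateRandomSolution can be sketched as a randomized topological traversal; the function name and data layout are our assumptions, but the remove/re-add behavior mirrors the description above.

```python
import random

def random_task_order(tasks, pred, succ):
    """Sketch of the CreateRandomSolution ordering step: repeatedly
    pick a random task from Avail_Set; emit it only when all of its
    predecessors are visited, so every order respects the DAG."""
    avail = [t for t in tasks if not pred.get(t)]   # entry tasks
    visited, order = set(), []
    while avail:
        t = random.choice(avail)
        avail.remove(t)
        if set(pred.get(t, [])) <= visited:
            visited.add(t)
            order.append(t)
            # successors become available; re-adds dropped tasks too
            avail.extend(s for s in succ.get(t, [])
                         if s not in order and s not in avail)
        # else: t is re-added in a later round via its last predecessor
    return order
```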

Simulated annealing process
Since the main body of the algorithm calls an SA-like procedure, operators are needed to permute the discrete search space efficiently. To this end, four different neighborhood algorithms are applied, one of which is randomly called in every round. As a candidate solution is encoded as a chromosome with two parts, a task list and a duplication list, the new operators target both parts; both lists are incorporated into the proposed meta-heuristic operations. The four neighborhood algorithms are CrossoverTask, CrossoverDuplication, MutationTask, and MutationDuplication.

CrossoverTask algorithm
The CrossoverTask procedure receives two chromosomes of the population in the form of <TaskList>, namely Pop[i] and a random Pop[rand], crosses them over, and returns the better child. Denote the inputs by X1 and X2 and the new children by Y1 and Y2. First, the algorithm generates a random number R and acts as the single-point crossover of genetic task scheduling algorithms such as [8]: it copies X1[1..R] to the corresponding elements of Y1 and X2[1..R] to Y2. Then, all tasks of X2 that do not yet exist in Y1 are inserted into Y1 in the same order, and all tasks of X1 that do not yet exist in Y2 are inserted into Y2 in the same order; this procedure guarantees the dependency constraints. Finally, the dominating child is returned. Algorithm 3 presents this type of crossover, and Fig. 8 illustrates its functionality.
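The order-preserving, single-point crossover described above can be sketched as follows; the optional `r` parameter (fixing the cut point) is our addition for reproducibility.

```python
import random

def crossover_task(x1, x2, r=None):
    """CrossoverTask sketch: child Y1 copies x1[:r], then appends
    x2's remaining tasks in x2's order; Y2 is built symmetrically,
    preserving the relative order of both parents."""
    if r is None:
        r = random.randint(1, len(x1) - 1)
    y1 = x1[:r] + [t for t in x2 if t not in x1[:r]]
    y2 = x2[:r] + [t for t in x1 if t not in x2[:r]]
    return y1, y2
```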

CrossoverDuplication algorithm
The CrossoverDuplication procedure receives two chromosomes of the population in the form of <DuplicationList> as input and crosses them over to generate a new chromosome; Fig. 9 illustrates its functionality. Fig. 10 illustrates the functionality of the analogous MutationTask procedure, which operates on the <TaskList> part.

MutatationDuplication procedure
As in all evolutionary algorithms, the mutation operation helps avoid getting stuck in local optima; this is why mutation is customized for both the task and the duplication lists. Given a chromosome, the value of its <DuplicationList> part is mutated: the function randomly mutates one or more members of <DuplicationList>, and the new values do not violate the minimum and maximum conditions. Algorithm 6 depicts the MutationDuplication procedure, and Fig. 11 illustrates its functionality.
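A minimal sketch of this operator, assuming the bounds [1, Max_D] from the initial-population section; the parameter `k` (how many entries to mutate) is our assumption.

```python
import random

def mutate_duplication(dup_list, max_d, k=1):
    """MutationDuplication sketch: randomly re-draw k entries of
    <DuplicationList>, keeping every value inside [1, max_d] so no
    cost-only extra duplicates are created. Returns a new list."""
    out = list(dup_list)
    for i in random.sample(range(len(out)), k):
        out[i] = random.randint(1, max_d)
    return out
```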

Simulation and evaluation
This section is dedicated to the simulation and evaluation of the novel BOSA-TDA. Miscellaneous scenarios are conducted to reach robust results; accordingly, different scenarios and datasets are generated. The forthcoming subsections cover the scheduling metrics, the scenarios and datasets, the experiments, and the data analysis. Each scenario was independently executed 20 times with MaxIteration = 50 in the main loop, and the average of the execution results is reported.

Scheduling metrics
For better evaluation and comparison, several metrics that are pervasively used in the literature are applied; they are listed in this section.

Communication to computation ratio (CCR)
The unpleasant effect of network delay is considered in this model whenever communication between virtual machines is needed. Network delay has a drastic impact on execution time, which is why the scheduler must be aware of the nature of the workload. The edges of the DAG, representing the dependencies, can be weighted to indicate the data transfer requirements. In this regard, the communication-to-computation ratio (CCR) of a graph indicates to what extent the workload is communication-intensive or computation-intensive and how long data transfer on the network will take. The CCR parameter, calculated via Eq. (27), is the ratio of the average communication cost to the average computation cost; a DAG with a very low CCR value can be considered a computation-intensive application [1,19].
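The ratio of Eq. (27) can be sketched directly; the dict-based inputs (edge weights and task costs) are our own representation.

```python
def ccr(edge_weights, task_costs):
    """Eq. (27)-style metric: average communication cost over average
    computation cost. A low CCR marks a computation-intensive DAG."""
    avg_comm = sum(edge_weights.values()) / len(edge_weights)
    avg_comp = sum(task_costs.values()) / len(task_costs)
    return avg_comm / avg_comp
```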

Schedule length ratio (SLR)
Since each graph has its own features, the raw makespan produced by each algorithm is not meaningful on its own. To obviate this problem, a lower bound on execution time is a beneficial normalization reference. To this end, the critical path (CP) concept is leveraged to define a new parameter, the schedule length ratio (SLR). The graph's CP is the longest path from the entry node to the exit node; owing to the dependencies, it cannot be parallelized. Even if the scheduler executes the CP nodes on the fastest available processors/VMs, the makespan cannot be less than the CP's length. The SLR, a normalized metric independent of the studied graph, is measured via Eq. (28). The metric is defined relative to the critical path rather than the total execution time because the shortest execution time of a job on a highly parallel platform is determined by the length of its critical path. Note that the SLR value of every schedule is greater than one.
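A one-line sketch of this normalization, assuming the denominator is the CP's minimum length on the fastest VMs as described above:

```python
def slr(makespan, cp_min_length):
    """Eq. (28)-style normalization: schedule length over the minimum
    critical-path time, so any valid schedule has SLR >= 1."""
    return makespan / cp_min_length
```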

Speed up
This metric indicates how many times faster the algorithm runs in comparison with serial execution on a single processor or VM, preferably the fastest one [1]. It is obtained via Eq. (29).
Speedup = (serial execution time on the fastest VM) / Makespan. (29)

Efficiency
Efficiency is the complementary metric, because the speedup metric alone does not reveal how many processors were spent to attain that level of speedup [1]. This metric is calculated via Eq. (30).
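The two metrics can be sketched together. Eq. (29) is stated in the text; for Eq. (30) we assume the usual definition of efficiency as speedup per processor spent, which matches the description above.

```python
def speedup(serial_fastest_vm_time, makespan):
    """Eq. (29): serial execution time on the fastest VM / makespan."""
    return serial_fastest_vm_time / makespan

def efficiency(spd, num_vms):
    """Eq. (30)-style sketch (assumed usual form): speedup divided by
    the number of VMs spent to attain it."""
    return spd / num_vms
```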

Scenarios and datasets
To assess the performance of the proposed BOSA-TDA, synthetic workloads derived from five well-known real-world scientific workflow applications are used [47]: Montage (generation of image mosaics of the sky), Epigenomics (mapping of the epigenetic state of human cells), CyberShake (generation of seismic hazard maps for earthquake detection), LIGO (detection of gravitational waves in the universe), and SIPHT (used in biology) [5]. The evaluation compares the proposed algorithm against other state-of-the-art approaches in terms of the prominent evaluation metrics of the scheduling domain: makespan, cost, SLR, speedup, and efficiency. Since some clouds neglect in/out data transfer charges, and for the sake of simplicity, the data transfer cost and the associated model constraints are omitted. In this dataset, a positive correlation coefficient near 1 is taken for the ES and EC vectors, indicating that as execution speed increases, execution cost also increases. The datasets are also built to contain diverse workloads in terms of CCR so that a robust evaluation can be made: four CCR values, 0.1, 0.5, 1.0, and 5.0, are considered for computation-intensive, rather computation-intensive, moderate, and communication-intensive graphs, respectively. As extensive simulation of communication-intensive graphs with CCR equal to or greater than 5 showed no meaningful discrepancy between the performances of the comparative algorithms, that last condition is ignored; note that under it, all algorithms behave almost identically.

Experiments
In this section, experimental results drawn from extensive simulations of the proposed BOSA-TDA are reported in comparison with the state-of-the-art algorithms NSGAII [81], HEFT-TD [30], Lookahead [30], and SOGA [85]. The average results of 20 independent executions are reported in terms of makespan, total monetary cost, SLR, speedup, and efficiency. Figures 12 through 16 are dedicated to these contrasts. Figures 12 and 13 respectively depict the comparison of the algorithms in terms of makespan and monetary cost for the computation-intensive (CCR = 0.1), rather computation-intensive (CCR = 0.5), and moderate (CCR = 1.0) workloads under study. Figure 12 is condensed to show the comprehensive behavior of the comparative algorithms in terms of the makespan values of all scenarios, and Fig. 13 is likewise condensed for the monetary cost values of all scenarios.
For the computation-intensive case studies (CCR = 0.1), Fig. 12 demonstrates that BOSA-TDA outperforms the other algorithms in terms of the makespan metric, except on the LIGO and Montage workloads, where BOSA-TDA behaves the same as NSGAII, and on the Epigenomics dataset, where it behaves the same as the HEFT-TD and Lookahead algorithms; in the other cases, BOSA-TDA dominates its counterparts significantly. Regarding monetary cost, Fig. 13 proves the dominance of BOSA-TDA over the other state-of-the-art algorithms in all cases.
In addition, for the rather computation-intensive graphs (CCR = 0.5), Fig. 12 proves that BOSA-TDA beats the other algorithms in terms of makespan, except on the Montage and Epigenomics workloads, where it behaves the same as NSGAII. Also, as Fig. 13 shows, BOSA-TDA improves cost reduction over the others in the majority of workloads, except on the Montage and SIPHT workloads, where the Lookahead algorithm dominates BOSA-TDA.
In the aforementioned workloads, SOGA and BOSA-TDA obtain the same results.
For the moderate workloads (CCR = 1.0), Fig. 12 demonstrates that BOSA-TDA beats the other approaches in all circumstances in terms of makespan improvement, except against SOGA on the Cybershake graph, where it has no dominance. BOSA-TDA is also superior to the other algorithms in terms of monetary cost on the LIGO and Montage workloads, but it fails to outperform HEFT-TD, Lookahead, and NSGAII on the Epigenomics workload and SOGA and Lookahead on the SIPHT workload; in addition, BOSA-TDA and SOGA obtain the same result on the CyberShake graph. In total, at CCR = 1.0, BOSA-TDA is superior in 24 out of 25 cases in terms of makespan improvement and in 19 out of 25 cases in terms of cost reduction. Figure 14 is dedicated to the analysis of the comparative algorithms in terms of the SLR value.
In terms of SLR, BOSA-TDA beats the other approaches in the majority of scenarios and is never dominated by the rest. In Fig. 14, for computation-intensive graphs, BOSA-TDA obtains the same result as the others in a few cases and dominates them in most cases. For rather computation-intensive graphs, BOSA-TDA matches NSGAII and SOGA in two cases and beats the others elsewhere. For moderate graphs, BOSA-TDA beats the others in all cases.
Accordingly, Fig. 15 compares the algorithms in terms of speedup; it shows that BOSA-TDA is superior to the other approaches in all scenarios except for a few limited cases in which it obtains the same result as NSGAII.
One of the most important metrics, which guards against misleading conclusions, is efficiency. To this end, Fig. 16 is dedicated to the comparison of the algorithms in terms of efficiency.
The evaluation across the different simulations points out that BOSA-TDA dominates the other algorithms in terms of efficiency, except on the moderate Cybershake graphs (CCR = 1.0), where the three algorithms NSGAII, HEFT-TD, and Lookahead dominate BOSA-TDA; in other words, BOSA-TDA dominates the other approaches in 22 out of 25 cases. As Fig. 16 depicts, in the CCR = 1.0 scenarios NSGAII competes with BOSA-TDA on some datasets, but in the other cases BOSA-TDA outperforms the other state-of-the-art algorithms significantly.

Data analysis
This section presents a data analysis for a better understanding of the proposed algorithm's performance against the other approaches in terms of the prominent metrics derived from the literature. In addition, the relative percentage deviation (RPD) metric is applied to quantify the enhancement gained by the proposed BOSA-TDA approach [1]; Tables 4, 5, and 6 are dedicated to this purpose. Table 4 gives the makespan comparison of the algorithms: ranked from best to worst on the makespan metric, NSGAII, SOGA, HEFT-TD, and Lookahead follow BOSA-TDA. A negative cell value means deterioration, whereas a zero value means no improvement was gained. Table 5 gives the cost reduction comparison of the algorithms: for this metric, Lookahead, HEFT-TD, NSGAII, and SOGA are ranked from best to worst, after BOSA-TDA, which is in first place.
In this regard, Table 6 gives the SLR comparison of the algorithms: NSGAII, SOGA, HEFT-TD, and Lookahead are ranked from best to worst after the proposed BOSA-TDA. Table 7 gives the speedup comparison, with the same ranking of NSGAII, SOGA, HEFT-TD, and Lookahead after BOSA-TDA. Table 8 gives the efficiency comparison; again, NSGAII, SOGA, HEFT-TD, and Lookahead are ranked from best to worst after BOSA-TDA.

Conclusion and future work
The majority of existing workload scheduling algorithms for the cloud environment only intend to minimize the makespan metric and neglect the user's service bill. Since cloud providers provision different processing powers in terms of VM configurations, variable charges burden subscribers. This is why this article formulated workload scheduling on hybrid cloud architecture as a bi-objective optimization problem. To deal with this combinatorial problem, a novel hybrid population-based simulated annealing algorithm applying the duplication technique, named BOSA-TDA, has been presented. To evaluate the performance of BOSA-TDA in terms of prominent metrics derived from the literature, extensive scenarios on a set of well-known workloads have been conducted; the reported simulation results proved the superiority of the proposed algorithm over other state-of-the-art approaches in terms of the metrics of this domain. For future work, we intend to model cloud reliability for mission-critical workloads; the scheduling problem can then be formulated as a new bi-objective optimization problem with total execution time and reliability perspectives.
Funding No fund available.

Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.