
Journal of Intelligent Manufacturing, 16, 361-370, 2005. © 2005 Springer Science+Business Media, Inc. Manufactured in The Netherlands.

A neural network model and algorithm for the hybrid flow shop scheduling problem in a dynamic environment
LIXIN TANG*
Department of Systems Engineering, Northeastern University, Shenyang, China E-mail: qhjytlx@mail.sy..ln.cn

WENXIN LIU
Department of Electrical and Computer Engineering, University of Missouri Rolla, USA

JIYIN LIU
Business School, Loughborough University, Ashby Road, Loughborough, Leicestershire LE11 3TU, UK Received September 2003 and accepted August 2004

A hybrid flow shop (HFS) is a generalized flow shop with multiple machines in some stages. HFS is fairly common in flexible manufacturing and in the process industry. Because manufacturing systems often operate in a stochastic and dynamic environment, dynamic hybrid flow shop scheduling is frequently encountered in practice. This paper proposes a neural network model and algorithm to solve the dynamic hybrid flow shop scheduling problem. In order to obtain training examples for the neural network, we first study, through simulation, the performance of several dispatching rules that have demonstrated effectiveness in previous related research. The results are then transformed into training examples. The training process is optimized by the delta-bar-delta (DBD) method, which speeds up training convergence. The most commonly used dispatching rules serve as benchmarks. Simulation results show that the performance of the neural network approach is much better than that of the traditional dispatching rules.

Keywords: Dynamic scheduling, hybrid flow shop, neural network, DBD algorithm

1. Introduction

A hybrid flow shop (HFS), also called a flexible flow shop, can be regarded as a generalized flow shop with K processing stages, of which at least one consists of multiple identical machines. All jobs to be processed in the HFS need to go through the
*Author for correspondence.

stages in the same sequence. A job can be processed on any one of the machines at a stage. If there is only one machine at every stage, the HFS becomes a classical flow shop. If there is only one stage, the HFS becomes a shop of parallel machines. HFS is common in continuous process industries such as the chemical and steel-making industries. Take the steel industry as an example. The process can be roughly divided into

three stages: steelmaking, refining and continuous casting. Figure 1 illustrates the steelmaking and continuous casting production system. The system consists of three stages (steelmaking, refining and casting) with converter furnaces, refining furnaces and continuous casters, respectively, as the machines. In each stage there are multiple machines, and a job (a charge in steelmaking) can be processed on any of them. All jobs follow the same production process: steelmaking, refining and continuous casting. The productivity of an HFS depends very much on effective scheduling of the operations in the system. HFS scheduling problems have attracted significant attention in recent years, e.g., Brah and Hunsucker (1991), Gupta (1988), and Lee and Vairaktarakis (1994). Most HFS scheduling studies reported in the literature consider a static production environment involving a fixed number of jobs. A general static scheduling problem includes the following assumptions: (1) the processing times of the jobs are deterministic; (2) the jobs are all available for processing at time zero or have known arrival times; (3) all machines are continuously available. However, practical production usually operates in a dynamic environment with stochastic events, such as random job arrivals, machine breakdowns, due-date changes, cancellation of orders, urgent new orders and operation delays. For example, in steelmaking production, charges (jobs) to be processed at a continuous caster (machine) arrive in a dynamic manner. A caster has several streams through which hot liquid metal is

converted into slabs. In practice, some of the streams may stop working properly. When this happens, the original production schedule has to be revised so that the new equipment constraints are satisfied. In addition, most manufacturing companies face great challenges from quickly changing customer needs in an increasingly competitive world market. It is thus necessary to modify and improve an existing schedule in response to changes in conditions, demands and constraints. These also require more effective and more flexible production scheduling and shop floor control strategies. Scheduling in such an ever-changing system is by nature a dynamic process. Because the conditions and environments of dynamic and static scheduling problems are different, it is essential to develop new scheduling methods suitable for dynamic environments. In this paper, we construct a neural network model for the hybrid flow shop scheduling problem with dynamic job arrivals. The arrival times of the jobs are unknown in advance, and job arrivals are assumed to follow a Poisson process. Once a job arrives, its processing times at all stages become known. Our objectives are to minimize the average flow time, the average tardy time and the percentage of tardy jobs. The neural network model includes three sub-networks, each corresponding to one of the above performance criteria. The training examples for the neural network are generated from a large number of simulations. The sub-networks can be trained separately and simultaneously to decrease training time. As the standard
Fig. 1. Steel making system as an example of hybrid flow shop.


back-propagation (BP) training algorithm has the disadvantages of slow convergence and being easily trapped in local optima, the delta-bar-delta (DBD) algorithm is used to speed up the convergence of the training process. The DBD algorithm dynamically adjusts the learning rate during training based on the variation of the training error. Finally, the performance of the trained neural network system is compared with the best results of the traditional scheduling methods. The rest of this paper is organized as follows. In Section 2, we briefly review related previous work. Section 3 presents the neural network model for solving the dynamic HFS problem. The mechanism to retrieve training knowledge for the neural network model is described in Section 4. Section 5 describes the training process and the method to solve the NN model. Section 6 reports the simulation results of the proposed algorithm in comparison with the performance of traditional dispatching rules. Section 7 gives conclusions.

2. Literature review

Dynamic scheduling problems have received considerable attention in the literature, with major review articles appearing every three or four years. Studies of dynamic scheduling can be roughly classified into three categories: simulation, heuristic methods, and knowledge-based approaches. Simulation is the most widely used technique to investigate dynamic scheduling. A survey on dynamic job shop scheduling using simulation is given by Ramasesh (1990). Simulation methods improve performance by testing a number of rules, among which the best one is selected (Park, 1988). As the system becomes complex and more rules are involved, the computation time needed to perform a valid comparison of the rules may dissolve the dynamic merit of the dispatching rules. Artificial intelligence (AI) methods have also been used to improve the performance of rules (Pierreval, 1992). They try to incorporate expert and domain knowledge into the selection of rules so that not all the rules need to be tried every time. The performance of these methods relies heavily on the quality of the incorporated knowledge, which is not easy to obtain.

Dynamic scheduling problems are often handled using dispatching-rule-based approaches. These approaches emphasize the dynamic nature of the system. Simple dispatching rules such as First-In-First-Out (FIFO) and Shortest Processing Time (SPT) are often used independently on each machine. Over a hundred such rules have been developed to date. Classification and comparison of these rules have been made by Panwalkar and Iskander (1977) and Blackstone et al. (1982). While these rules can handle dynamic problems and are easy to implement, they have a major disadvantage of being myopic, i.e., a decision is made based only on the situation at a single machine. The effects of the decision on other parts of the system are not considered, so the overall system performance is not optimized. Therefore, dispatching rules are often used together with other methods (such as expert systems and neural networks) to overcome their shortcomings. Sim et al. (1994) proposed an expert neural network system for dynamic job shop scheduling. The model consists of 16 sub-networks; each sub-network extracts its scheduling knowledge from its corresponding training examples. An expert system is used to decide the input of each sub-network, and the sub-networks can be trained separately to decrease training time. The input of each neural network corresponds to job arrival rates, the current scheduling criterion and the applicability of the 10 selected scheduling rules. The output of the neural network is used to sort waiting jobs. The limitation of the method lies in the fact that the input arrival rate can only take one of eight values, and it did not consider the influence of other factors (such as jobs' due dates). Cho and Wysk (1993) proposed a robust adaptive scheduler based on neural networks and simulation. Previous simulation results are used to generate the neural network's training examples and to decide the structure of the network.
The network's structure has good expandability and a wide applicable scope. However, the method of generating training examples influences the performance of the scheduler, since different researchers address different simulation backgrounds. In addition, the paper did not describe the process of generating examples. Jones et al. (1995) proposed a framework for real-time sequencing and scheduling problems. The method includes a neural network,

a genetic algorithm and real-time simulation. But the paper only provides a framework, and further research effort is needed for practical use. Liu and Dong (1996) used a neural network to select scheduling rules. The neural network's input corresponds to the process routings and processing times of waiting jobs, and its output corresponds to the available scheduling rules. The method has two disadvantages: it needs a large number of training examples and long training times to ensure performance, and the network's size would grow intractably as the problem scale increases. All of the above papers use the job shop as background and study scheduling methods for dynamic job arrivals. To the best of our knowledge, there is little research on the application of neural networks to the dynamic scheduling problem of the HFS. The proposed neural network method for dynamic hybrid flow shop scheduling is introduced in detail in the subsequent sections.


Fig. 2. The structure of each sub-network.

3. The neural network model for dynamic scheduling of HFS

Neural networks (NN) are collections of mathematical models that emulate some of the observed properties of biological nervous systems and draw on analogies of adaptive biological learning. The key element of the NN paradigm is the novel structure of the information processing system. The advantage of NNs lies in their resilience against distortions in the input data and their capability of learning. These features make them promising for the field of dynamic scheduling. First, the NN needs to be trained with samples describing the performance of different dispatching rules under different working states. The trained NN can then be used to decide the priorities of waiting jobs. The performance of the trained neural networks is mostly determined by the effectiveness of the training process, such as the accuracy of the training samples and the convergence of the selected training algorithm. These issues for our NN model are described in detail in the following sections. The neural network model used here comprises three sub-networks, each corresponding to one

performance criterion. Each sub-network can be trained separately or simultaneously to decrease training time. All sub-networks share the same structure, with one input layer, one hidden layer and one output layer (see Fig. 2). The numbers of nodes in the input layer, the hidden layer and the output layer are 8, 10 and 1, respectively. The first six nodes in the input layer correspond to six dispatching rules that have shown good performance in previous research. If one of the six dispatching rules is to be evaluated, the corresponding input node is set to 0.5 and the other five nodes are set to -0.5. Node 7 and node 8 correspond to the utilization level and the due date demand k, respectively. The utilization level is in the range of 0 to 1 and is fed directly into the network. The due date demand is converted to the range of 0 to 1 and then fed into the network. The meanings of the input nodes are listed in Table 1. The number of nodes in the hidden layer was decided by trial and error. The output layer comprises one node, whose value is used to determine the priorities of waiting jobs.

4. Simulation experiment for the acquisition of training examples

One difficulty in the application of neural networks to dynamic scheduling is the acquisition of training examples. Since the acquisition of training examples and the verification of the proposed method are both done by simulation, the simulation experiment is introduced here in detail.
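As a minimal sketch of this input encoding (assuming the node order listed in Table 1; dividing the due date demand by 10 is an assumption consistent with the paper's worked example, where k = 1.5 is fed in as 0.15):

```python
RULES = ("SPT", "FCFS", "LWR", "CR", "HRN", "MDD")  # Table 1 node order

def encode_input(rule, utilization, due_date_demand):
    """Build the 8-element input vector for one sub-network:
    the candidate rule's node is set to 0.5, the other five to -0.5;
    node 7 is the utilization level, node 8 the scaled due date demand."""
    vec = [-0.5] * 6
    vec[RULES.index(rule)] = 0.5
    vec.append(utilization)
    vec.append(due_date_demand / 10.0)  # assumed scaling into [0, 1]
    return vec

encode_input("SPT", 0.75, 1.5)
# -> [0.5, -0.5, -0.5, -0.5, -0.5, -0.5, 0.75, 0.15]
```

Evaluating the network once per candidate rule with such a vector yields the priority value used to rank the waiting jobs.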

Table 1. Input layer nodes and corresponding information

Node  Corresponding information
1     SPT: shortest processing time
2     FCFS: first come, first served
3     LWR: least work remaining
4     CR: smallest value of slack time / remaining work
5     HRN: largest value of (waiting time - processing time) / processing time
6     MDD: smallest value of max(due date, current time + remaining work)
7     Current utilization level
8     Current converted due date demand
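For illustration, the six rules of Table 1 can be expressed as priority keys over a waiting job (a sketch with assumed job field names; by the convention used here the job with the smallest key is chosen, so HRN's "largest value" criterion is negated):

```python
def spt(job, now):   # 1. shortest processing time
    return job["proc_time"]

def fcfs(job, now):  # 2. first come, first served
    return job["arrival"]

def lwr(job, now):   # 3. least work remaining over unfinished stages
    return job["remaining_work"]

def cr(job, now):    # 4. smallest slack time / remaining work
    slack = job["due"] - now - job["remaining_work"]
    return slack / job["remaining_work"]

def hrn(job, now):   # 5. largest (waiting time - proc. time) / proc. time
    waiting = now - job["arrival"]
    return -(waiting - job["proc_time"]) / job["proc_time"]

def mdd(job, now):   # 6. smallest max(due date, now + remaining work)
    return max(job["due"], now + job["remaining_work"])
```

For example, `min(waiting_jobs, key=lambda j: spt(j, now))` picks the next job under SPT.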

4.1. Simulation environment

The simulations are done in the following environment:

- The hybrid flow shop includes three stages; each stage includes two parallel machines.
- Inter-arrival times between jobs are generated from an exponential distribution; the arrival rate is determined by the utilization level as follows:

  λ = u · g,   g = M / (n · p̄)

  where λ is the arrival rate of jobs, u the utilization level, g the processing ability of the shop, M the total number of machines in the HFS, n the number of stages, and p̄ the average processing time over all stages.
- The processing times of a job on the different machines in the same stage are the same; setup and transportation times are independent of the processing order and are included in the processing times. The processing time in each stage is generated from a uniform distribution over the integers between 1 and 99.
- Jobs' due dates are generated as: due date = arrival time + k × total processing time of the job, where k is the due date demand factor.

As previous research seldom considers performance criteria related to due dates, this paper includes the following three performance criteria in the objective function: average tardy time, percentage of tardy jobs, and average flow time. In order to eliminate the influence of edge effects on the results, the performance criteria are not computed until 5000 jobs have finished. The simulation run ends when 15,000 jobs finish processing. Twenty replications are run for each configuration to draw a statistically sound conclusion.

4.2. Simulation implementation

The job process information should be adequate both for obtaining the job's priority while it is waiting for processing and for computing the performance criteria when the job is finished. In the simulation program the process information is divided into two categories: static information and dynamic information. The static information is known before the job is processed and remains unchanged during processing, such as the job number, arrival time, processing time in each stage, and due date. The dynamic information is decided during processing, such as the starting time, processing machine and finishing time at each stage. In the program the process information is stored in an array of 4N + 4 elements (N is the number of stages). The content of the array is given below:

Static information
- Bit 1: job number.
- Bit 2: arrival time of the job.
- Bits 3 to N+2: processing times in the different stages.
- Bit N+3: job's due date.

Dynamic information
- Bit N+4: job's current position (an odd number denotes that the job is waiting for processing; an even number denotes that the job is being processed on a machine).
- Bits N+5 to 4N+4: starting time, processing machine and finishing time at each stage.

There is a trigger-event list in the simulation program, with two kinds of trigger events: job arrival and job finishing at some stage. To simulate the scheduling process, the trigger list is continuously sorted by occurrence time: the earlier an event happens, the earlier it is processed. The trigger-event-driven method is much faster than the recycling-scanning method. The flow chart of the simulation program is shown in Fig. 3.

5. The training process and method to solve the NN model

The experiment consists of three phases: collection of training examples, training of the neural


network, and scheduling with the neural network method.

5.1. Collection of training examples

In order to acquire training examples, 1920 simulation runs (4 settings of the utilization level × 4 settings of the due date demand × 6 scheduling rules × 20 replications) are performed to compute the three performance criteria of the six selected

Fig. 3. Flow chart of the simulation program.
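As a minimal sketch (with assumed names; not the authors' code), the job-record array of Section 4.2 and a trigger-event loop in the spirit of Fig. 3 might look like this. The loop keeps the event list in time order with a heap, and the service logic is simplified to a single fixed stage time purely for illustration:

```python
import heapq

N = 3  # number of stages in the simulated HFS

def make_job_record(job_no, arrival, proc_times, due):
    """Flat job record of 4*N + 4 elements, following the paper's layout
    (0-based indices here; the paper counts bits from 1):
      [0] job number, [1] arrival time, [2 .. N+1] stage processing times,
      [N+2] due date, [N+3] current position (odd: waiting, even: in process),
      [N+4 .. 4N+3] (start time, machine, finish time) for each stage."""
    rec = [0.0] * (4 * N + 4)
    rec[0], rec[1] = job_no, arrival
    rec[2:2 + N] = proc_times
    rec[N + 2] = due
    rec[N + 3] = 1  # waiting in front of stage 1
    return rec

def set_stage_result(rec, stage, start, machine, finish):
    """Record the dynamic information for one (0-based) stage."""
    base = N + 4 + 3 * stage
    rec[base:base + 3] = [start, machine, finish]

# Trigger-event list kept in occurrence-time order with a heap, so the
# earliest event is always processed first without rescanning the list.
events = []

def schedule(time, kind, job_no):
    heapq.heappush(events, (time, kind, job_no))

def run():
    log = []
    while events:
        time, kind, job_no = heapq.heappop(events)  # earliest event first
        log.append((time, kind, job_no))
        if kind == "arrival":
            # Simplified: start the job at once and plan its finish event
            # after an assumed fixed service time of 5 time units.
            schedule(time + 5.0, "finish", job_no)
    return log
```

With `schedule(0.0, "arrival", 1)`, `run()` returns the arrival at time 0 followed by the finish event at time 5. A real implementation would check machine availability at each stage and invoke the NN to choose among waiting jobs, as in Fig. 3.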


scheduling rules under various configurations. The simulation results are then converted into 288 (4 settings of the utilization level × 4 settings of the due date demand × 6 scheduling rules × 3 performance criteria) training examples. If all 288 training examples were used to train a single neural network, its convergence would be intolerably slow. In order to speed up the convergence of the neural network's training process, the training examples are divided into three groups according to their performance criteria, and each group is then used to train its corresponding neural sub-network. Thus each sub-network has 96 training examples (4 due date configurations × 4 utilization levels × 6 scheduling rules). The target outputs of the NN for the training examples are linearly mapped to values between 0.1 and 0.9. If a rule has the best performance in some configuration, the output of its corresponding example is set to 0.1. If a rule has the worst performance in the same configuration, the output of its corresponding example is set to 0.9. For example, if the simulation results show that SPT's average flow time is the shortest and CR's average flow time is the longest when the due date factor is 1.5 and the utilization level is 75%, then the input vector for the example corresponding to SPT is (0.5, -0.5, -0.5, -0.5, -0.5, -0.5, 0.75, 0.15) and its output is 0.1; the input vector for the example corresponding to CR is (-0.5, -0.5, -0.5, 0.5, -0.5, -0.5, 0.75, 0.15) and its output is 0.9.

5.2. Training of the neural network

As the standard BP algorithm suffers from slow convergence and the training process

easily gets trapped in local optima, the delta-bar-delta (DBD) algorithm is used to speed up the convergence of the training process. The DBD algorithm dynamically adjusts the learning rate during training based on the variation of the training error. Though the DBD algorithm increases the computational complexity and memory demand, it remarkably speeds up the training process. The DBD algorithm's update formula is as follows.
Δα_ij(k+1) = a,             if S̄_ij(k−1) · D_ij(k) > 0
Δα_ij(k+1) = −b · α_ij(k),  if S̄_ij(k−1) · D_ij(k) < 0
Δα_ij(k+1) = 0,             otherwise

where α_ij(k) is the learning rate at iteration k; Δα_ij(k+1) is the adjustment of α_ij(k) after iteration k; D_ij(k) = ∂E(k)/∂ω_ij(k); S̄_ij(k) = (1 − ξ) · D_ij(k−1) + ξ · S̄_ij(k−1); E(k) is the error at iteration k; and ω_ij(k) is the connection weight between neuron i and neuron j at iteration k. Typically, the ranges of a, b and ξ are 10⁻⁴ ≤ a ≤ 0.1, 0.1 ≤ b ≤ 0.5 and 0.1 ≤ ξ ≤ 0.7, respectively. Figure 4 shows the convergence curves, with and without the DBD algorithm, for training on the same example set for average flow time. From the figure we can see that the DBD algorithm remarkably decreases the training time and is especially useful in flat regions of the error surface.

6. Simulation results

The trained NN can then be applied to solve the HFS scheduling problem. Whenever a machine becomes available and there is more than one job waiting at the stage, each candidate dispatching rule together with the system status data is input to the NN. The dispatching rule giving the smallest NN output is then used to

Fig. 4. Comparison of training with and without DBD.
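As an illustrative sketch (not the authors' code), the DBD learning-rate update for a single weight can be written as follows, with the constants a, b and ξ inside the typical ranges stated above:

```python
def dbd_step(alpha, S_prev, D_prev, D, a=0.01, b=0.2, xi=0.3):
    """One delta-bar-delta update for one weight's learning rate.

    alpha:  current learning rate alpha_ij(k)
    S_prev: exponentially averaged gradient S_ij(k-1)
    D_prev: gradient D_ij(k-1) = dE/dw_ij at the previous iteration
    D:      current gradient D_ij(k)
    Returns the new learning rate and the updated average S_ij(k)."""
    if S_prev * D > 0:        # gradient keeps its sign: grow the rate additively
        delta = a
    elif S_prev * D < 0:      # sign flip: shrink the rate multiplicatively
        delta = -b * alpha
    else:
        delta = 0.0
    S = (1 - xi) * D_prev + xi * S_prev   # bar-delta for the next iteration
    return alpha + delta, S
```

Repeated agreement between the current gradient and its running average grows the rate linearly, which is what helps on the flat regions mentioned above, while a sign change cuts the rate geometrically.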


Table 2. Average optimality performances in different problem scenarios for the objective of minimizing average flow time

Util. lev.  Due date  SPT      FCFS     LWR      CR       HRN      MDD      NNS
0.75        1.5       1.00000  1.11244  1.01474  1.17500  1.04501  1.02214  1.00030
0.75        2.0       1.00000  1.11244  1.01474  1.14665  1.04501  1.03510  1.00017
0.75        2.5       1.00000  1.11244  1.01474  1.13400  1.04501  1.04090  1.00017
0.75        3.0       1.00000  1.11244  1.01474  1.12623  1.04501  1.04183  1.00028
0.8         1.5       1.00000  1.15874  1.02285  1.28294  1.06567  1.03112  1.00078
0.8         2.0       1.00000  1.15874  1.02285  1.24029  1.06567  1.05057  1.00046
0.8         2.5       1.00000  1.15874  1.02285  1.21369  1.06567  1.06157  1.00034
0.8         3.0       1.00000  1.15874  1.02285  1.19928  1.06567  1.06575  1.00004
0.85        1.5       1.00000  1.22648  1.03413  1.47596  1.09308  1.04362  1.00030
0.85        2.0       1.00049  1.22708  1.03464  1.42089  1.09362  1.06787  1.00000
0.85        2.5       1.00047  1.22706  1.03462  1.37149  1.09359  1.08848  1.00000
0.85        3.0       1.00066  1.22730  1.03482  1.33670  1.09381  1.10029  1.00000
0.9         1.5       1.00000  1.34324  1.05440  1.88778  1.14190  1.06496  1.00072
0.9         2.0       1.00056  1.34399  1.05499  1.78049  1.14254  1.09336  1.00000
0.9         2.5       1.00066  1.34412  1.05510  1.69382  1.14266  1.12335  1.00000
0.9         3.0       1.00113  1.34476  1.05559  1.62047  1.14220  1.14736  1.00000

Util. lev. means utilization level and due date means due date demand.

Table 3. Average optimality performances in different problem scenarios for the objective of minimizing tardy job percentage

Util. lev.  Due date  SPT      FCFS     LWR      CR       HRN      MDD      NNS
0.75        1.5       1.00608  1.43340  1.00000  1.33139  1.31386  1.14015  1.00701
0.75        2.0       1.07636  2.26413  1.00951  1.84765  1.72603  1.20304  1.00000
0.75        2.5       1.15364  3.04085  1.07808  1.99980  1.78831  1.00000  1.05212
0.75        3.0       1.63755  4.68082  1.57252  2.39075  2.00571  1.00000  1.51184
0.8         1.5       1.00895  1.44711  1.00000  1.32503  1.34371  1.16130  1.00979
0.8         2.0       1.06910  2.34007  1.00254  1.91133  1.87765  1.35560  1.00000
0.8         2.5       1.08328  3.07948  1.01919  2.16295  2.01829  1.26459  1.00000
0.8         3.0       1.11310  3.46749  1.07501  1.98816  1.78216  1.00000  1.04540
0.85        1.5       1.00998  1.44484  1.00000  1.30649  1.36039  1.17519  1.01515
0.85        2.0       1.06414  2.41964  1.00000  1.95823  2.03939  1.51262  1.01164
0.85        2.5       1.07007  3.32008  1.01449  2.36527  2.39086  1.64847  1.00000
0.85        3.0       1.04600  3.64681  1.01789  2.24922  2.20649  1.46579  1.00000
0.9         1.5       1.01512  1.41970  1.00000  1.27530  1.35947  1.17885  1.01771
0.9         2.0       1.05911  2.45632  1.00000  1.96425  2.17818  1.65582  1.02514
0.9         2.5       1.04161  3.50038  1.00000  2.49546  2.80030  2.04337  1.00561
0.9         3.0       1.02724  4.11072  1.00000  2.59954  2.93353  2.12171  1.00000

Util. lev. means utilization level and due date means due date demand.

select a job to process on the available machine. We compare the performance of the NN method and the methods with fixed dispatching rules through simulation. The results for the three scheduling criteria are summarized in Tables 2, 3 and 4, respectively.
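Both rescalings used in the experiments can be sketched briefly: mapping rule performances to training targets in [0.1, 0.9] (Section 5.1), and dividing each scenario's results by the best value so the best method shows 1.00000 (Tables 2-4). The flow-time values below are hypothetical, for illustration only:

```python
def target_outputs(perf):
    """Linearly map each rule's criterion value to a training target in
    [0.1, 0.9]: the best (smallest) value maps to 0.1, the worst to 0.9."""
    lo, hi = min(perf.values()), max(perf.values())
    if hi == lo:                       # degenerate case: all rules tie
        return {rule: 0.1 for rule in perf}
    return {r: 0.1 + 0.8 * (v - lo) / (hi - lo) for r, v in perf.items()}

def to_relative(perf):
    """Divide each value by the best one in the same scenario, as in
    Tables 2-4: the best method shows 1.00000, others their relative gap."""
    best = min(perf.values())
    return {r: v / best for r, v in perf.items()}

# Hypothetical average flow times for one configuration:
flow = {"SPT": 50.0, "LWR": 51.0, "MDD": 53.0, "CR": 58.0}
targets = target_outputs(flow)   # SPT -> 0.1, CR -> 0.9
relative = to_relative(flow)     # SPT -> 1.0, CR -> 1.16
```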

To make the results convenient for comparison, all performance data in the tables under the same utilization level and due date factor are divided by the best value under the same conditions. From the figures in the tables, we can observe the following.


Table 4. Average optimality performances in different problem scenarios for the objective of minimizing average tardy time

Util. lev.  Due date  SPT      FCFS     LWR      CR       HRN      MDD      NNS
0.75        1.5       1.01215  1.62296  1.08458  2.10309  1.17554  1.01678  1.00000
0.75        2.0       1.36904  2.21781  1.51802  2.91536  1.26433  1.00053  1.00000
0.75        2.5       2.34343  3.31058  2.68423  4.40414  1.39476  1.00000  1.03489
0.75        3.0       4.35647  5.12969  5.07664  6.49412  1.54093  1.00000  1.10697
0.8         1.5       1.00000  1.65648  1.09168  2.32139  1.21035  1.04240  1.01406
0.8         2.0       1.21439  2.01573  1.37700  2.96372  1.22865  1.01790  1.00000
0.8         2.5       1.79695  2.59732  2.10208  3.94504  1.26636  1.00000  1.00618
0.8         3.0       2.90938  3.47120  3.46048  5.40100  1.32089  1.00000  1.01806
0.85        1.5       1.00000  1.71102  1.10800  2.63972  1.24800  1.07832  1.01838
0.85        2.0       1.09146  1.86972  1.25244  3.09786  1.19195  1.02247  1.00000
0.85        2.5       1.41293  2.14227  1.66539  3.75170  1.16538  1.00000  1.01591
0.85        3.0       2.03851  2.59541  2.43892  4.74412  1.18759  1.00000  1.06356
0.9         1.5       1.00000  1.80392  1.12988  3.19767  1.30561  1.11892  1.04327
0.9         2.0       1.01370  1.85346  1.18070  3.38886  1.22611  1.06162  1.00000
0.9         2.5       1.12749  1.90246  1.34021  3.61.11  1.13669  1.00241  1.00000
0.9         3.0       1.39996  2.08424  1.68365  4.06635  1.11546  1.00000  1.02260

Util. lev. means utilization level and due date means due date demand.

The NN integrates the scheduling knowledge of the six dispatching rules. Though it does not always perform best, it is the only method that gives consistently good performance under all three performance criteria. A single scheduling rule can by nature only provide feasible or satisfactory schedules for a single performance criterion; no scheduling rule gives satisfactory schedules for all performance criteria.

7. Conclusions

Though the HFS has a broad practical background, HFS scheduling has not attracted as much attention as job shop or ordinary flow shop scheduling. Most previous research focuses on static scheduling problems, and the limited dynamic scheduling research consists mainly of simulation studies. This paper proposed a neural network method for the dynamic scheduling of an HFS with dynamic job arrivals. Simulation results showed that the method performs very well under various scheduling criteria. The proposed method considered only one type of random event, dynamic job arrivals. Practical dynamic production environments contain other types of random events, such as machine breakdowns, rush orders and order cancellations. Further research is needed to develop methods for problems with such events.

Acknowledgments

This research was supported in part by the National Natural Science Foundation of China (Grant Nos. 70425003, 70171030 and 60274049). It was also partly supported by the Fok Ying Tung Education Foundation and the Excellent Young Teacher Program of the Ministry of Education, China.

References
Blackstone, J. H. Jr., Phillips, D. T. and Hogg, G. L. (1982) A state-of-the-art survey of dispatching rules for manufacturing job shop operations. International Journal of Production Research, 20, 27-45.

Brah, S. A. and Hunsucker, J. L. (1991) Branch and bound algorithm for the flow shop with multiple processors. European Journal of Operational Research, 51, 88-99.

Cho, H. and Wysk, R. A. (1993) A robust adaptive scheduler for an intelligent workstation controller. International Journal of Production Research, 31(4), 771-789.

Gupta, J. N. D. (1988) Two-stage hybrid flowshop scheduling problem. Journal of the Operational Research Society, 34(4), 359-364.

Jones, A., Rabelo, L. and Yih, Y. (1995) A hybrid approach for real-time sequencing and scheduling. International Journal of Computer Integrated Manufacturing, 8(2), 145-154.

Lee, C. Y. and Vairaktarakis, G. L. (1994) Minimizing makespan in hybrid flowshops. Operations Research Letters, 16, 149-158.

Liu, H. J. and Dong, J. (1996) Dispatching rule selection using artificial neural networks for dynamic planning and scheduling. Journal of Intelligent Manufacturing, 7, 243-250.

Panwalkar, S. S. and Iskander, W. (1977) A survey of scheduling rules. Operations Research, 25, 45-61.

Park, Y. B. (1988) An evaluation of static flowshop scheduling heuristics in a dynamic flowshop model via computer simulation. Computers & Industrial Engineering, 14, 103-112.

Pierreval, H. (1992) Expert system for selecting priority rules in flexible manufacturing systems. Expert Systems with Applications, 5, 51-57.

Ramasesh, R. (1990) Dynamic job shop scheduling: a survey of simulation results. Omega, 18, 43-57.

Sim, S. K., Yeo, K. T. and Lee, W. H. (1994) An expert neural network system for dynamic job shop scheduling. International Journal of Production Research, 32(8), 1759-1773.
