
CHAPTER 6

USING GENETIC ALGORITHMS TO SOLVE TTP

We developed a hill-climbing genetic algorithm (HCGA) in Chapter 4 and reconstructed it in Chapter 5 to successfully solve both symmetric and asymmetric traveling salesman problems. In this and the next chapter we shall again combine hill-climbing and genetic algorithm techniques to develop a hybrid HCGA for timetabling problems. We are mainly interested in solving the timetabling problem given in Section 3.2, i.e. a mixture of class and course timetabling problems, where we basically schedule lecturers to groups so that every lecturer is assigned to only one group at a given time, and vice versa. Furthermore, groups may have common students, and we are also concerned with the room assignments. Our TTP is actually a search problem, where we have to find a feasible solution. We believe that the ideas presented in this dissertation are readily applicable to a wide range of scheduling problems.

A word of warning should be given before we proceed. We should not blindly believe that the method we have developed also outperforms other methods in problem areas we have not tested, or that it also works for different problem types without refinements. On the contrary, we should search for new design features derived from problem-specific information. It has recently been proved [Wolpert and Macready, 1995] that all search algorithms that look for an optimum of some cost function perform exactly identically when averaged over all possible cost functions. This result implies three important conclusions:

• If a search algorithm works well for a given problem type, that does not guarantee that it
will work as well for a different problem type.
• There exists no superior optimization method.
• If we are faced with a new problem area, we should first analyze the problem and then
choose the most applicable optimization method, not vice versa.

However, there is still room for comparing different optimization methods, not in the sense of finding the overall best one, but in order to find out in which problem areas a particular method is likely to be better than others. The third conclusion is exactly where we started from
in Chapter 3. We shall now continue the search for a better algorithm to solve our TTP.

The framework for a hill-climbing genetic algorithm for TTP is quite different from that for
TSP. However, the main idea remains the same: combining the best features of hill-climbing
with the best features of genetic algorithms. There is no universal chromosome representation
for TTP. In exam timetabling the chromosome (see e.g. [Corne et al., 1993])

4, 3, 4, 1, 1, 2, 4
represents a solution in which exam 1 takes place at period 4, exam 2 at period 3, and so on.
Thus the position represents an exam, the value represents a period, and the relative (rather than absolute) assignments determine the solution. In our timetabling problem, where we
schedule tuples, this would mean that tuple t1 takes place at period 4, tuple t2 at period 3, and
so on. We actually started with this approach, but soon it became evident that this
representation cannot handle the complexity of our problem. We also tried the implicit
representation suggested in [Paechter et al., 1994], where a chromosome is a permutation of
numbered events and heuristic H is used to fit these events into periods by making sure that all
the constraints are satisfied. Again we ran into extreme difficulties in handling all the hard constraints our problem imposes.

We decided to use the object model described in Section 3.3 as a basis for our chromosome representation. This means that a chromosome is an ordered list of periods, which in turn are
ordered lists of tuples. For example, the above chromosome would be represented as

(4,5), (6), (2), (1,3,7),

where the position now represents a period and the value represents the tuples assigned to that period, i.e. just the opposite of the earlier constructions.
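For illustration only, the following minimal Python sketch contrasts the two encodings; all names are illustrative and not part of our implementation.

# Exam-timetabling style: position = exam, value = period, so
# "4, 3, 4, 1, 1, 2, 4" means exam 1 is held at period 4, exam 2 at period 3, ...
exam_chromosome = [4, 3, 4, 1, 1, 2, 4]

# Our representation: position = period, value = list of tuples assigned to it,
# matching the example "(4,5), (6), (2), (1,3,7)" with tuples numbered 1..7.
period_chromosome = [
    [4, 5],      # period 1
    [6],         # period 2
    [2],         # period 3
    [1, 3, 7],   # period 4
]

def to_period_lists(chromosome, n_periods):
    """Convert the exam-style encoding into the period-list encoding."""
    periods = [[] for _ in range(n_periods)]
    for event, period in enumerate(chromosome, start=1):
        periods[period - 1].append(event)
    return periods

assert to_period_lists(exam_chromosome, 4) == period_chromosome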

Recall that in Section 4.4 we stated that a good representation is such that

- there is correlation between local optima,
- genes map onto parts of solutions,
- nearby genes in the chromosome interact with each other.

We can safely state that all these requirements are satisfied, so we can turn to the question of
how to implement crossover and mutation operators using this representation. We start with a
crossover operator. Note that again (as in the previous chapter with TSP) none of the general
operators, such as one-point or uniform crossover, will work in their current form with our
chromosome representation. This is because they may and will produce children which are
infeasible solutions in the sense that some tuples will not be scheduled while others are
scheduled more than once. This is referred to as the label replacement problem in [Glover,
1987]. A label replacement algorithm in [Abramson and Abela, 1991] corrects this problem by
restoring lost tuples and transferring duplicated tuples to random locations. Abramson and
Abela see the label replacement as a form of mutation, because it introduces extra diversity
into the population.

General crossover operators can be used for the exam timetabling problem, but they fail on
difficult problems (see e.g. [Corne et al., 1993]). This means that we should try to implement a
specialized crossover operator for a given timetabling problem as in [Burke et al., 1995].
However, no good crossover operators have been developed for the type of timetabling problem we are interested in. We tried several approaches, but none of them worked well enough. The best results were obtained by crossing tuples between two chromosomes according to either a lecturer or a group value. This means that we randomly generate a lecturer value l. We then copy the tuples
from the first parent with a lecturer value less than or equal to l to the child chromosome at the
same period where they were in the parent. Finally we do the same operation for the second
parent’s tuples with a lecturer value greater than l.
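As a rough sketch of this lecturer-value crossover (the period-list layout with (group, lecturer, room) triples and the function name are assumptions for illustration, not our actual implementation), the operation could look like this:

import random

def lecturer_crossover(parent1, parent2, n_lecturers):
    """Cross two timetables according to a random lecturer value l."""
    l = random.randint(1, n_lecturers)          # random lecturer threshold
    child = [[] for _ in parent1]               # one (initially empty) list per period
    for p, period in enumerate(parent1):        # tuples with lecturer <= l from parent 1,
        child[p].extend(t for t in period if t[1] <= l)   # kept at their original periods
    for p, period in enumerate(parent2):        # tuples with lecturer > l from parent 2
        child[p].extend(t for t in period if t[1] > l)
    return child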

There are two main reasons why the above crossover operator is unsuitable for our timetabling problem:

• We have to evaluate the whole timetable after applying the crossover operator and that will
be extremely costly in a complex problem such as ours.
• What is good about any given timetable is the temporal relationship between tuples rather
than their absolute assignments to certain periods.

Evaluation is the computational bottleneck of a genetic algorithm, so we need genetic operators for which we only have to calculate the change in the cost function without recalculating the whole timetable. Moreover, changing too much in the current solution, as crossover operators do, breaks the relative goodness between the tuples.

Recall again that in Section 4.4 we stated that a good recombination

- combines genes so that good things stay good and
- is explorative enough but not too disruptive.

These observations lead us to use only mutation operators in our TTP. In the next section we
shall develop a new hill-climbing mutation operator for TTP.

6.1. A NEW HILL-CLIMBING MUTATION OPERATOR FOR TTP

An obvious mutation operator for our timetabling problem would be the one we already used in Section 3.4. There we generated a random neighbor of the current solution by moving a random tuple from its current period to a new, randomly chosen period different from the one the tuple is currently assigned to. We shall call this the random mutation operator. We can further develop this operator by using a more intelligent way to select both the tuple to move and the period to move it to.

Some previous work

A violation directed mutation (VDM) operator developed in [Ross et al., 1994] uses
tournament selection to choose both the tuple and the period. A violation score for a tuple t is
the change in fitness value (cost function) if t is removed from the timetable. A violation score
for a period p is the change in fitness value if tuple t is moved from its current period to p.
Tournament selection was used because rank-based and fitness-based selections would have required calculating the violation score for all periods. Ross et al. examined different
tournament sizes, as well as random and best selections, by solving exam timetabling problems
using the following GA configuration: A chromosome (timetable solution) was first selected at
random, then with probability 0.2, the random mutation operator was applied to each tuple
with probability 0.02, and with probability 0.8 the VDM operator was applied with the given
variation. The population size in their experiments was 1000. They also experimented with the
uniform crossover, and found it to be inferior to VDM. Their results clearly showed that the
tuple selection may as well be random, while the power of the operator lies in the period
selection. General indications were that higher pressure in the period selection is preferred,
although not too high, since selecting the best period rapidly decreased the solution quality, i.e.
we should not be too greedy.
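For illustration, a hedged sketch of the tournament-based period selection at the heart of VDM; cost_of_move is a hypothetical helper returning the change in the cost function if tuple t were moved to period p.

import random

def tournament_period(timetable, t, periods, cost_of_move, tournament_size=5):
    """Pick a period for tuple t by scoring only a small random tournament."""
    periods = list(periods)
    candidates = random.sample(periods, min(tournament_size, len(periods)))
    # The smallest change in the cost function (fewest violations) wins.
    return min(candidates, key=lambda p: cost_of_move(timetable, t, p))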

The random mutation operator, as well as a double move mutation operator, were used in
[Schaerf, 1996] to solve a class timetabling problem. A double move is a random move
followed by another move, which repairs one of the possible infeasibilities created by the first
one. The chromosome is represented as a simple matrix M so that each row represents a
weekly timetable for a lecturer and the entry Mij contains the name of the group that lecturer
li is meeting at period pj. However, Schaerf does not consider room assignments. Actually he
does not use a genetic algorithm but the tabu search framework with the following
configuration. First, the double move mutation operator (RNA phase) is applied until it has made no improvement for a given number of iterations (100000 in his experiments). Then, a tabu search starts (TS phase) using the random mutation operator and goes on until it has made no improvement for a given number of iterations (1500 in his experiments). These two phases are repeated, keeping the best solution found, and the algorithm stops when no improvement has been found in a given number of cycles (normally four in his experiments). Tabu search was carried out over all neighborhood moves, excluding those considered semi-illegal, i.e. moves that do not affect the structure of the solution and are thus zero-cost, which would prevent TS from choosing up-going moves. The size of the tabu list varied between 15 and 35. Schaerf justified this configuration with two key points. First, tabu search is too time-consuming to start with. Therefore the RNA phase represents a fast method to generate a reasonably good solution as input for TS. Second, after TS finds no improvement, the RNA phase might "shuffle" the solution (even if it does not improve it) so that TS is able to find an improvement in the next cycle. A similar idea was used earlier in [Costa, 1994], but Costa used only single moves instead of double moves.
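For illustration only, a minimal sketch of such a matrix representation; the sizes, group names and the clash-counting helper are toy values, not taken from Schaerf's paper.

# One row per lecturer, one column per period; an entry names the group met by
# that lecturer at that period (None = free period).
n_lecturers, n_periods = 4, 6
M = [[None] * n_periods for _ in range(n_lecturers)]
M[0][2] = "G1"   # lecturer 1 meets group G1 at period 3
M[2][2] = "G1"   # infeasible: group G1 would meet two lecturers at period 3

def group_clashes(M, j):
    """Count duplicate group assignments within period j (column j)."""
    groups = [row[j] for row in M if row[j] is not None]
    return len(groups) - len(set(groups))

print(group_clashes(M, 2))   # prints 1 for this toy example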

New algorithm

We developed a new mutation operator for TTP, which we shall call hill-climbing mutation
operator and later greedy hill-climbing mutation operator (GHCM). The basic idea was to
incorporate the techniques of a human timetabler into the mutation operator. A human timetabler normally processes more than one tuple at a time, usually up to five. He applies a local
exchange strategy after globally viewing the current timetable. This strategy or operation is a
mixture of swapping tuples, moving tuples from their old periods to new periods and removing
other tuples from these periods and placing them in turn in new periods. These exchange
operations are repeated until the timetabler has found a feasible solution or gives up the whole
task.

Our mutation operator is based on moving a tuple t from its old period p1 to a new period p2.
This operation has two consequences: first, it reduces the number of violations in period p1 and
second, it increases the number of violations in period p2. We can proceed by paying attention
to these two periods. We have basically three possibilities. There should now be "room" for another tuple in period p1, so we could try to move one of the tuples from period p2 to period p1 (swapping), or we could just select whichever tuple we like and move it to period p1 (backward selection). Moreover, there should now be "too many" tuples in period p2, so we could try to move one of them to a new period (forward selection). We can apply this idea repeatedly, ending up with a sequence of moves. Swapping is actually a special case of both
backward and forward selections. Our experiments clearly showed that forward selection outperforms the other two. Note that a sequence of forward selection moves might (and quite often does) end up at the original period of the first move, thus creating a cycle of moves. Pseudo code for the basic hill-climbing mutation operator is given in Figure 6.1.

Select an initial tuple t
Set best_cost = ∞ and cumulative_cost = 0
WHILE True
    Select a new period p for the tuple
    cumulative_cost = cumulative_cost + cost(t to p)
    IF cumulative_cost ≤ best_cost THEN
        best_cost = cumulative_cost
        Save the sequence of moves
    ENDIF
    IF no more moves allowed THEN
        EXIT WHILE
    ENDIF
    Select a new tuple t from period p
ENDWHILE
Perform the best saved sequence of moves

Figure 6.1. Basic hill-climbing mutation operator in pseudo code.

We should next find the answers to the following questions:

• How should the first tuple be selected?
• How should the new period for the tuple be selected?
• How should the new tuple from that period be selected?
• How many moves are allowed to be made?

Our exhaustive experiments confirmed the observations made in [Ross et al., 1994a] that the
initial tuple selection can be random. Actually it turned out that tournament selection, or any other selection method, produced worse results than random selection (see Table 6.1) and required more time to execute. An unexpected observation was that the best way to select a new period for the tuple was to consider all possible periods and select the one which causes the least increase in the cost function. Moreover, the best way to select a new tuple from that period was again to consider all the tuples in that period and select the one whose removal caused
the greatest decrease in the cost function, considering the removal cost only. This is surprising, because normally a not-so-greedy strategy ends up with better results. However, tournament selection, or any other selection method, produced clearly worse results (with respect to both time and solution quality) than the greedy strategy.

We found in our experiments that an improvement was very rarely found after a sequence of 20 moves and practically never after 50 moves. Actually the solution quality decreases if we are too greedy and allow move sequences longer than 10, and also if we allow only short sequences of moves (see Table 6.1). The running time of the mutation operator depends linearly on the maximum number of moves, so we have two choices: either we set the maximum number of moves to some low value or we somehow decide when there is no point in moving further. We decided to stop moving if the last move increases the cost function and the resulting cost function value is larger than that of the previous non-improving move. Figure 6.2 gives three examples of sequences of moves. As noted earlier, our test runs directed us to terminate on the tenth move at the latest (see Table 6.1). Compared to always terminating on the 40th move, this approach decreases the running time to one tenth. We also found out that terminating immediately after finding an improvement in the cost function is not a good strategy. Neither was the use of semi-illegal moves as suggested earlier (see Table 6.1).

[Figure 6.2: cost plotted against move number (1–8) for three example sequences of moves.]

Figure 6.2. Examples of termination criteria. All three sequences of moves terminate on the eighth move.

We can improve the mutation operator further by introducing a tabu list, which prevents
reverse order moves in the same sequence of moves, i.e. if we move a tuple t from period p1 to
period p2, we do not allow t to be moved back to period p1 before a new sequence of moves
begins (see Table 6.1).

The results in Table 6.1 clearly show the superiority of using a tabu list and of omitting the semi-illegality check. Moreover, a maximum move sequence of 10 and random initial selection should also be used. Finally, we fine-tuned the operator with the following improvements, which all had a clear impact on solution quality:

• sequences of moves with zero cost are accepted (as already explained in Subsection 3.4.1)
• in the case of equal costs the longest sequence of moves will be used (shuffling the solution
most)
• in the case of equal cost periods (or tuples) the selection is made randomly
• when selecting a tuple to be removed from a given period, only such tuples are considered that have at least one clash (i.e. the same group, lecturer or room value, or a group overlap) with the tuple that was moved to that period (the moved tuple itself is not considered).

                                        Sample Problem A3   Sample Problem B3
Without tabu list                            15-51-34            27-32-41
With semi-illegality check                   12-31-57            24-36-40
Maximum move sequence of 2                   13-9-78             11-12-77
Maximum move sequence of 50                  21-35-44            29-39-32
Initial selection size of 10                 22-22-56            17-26-57
  (using tournament selection)

Table 6.1. Results of solving two sample timetabling problems 100 times with random
initial solutions. The hill-climbing method of Figure 3.3 has been modified so that the
random moves have been replaced with the greedy hill-climbing mutation (GHCM)
operator. The values indicate how many times the given GHCM variation produces
better-equal-worse results than the normal GHCM (with tabu list, no semi-illegality
check, maximum move sequence of 10 and initial selection size of 1). The time limit
was 60 minutes.

Select (randomly) a tuple t
Set best_cost = ∞, cumulative_cost = 0 and move_number = 1
WHILE True
    Select a new period p for the tuple among all the possible
        periods so that the move t to p is not in the tabu list
        and cost(t to p) is minimized
    Update tabu list with move t to q, where q is t's old period
    cumulative_cost = cumulative_cost + cost(t to p)
    IF cumulative_cost ≤ best_cost THEN
        best_cost = cumulative_cost
        Save the sequence of moves
    ENDIF
    IF (cost(t to p) > 0 AND the cost function value is above the
            cost function value in the previous non-improving move)
            OR move_number = 10 THEN
        EXIT WHILE
    ENDIF
    Remove a tuple u from period p among all the possible
        tuples so that u and t have clashes and the cost
        function value is minimized
    Set t = u
    move_number = move_number + 1
ENDWHILE
Perform the best saved sequence of moves
Figure 6.3. Greedy hill-climbing mutation (GHCM) operator in pseudo code.
With these refinements we are ready to give a pseudo code for our greedy hill-climbing
mutation operator (GHCM) in Figure 6.3. So far we have worked with a single chromosome
(timetable), since we have used mutation operators only. In the next section we shall discuss
how to combine our GHCM operator with other genetic algorithm features, such as population,
selection process and reproduction.
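To complement the pseudo code, the following Python sketch gives one possible reading of the GHCM operator. The timetable interface (all_tuples, period_of, periods, cost_of_move, removal_cost, clash, tuples_in_period, apply_moves) is hypothetical, and details such as tie-breaking are simplified; it is a sketch under these assumptions, not our actual implementation.

import math
import random

def ghcm(timetable, max_moves=10):
    t = random.choice(timetable.all_tuples())            # random initial tuple
    best_cost, cumulative_cost = math.inf, 0
    prev_bad_cost = None                                  # previous non-improving cumulative cost
    tabu, moves, best_moves = set(), [], []

    for move_number in range(1, max_moves + 1):
        old_p = timetable.period_of(t)
        # Greedy period selection: cheapest non-tabu period, ties broken at random.
        candidates = [p for p in timetable.periods()
                      if p != old_p and (t, p) not in tabu]
        if not candidates:
            break
        costs = {p: timetable.cost_of_move(t, p) for p in candidates}
        least = min(costs.values())
        p = random.choice([q for q in candidates if costs[q] == least])

        tabu.add((t, old_p))                              # forbid moving t straight back
        moves.append((t, old_p, p))
        cumulative_cost += costs[p]
        if cumulative_cost <= best_cost:                  # zero-cost sequences accepted; the
            best_cost = cumulative_cost                   # longest equal-cost sequence wins
            best_moves = list(moves)

        if costs[p] > 0:                                  # stop on a worsening move that is worse
            if prev_bad_cost is not None and cumulative_cost > prev_bad_cost:
                break                                     # than the previous non-improving one
            prev_bad_cost = cumulative_cost

        # Greedy tuple selection: among tuples in p clashing with t, take the one
        # whose removal decreases the cost function the most.
        clashing = [u for u in timetable.tuples_in_period(p)
                    if u is not t and timetable.clash(u, t)]
        if not clashing:
            break
        t = min(clashing, key=timetable.removal_cost)

    timetable.apply_moves(best_moves)                     # commit the best saved sequence
    return best_cost                                      # change in the cost function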

6.2. HYBRID HCGA FOR TTP

Recall that building a genetic algorithm includes implementing chromosome decoding, fitness
evaluation, genetic operators, reproduction, selection process and choosing parameter values.
The chromosome in our timetabling problem is a direct representation of the timetable itself, so
we do not need a chromosome decoding process. The fitness value of the chromosome
(timetable) is the value of the cost function which calculates the number of violations in the
timetable (see Section 3.3). We use only one genetic operator, the GHCM operator developed in the previous section, and no recombination operators, since they are hard to apply to
our problem (see discussion in the previous section). The GHCM operator actually hybridizes
the GA in the sense that mutation is carried out by applying a fast greedy local search heuristic.

We experimented with three reproduction methods (see Section 4.3): steady state reproduction,
generational replacement with elitism and modified steady state reproduction. We developed
the last one to prevent possible premature convergence to a single timetable. It differs from the general version in that the least fit chromosome in the population is not replaced by the best one until there have been k improvements in the population. The parameter k was
chosen to be the number of chromosomes, i.e. the population size. The modified steady state
reproduction was clearly better than the other two, so it will be used in later experiments.

The selection process should only choose one chromosome at a time since we have no
recombination operator. We could therefore (deterministically) choose all chromosomes in the
current population, one at a time, applying our mutation operator. However, our experiments
advised us to favor fit chromosomes moderately. The best results were obtained by using marriage selection with a small selection size. A selection size of two will be used in later experiments.

The last phase in the construction of the GA is the selection of parameter values. We ended up
using population size 20 and, of course, mutation probability 1.0 and recombination
probability 0.0. The algorithm is terminated according to the given time limit. The reason for
this is the practical nature of TTP; the timetabler has certain due dates in his work. Therefore
the computational experiments presented in this chapter, and also in Chapters 3 and 7, are
based on elapsed time. Recall that all the test runs in this dissertation were performed using a
233 MHz microcomputer.

Because chromosomes do not directly interact with each other (we have no recombination
operator), the question arises whether it would be better just to run a single-chromosome version several times. Actually the total running times are equal whether we run the normal version once using a population size of n or the single-chromosome version n times, each time with a different initial solution. The results with these two approaches indicated clearly that the real genetic
approach with multiple chromosomes works better. Furthermore, combining interactive
features with the timetabling process is more useful and beneficial with multiple timetables
(see Chapter 8). The only drawback might be the limit in population size due to memory
restrictions.

Set the time limit t and the population size n
Generate an initial population of timetables by randomly
    assigning tuples to periods
Set betters_found = 0
WHILE elapsed_time < t
    REPEAT n times
        Select a timetable by using marriage selection
        Apply the GHCM operator to this timetable
        Calculate the change in fitness value
        IF fitness_value_change < 0 THEN
            betters_found = betters_found + 1
        ENDIF
    ENDREPEAT
    IF betters_found > n THEN
        Replace the worst timetable with the best timetable
        Set betters_found = 0
    ENDIF
ENDWHILE
Choose the fittest timetable from the population

Figure 6.4. Hybrid hill-climbing genetic algorithm (h-HCGA) for TTP.

Now we are ready to give the complete genetic algorithm to solve the timetabling problem.
The algorithm is called hybrid hill-climbing genetic algorithm for TTP (hybrid HCGA or h-
HCGA), see Figure 6.4. It uses the GHCM genetic operator, modified steady-state reproduction, marriage selection with a selection size of 2 and population size 20. In the following section we shall compare the new algorithm with two other recently published methods specifically designed for timetabling problems, as well as with tabu search, which produced the best results in the test runs of Section 3.5.
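A compact Python sketch of this main loop is given below; random_timetable() and ghcm() are assumed from the hypothetical interfaces sketched earlier, with ghcm() returning the change in cost it produced, and the marriage_selection helper is one possible reading of the selection-size-2 scheme rather than necessarily the exact variant defined in Section 4.3.

import random
import time

def marriage_selection(population, size=2):
    # Pick a random timetable, then let up to (size - 1) random rivals
    # displace it if they are fitter (lower cost).
    chosen = random.choice(population)
    for _ in range(size - 1):
        rival = random.choice(population)
        if rival.cost() < chosen.cost():
            chosen = rival
    return chosen

def h_hcga(time_limit_s, n=20):
    population = [random_timetable() for _ in range(n)]   # random tuple-to-period assignments
    betters_found = 0
    start = time.time()
    while time.time() - start < time_limit_s:
        for _ in range(n):
            timetable = marriage_selection(population)
            if ghcm(timetable) < 0:                        # GHCM improved this timetable
                betters_found += 1
        if betters_found > n:                              # modified steady-state reproduction
            worst = max(population, key=lambda s: s.cost())
            best = min(population, key=lambda s: s.cost())
            population[population.index(worst)] = best.copy()
            betters_found = 0
    return min(population, key=lambda s: s.cost())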

6.3. EXPERIMENTAL STUDY

In this section we shall solve 20 problems using four different methods. Four of the problems
are the same as in Section 3.5 (see Appendix). Even though these problems are artificial, they still have the properties of real-world timetabling problems. The other problems are different variations constructed from three main problem types; these are also artificial and have been constructed to represent difficult special cases. We shall start our study by solving the
first four problems.

The experimental study compares four heuristic methods: tabu search, Schaerf-Costa, Ross and hybrid HCGA. The tabu search method is exactly the same as we described in Section 3.4. In Section 3.5 this algorithm gave the best results when compared with three other methods. The Schaerf-Costa method is also based on tabu search, but it introduces further ideas in moving tuples. A brief description of this method was given in Section 6.1 and the original method is documented in [Schaerf, 1996]. Our implementation does not exactly follow the original method, but also uses ideas from [Costa, 1994]. These include changing the chromosome representation and replacing the double moves by single moves to be able to solve our version of the timetabling problem. The Ross method is basically a genetic algorithm which uses the violation directed mutation operator explained in Section 6.1 (see [Ross et al., 1994a]). Our version of the Ross method uses a population size of 200. Finally, the h-HCGA is the method developed and described in the earlier sections.

Table 6.2 shows the test results for the first four problems when using the four heuristic
methods above. The results show that the h-HCGA works extremely well. It also solved problem B3 in every test run when the time limit was increased to 10 hours, while the other methods gained only marginal improvements.

Problem   Method    Feasible found   Median cost   Maximum cost   Median time if optimum found

A2 TS 9 0 1 3
ROSS 9 0 1 17
S-C 8 0 1 31
h-HCGA 10 0 0 1

A3 TS 8 0 3 34
ROSS 0 2 4 -
S-C 0 2 5 -
h-HCGA 10 0 0 6

B2 TS 1 1 1 124
ROSS 0 3 4 -
S-C 1 1 2 69
h-HCGA 10 0 0 17

B3 TS 0 3 5 -
ROSS 0 6 9 -
S-C 0 4 7 -
h-HCGA 5 1 1 104
Table 6.2. Results of solving the sample problems by using four heuristic methods, tabu
search, Ross method, Schaerf-Costa method and h-HCGA (10 runs each). The time limit
was 180 minutes (500 minutes for the Ross method).

The artificial problems, called all-N-problems, have a timeframe of N days with N lessons in
one day, totaling N² periods. There are N groups, N lecturers and N rooms. All possible combinations of (group, lecturer, room) tuples are to be scheduled, each exactly once. This gives N³ tuples to be scheduled. Even though there are many feasible solutions to this problem, it should be difficult to solve, since none of the groups, lecturers or rooms has free periods, i.e. it can be thought of as a tightly constrained timetable. Table 6.3 shows one possible solution to the problem with N = 3.
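Generating an all-N-problem instance is straightforward; a minimal Python sketch (the function name is illustrative):

from itertools import product

def all_n_problem(N):
    """Generate an all-N-problem: every (group, lecturer, room) combination once."""
    periods = N * N                                       # N days of N lessons
    tuples = list(product(range(N), range(N), range(N)))  # (group, lecturer, room)
    assert len(tuples) == N ** 3
    return periods, tuples

periods, tuples = all_n_problem(3)   # 9 periods and 27 tuples for the all-3-problem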

We can calculate the difficulty measure for the artificial problems as follows. If there are m timeslots and n tuples, we have mⁿ possibilities to schedule the tuples into the timeslots. Moreover, the total number of clashes between all-N-tuples is 1.5N⁴(N−1). For the all-3-problem this would mean a search space of size about 10²⁵ and 243 clashes between 27 tuples. The cost of a random initial solution of an all-N-problem is about 1.1N³. The difficulty measures defined in Section 3.5 for all-N-problems are therefore

diff1 = 1.5N⁴(N−1) / (N² ⋅ N³) = 1.5(1 − 1/N)
diff2 = 1.1N³ / (N² ⋅ N) = 1.1.
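The following sketch evaluates these measures and the search-space size, reproducing the all-3-problem figures quoted above (the function name is illustrative):

def all_n_difficulty(N):
    """Difficulty measures and search-space size of an all-N-problem."""
    timeslots = N ** 2
    n_tuples = N ** 3
    clashes = 1.5 * N ** 4 * (N - 1)                # total clashes, as stated above
    initial_cost = 1.1 * N ** 3                     # cost of a random initial solution
    diff1 = clashes / (timeslots * n_tuples)        # = 1.5 * (1 - 1/N)
    diff2 = initial_cost / (timeslots * N)          # = 1.1
    search_space = timeslots ** n_tuples            # m**n possible schedules
    return diff1, diff2, clashes, search_space

d1, d2, c, space = all_n_difficulty(3)
# d1 = 1.0, d2 = 1.1, c = 243.0, space = 9**27 (about 6 * 10**25)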

Period   A   B   C
   1     1   2   3
   2     1   2   3
   3     1   2   3
   4     2   3   1
   5     2   3   1
   6     2   3   1
   7     3   1   2
   8     3   1   2
   9     3   1   2

Table 6.3. One possible solution to the all-N-problem with N = 3. Letters in columns are groups, numbers in cells are lecturers and shades in cells (not reproduced here) are rooms.

We solved six all-N-problems with N = 5, 6, …, 10 using the tabu search and the h-HCGA
method. The reason for not showing the results of the other two methods is their weak
performance compared with the tabu search and h-HCGA. The test results given in Table 6.4
confirm our earlier observations that the h-HCGA performs extremely well. The time it takes to solve an all-N-problem by using the h-HCGA seems to be about O(3.4ᴺ).

We also compared the h-HCGA method with the parallel genetic algorithm developed in
[Abramson and Abela, 1991]. The parallel implementation used up to 15 processors with peak
speedup of nine compared with the sequential version. Table 6.5 shows two problems (datasets
8 and 9) they solved with their algorithm. Dataset 8 includes 150 tuples with five groups, five
lecturers and five rooms, and dataset 9 includes 180 tuples with six groups, six lecturers and
six rooms. The tuple data was generated randomly. The results show the power of h-HCGA in
these tests, too. The number of generations needed to find a feasible solution by using h-HCGA was remarkably small. Note that the parallel implementation used a population size of
100 compared with 20 in h-HCGA.

Problem (N)   Method    Feasible found   Median cost   Maximum cost   Median time if optimum found

5 TS 10 0 0 69
h-HCGA 10 0 0 1

6 TS 4 2 3 81
h-HCGA 10 0 0 2

7 TS 0 4 4 -
h-HCGA 10 0 0 6

8 TS 0 10 12 -
h-HCGA 10 0 0 22

9 TS 0 15 20 -
h-HCGA 10 0 0 69

10 TS 0 19 25 -
h-HCGA 10 0 0 234

Table 6.4. Results of solving the all-N-problems by using tabu search and h-HCGA
(10 runs each). The time limit was 500 minutes.

                                                      Problem 1   Problem 2
Number of tuples                                         150         180
Number of timeslots                                       30          30
Number of groups/lecturers/rooms                           5           6
Number of generations needed to find the feasible
  solution in [Abramson and Abela, 1991]                >500       >6000
Number of generations needed to find the feasible
  solution by using h-HCGA (median of 10)                179          78

Table 6.5. Results of solving two timetabling problems in [Abramson and Abela,
1991] by using the parallel method with 10 processors and the h-HCGA method.

However, Abramson and Abela noted that their genetic algorithm results were not as
encouraging as the simulated annealing results in [Abramson, 1991]. Their most recent results
in [Abramson et al., 1998] improve the simulated annealing approach further and achieve very
good results. The method uses geometric cooling with reheating as a function of cost. The
authors could not give any general rules for the parameter values of their method. They stated
that the parameter values are so problem-specific that many test runs are first required before a
new problem instance can be solved, thus diminishing the value of the method in real world
timetabling. Moreover, they used an application-specific attached processor, which was
designed specifically for the timetabling problem. Tuples in their test problems were generated
by choosing random group, lecturer and room identifiers, and then assigning them to the first period that can accommodate them. If no such period can be found, the tuple is discarded and another one is selected randomly. Their test problems should be difficult to
solve, since the resulting schedules are tightly constrained (as all-N-problems). Each of their
test problems had a 30-period timeframe.

We solved seven of the problems in [Abramson et al., 1998] (referred to as Abramson-N). Abramson et al. were able to solve these test problems with their algorithm. To keep the comparison fair we did not fine-tune or change our current algorithm. The h-HCGA was also
able to obtain feasible solutions to these problems, see Table 6.6. The running time of our
algorithm with a PC is also noteworthy. The difficulty measures for these test problems are
diff1 = 1.44, since there are on average 1300N clashes between tuples, and diff2 = 1.1, since the
initial cost is about 33N. Note that diff2(Abramson-N) = diff2(all-N-problem) for all N, and diff1(Abramson-N) > diff1(all-N-problem) for N < 27.

Number of groups/   Number of   [Abramson et al., 1998]   h-HCGA
lecturers/rooms     tuples      (time in seconds*)        (time in seconds**)

5 150 9 59
6 180 32 183
7 210 63 445
8 240 112 762
9 270 271 1602
10 300 2398 3148
11 330 7555 6950

Table 6.6. Results of solving seven random timetabling problems in [Abramson et al.,
1998] by using the simulated annealing method and the h-HCGA method. The values
show the median time for obtaining feasible solutions.

* by using a specifically designed and application-specific attached processor


** by using a 233 MHz microcomputer

Since pure tabu search using the simple random mutation operator works quite well, we could try tabu search with our GHCM operator, thus replacing the genetic algorithm framework with the tabu scheme. However, our quite extensive test runs clearly showed the superiority of the GA framework. Furthermore, using both the GA and the tabu scheme together did not work either.

A refinement

However, we can improve the solution quality of our GA with a simulated annealing
refinement. This refinement is very different from the standard SA. We do not consider the
final move value but accept upward (increasing cost) submoves at every move level in one
single application of the GHCM-operator. That is, when the GHCM-operator selects, at some
level of its search, a new period for the current tuple, we apply the following selection
mechanism. Periods are first shuffled and this random order is used to pick candidate periods for comparison. A candidate period becomes the current best one if its cost is less than that of the current best one, or if the current temperature accepts the difference. The same mechanism is
used in the GHCM-operator while removing a tuple. It should still be noted that the SA-
flavored hybrid HCGA (h-SAHCGA) is not a simulated annealing algorithm, since it does not
make upward moves.

As our earlier test runs have shown, the pure hybrid HCGA finds very good results. The SA refinement is only necessary for extremely difficult problems, and as expected it also takes more time. We use fast annealing and no reheating to cut down the running time and to keep the method applicable to real-world timetabling situations. The initial temperature is
calculated by using van Laarhoven's method (3.4) and the annealing time was chosen to be the
size of the neighborhood, i.e. the number of tuples times the number of periods (see Subsection
4.3.3).
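A hedged sketch of this SA-flavored period selection inside the GHCM operator is given below; cost_of_move is again a hypothetical helper, and the same mechanism would be applied when selecting the tuple to remove.

import math
import random

def sa_select_period(timetable, t, periods, temperature):
    """Scan periods in random order; a worse candidate may win with SA probability."""
    candidates = list(periods)
    random.shuffle(candidates)
    best = candidates[0]
    best_cost = timetable.cost_of_move(t, best)
    for p in candidates[1:]:
        c = timetable.cost_of_move(t, p)
        if c < best_cost or random.random() < math.exp((best_cost - c) / temperature):
            best, best_cost = p, c
    return best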

Number of groups/   [Abramson et al., 1998]   h-HCGA                h-SAHCGA
lecturers/rooms     (time in minutes*)        (time in minutes**)   (time in minutes**)

 7                  0 (1)                     0 (8)                 0 (25)
10                  0 (40)                    0 (52)                0 (108)
13                  2.8 (354)                 2*** (598)            1**** (988)

Table 6.7. Results of solving three random timetabling problems in [Abramson et al., 1998]
by using the simulated annealing method and the basic and SA-flavored h-HCGA methods.
The values show the average (final) solution cost and the median time taken to reach that
solution.

* by using a specifically designed and application-specific attached processor


** by using a 233 MHz microcomputer
*** a feasible solution was found in one out of three runs.
**** a feasible solution was found in half of the runs.

We solved three Abramson-N problems, with N = 7, 10 and 13, by using the h-SAHCGA without any fine-tuning or changes to the GA or SA parameters given earlier. The results given in Table 6.7 suggest that our method finds at least as good solutions as the method given in [Abramson et al., 1998]. We also found a feasible zero-cost solution to the Abramson-13 problem in half of the runs. It is unclear whether a feasible solution was found in [Abramson et al., 1998]. Moreover, we believe that our approach is readily applicable to a broad class of timetabling problems without any change in configuration or parameter values.

We have shown in this chapter that genetic algorithms can be used to derive feasible solutions
to timetabling problems. Recall that in the previous chapter we showed the same for traveling
salesman problems. The key point in these two chapters was to show that a genetic algorithm
is a good framework for developing solution methods, but we should not follow it too strictly.
That is, in TSP our GA did not use any selection pressure and in TTP our GA did not use any
recombination operators, even though both are normally considered necessary parts of genetic algorithms. Furthermore, our GA for TTP uses selection only when choosing a timetable to be mutated. Actually, using a deterministic approach is only slightly worse than using marriage selection. This again confirms our earlier claims that we do not (necessarily) need
selection pressure in genetic algorithms. The most important part in GA design is the
development of a fast greedy heuristic operator suitable for the problem at hand. We believe
that the ideas presented in this chapter are applicable to a broad class of scheduling problems.

So far we have only discussed hard constraints and we have only searched for feasible solutions, giving no attention to their quality. In the following chapter we shall also add soft constraints to our timetabling model. Finally, in Chapter 8 we shall present our Real World Timetabling Problem Solver, which combines automatic solution finding with user interaction.
