
MODELS IN OPERATIONS RESEARCH (OR)

ORIGIN OF OPERATIONS RESEARCH
The roots of OR can be traced back many decades, when early attempts were made to use a scientific approach in the management of organizations. However, the beginning of the activity called OR is generally attributed to the military services early in World War II. Because of the war effort, there was an urgent need to allocate scarce resources to the various military operations, and to the activities within each operation, in an effective manner. The British and American military management therefore called upon a large number of scientists to apply a scientific approach to this situation; in effect, they were asked to do research on (military) operations. These teams of scientists were the first Operations Research teams. Their efforts allegedly were instrumental in winning the Air Battle of Britain, the Island Campaign in the Pacific, the Battle of the North Atlantic, and so on.

Spurred on by the success of OR in the military, industry gradually became interested in the new field. Two factors played a key role in the rapid growth of OR during this period:
i. The substantial progress that was made early in improving the techniques available to OR. Many of the standard tools of OR, e.g. Linear Programming, Dynamic Programming, Queuing Theory and Inventory Theory, were relatively well developed.
ii. The onslaught of the computer revolution: the development of electronic digital computers, with their ability to perform arithmetic calculations thousands of times faster than a human being can, was a tremendous boon to OR.

Nature of Operations Research
OR may be described as a scientific approach to decision making that involves the operations of organizational systems. As the name implies, OR involves research on operations. It is applied to problems that concern how to conduct and co-ordinate the operations or activities within an organization. The approach of OR is that of the scientific method. In particular, the process begins by carefully observing the problem and then constructing a scientific (typically mathematical) model that attempts to abstract the essence of the real problem. It is then hypothesized that this model is a sufficiently precise representation of the essential features of the situation that the conclusions (solutions) obtained from the model are also valid for the real problem. This hypothesis is then modified and verified by suitable experimentation. Thus, in a certain sense, OR involves creative scientific research into the fundamental properties of operations. At the same time, OR is concerned with the practical management of the organization. An additional characteristic of OR is that it attempts to find the best or optimal solution to the problem under consideration, rather than being content with merely improving the status quo. The goal is to identify the best possible course of action.

The Methodology of Operations Research
When OR is used to solve a problem of an organization, the following seven-step procedure should be followed:
i. Formulate the problem.
ii. Observe the system.
iii. Formulate a mathematical model of the problem.
iv. Verify the model and use the model for prediction.
v. Select a suitable alternative.
vi. Present the results and conclusions of the study to the organization.
vii. Implement and evaluate the recommendations.

LINEAR PROGRAMMING
Introduction
Many operations management decisions involve trying to make the most effective use of an organisation's resources. Resources typically include machinery (such as planes in the case of an airline), labour (such as pilots), money, time, and raw materials (such as jet fuel). These resources may be used to produce products (such as machines, furniture, food, and clothing) or services (such as airline schedules, advertising policies, or investment decisions). A linear programming problem may be defined as the problem of maximizing or minimizing a linear function subject to linear constraints. The constraints may be equalities or inequalities. Linear Programming (LP) is a widely used mathematical technique designed to help operations managers plan and make the decisions necessary to allocate scarce resources. A few examples in which LP has been successfully applied in operations management are:
i. Scheduling school buses to minimize the total distance traveled when carrying students.
ii. Allocating police patrol units to high-crime areas in order to minimize the response time to 911 calls.
iii. Scheduling tellers at banks so that needs are met during each hour of the day while minimizing the total cost of labour.
iv. Selecting the product mix in a factory to make the best use of machine and labour hours available while maximizing the firm's profit.
v. Picking blends of raw materials in feed mills to produce finished feed combinations at minimum cost.
vi. Determining the distribution system that will minimize total shipping cost from several warehouses to various market locations.
vii. Allocating space for a tenant mix in a new shopping mall so as to maximize revenues to the leasing company.
The word linear means that all the mathematical functions in this model are required to be linear functions. The term programming here does not imply computer programming; rather, it implies planning. Thus, Linear Programming (LP) means planning with a linear model. It refers to several related mathematical techniques that are used to allocate limited resources among competing demands in an optimal way.

The objective of LP is to determine the optimal allocation of scarce resources among competing products or activities. That is, it is concerned with the problem of optimizing (minimizing or maximizing) a linear function subject to a set of constraints in the form of inequalities. Economic activities often call for optimizing a function subject to several inequality constraints. For optimization subject to a single inequality constraint, the Lagrangian method is relatively simple. When more than one inequality constraint is involved, Linear Programming is easier. If the constraints, however numerous, are limited to two variables, the easiest solution is the graphical method. If there are more than two variables, then an algebraic method known as the simplex method is used.

Requirements of a Linear Programming Problem
All LP problems have four properties in common:
i. LP problems seek to maximize or minimize some quantity (usually profit or cost). We refer to this property as the objective function of an LP problem. The major objective of a typical firm is to maximize dollar profits in the long run. In the case of a trucking or airline distribution system, the objective might be to minimize shipping cost.
ii. The presence of restrictions, or constraints, limits the degree to which we can pursue the objective. For example, deciding how many units of each product in a firm's product line to manufacture is restricted by the available labour and machinery. We want, therefore, to maximize or minimize a quantity (the objective function) subject to limited resources (the constraints).
iii. There must be alternative courses of action to choose from. For example, if a company produces three different products, management may use LP to decide how to allocate among them its limited production resources (labour, machinery, and so on). If there were no alternatives to select from, we would not need LP.
iv. The objective and constraints in Linear Programming problems must be expressed in terms of linear equations or inequalities.

Definitions of Terminologies
i. Decision Variables: The unknowns of the problem whose values are to be determined by the solution of the LP. In mathematical statements we give the variables names such as X1, X2, X3, ..., Xn.
ii. Linear function: A function of the decision variables of the form a1X1 + a2X2 + a3X3 + ... + anXn, where a1, a2, a3, ..., an are numerical coefficients.
iii. Objective Function: The measure by which alternative solutions are compared. The general objective function can be written as Z = C1X1 + C2X2 + C3X3 + ... + CnXn. The measure selected can be either maximized or minimized. The first step in LP is to decide what result is required. This may be to minimize cost or time, or to maximize profit or contribution. Having decided upon the objective, it is then necessary to state mathematically the elements involved in achieving it. This is called the objective function, as noted above.
EXAMPLE: A factory can produce two products, A and B. The contributions that can be obtained from these products are: A contributes $20 per unit and B contributes $30 per unit, and it is required to maximize contribution. Let the decision variables be X1 and X2. Then the objective function for the factory can be expressed as Z = 20X1 + 30X2, where X1 = number of units of A produced and X2 = number of units of B produced. This problem has two (2) unknowns. These are called decision variables. Note that only a single objective (in the above example, to maximize contribution) can be dealt with at a time in an LP problem.
iv. Constraint: A linear inequality defining the limitations on the decisions. Circumstances always exist which govern the achievement of the objective. These factors are known as limitations or constraints. The limitations in any problem must be clearly identified, quantified and expressed mathematically. To be able to use LP, they must, of course, be linear.
v. Non-negative restriction: Solution algorithms assume that the variables are constrained to be non-negative, i.e. Xj ≥ 0, for j = 1, 2, 3, ..., n.
vi. Optimal solution: A feasible solution that maximizes/minimizes the objective function. It is the solution that has the most favourable value of the objective function.
vii. Alternative optimal solutions: If there is more than one optimal solution (with the same value of Z), the model is said to have alternative optimal solutions.
viii. Feasible solutions: The set of points (solutions) satisfying the LP's constraints.

Standard form of the model
The standard form adopted is:
Maximize Z = C1X1 + C2X2 + C3X3 + ... + CnXn    (objective function)
Subject to:
a11X1 + a12X2 + a13X3 + ... + a1nXn ≤ b1
a21X1 + a22X2 + a23X3 + ... + a2nXn ≤ b2
a31X1 + a32X2 + a33X3 + ... + a3nXn ≤ b3    (constraints)
...
am1X1 + am2X2 + am3X3 + ... + amnXn ≤ bm
X1, X2, X3, X4, ..., Xn ≥ 0    (non-negative restriction)

Example:

1. Minimize: Z = 30X1 + 40X2 + 20X3    (objective function)
Subject to:
3X1 + 2X2 − X3 ≥ 6
4X1 + 5X2 − X3 ≥ 6    (constraints)
6X1 + 2X2 − X3 ≥ 3
X1, X2, X3 ≥ 0    (non-negative restriction)

Formulating Linear Programming Problems
One of the most common linear programming applications is the product-mix problem. Two or more products are usually produced using limited resources. The company would like to determine how many units of each product it should produce in order to maximize overall profit given its limited resources. Let us look at an example.
Procedure: Formulating Linear Programming problems means selecting out the important elements from the problem and defining how these are related. For real-world problems, this is not an easy task. However, there are some steps that have been found useful in formulating Linear Programming problems:
i. Identify and define the unknown variables in the problem. These are the decision variables.
ii. Summarize all the information needed in the problem in a table.
iii. Define the objective that you want to achieve in solving the problem. For example, it might be to reduce cost (minimization) or increase contribution to profit (maximization). Select only one objective and state it.
iv. State the constraint inequalities.
Example: The Shader Electronics Company produces two products: the Shader Walkman, a portable AM/FM cassette player, and the Shader Watch-TV, a wristwatch-size black-and-white television. The production process for each product is similar in that both require a certain number of hours of electronic work and a certain number of labour-hours in the assembly department. Each Walkman takes 4 hours of electronic work and 2 hours in the assembly shop. Each Watch-TV requires 3 hours in electronics and 1 hour in the assembly shop. During the current production period, 240 hours of electronic time and 100 hours of assembly department time are available. Each Walkman sold yields a profit of $7 and each Watch-TV produced may be sold for a profit of $5.
Step 1: Identify and define the unknown variables (decision variables) in the problem.
Let the decision variables be X1 and X2. Then
X1 = number of Walkmans to be produced
X2 = number of Watch-TVs to be produced
Step 2: Summarize the information needed to formulate and solve this problem in a table.

                     Hours required to produce 1 unit
Department           Walkman (X1)    Watch-TV (X2)    Hours available
Electronic           4               3                240
Assembly             2               1                100
Profit per unit      $7              $5

Step 3: Define the objective that you want to achieve in solving the problem.
State the LP objective function in terms of X1 and X2 as follows:
Maximize profit, P = $7X1 + $5X2
Step 4: State the constraint inequalities.
Our next step is to develop mathematical relationships to describe the two constraints in this problem. One general relationship is that the amount of a resource used must be less than or equal to (≤) the amount of the resource available.
First constraint: Electronic time used ≤ Electronic time available.
4X1 + 3X2 ≤ 240 (hours of electronic time available)
Second constraint: Assembly time used ≤ Assembly time available.
2X1 + 1X2 ≤ 100 (hours of assembly time available)
Both of these constraints represent production capacity restrictions and, of course, affect the total profit. For example, Shader Electronics cannot produce 70 Walkmans during the production period, because if X1 = 70 both constraints will be violated. It also cannot make X1 = 50 Walkmans and X2 = 10 Watch-TVs, because the assembly-time constraint would be violated. These constraints bring out another important aspect of linear programming: certain interactions exist between the variables. The more units of one product that a firm produces, the fewer it can make of other products.
The above LP problem is stated as follows:
Maximize profit, P = $7X1 + $5X2
Subject to the constraints:
4X1 + 3X2 ≤ 240 (hours of electronic time available)
2X1 + 1X2 ≤ 100 (hours of assembly time available)
X1, X2 ≥ 0
In this problem there are two unknowns, X1 and X2, and four constraints. All the constraints are inequalities, and they are all linear in the sense that each involves an inequality in some linear function of the variables. The two constraints X1 ≥ 0 and X2 ≥ 0 are special. These are called non-negativity constraints and are often found in linear programming problems. The other constraints are called the main constraints. The function to be maximized (or minimized) is called the objective function.
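For readers who want to check this formulation numerically, the following is a minimal sketch (not part of the original notes, and assuming SciPy is installed) that solves the Shader LP with the linprog routine. Because linprog minimizes by default, the profit coefficients are negated.

```python
# Illustrative sketch: solving the Shader Electronics LP with SciPy.
from scipy.optimize import linprog

c = [-7, -5]                      # negated profit per Walkman / Watch-TV (linprog minimizes)
A_ub = [[4, 3],                   # electronic hours per unit
        [2, 1]]                   # assembly hours per unit
b_ub = [240, 100]                 # hours available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)        # optimal production plan [X1, X2]
print(-res.fun)     # maximum profit
```

Under these assumptions the solver reports X1 = 30, X2 = 40 and a profit of $410, which agrees with the optimal solution quoted in the sensitivity-analysis discussion later in these notes.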

SOLVING LINEAR PROGRAMMING: GRAPHICAL METHOD
In the last section some examples were presented to illustrate how practical problems can be formulated mathematically as Linear Programming problems. The next step after formulation is to solve the problem mathematically to obtain the best possible solution. In this section, a graphical procedure to solve LP problems involving two variables is discussed.
Example 1
Maximize Z = 3X1 + 4X2
Subject to:
2.5X1 + X2 ≤ 20 (Constraint A)
3X1 + 3X2 ≤ 30 (Constraint B)
2X1 + X2 ≤ 16 (Constraint C)
X1, X2 ≥ 0
Procedure: Since there are only two variables, we can solve this problem by graphing the set of points in the plane that satisfies all the constraints (called the constraint set) and then finding which point of this set maximizes the value of the objective function. Each inequality constraint is satisfied by a half-plane of points, and the constraint set is the intersection of all the half-planes. Treat the inequality constraints as equations and find the intercepts of each on the axes.
Constraint A: 2.5X1 + X2 = 20
1. On the X1-axis, X2 = 0: 2.5X1 + 0 = 20, so X1 = 8. The intercept on the X1-axis is (8, 0).
2. On the X2-axis, X1 = 0: 0 + X2 = 20, so X2 = 20. The intercept on the X2-axis is (0, 20).
Constraint B: 3X1 + 3X2 = 30
1. On the X1-axis, X2 = 0: 3X1 + 0 = 30, so X1 = 10. The intercept on the X1-axis is (10, 0).
2. On the X2-axis, X1 = 0: 0 + 3X2 = 30, so X2 = 10. The intercept on the X2-axis is (0, 10).
Constraint C: 2X1 + X2 = 16


1. On the X1-axis, X2 = 0: 2X1 + 0 = 16, so X1 = 8. The intercept on the X1-axis is (8, 0).
2. On the X2-axis, X1 = 0: 0 + X2 = 16, so X2 = 16. The intercept on the X2-axis is (0, 16).
Graph the equations using these intercepts. (See graph.)
NOTE:
i. The shaded area is called the feasible region. It contains all the points that satisfy all three constraints plus the non-negativity constraints.
ii. The variables are called decision variables or structural variables.
iii. Z is maximized at an intersection of two of the constraint boundaries, called an extreme point.
iv. The coordinate that maximizes the objective function is the optimal solution.
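Because the graph itself is not reproduced here, the following small sketch (not part of the original notes, and assuming NumPy is available) mimics the graphical method numerically: it enumerates the intersection points of the constraint boundary lines (including the axes), keeps the feasible ones, and evaluates Z = 3X1 + 4X2 at each extreme point.

```python
# Illustrative sketch: enumerate the extreme points of Example 1 and evaluate Z.
import numpy as np
from itertools import combinations

# Boundary lines a1*X1 + a2*X2 = b for constraints A, B, C and the two axes.
lines = [(2.5, 1, 20), (3, 3, 30), (2, 1, 16), (1, 0, 0), (0, 1, 0)]
A_ub = np.array([[2.5, 1], [3, 3], [2, 1]])
b_ub = np.array([20, 30, 16])

best_point, best_z = None, -np.inf
for (a1, a2, b1), (c1, c2, b2) in combinations(lines, 2):
    M = np.array([[a1, a2], [c1, c2]])
    if abs(np.linalg.det(M)) < 1e-9:          # parallel lines: no intersection point
        continue
    x = np.linalg.solve(M, [b1, b2])
    feasible = np.all(x >= -1e-9) and np.all(A_ub @ x <= b_ub + 1e-9)
    if feasible:
        z = 3 * x[0] + 4 * x[1]
        if z > best_z:
            best_point, best_z = x, z

print(best_point, best_z)   # the feasible extreme point with the largest Z
```

Running this enumeration identifies the extreme point of the feasible region with the largest Z, which is the same corner point that would be read off the graph. This brute-force approach is only practical for two-variable problems; larger problems are solved with the simplex method described later.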

Sensitivity Analysis
Operations managers are usually interested in more than the optimal solution to an LP problem. In addition to knowing the value of each decision variable (the Xi's) and the value of the objective function, they want to know how sensitive these solutions are to changes in the input parameters (the numerical values that are given in the model). For example, what happens if the coefficients of the objective function change by 10% or 15%? What happens if the right-hand side values of the constraints change? Because solutions are based on the assumption that the input parameters are constant, the subject of sensitivity analysis comes into play. Sensitivity analysis, or post-optimality analysis, is the study of how sensitive solutions are to parameter (input data) changes. It is an analysis that projects how much a solution might change if there were changes in the input data.

Dual (Shadow) Prices
The shadow price, also called the dual price, is the value of one (1) additional unit of a resource, in the form of one (1) more hour of machine time, labour time, or other scarce resource. It answers the question: exactly how much should a firm be willing to pay to make additional resources available? Is it worthwhile to pay workers an overtime rate to stay one (1) extra hour each night in order to increase production output? We saw that the optimal solution to the Shader problem is X1 = 30 Walkmans, X2 = 40 Watch-TVs, and profit = $410. Suppose Shader is considering adding an extra assembler at a salary of $5.00 per hour. Should the firm do so? In order to answer this question, we need to formulate the dual problem. Every maximization (minimization) problem in Linear Programming has a corresponding minimization (maximization) problem. The original problem is called the primal and the corresponding problem is called the dual.


The following are the rules for transforming the primal to obtain the dual:
i. Reverse the direction of optimization and of the inequalities. That is, maximization (with ≤ constraints) in the primal becomes minimization (with ≥ constraints) in the dual, and vice versa. The non-negativity constraints on the decision variables are always maintained.
ii. The rows of the coefficient matrix of the constraints in the primal are transposed to columns for the coefficient matrix of the constraints in the dual.
iii. The row vector of coefficients in the objective function of the primal is transposed to a column vector of constants for the dual constraints.
iv. The column vector of constants from the primal constraints is transposed to a row vector of coefficients for the objective function of the dual.
Recall the Shader Electronics LP problem:
Maximize Z = 7X1 + 5X2
Subject to:
4X1 + 3X2 ≤ 240 (hours of electronic time)
2X1 + 1X2 ≤ 100 (hours of assembly time)
The above LP problem is the primal. The dual is as follows:
Minimize C = 240e + 100a
Subject to:
4e + 2a ≥ 7
3e + 1a ≥ 5
Note: e corresponds to the electronic time constraint and a corresponds to the assembly time constraint.
Solving the dual we have a = 0.5 and e = 1.5.
Interpretation: an extra hour of assembly time is worth only $0.50 to the firm, so if an extra assembler receives a salary of $5.00 per hour, the firm will lose $4.50 for every hour the new assembler works. The firm should therefore not be willing to employ an additional assembler at a salary of $5.00 per hour.
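As a check on these shadow prices, here is a small sketch (not part of the original notes, again assuming SciPy is available) that solves the dual with linprog. The ≥ constraints are rewritten as ≤ by multiplying through by −1, as described in the standard-form rules below.

```python
# Illustrative sketch: solving the Shader dual to obtain the shadow prices.
from scipy.optimize import linprog

c = [240, 100]                    # dual objective: minimize 240e + 100a
A_ub = [[-4, -2],                 # 4e + 2a >= 7  rewritten as  -4e - 2a <= -7
        [-3, -1]]                 # 3e + 1a >= 5  rewritten as  -3e - 1a <= -5
b_ub = [-7, -5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # shadow prices [e, a] = [1.5, 0.5]
print(res.fun)   # dual optimum; equals the primal maximum profit of 410
```

By strong duality the dual optimum equals the primal maximum profit, and the dual variables are exactly the values of one extra hour of electronic time ($1.50) and of assembly time ($0.50) quoted above.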


SOLVING LINEAR PROGRAMMING: SIMPLEX METHOD
The simplex method is the general procedure for solving LP problems. It was developed by George Dantzig in 1947. The simplex method is an algorithm. An algorithm is a set of rules or a systematic procedure for finding the solution to a problem. It is simply a process in which a systematic procedure is repeated (iterated) over and over again until a desired result is obtained. The simplex algorithm is a method (or computational procedure) for determining basic feasible solutions to a system of equations and testing the solutions for optimality.

STANDARD FORM OF LP
An LP may include constraints of all types (≤, ≥, =), and the variables may be non-negative or unrestricted in sign. A Linear Programming problem is commonly written as
Maximize: Z = C1X1 + C2X2 + C3X3 + C4X4 + ... + CnXn
Subject to:
a11X1 + a12X2 + a13X3 + a14X4 + ... + a1nXn ≤ b1
a21X1 + a22X2 + a23X3 + a24X4 + ... + a2nXn ≤ b2
...
am1X1 + am2X2 + am3X3 + am4X4 + ... + amnXn ≤ bm
X1, X2, ..., Xn ≥ 0
To develop a general solution method, the LP problem must be put in a common format, which we will call the standard form. The properties of this form are as follows:
i. All the constraints are equations.
ii. All the variables are non-negative.
iii. The objective function may be maximization or minimization.
An LP model can be put in the standard form as follows:
1. Constraints:
i. A constraint of the type ≤ (≥) can be converted to an equation by adding a slack variable to (subtracting a surplus variable from) the left side of the constraint. For example, in the constraint
X1 + 2X2 ≤ 6
we add a slack variable S1 ≥ 0 to the left side to obtain the equation
X1 + 2X2 + S1 = 6
Now consider the constraint
3X1 + 2X2 − X3 ≥ 6
Since the left side is not smaller than the right side, we subtract a surplus variable S2 ≥ 0 from the left-hand side to obtain the equation
3X1 + 2X2 − X3 − S2 = 6
ii. The right side of an equation can always be made non-negative by multiplying both sides of the equation by −1 (negative one).

For example, 3X1 + 2X2 − X3 = −6 is mathematically equivalent to −3X1 − 2X2 + X3 = 6.
iii. The direction of an inequality is reversed when both sides are multiplied by −1 (negative one). For example, whereas 2 < 4, we have −2 > −4. Thus, the inequality
5X1 + 2X2 − X3 ≥ −6
can be replaced by
−5X1 − 2X2 + X3 ≤ 6
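To make these conversion rules concrete, the following is a small illustrative helper (not part of the original notes; the function name and data layout are invented for this sketch) that appends one slack or surplus column per inequality constraint and keeps the right-hand side non-negative.

```python
# Illustrative sketch: convert <= / >= constraints into equations with
# slack (for <=) or surplus (for >=) variables, as described in rules i and ii above.

def to_standard_form(constraints):
    """constraints: list of (coefficients, sense, rhs) with sense '<=' or '>='.
    Returns (rows, rhs): each row gains one extra column per constraint."""
    m = len(constraints)
    rows, rhs_out = [], []
    for i, (coeffs, sense, b) in enumerate(constraints):
        extra = [0] * m
        extra[i] = 1 if sense == "<=" else -1     # +slack or -surplus
        row = list(coeffs) + extra                # now an equation row
        if b < 0:                                 # rule ii: make the right side non-negative
            row = [-v for v in row]
            b = -b
        rows.append(row)
        rhs_out.append(b)
    return rows, rhs_out

# The two constraints discussed in the text (three decision variables X1, X2, X3):
rows, rhs = to_standard_form([([1, 2, 0], "<=", 6), ([3, 2, -1], ">=", 6)])
print(rows)   # [[1, 2, 0, 1, 0], [3, 2, -1, 0, -1]]
print(rhs)    # [6, 6]
```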


Example 1:
Maximize: Z = 5X1 + 3X2
Subject to:
6X1 + 2X2 ≤ 36
5X1 + 5X2 ≤ 40
2X1 + 4X2 ≤ 28
X1, X2 ≥ 0

Procedure:
1. Initialization step:
a. Express the objective function in the standard form and convert the inequalities to equations by adding slack variables.
Z − 5X1 − 3X2 = 0
6X1 + 2X2 + S1 = 36
5X1 + 5X2 + S2 = 40
2X1 + 4X2 + S3 = 28
b. Express the constraint equations in matrix form (columns X1, X2, S1, S2, S3, with the right-hand-side constants after the bar):
[ 6  2  1  0  0 | 36 ]
[ 5  5  0  1  0 | 40 ]
[ 2  4  0  0  1 | 28 ]

c. Set up an initial simplex tableau composed of the coefficient matrix of the constraint equations and the column vector of constants, together with a row of indicators, which are the coefficients of the objective function in the standard form (with a zero coefficient for each slack variable).

Basic      Equation/              Coefficient of
variable   Row           Z    X1    X2    S1    S2    S3    Constant
Z          1             1    -5    -3     0     0     0         0
S1         2             0     6     2     1     0     0        36
S2         3             0     5     5     0     1     0        40
S3         4             0     2     4     0     0     1        28

d. Read the initial basic feasible solution. Select the original variables to be the initial non-basic variables (set equal to zero) and the slack variables to be the initial basic variables. That is, X1 = 0 and X2 = 0; hence S1 = 36, S2 = 40 and S3 = 28.
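The same initial tableau can be held as a small array for hand or computer calculation. The sketch below (not part of the original notes, assuming NumPy is available) simply stores it and reads off the initial basic feasible solution.

```python
# Illustrative sketch: the initial tableau for Example 1 as a NumPy array.
# Columns: Z, X1, X2, S1, S2, S3, Constant (same layout as the table above).
import numpy as np

tableau = np.array([
    [1, -5, -3, 0, 0, 0,  0],   # Z row (objective in standard form)
    [0,  6,  2, 1, 0, 0, 36],   # S1 row
    [0,  5,  5, 0, 1, 0, 40],   # S2 row
    [0,  2,  4, 0, 0, 1, 28],   # S3 row
], dtype=float)

# Initial basic feasible solution: non-basic X1 = X2 = 0, so the basic
# variables equal the constants column: S1 = 36, S2 = 40, S3 = 28.
print(tableau[1:, -1])
```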


2. Iteration step:
a. Determine the entering basic variable by selecting the variable whose objective-row coefficient is negative with the largest absolute value. That column becomes the pivot column. In this case the coefficient is −5, so X1 enters the basis and its column becomes the pivot column.
b. Determine the leaving basic variable using the minimum ratio test. This is done by dividing each element of the constants column by the corresponding coefficient in the pivot column. The row with the smallest ratio (called the pivot row) determines the variable to leave the basis. The process of selecting the variable to be included and the variable to be excluded is called a change of basis. In the current example, 36/6 is the smallest ratio (36/6 < 40/5 < 28/2), so the first constraint row (equation row 2, the S1 row) is the pivot row and S1 leaves the basis.
c. Determine the new basic feasible solution by pivoting. Pivoting involves converting the pivot element to one (1) and all the other elements in the pivot column to zero (0). The pivot number/element is the number at the intersection of the pivot row and the pivot column; in this case the number is 6. This is done using the Gaussian elimination method, as follows:
i. Multiply the pivot row by the reciprocal of the pivot element. In this case, multiply by 1/6.

Basic      Equation/              Coefficient of
variable   Row           Z    X1    X2     S1    S2    S3    Constant
Z          1             1    -5    -3      0     0     0         0
S1         2             0     1    1/3    1/6    0     0         6
S2         3             0     5     5      0     1     0        40
S3         4             0     2     4      0     0     1        28

ii. Having reduced the pivot element to one (1), clear the pivot column as follows:
Add 5 times row 2 to row 1.
Subtract 5 times row 2 from row 3.
Subtract 2 times row 2 from row 4.

Basic      Equation/              Coefficient of
variable   Row           Z    X1    X2      S1     S2    S3    Constant
Z          1             1     0   -4/3    5/6      0     0        30
X1         2             0     1    1/3    1/6      0     0         6
S2         3             0     0   10/3   -5/6      1     0        10
S3         4             0     0   10/3   -1/3      0     1        16


The second basic feasible solution can be read from this second tableau: setting X2 = 0 and S1 = 0 leaves an identity matrix, which gives X1 = 6, S2 = 10 and S3 = 16.
3. Optimization step: The current basic feasible solution is optimal if and only if every coefficient in equation 1 (the objective function row) is non-negative. If it is, stop; otherwise return to the iteration step to obtain the next basic feasible solution.
Since there is a negative coefficient in equation 1 (the objective function row), we continue the iteration. The only negative indicator is −4/3, in the X2 column, so X2 is introduced into the basis and the X2 column becomes the pivot column. Dividing the constants column by the pivot column shows that the smallest ratio (10 divided by 10/3, i.e. 3) is in the third row. Thus, 10/3 becomes the new pivot element (the number at the intersection of the pivot row and the pivot column).
Convert the pivot element (10/3) to one (1) and all the other elements in the pivot column to zero (0) as follows:
i. Multiply the pivot row by the reciprocal of the pivot element. In this case, multiply by 3/10.

Basic      Equation/              Coefficient of
variable   Row           Z    X1    X2      S1     S2     S3    Constant
Z          1             1     0   -4/3    5/6      0      0        30
X1         2             0     1    1/3    1/6      0      0         6
S2         3             0     0     1    -1/4    3/10     0         3
S3         4             0     0   10/3   -1/3      0      1        16

ii. Having reduced the pivot element to one (1), clear the pivot column as follows:
Add 4/3 times row 3 to row 1.
Subtract 1/3 times row 3 from row 2.
Subtract 10/3 times row 3 from row 4.

Basic      Equation/              Coefficient of
variable   Row           Z    X1    X2      S1      S2     S3    Constant
Z          1             1     0     0     1/2     2/5      0        34
X1         2             0     1     0     1/4   -1/10      0         5
X2         3             0     0     1    -1/4    3/10      0         3
S3         4             0     0     0     1/2     -1       1         6

The third basic feasible solution can be read from this tableau: setting S1 = 0 and S2 = 0, we are left with an identity matrix, which gives X1 = 5, X2 = 3 and S3 = 6.

Since there are no negative indicators left in the first row (the objective function row), this is the optimal solution. The maximum value of Z is read from the constants column; in this case it is 34. There is no slack in the first two constraints, indicating that the first two inputs are fully used up. However, 6 units of the third input remain unused.
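For readers who want to reproduce these iterations by computer, the following is a compact, illustrative simplex routine (not part of the original notes; it assumes NumPy, handles only maximization problems with ≤ constraints and non-negative right-hand sides, and omits anti-cycling safeguards). Applied to Example 1 it returns X1 = 5, X2 = 3 and Z = 34, matching the tableau calculation above.

```python
# Illustrative sketch of the tableau simplex for: maximize c@x s.t. A@x <= b, x >= 0, b >= 0.
import numpy as np

def simplex_max(c, A, b):
    m, n = A.shape
    # Build the tableau [A | I | b] with the objective row [-c | 0 | 0] on top.
    # (The explicit Z column used in the notes is omitted, since it never changes.)
    T = np.zeros((m + 1, n + m + 1))
    T[0, :n] = -np.asarray(c, dtype=float)
    T[1:, :n] = A
    T[1:, n:n + m] = np.eye(m)
    T[1:, -1] = b
    basis = list(range(n, n + m))                  # slack variables start in the basis
    while True:
        # Entering variable: most negative indicator in the objective row.
        col = int(np.argmin(T[0, :-1]))
        if T[0, col] >= 0:
            break                                  # all indicators non-negative: optimal
        # Leaving variable: minimum ratio test over positive pivot-column entries.
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-12 else np.inf
                  for i in range(1, m + 1)]
        row = 1 + int(np.argmin(ratios))
        if ratios[row - 1] == np.inf:
            raise ValueError("LP is unbounded")
        basis[row - 1] = col
        # Pivot: scale the pivot row to 1, then clear the pivot column (Gaussian elimination).
        T[row] /= T[row, col]
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]
    x = np.zeros(n + m)
    for i, bv in enumerate(basis):
        x[bv] = T[i + 1, -1]
    return x[:n], T[0, -1]

# Example 1 from the notes: expected result X1 = 5, X2 = 3, Z = 34.
x_opt, z_opt = simplex_max(np.array([5.0, 3.0]),
                           np.array([[6.0, 2.0], [5.0, 5.0], [2.0, 4.0]]),
                           np.array([36.0, 40.0, 28.0]))
print(x_opt, z_opt)
```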

