
1. INTRODUCTION

Factorial experiments are experiments that investigate the effects of two or more factors, or input parameters, on the output response of a process. Factorial experiment design, or factorial design, is a systematic method for formulating the steps needed to successfully implement a factorial experiment. Estimating the effects of various factors on the output of a process with a minimal number of observations is crucial to being able to optimize the output of the process. Effective factorial design ensures that the least number of experimental runs is conducted to generate the maximum amount of information about how the input variables affect the output of the process.

Analysis of Variance (ANOVA) is a statistical technique used to investigate and model the relationship between a response variable and one or more independent variables (factors). It is used to determine whether more than two population means are equal. The technique uses the F-distribution together with information about the variance within each population and the variance between the populations to decide whether the between-group and within-group variability differ significantly. Factorial ANOVA is a flexible data-analytic technique that allows the experimenter to test hypotheses about means when there are two or more independent variables in the design, and it is appropriate for studying treatment effects and their interactions.

1.1 Research Objective and Scope

The aim is to apply the ANOVA technique to evaluate the equality of sets of population (or treatment) means by examining the variance of the samples (experimental data) that were taken. The analysis determines whether the differences between the data are due simply to random error or to systematic treatment effects that cause the mean in one group to differ from the mean in another.

This work dealt with a factorial design having two independent variables, concentration and time, and one dependent variable.

2.0 THEORETICAL BACKGROUND

2.1 Factorial Experiment Design

Factorial design is a tool that allows an investigator/researcher to experiment on many factors simultaneously. In a factorial design, the effects of varying the levels of the various factors affecting the process output are investigated. Each complete trial or replication of the experiment takes into account all the possible combinations of the varying levels of these factors. Key steps in designing a factorial experiment include:

i. Identify the factors of interest and a response variable.
ii. Determine appropriate levels for each explanatory variable.
iii. Determine a design structure.
iv. Randomize the order in which each set of conditions is run, and collect the data.
v. Organize the results in order to draw appropriate conclusions.
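As an illustration of these steps, the sketch below enumerates and randomizes the runs of a small full factorial design in Python; the factor names and levels shown are hypothetical placeholders, not values taken from this study.

```python
# Illustrative sketch: enumerate and randomize a full factorial design.
# Factor names and levels are hypothetical placeholders.
import itertools
import random

factors = {
    "concentration": [0.1, 0.5, 1.0],  # step (ii): chosen levels (hypothetical)
    "time": [30, 60, 90],              # step (ii): chosen levels (hypothetical)
}

# Step (iii): a full factorial structure crosses every level of every factor.
runs = list(itertools.product(*factors.values()))

# Step (iv): randomize the run order before collecting the data.
random.shuffle(runs)
for i, (conc, t) in enumerate(runs, start=1):
    print(f"Run {i}: concentration={conc}, time={t}")
```

Randomizing the run order (step iv) guards against systematic error that would arise from processing the experimental units sequentially.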

2.2 Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) refers to statistical models, and their associated procedures, in which the observed variance is partitioned into components due to different explanatory variables. It provides a statistical test of whether the means of several groups are all equal. In its simplest form, ANOVA is equivalent to Student's t-test when only two groups are involved. ANOVA was first developed by R. A. Fisher in the 1920s and 1930s; thus, it is also known as Fisher's analysis of variance, or Fisher's ANOVA.
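The equivalence to the t-test in the two-group case can be verified numerically. The following is a minimal sketch using scipy, with invented sample data: for two groups, the one-way ANOVA F-statistic equals the square of the two-sample t-statistic, and the p-values coincide.

```python
# Sketch: with two groups, one-way ANOVA reproduces the two-sided t-test,
# with F = t^2 and identical p-values. Data below are invented for illustration.
import numpy as np
from scipy import stats

group_a = np.array([4.1, 5.0, 4.6, 5.2, 4.8])
group_b = np.array([5.5, 6.1, 5.8, 6.4, 5.9])

t_stat, t_p = stats.ttest_ind(group_a, group_b)  # Student's t-test (equal variances)
f_stat, f_p = stats.f_oneway(group_a, group_b)   # one-way ANOVA

print(f"t^2 = {t_stat**2:.4f}, F = {f_stat:.4f}")         # these agree
print(f"p (t-test) = {t_p:.4f}, p (ANOVA) = {f_p:.4f}")   # so do these
```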

2.2.1 Types of ANOVA

In ANOVA, an explanatory variable is categorical and is thus referred to as a factor. Each factor can have two or more levels (treatments). The methods of ANOVA are typically categorized by the number of factors and the number of dependent variables involved.

One-way ANOVA is the simplest form and involves only a single factor in the experiment. One-way analysis is commonly used to test for differences among at least three independent groups (because a two-group case can be conveniently handled by a t-test), and the treatment effects are estimated for the population as a whole. A one-way analysis of variance is used when the data are divided into groups according to only one factor. The questions of interest are usually: (a) is there a significant difference between the groups? and (b) if so, which groups are significantly different from which others? Statistical tests are provided to compare group means, group medians, and group standard deviations. For equal-size samples, significant group differences can be determined by examining the means plot and identifying those intervals that do not overlap.

Multi-way or multifactor ANOVA involves two or more independent factors, and if there is replication at each combination of levels in a multi-way ANOVA, we have a factorial design. A factorial ANOVA has at least two independent variables which cross with each other. Typically, the treatment effects and their interactions are assumed to be fixed in the model. Both main effects and interactions between the factors may be estimated. The output includes an ANOVA table and a graphical ANOVA, in which the points are scaled so that any levels that differ by more than the spread exhibited in the distribution of the residuals are significantly different.

Multivariate analysis of variance (MANOVA) is used when more than one dependent variable is involved in the analysis.
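As a brief sketch of a factorial ANOVA of the kind described above, the following Python fragment fits a two-factor model with interaction using statsmodels; the factor levels and response values are invented solely for illustration.

```python
# Sketch: a two-factor (factorial) ANOVA with interaction using statsmodels.
# Factor levels and response values are invented for illustration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Balanced design: 2 x 2 factor combinations, 3 replicates per cell.
data = pd.DataFrame({
    "concentration": ["low", "low", "high", "high"] * 3,
    "time": ["30", "60"] * 6,
    "response": [12.1, 14.0, 15.2, 18.9, 12.4, 13.8,
                 15.0, 19.3, 11.9, 14.2, 15.5, 18.7],
})

# C() marks each explanatory variable as categorical (a factor);
# '*' expands to both main effects and their interaction.
model = ols("response ~ C(concentration) * C(time)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # ANOVA table: main effects + interaction
```

The resulting table reports one row per main effect and one for the interaction, each with its sum of squares, degrees of freedom, F-value, and p-value.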

Mixed-design ANOVA applies to a so-called factorial mixed design, in which one factor is a between-subjects variable and the other is a within-subjects variable. In this type of ANOVA model, both fixed effects and random effects are involved. Treatments (factor levels) entering as a random-effect variable are considered to be samples from a larger population; typically, the interest in studying such random variables lies not in estimating the effect sizes of the particular sampled factors, but in inference about the whole population, such as its variability and overall mean.

ANOVA assumes that the populations involved follow a normal distribution, and it therefore falls into the category of hypothesis tests known as parametric tests. If the populations involved do not follow a normal distribution, an ANOVA test cannot be used to examine the equality of the sample means; instead, one would have to use a non-parametric test (or distribution-free test), which is a more general form of hypothesis testing that does not rely on distributional assumptions.

ANOVA tests the null hypothesis that the population means of all levels are equal against the alternative hypothesis that the level means are not all equal. The idea of the analysis of variance is to take a summary of the variability in all the observations and partition it into separate sources. That is, when formulated as a statistical model, ANOVA refers to an additive decomposition of the data into a grand mean, main effects, possible interactions, and an error term. From this standpoint, one may use the following 5-step procedure:

1. State the null and alternative hypotheses. The null hypothesis for an ANOVA always assumes the population means are equal, so the method of ANOVA tests the hypotheses

H0: μ1 = μ2 = ... = μJ    (1)

or

H1: not all the means are equal

2. Calculate the appropriate test statistic. The test statistic in ANOVA is the ratio of the between-group variation to the within-group variation in the data, and it follows an F distribution:

F = s_b² / s_w²    (2)

where F = computed F-value, s_b² = estimate of the population variance based on the variation between the sample means, and s_w² = pooled sample variance within the groups.

3. Obtain the critical value from an F distribution, along with the significance level:

F_T = F(α; df1, df2)    (3)

where numerator degrees of freedom (df1) = J - 1, denominator degrees of freedom (df2) = J(I - 1), J = number of columns (groups), I = number of rows (observations per group), and α = significance level = 0.05.

4. State the decision rule. The decision rule is to reject the null hypothesis if F (computed) is greater than F_T (table) with df1 numerator and df2 denominator degrees of freedom; that is, reject H0 if F > F_T.

5. Interpret the result and make conclusions (a numerical sketch of the full procedure is given at the end of this section).

Apart from the assumption that the distributions of the data in each of the groups are normal, it is also assumed that the observations are independent. Examples of causes that can lead to a lack of independence include the use of adjacent plots in a field experiment (which tends to give similar yields), systematic error (i.e. processing experimental units sequentially instead of in random order), and the use of repeated measures. Homogeneity of variances is also assumed, i.e. the variance of the data should be the same in all the groups; Levene's test is typically used to confirm this.

The analysis of variance is similar to regression, which is used to investigate and model the relationship between a response variable and one or more independent variables. However, ANOVA differs from regression in two ways: the independent variables are qualitative, and no assumption is made about the nature of the relationship. The analysis of variance is used widely in the biological, social and physical sciences and in engineering.
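To make the 5-step procedure concrete, the sketch below traces it on invented data for J = 3 groups of I = 4 observations each, cross-checks the hand computation against scipy's built-in one-way ANOVA, and includes Levene's test for homogeneity of variances.

```python
# Sketch of the 5-step procedure on invented data: J = 3 groups (columns),
# I = 4 observations each (rows), alpha = 0.05.
import numpy as np
from scipy import stats

groups = [np.array([23.0, 25.1, 24.3, 26.0]),
          np.array([27.2, 28.4, 26.9, 29.1]),
          np.array([22.5, 23.8, 24.0, 22.9])]
J = len(groups)      # number of groups (columns)
I = len(groups[0])   # observations per group (rows)
alpha = 0.05

# Step 2: F = s_b^2 / s_w^2 (between-group over pooled within-group variance).
grand_mean = np.mean(np.concatenate(groups))
s_b2 = I * sum((g.mean() - grand_mean) ** 2 for g in groups) / (J - 1)
s_w2 = sum(g.var(ddof=1) for g in groups) / J  # pooled (equal group sizes)
F = s_b2 / s_w2

# Step 3: critical value with df1 = J - 1 and df2 = J(I - 1).
F_T = stats.f.ppf(1 - alpha, J - 1, J * (I - 1))

# Steps 4-5: reject H0 if F > F_T.
print(f"F = {F:.3f}, F_T = {F_T:.3f}, reject H0: {F > F_T}")
print(stats.f_oneway(*groups))  # cross-check with scipy's one-way ANOVA
print(stats.levene(*groups))    # homogeneity-of-variances check
```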
