
Name: S. BALAGOPAL
Roll No.: 520945973
Program: MBA
Subject: RESEARCH METHODOLOGY
Code: MB 0034
Learning Centre: INSOFT - NOIDA [1822]
Date of Submission: 29TH NOVEMBER 2010
MB0034 Research Methodology

Assignment Set- 1

Q 1. Give examples of specific situations that would call for the following types of
research, explaining why – a) Exploratory research b) Descriptive research c)
Diagnostic research d) Evaluation research.

Ans.

a) EXPLORATORY RESEARCH:

Exploratory research provides insights into and comprehension of an issue or situation. It should draw definitive conclusions only with extreme caution.
Exploratory research is a type of research conducted because a problem has not
been clearly defined. Exploratory research helps determine the best research
design, data collection method and selection of subjects. Given its fundamental
nature, exploratory research often concludes that a perceived problem does not
actually exist. Exploratory research often relies on secondary research such as
reviewing available literature and/or data, or qualitative approaches such as
informal discussions with consumers, employees, management or competitors, and
more formal approaches through in-depth interviews, focus groups, projective
methods, case studies or pilot studies. The Internet allows for research methods
that are more interactive in nature: E.g., RSS feeds efficiently supply researchers
with up-to-date information; major search engine search results may be sent by
email to researchers by services such as Google Alerts; comprehensive search
results are tracked over lengthy periods of time by services such as Google Trends;
and Web sites may be created to attract worldwide feedback on any subject. The
results of exploratory research are not usually useful for decision-making by
themselves, but they can provide significant insight into a given situation. Although
the results of qualitative research can give some indication as to the "why", "how" and "when" something occurs, they cannot tell us "how often" or "how many."

b) DESCRIPTIVE RESEARCH:

Descriptive research, also known as statistical research, describes data and characteristics about the population or phenomenon being studied. Descriptive research answers the questions who, what, where, when and how.

Although the data description is factual, accurate and systematic, the research cannot describe what caused a situation. Thus, descriptive research cannot be used to establish a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.

The description is used for frequencies, averages and other statistical calculations.
Often the best approach, prior to writing descriptive research, is to conduct a
survey investigation. Qualitative research often has the aim of description and
researchers may follow-up with examinations of why the observations exist and
what the implications of the findings are.

In short, descriptive research deals with everything that can be counted and studied, but there are always restrictions to that: the research must have an impact on the lives of the people around you. For example, finding the most frequent disease that affects the children of a town means the reader of the research will know what to do to prevent that disease, and thus more people will live a healthy life.

c) DIAGNOSTIC RESEARCH:

Diagnostic research has traditionally focused primarily on the sensitivity and specificity of individual tests. The usefulness of this so-called "test research" is limited, because diagnostic test results can, and should, only be interpreted in the context of other diagnostic test results and clinical parameters. Modern methods of diagnostic research design and analysis therefore integrate clinical and diagnostic information: diagnostic studies are designed and real-life data sets are analysed so that the researcher can directly estimate individual probabilities of the presence or absence of disease and evaluate the true added value of any diagnostic test in a clinical context. Conceptualizing, designing and analysing such studies is the core of modern diagnostic research. A typical situation calling for diagnostic research is, for example, deciding whether a new laboratory marker adds anything to an existing clinical work-up for a disease.

d) Evaluation research:

Evaluation research is of particular interest here. It examines what evaluation is and how it differs from social research generally, and several evaluation models give perspective on the evaluation endeavour. Evaluation should not be considered in a vacuum: it is embedded within a larger planning-evaluation cycle. Evaluation can also be a threatening activity, and many groups and organizations struggle with how to build a good evaluation capability into their everyday activities and procedures. This is essentially an organizational culture issue, and a group or organization needs to address it in order to develop an evaluation culture that works in its context. A typical situation calling for evaluation research is, for example, assessing whether a public training programme actually improved the employment prospects of its participants.

Q 2. In the context of hypothesis testing, briefly explain the difference between a) Null and alternative hypothesis b) Type 1 and type 2 error c) Two tailed and one tailed test d) Parametric and non parametric tests.

Ans.
a) The logic of traditional hypothesis testing requires that we set up two competing
statements or hypotheses referred to as the null hypothesis and the alternative
hypothesis. These hypotheses are mutually exclusive and exhaustive.

H0: The finding occurred by chance

H1: The finding did not occur by chance

The null hypothesis is then assumed to be true unless we find evidence to the contrary. If we find that the evidence is just too unlikely given the null hypothesis, we assume the alternative hypothesis is more likely to be correct. In "traditional statistics", a probability of something occurring of less than .05 (= 5% = 1 chance in 20) is conventionally considered "unlikely".
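As a minimal sketch of this decision rule (assuming Python with the scipy library is available, and using invented sample values purely for illustration), a one-sample t-test can compare a sample mean against a hypothesised value and apply the .05 convention:

from scipy import stats

# Hypothetical data: H0 says the population mean is 100, H1 says it is not.
sample = [102, 98, 105, 110, 99, 104, 108, 101, 103, 107]

t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

if p_value < 0.05:   # the conventional "unlikely" threshold
    print("Reject H0: the finding is unlikely to have occurred by chance alone")
else:
    print("Fail to reject H0")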

b) When an observer makes a Type I error in evaluating a sample against its parent population, he or she is mistakenly thinking that a statistical difference exists when in truth there is no statistical difference (or, to put it another way, the null hypothesis should not be rejected but was mistakenly rejected). For example,
imagine that a pregnancy test has produced a "positive" result (indicating that the
woman taking the test is pregnant); if the woman is actually not pregnant though,
then we say the test produced a "false positive" (assuming the null hypothesis, H0,
was that she is not pregnant). A Type II error, or a "false negative", is the error of
failing to reject a null hypothesis when the alternative hypothesis is the true state
of nature. For example, a type II error occurs if a pregnancy test reports "negative"
when the woman is, in fact, pregnant.

From the Bayesian point of view, a Type I error is one in which information that should not substantially change one's prior estimate of probability nevertheless does. A Type II error is one in which information that should change one's estimate does not. (Though the null hypothesis is not quite the same thing as one's prior estimate, it is, rather, one's pro forma prior estimate.)

Rejecting a null hypothesis when it should not have been rejected creates a Type I error.

Failing to reject a null hypothesis when it should have been rejected creates a Type II error.

(In either case, a wrong decision or error in judgment has occurred.)

Decision rules (or tests of hypotheses), in order to be good, must be designed to minimize errors of decision.

Minimizing errors of decision is not a simple issue—for any given sample size the
effort to reduce one type of error generally results in increasing the other type of
error.

Based on the real-life application of the error, one type may be more serious than
the other.
(In such cases, a compromise should be reached in favor of limiting the more
serious type of error.)

The only way to minimize both types of error is to increase the sample size, and
this may or may not be feasible.

Hypothesis testing is the art of testing whether a variation between two sample
distributions can be explained by chance or not. In many practical applications type
I errors are more delicate than type II errors. In these cases, care is usually
focused on minimizing the occurrence of this statistical error. Suppose the probability of a Type I error is 1%; then there is a 1% chance of concluding that the observed variation is real when in fact it is not. This is called the level of significance. While 1% might be an
acceptable level of significance for one application, a different application can
require a very different level. For example, the standard goal of six sigma is to
achieve precision to 4.5 standard deviations above or below the mean. This means
that only 3.4 parts per million are allowed to be deficient in a normally distributed
process. The probability of type I error is generally denoted with the Greek letter
alpha, α.

To state it simply, a Type I error can usually be interpreted as a false alarm, or a lack of specificity. A Type II error could similarly be interpreted as an oversight or a lapse in attention: a lack of sensitivity. The probability of a Type II error is generally denoted with the Greek letter beta, β.
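The meaning of α as the Type I error rate can be checked with a small simulation sketch (assuming Python with the numpy and scipy libraries; the population parameters are invented). Both samples are drawn from the same population, so the null hypothesis is true by construction, and a test at α = 0.05 should reject it in roughly 5% of trials:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
trials = 10000
false_alarms = 0

for _ in range(trials):
    a = rng.normal(loc=50, scale=10, size=30)   # H0 is true: same population
    b = rng.normal(loc=50, scale=10, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_alarms += 1                       # a Type I error (false alarm)

print(false_alarms / trials)                    # close to alpha, i.e. about 0.05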

c) There are two different types of tests that can be performed. A one-tailed test
looks for an increase or decrease in the parameter whereas a two-tailed test looks
for any change in the parameter (which can be any change- increase or decrease).

We can perform the test at any level (usually 1%, 5% or 10%). For example,
performing the test at a 5% level means that there is a 5% chance of wrongly
rejecting H0.

If we perform the test at the 5% level and decide to reject the null hypothesis, we
say "there is significant evidence at the 5% level to suggest the hypothesis is
false".

One-Tailed Test

We choose a critical region. In a one-tailed test, the critical region has just one part, lying entirely in one tail of the distribution. If our sample value lies in this region, we reject the null hypothesis in favour of the alternative.

Suppose we are looking for a definite decrease. Then the critical region will be to
the left. Note, however, that in the one-tailed test the value of the parameter can
be as high as you like.
Example

Suppose we are given that X has a Poisson distribution and we want to carry out a hypothesis test on the mean, λ, based upon a sample observation of 3.

Suppose the hypotheses are:

H0: λ = 9
H1: λ < 9

We want to test if it is "reasonable" for the observed value of 3 to have come from
a Poisson distribution with parameter 9. So what is the probability that a value as
low as 3 has come from a Po(9)?

P(X ≤ 3) = 0.0212 (this has come from a Poisson table)

The probability is less than 0.05, so there is less than a 5% chance that a value this low has come from a Poisson(9) distribution. We therefore reject the null hypothesis in favour of the alternative at the 5% level.

However, the probability is greater than 0.01, so we would not reject the null
hypothesis in favour of the alternative at the 1% level.
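The same probability can be obtained without a printed Poisson table, as a short sketch (assuming Python with the scipy library):

from scipy import stats

p = stats.poisson.cdf(3, 9)     # P(X <= 3) when X ~ Poisson(mean 9)
print(round(p, 4))              # 0.0212, as quoted above

print(p < 0.05)                 # True  -> reject H0 at the 5% level
print(p < 0.01)                 # False -> do not reject H0 at the 1% level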

Two-Tailed Test

In a two-tailed test, we are looking for either an increase or a decrease. So, for
example, H0 might be that the mean is equal to 9 (as before). This time, however,
H1 would be that the mean is not equal to 9. In this case, therefore, the critical region has two parts, one in each tail of the distribution.
Example

Let's test the parameter p of a Binomial distribution at the 10% level.

Suppose a coin is tossed 10 times and we get 7 heads. We want to test whether or not the coin is fair. If the coin is fair, p = 0.5. Put this as the null hypothesis:

H0: p = 0.5

H1: p ≠ 0.5

Now, because the test is 2-tailed, the critical region has two parts. Half of the
critical region is to the right and half is to the left. So the critical region contains
both the top 5% of the distribution and the bottom 5% of the distribution (since we
are testing at the 10% level).

If H0 is true, X ~ Bin(10, 0.5).

If the null hypothesis is true, what is the probability that X is 7 or above?


P(X ≥ 7) = 1 - P(X < 7) = 1 - P(X ≤ 6) = 1 - 0.8281 = 0.1719

Is this in the critical region? No, because the probability that X is at least 7 is not less than 0.05 (5%), which is what it would need to be.

So there is not significant evidence at the 10% level to reject the null hypothesis.
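This tail probability can be checked with a short sketch (assuming Python with the scipy library):

from scipy import stats

p_upper = 1 - stats.binom.cdf(6, 10, 0.5)   # P(X >= 7) when X ~ Bin(10, 0.5)
print(round(p_upper, 4))                    # 0.1719

# In the two-tailed test at the 10% level each tail holds 5%,
# so the result falls in the critical region only if p_upper < 0.05.
print(p_upper < 0.05)                       # False -> do not reject H0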

d) There are two types of test data and consequently different types of analysis.
As the table below shows, parametric data has an underlying normal distribution
which allows for more conclusions to be drawn as the shape can be
mathematically described. Anything else is non-parametric.

                              Parametric                         Non-parametric
Assumed distribution          Normal                             Any
Assumed variance              Homogeneous                        Any
Typical data                  Ratio or Interval                  Ordinal or Nominal
Data set relationships        Independent                        Any
Usual central measure         Mean                               Median
Benefits                      Can draw more conclusions          Simplicity; less affected by outliers

Tests
Choosing                      Choosing a parametric test         Choosing a non-parametric test
Correlation test              Pearson                            Spearman
Independent measures,
  2 groups                    Independent-measures t-test        Mann-Whitney test
Independent measures,
  >2 groups                   One-way, independent-measures      Kruskal-Wallis test
                              ANOVA
Repeated measures,
  2 conditions                Matched-pair t-test                Wilcoxon test
Repeated measures,
  >2 conditions               One-way, repeated-measures ANOVA   Friedman's test

As the table shows, there are different tests for parametric and non-parametric
data.
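As an illustrative sketch of this choice (assuming Python with the scipy library; the two groups of measurements are invented for illustration), a parametric test and its non-parametric counterpart can be run side by side:

from scipy import stats

group_a = [12.1, 14.3, 13.5, 15.0, 12.8, 14.9]   # hypothetical measurements
group_b = [11.0, 12.2, 11.8, 13.1, 12.5, 11.6]

# Parametric choice (assumes roughly normal, homogeneous data):
t_stat, p_t = stats.ttest_ind(group_a, group_b)          # independent-measures t-test

# Non-parametric counterpart:
u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(p_t, p_u)

# Likewise, Pearson (parametric) vs Spearman (non-parametric) correlation:
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 3.2, 4.8, 5.1, 7.0]
print(stats.pearsonr(x, y))
print(stats.spearmanr(x, y))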

Q 3. Explain the difference between a causal relationship and correlation, with an example of each. What are the possible reasons for a correlation between two variables?

Ans.

Cause and correlation are terms that are often confused or used incorrectly,
particularly the former. This is unfortunate for anyone who listens to a news report or reads a newspaper. If you’ve followed the many things that have
been reported as causes of cancer, you might never eat, drink, or leave your home
again. When we hear that there might be a link between one thing and another, we
often mistakenly assume that one thing causes the other.

The main difference between cause and correlation is the strength and degree to
which two things are related and the certainty with which anyone can establish a
causal relationship. Essentially when you say one thing causes another, you are
saying that there is a direct line between that one thing and the result. Cause
means that an action will always have a predictable reaction.

When you define correlation, the terms cause and correlation become easier to
understand. If you see a correlation between two things, you can see that there is
a relationship between those two things. One thing doesn’t necessarily result in the other thing occurring, but it may increase the likelihood that something will occur.

Understanding the difference between cause and correlation can be helped by an example. Consider the statement: “Violent video games cause violent behavior.” According to the research on this matter, this statement is not true, due to the use of the word “cause” in the sentence. Research has shown that violent video games may influence violent behavior.

It also shows that a number of different factors may be responsible for a person
being violent, among them, poorer socioeconomic status, mental illness, abusive
childhoods, and bad parenting. You cannot say violent video games are the cause
of violence. In order to make the above statement, you’d have to be able to prove
that everyone who ever played a violent video game subsequently exhibited
violence.

Instead, what you can say, and what has been studied, is the correlation between
violent video games and violent behavior. Researchers have shown that there is a
connection/correlation there. Such games may influence others to act in more
aggressive ways but they are not the sole factor and sometimes not even a factor
for predicting violence. Thus there’s a correlation there, which should be
considered, but there is no cause factor. Plenty of people were violent, prior to the
advent of video games, thus if you’re deciding between cause and correlation here,
you must choose correlation.

In some ways, it can be almost impossible, except in extremely controlled circumstances, to say any one thing causes something else, especially when you’re dealing with human health or behavior. You can, in limited ways, make blanket
cause/effect statements about some things. For example, heating water to a
certain temperature causes it to boil. This is a specific cause/effect relationship that
no one would dispute.

Yet it can be helpful to understand the difference between cause and correlation
since we are often barraged with information about things that may pose health
risks to us. What most researchers arrive at in research is that some things, for
instance, alcoholism and cancer are connected or co-related. Alcoholism may
increase your risk of getting cancer, but it does not, in and of itself, cause cancer.

When you hear about the causes of disease, it’s important to be skeptical.
Scientists define correlations all the time, and unfortunately, news media loves to
call these causes, since they then translate to a much more dramatic story. Read
or listen carefully for qualifying words that suggest correlation, like “may,” “might increase,” or “could have an effect,” to separate cause from correlation.

The correlation is one of the most common and most useful statistics. A correlation
is a single number that describes the degree of relationship between two variables.
Let's work through an example to show you how this statistic is computed.

Correlation Example

Let's assume that we want to look at the relationship between two variables, height
(in inches) and self esteem. Perhaps we have a hypothesis that how tall you are affects your self esteem (incidentally, I don't think we have to worry about the direction of causality here -- it's not likely that self esteem causes your height!).
Let's say we collect some information on twenty individuals (all male -- we know
that the average height differs for males and females so, to keep this example
simple we'll just use males). Height is measured in inches. Self esteem is measured
based on the average of 10 1-to-5 rating items (where higher scores mean higher
self esteem). Here's the data for the 20 cases (don't take this too seriously -- I
made this data up to illustrate what a correlation is):

Person Height Self Esteem
1 68 4.1
2 71 4.6
3 62 3.8
4 75 4.4
5 58 3.2
6 60 3.1
7 67 3.8
8 68 4.1
9 71 4.3
10 69 3.7
11 68 3.5
12 67 3.2
13 63 3.7
14 62 3.3
15 60 3.4
16 63 4.0
17 65 4.1
18 67 3.8
19 63 3.4
20 61 3.6
A quick look at the histogram for each variable (not reproduced here) shows how each is distributed. And here are the descriptive statistics:

Variable     Mean   StDev     Variance  Sum   Minimum  Maximum  Range
Height       65.4   4.40574   19.4105   1308  58       75       17
Self Esteem  3.755  0.426090  0.181553  75.1  3.1      4.6      1.5

Finally, we'll look at the simple bivariate (i.e., two-variable) plot (not reproduced here):

You should immediately see in the bivariate plot that the relationship between the
variables is a positive one (if you can't see that, review the section on types of
relationships) because if you were to fit a single straight line through the dots it
would have a positive slope or move up from left to right. Since the correlation is
nothing more than a quantitative estimate of the relationship, we would expect a
positive correlation.

What does a "positive relationship" mean in this context? It means that, in general,
higher scores on one variable tend to be paired with higher scores on the other and
that lower scores on one variable tend to be paired with lower scores on the other.
You should confirm visually that this is generally true in the plot above.

Calculating the Correlation

Now we're ready to compute the correlation value. The formula for the correlation is:

r = [N*Σxy − (Σx)(Σy)] / sqrt{[N*Σx² − (Σx)²] * [N*Σy² − (Σy)²]}
We use the symbol r to stand for the correlation. Through the magic of mathematics it turns out that r will always be between -1.0 and +1.0. If the correlation is negative, we have a negative relationship; if it's positive, the relationship is positive. You don't need to know how we came up with this formula
unless you want to be a statistician. But you probably will need to know how the
formula relates to real data -- how you can use the formula to compute the
correlation. Let's look at the data we need for the formula. Here's the original data
with the other necessary columns:

Person  Height (x)  Self Esteem (y)  x*y  x*x  y*y
1 68 4.1 278.8 4624 16.81
2 71 4.6 326.6 5041 21.16
3 62 3.8 235.6 3844 14.44
4 75 4.4 330 5625 19.36
5 58 3.2 185.6 3364 10.24
6 60 3.1 186 3600 9.61
7 67 3.8 254.6 4489 14.44
8 68 4.1 278.8 4624 16.81
9 71 4.3 305.3 5041 18.49
10 69 3.7 255.3 4761 13.69
11 68 3.5 238 4624 12.25
12 67 3.2 214.4 4489 10.24
13 63 3.7 233.1 3969 13.69
14 62 3.3 204.6 3844 10.89
15 60 3.4 204 3600 11.56
16 63 4 252 3969 16
17 65 4.1 266.5 4225 16.81
18 67 3.8 254.6 4489 14.44
19 63 3.4 214.2 3969 11.56
20 61 3.6 219.6 3721 12.96
Sum = 1308 75.1 4937.6 85912 285.45

The first three columns are the same as in the table above. The next three columns
are simple computations based on the height and self esteem data. The bottom
row consists of the sum of each column. This is all the information we need to
compute the correlation. Here are the values from the bottom row of the table (where N is 20 people) as they are related to the symbols in the formula:

N = 20, Σx = 1308, Σy = 75.1, Σxy = 4937.6, Σx² = 85912, Σy² = 285.45

Now, when we plug these values into the formula given above, we get the following (one step at a time):

Numerator: 20 × 4937.6 − 1308 × 75.1 = 98752 − 98230.8 = 521.2
Denominator: sqrt[(20 × 85912 − 1308²) × (20 × 285.45 − 75.1²)] = sqrt[7376 × 68.99] ≈ 713.4
r = 521.2 / 713.4 ≈ 0.73

So, the correlation for our twenty cases is .73, which is a fairly strong positive
relationship. I guess there is a relationship between height and self esteem, at
least in this made up data!
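The same calculation can be reproduced with a short sketch (assuming Python; the height and self-esteem lists are the twenty made-up cases from the table above):

import math

height      = [68, 71, 62, 75, 58, 60, 67, 68, 71, 69,
               68, 67, 63, 62, 60, 63, 65, 67, 63, 61]
self_esteem = [4.1, 4.6, 3.8, 4.4, 3.2, 3.1, 3.8, 4.1, 4.3, 3.7,
               3.5, 3.2, 3.7, 3.3, 3.4, 4.0, 4.1, 3.8, 3.4, 3.6]

n      = len(height)
sum_x  = sum(height)                                        # 1308
sum_y  = sum(self_esteem)                                   # 75.1
sum_xy = sum(x * y for x, y in zip(height, self_esteem))    # 4937.6
sum_xx = sum(x * x for x in height)                         # 85912
sum_yy = sum(y * y for y in self_esteem)                    # 285.45

numerator   = n * sum_xy - sum_x * sum_y
denominator = math.sqrt((n * sum_xx - sum_x ** 2) * (n * sum_yy - sum_y ** 2))
r = numerator / denominator
print(round(r, 2))                                          # 0.73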

Testing the Significance of a Correlation

Once you've computed a correlation, you can determine the probability that the
observed correlation occurred by chance. That is, you can conduct a significance
test. Most often you are interested in determining the probability that the
correlation is a real one and not a chance occurrence. In this case, you are testing
the mutually exclusive hypotheses:

Null Hypothesis: r = 0

Alternative Hypothesis: r ≠ 0

The easiest way to test this hypothesis is to find a statistics book that has a table
of critical values of r. Most introductory statistics texts would have a table like this.
As in all hypothesis testing, you need to first determine the significance level. Here,
I'll use the common significance level of alpha = .05. This means that I am
conducting a test where the odds that the correlation is a chance occurrence is no
more than 5 out of 100. Before I look up the critical value in a table I also have to
compute the degrees of freedom or df. The df is simply equal to N-2 or, in this
example, is 20-2 = 18. Finally, I have to decide whether I am doing a one-tailed or
two-tailed test. In this example, since I have no strong prior theory to suggest
whether the relationship between height and self esteem would be positive or
negative, I'll opt for the two-tailed test. With these three pieces of information --
the significance level (alpha = .05), degrees of freedom (df = 18), and type of test (two-tailed) -- I can now test the significance of the correlation I found. When I look up this value in the handy little table at the back of my statistics book I find that the critical value is .4438. This means that if my correlation is greater than .4438 or less than -.4438 (remember, this is a two-tailed test) I can conclude that the odds are less than 5 out of 100 that this is a chance occurrence. Since my correlation of .73 is actually quite a bit higher, I conclude that it is not a chance
finding and that the correlation is "statistically significant" (given the parameters of
the test). I can reject the null hypothesis and accept the alternative.
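The critical value and the decision can be reproduced with a brief sketch (assuming Python with the scipy library). The critical r is derived here from the critical t value through the standard relation r = t / sqrt(df + t²):

import math
from scipy import stats

n, r = 20, 0.73                   # sample size and the correlation found above
df = n - 2                        # 18 degrees of freedom

t_crit = stats.t.ppf(1 - 0.05 / 2, df)          # two-tailed test, alpha = .05
r_crit = t_crit / math.sqrt(df + t_crit ** 2)
print(round(r_crit, 4))                         # about .4438, as in the table

print(abs(r) > r_crit)                          # True -> reject the null hypothesis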

The Correlation Matrix

All I've shown you so far is how to compute a correlation between two variables. In
most studies we have considerably more than two variables. Let's say we have a
study with 10 interval-level variables and we want to estimate the relationships
among all of them (i.e., between all possible pairs of variables). In this instance,
we have 45 unique correlations to estimate (more later on how I knew that!). We
could do the above computations 45 times to obtain the correlations. Or we could
use just about any statistics program to automatically compute all 45 with a simple
click of the mouse.

I used a simple statistics program to generate random data for 10 variables with
20 cases (i.e., persons) for each variable. Then, I told the program to compute the
correlations among these variables. Here's the result:

C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
C1 1.000
C2 0.274 1.000
C3 -0.134 -0.269 1.000
C4 0.201 -0.153 0.075 1.000
C5 -0.129 -0.166 0.278 -0.011 1.000
C6 -0.095 0.280 -0.348 -0.378 -0.009 1.000
C7 0.171 -0.122 0.288 0.086 0.193 0.002 1.000
C8 0.219 0.242 -0.380 -0.227 -0.551 0.324 -0.082 1.000
C9 0.518 0.238 0.002 0.082 -0.015 0.304 0.347 -0.013 1.000
C10 0.299 0.568 0.165 -0.122 -0.106 -0.169 0.243 0.014 0.352 1.000
This type of table is called a correlation matrix. It lists the variable names (C1-C10)
down the first column and across the first row. The diagonal of a correlation matrix
(i.e., the numbers that go from the upper left corner to the lower right) always
consists of ones. That's because these are the correlations between each variable
and itself (and a variable is always perfectly correlated with itself). This statistical
program only shows the lower triangle of the correlation matrix. In every
correlation matrix there are two triangles that are the values below and to the left
of the diagonal (lower triangle) and above and to the right of the diagonal (upper
triangle). There is no reason to print both triangles because the two triangles of a
correlation matrix are always mirror images of each other (the correlation of
variable x with variable y is always equal to the correlation of variable y with
variable x). When a matrix has this mirror-image quality above and below the
diagonal we refer to it as a symmetric matrix. A correlation matrix is always a
symmetric matrix.

To locate the correlation for any pair of variables, find the value in the table for the
row and column intersection for those two variables. For instance, to find the
correlation between variables C5 and C2, I look for where row C2 and column C5 is
(in this case it's blank because it falls in the upper triangle area) and where row C5
and column C2 is and, in the second case, I find that the correlation is -.166.

OK, so how did I know that there are 45 unique correlations when we have 10
variables? There's a handy simple little formula that tells how many pairs (e.g., correlations) there are for any number of variables:

Number of pairs = N * (N − 1) / 2

where N is the number of variables. In the example, I had 10 variables, so I know I have (10 * 9)/2 = 90/2 = 45 pairs.
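Both the pair count and the correlation matrix itself can be reproduced with a short sketch (assuming Python with the numpy library; the data are random, just as in the example above):

import numpy as np

n_vars, n_cases = 10, 20
print(n_vars * (n_vars - 1) // 2)           # 45 unique pairs

rng = np.random.default_rng(1)
data = rng.normal(size=(n_cases, n_vars))   # random data: 20 cases, 10 variables

corr = np.corrcoef(data, rowvar=False)      # 10 x 10 correlation matrix
print(np.allclose(np.diag(corr), 1.0))      # True: ones on the diagonal
print(np.allclose(corr, corr.T))            # True: a symmetric matrix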

Q 4. Briefly explain any two factors that affect the choice of a sampling
technique. What are the characteristics of a good sample?

Ans.

A sample is a finite part of a statistical population whose properties are studied to gain information about the whole (Webster, 1985). When dealing with people, it can be defined as a set of respondents (people) selected from a larger population for the purpose of a survey.

A population is a group of individuals, persons, objects, or items from which samples are taken for measurement, for example a population of presidents or professors, books or students.

What is sampling? Sampling is the act, process, or technique of selecting a suitable sample, or a representative part of a population, for the purpose of determining parameters or characteristics of the whole population.

What is the purpose of sampling? To draw conclusions about populations from samples, we must use inferential statistics, which enable us to determine a population's characteristics by directly observing only a portion (or sample) of the population. We obtain a sample rather than a complete enumeration (a census) of the population for many reasons. Obviously, it is cheaper to observe a part rather than the whole, but we should prepare ourselves to cope with the dangers of using samples. There are various kinds of sampling procedures; some are better than others, but all may yield samples that are inaccurate and unreliable. We can learn how to minimize these dangers, but some potential error is the price we must pay for the convenience and savings the samples provide.

There would be no need for statistical theory if a census rather than a sample were always used to obtain information about populations. But a census may not be practical and is almost never economical. There are six main reasons for sampling instead of doing a census: economy, timeliness, the large size of many populations, inaccessibility of some of the population, destructiveness of the observation, and accuracy.

The economic advantage of using a sample in research: Obviously, taking a sample requires fewer resources than a census. For example, let us assume that you are one of the very curious students around. You have heard so much about the famous Cornell, and now that you are there, you want to hear from the insiders. You want to know what all the students at Cornell think about the quality of teaching they receive. You know that all the students are different, so they are likely to have different perceptions, and you believe you must get all these perceptions. Because you want an in-depth view of every student, you decide to conduct personal interviews with each one of them, and you want the results in 20 days only. Let us assume that at this particular time Cornell has only 20,000 students, and that those who are helping you are so fast at the interviewing art that together you can interview at least 10 students per person per day, in addition to your 18 credit hours of course work. You will require 100 research assistants for 20 days, and since you are paying them a minimum wage of $5.00 per hour for ten hours ($50.00) per person per day, you will require $100,000.00 just to complete the interviews; the analysis will simply be impossible. You may decide to hire additional assistants to help with the analysis at another $100,000.00, and so on, assuming you have that amount in your account.
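The arithmetic in this illustration can be laid out as a tiny sketch (Python; all figures are the invented ones from the example above):

students           = 20000
interviews_per_day = 10          # per research assistant
days               = 20
wage_per_day       = 5.00 * 10   # $5.00 per hour for ten hours = $50.00

assistants = students / (interviews_per_day * days)   # 100 assistants needed
cost       = assistants * days * wage_per_day         # $100,000 for the interviews alone
print(assistants, cost)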

As unrealistic as this example is, it does illustrate the very high cost of a census. For the type of information desired, a small, wisely selected sample of Cornell students can serve the purpose. You don't even have to hire a single assistant; you can complete the interviews and analysis on your own. Rarely does a circumstance require a census of the population, and even more rarely does one justify the expense.

The time factor.

A sample may provide you with needed information quickly. For example, suppose you are a doctor and a disease has broken out in a village within your area of jurisdiction. The disease is contagious, it is killing within hours, and nobody knows what it is. You are required to conduct quick tests to help save the situation. If you try a census of those affected, they may be long dead when you arrive with your results. In such a case, just a few of those already infected could be used to provide the required information.

The very large populations

Many populations about which inferences must be made are quite large. For example, consider the population of high school seniors in the United States of America, a group numbering 4,000,000. The responsible agency in the government has to plan for how they will be absorbed into the different departments and even the private sector. The employers would like to have specific knowledge about the students' plans in order to make compatible plans to absorb them during the coming year. But the big size of the population makes it physically impossible to conduct a census. In such a case, selecting a representative sample may be the only way to get the information required from high school seniors.

The partly accessible populations

Some populations are so difficult to get access to that only a sample can be used, such as people in prison, crashed aeroplanes in the deep sea, presidents, etc. The inaccessibility may be economic or time related: a particular study population may be so costly to reach, like the population of planets, that only a sample can be used. In other cases, some events occur so rarely that only sample information can be relied on, for example natural disasters like a flood that occurs every 100 years, or the flood that occurred in Noah's days, which has never occurred again.

The destructive nature of the observation: Sometimes the very act of observing the desired characteristic of a unit of the population destroys it for the intended use. Good examples of this occur in quality control. For example, to test the quality of a fuse, to determine whether it is defective, it must be destroyed. To obtain a census of the quality of a lorry load of fuses, you would have to destroy all of them, which is contrary to the purpose served by quality-control testing. In this case, only a sample should be used to assess the quality of the fuses.

Accuracy and sampling. A sample may be more accurate than a census. A sloppily
conducted census can provide less reliable information than a carefully obtained
sample.

Criteria governing the choice of the sampling technique

1. Purpose of the survey- What does the researcher aim at? If he intends to
generalize the findings based on the sample survey to the population, then an
appropriate probability sampling method must be selected. The choice of a
particular type of probability sampling depends on the geographical area of the
survey and the size and the nature of the population under study.
2. Measurability: The application of statistical inference theory requires computation of the sampling error from the sample itself. Only probability samples allow such computation. Hence, where the research objective requires statistical inference, the sample should be drawn by applying simple random sampling or stratified random sampling, depending on whether the population is homogeneous or heterogeneous.

3. Degree of precision: Should the results of the survey be very precise, or would even rough results serve the purpose? The desired level of precision is one of the criteria of sampling method selection. Where a high degree of precision of results is desired, probability sampling should be used. Where even crude results would serve the purpose (e.g., marketing surveys, readership surveys etc.), any convenient non-random sampling like quota sampling would be enough.

4. Information about the population: How much information is available about the population to be studied? Where no list of the population and no information about its nature are available, an exploratory study with non-probability sampling may be made to gain a better idea of the population. After gaining sufficient knowledge about the population through the exploratory study, an appropriate probability sampling design may be adopted.

5. The nature of the population: In terms of the variables to be studied, is the population homogeneous or heterogeneous? In the case of a homogeneous population, even a simple random sample will do; if the population is heterogeneous, stratified random sampling is appropriate.

6. Geographical Area of the Study and the Size of the Population: If the area
covered by a survey is very large and the size of the population is quite large,
multi-stage cluster sampling would be appropriate. But if the area and the size of
the population are small, single stage probability sampling methods could be used.

7. Financial resources: If the available finance is limited, it may become necessary to choose a less costly sampling plan like multistage cluster sampling, or even quota sampling, as a compromise. However, if the objectives of the study and the desired level of precision cannot be attained within the stipulated budget, there is no alternative but to give up the proposed survey. Where finance is not a constraint, a researcher can choose the most appropriate method of sampling that fits the research objective and the nature of the population.

8. Time limitation: The time limit within which the research project should be completed restricts the choice of a sampling method. As a compromise, it may then become necessary to choose less time-consuming methods like simple random sampling instead of stratified sampling/sampling with probability proportional to size, or multi-stage cluster sampling instead of single-stage sampling of elements. Of course, precision has to be sacrificed to some extent.

9. Economy: This should be another criterion in choosing the sampling method. It means achieving the desired level of precision at minimum cost. A sample is economical if the precision per unit cost is high or the cost per unit of variance is low.

The above criteria frequently conflict, and the researcher must balance and blend them to obtain a good sampling plan. The chosen plan thus represents an adaptation of the sampling theory to the available facilities and resources; that is, it represents a compromise between idealism and feasibility. One should use simple, workable methods instead of unduly elaborate and complicated techniques.

The Characteristics of a good sample:

Representativeness: A sample must be representative of the population. Probability sampling techniques yield representative samples.

Accuracy: Accuracy is defined as the degree to which bias is absent from the sample. An accurate sample is one which exactly represents the population.

Precision: The sample must yield precise estimates. Precision is measured by the standard error.

Size: A good sample must be adequate in size in order to be reliable.

Q 5. Select any topic for research and explain how you will use both
secondary and primary sources to gather the required information.

Ans.

After assessing the problems and the needs of the audience and developing their profiles, the next step is to collect information. The programme planners and producers will use the information only when they are sure about its quality. Therefore, one should be concerned not only about the type and the amount of information but also about the quality of information. Some key criteria for quality information are given below:

Accuracy or Validity: It should show the true situation. For this, plan in advance,
be clear and specific with regard to information needed, simplify your samples and
research methods, use more than one method/ source for the same data and
develop guidelines for analysis of the data.

Relevance: It should be relevant to the information users. It should reflect commitment to the cause of the community, engage the target population in the process of information collection, and try to establish in advance who needs what information and how it will be used.

Significance: It should be important. Many a time researchers collect a lot of information which is irrelevant, unnecessary and insignificant for the purpose.

Credibility: The information should be collected in a scientific manner to be believable. Researchers should be objective while gathering, analyzing and interpreting information, and be transparent about the methods used to obtain information and draw conclusions.

Timeliness: Information should be available in time to make necessary decisions. There is little use in providing information after programming has already made significant headway. For this, you should plan in advance, use simple tools for collection and analysis, create a schedule with deadlines and stick to it.

Representativeness: It should represent the entire target audience and not just
some part of it.

Sources of Information

You can collect the required information from various sources. If possible, use more than one source and method for the same set of information. This enables you to verify accuracy and gives more credence to the data.

Sources of information may broadly be classified as primary sources and secondary sources.

Primary sources refer to people or places from which you can obtain new and raw information that does not already exist. The new information that you gather from primary sources is referred to as primary data.

Secondary sources refer to sources that have already gathered information, possibly for reasons other than the purposes of your present concern. The information is already available, and is referred to as secondary data.

Primary Data

Primary data are obtained by going to the field to collect new information for the
purpose of your specific requirement. Typically, primary data are often needed for
baseline study, assessment of needs and development of audience profiles.
Examples of primary sources include:

Target population

Extension functionaries of government and non-government agencies

Social workers, activists etc.

Other interested parties working with the same or similar populations

By using primary sources, you can have full control over what, when and how the
information is collected. In this way, it is easier to maintain control over quality of
information and to do follow-up for any critical findings or missing information.

However, primary data have also certain limitations:

Primary sources may not be easily accessible. For example, farmers during the
sowing season would not be available to give you the information you need.

The skills needed for successfully designing a study and implementing primary data collection are substantially greater than those needed for working with secondary data.

There may be errors of judgment in selecting the respondents or places from which
to gather information e.g. contacting persons of high socio-economic status from
the villages along the main road only and thereby not reflecting the issues of the
target populations of lower socio-economic status who live in remote areas.

Costs of primary data collection can be high.

It is more time consuming also.

Despite limitations, primary sources are essential and important for audience
research. Extensive interaction with them, particularly the target audience
themselves, yields rich dividends. In fact, nothing can replace the information
collected from primary sources. However, keep your data collection sharply focused
for the reasons of time and costs.

Secondary Data

The term secondary data refers to information that already exists and that has
been previously gathered by some other person or organization. You may find it
useful for your purpose. Secondary data include many kinds of written and visual
materials such as:

Previous research reports

Project reports

Historical accounts

Books and materials describing the region and the people

Documentary films/photographs.

Statistical reports/digests of various government agencies and other institutions

Maps and other materials

Obtaining data from secondary sources is obviously cheaper and easier than going out to the field to gather fresh information. Therefore, gathering and using secondary data should generally be considered as a first option when it is available. You can use secondary data for various purposes:

To serve as an independent piece of information.


To select areas for further intensive study by you.

To supplement the information gathered by you from the primary sources.

However, use the secondary data with caution, because these, too, have certain
inherent limitations:

May be out-dated and old.

May be inadequate.

If the methods and circumstances of data collection are not recorded, you cannot
be sure of their quality.

The definition of concepts used in the data may be different. For example, your
concept of a small family may be different than what has been adopted there.

Nevertheless it is always a good idea to exploit the potential of the secondary data
to your best advantage.

A central principle to keep in mind for undertaking audience research, therefore, is that after examining the available secondary data, primary data collection may be done by focusing only on the most significant issues and using simple and straightforward research methods such as observations, in-depth interviews and focus group discussions.

Q 6. Case Study: You are engaged to carry out a market survey on behalf
of a leading Newspaper that is keen to increase its circulation in Bangalore
City, in order to ascertain reader habits and interests. Develop a title for
the study; define the research problem and the objectives or questions to
be answered by the study.

Ans.

Newspaper publishers spend millions of dollars annually to ensure that the newspaper arrives at the newspaper stand or the subscriber’s doorstep every day.
Reporters track down stories and editors diligently maintain the editorial integrity
of the newspaper. The production department meticulously guarantees that
advertisements make it onto the right page.

It is no small feat that this daily production process has continued for centuries
across every city and town in the world. Therein lies the rub. With a resolute focus
on both the published newspaper and production efficiencies, newspapers have
become true stalwarts of the industrial age. The last decade has ushered in a new
era, the information age, which is characterized by an unwavering focus on
customers. A newspaper’s most valuable asset is customer acceptance. Today,
customer service means more than delivering the newspaper on time, every time.
Many newspapers are transforming their organizations from manufacturing-
oriented enterprises to customer-centric businesses and relying on customer
relationship management solutions to help catapult newspapers into the new age.

Objectives of the Study

The primary objective of the study is to identify and describe the use of various elements of the marketing mix in the newspaper industry of Bangalore by focusing on the marketing practices of the highest-circulated newspaper, Prothom Alo. The study has been carried out with the following specific objectives:

i. To cite the price determination process of a leading newspaper.

ii. To describe the marketing cost of Prothom Alo.

iii. To narrate the distribution channel of a national newspaper.

iv. To illustrate the promotional activities of Prothom Alo.

v. To identify the current marketing problems of a daily newspaper.

vi. To find out the ways of increasing the marketing efficiency of Prothom Alo.

Research Problem

Despite the best efforts of the researchers, this study is not fully free of certain obvious limitations. The basic limitation is its sole dependence on secondary data. Secondly, the sources of secondary data were very limited, and relevant data are not readily available in this field. For this reason, the accuracy of the report depends on the accuracy of the information furnished by the secondary sources.
MB0034 Research Methodology

Assignment Set- 2

Q 1. Discuss the relative advantages and disadvantages of the different methods of distributing questionnaires to the respondents of a study.

Ans.

Questionnaires are easy to analyze, and most statistical analysis software can easily process them. They are cost effective when compared to face-to-face interviews, mostly because of the costs associated with travel time. Questionnaires are familiar to most people (Berdie, Anderson, and Niebuhr, 1986). Nearly everyone has had some experience completing questionnaires and they generally do not make people apprehensive. They are less intrusive than telephone or face-to-face surveys. When respondents receive a questionnaire in the mail, they are free to complete it on their own time-table. Unlike other research methods, the respondent is not interrupted by the research instrument.

On the other hand, questionnaires are simply not suited for some people. For example, a written survey to a group of poorly educated people might not work because of reading skill problems. More frequently, some people are turned off by written questionnaires because of misuse.

Questionnaires should leave adequate space for respondents to make comments. One criticism of questionnaires is their inability to retain the "flavor" of a response. Leaving space for comments will provide valuable information not captured by the response categories. Leaving white space also makes the questionnaire look easier and this might increase response. Researchers should design the questionnaire so it holds the respondent's interest. The goal is to make the respondent want to complete the questionnaire. One way to keep a questionnaire interesting is to provide variety in the type of items used. Likewise, the most important items should appear in the first half of the questionnaire. Respondents often send back partially completed questionnaires. By putting the most important items near the beginning, the partially completed questionnaires will still contain important information.

"An anonymous study is one in which nobody (not even the study directors) can identify who provided data on completed questionnaires" (Berdie, Anderson, and Niebuhr, 1986, p. 47). It is generally not possible to conduct an anonymous
questionnaire through the mail because of the need to follow-up on nonresponders.
However, it is possible to guarantee confidentiality, where those conducting the study promise not to reveal the information to anyone. For the purpose of follow-
up, identifying numbers on questionnaires are generally preferred to using
respondents' names. It is important, however, to explain why the number is there
and what it will be used for.
A good questionnaire makes it convenient for the respondent to reply. Mail surveys
that include a self-addressed stamped reply envelope get better response than
business reply envelopes, although they are more expensive since you also pay for
the non-respondents. One important area of question wording is the effect of the interrogation and assertion question formats. The interrogation format asks a question directly, whereas the assertion format asks subjects to indicate their level of agreement or disagreement with a statement.

The pre-notification letter should address five items (Walonick, 1993):

1. Briefly describe why the study is being done.

2. Identify the sponsors.

3. Explain why the person receiving the pre-letter was chosen.

4. Justify why the respondent should complete the questionnaire.

5. Explain how the results will be used.

The researcher should prepare a mailing list of the selected respondents by collecting the addresses from the telephone directory of the association or organization to which they belong. A covering letter should accompany a copy of the questionnaire.

Other modes of sending questionnaires

There are several methods of distributing questionnaires to the respondents: (1) personal delivery, (2) attaching the questionnaire to a product, (3) advertising the questionnaire in a newspaper or magazine, and (4) news-stand inserts.

Personal Delivery

The researcher or his assistant may deliver the questionnaires to the potential respondents with a request to complete them at their convenience. After a day or two, he can collect the completed questionnaires. This method combines the advantages of the personal interview and the mail survey. Alternatively, the questionnaires may be delivered in person and the completed questionnaires may be returned by mail by the respondent.

Attaching Questionnaire to a product

A firm test-marketing a product may attach a questionnaire to the product and request the buyer to complete it and mail it back to the firm. The respondent is usually rewarded with a gift or a discount coupon.

Advertising the Questionnaires

The questionnaire, with instructions for completion, may be advertised on a page of a magazine or in a section of a newspaper. The potential respondent completes it, tears it out and mails it to the advertiser. For example, the committee on banks' customer services used this method for collecting information from the customers of commercial banks in India. This method may be used for large-scale surveys on topics of common interest.

News-stand Inserts

This method involves inserting the covering letter, questionnaire and a self-addressed reply-paid envelope into a random sample of news-stand copies of a newspaper or magazine.

Improving the Response Rate in a Mail survey.

The response rate in mail surveys is generally very low, more so in developing countries like India. Certain techniques have to be adopted to increase the response rate. They are:

1. Quality Printing: The questionnaire may be neatly printed on quality, light-coloured paper, so as to attract the attention of the respondent.
2. Covering Letter: The covering letter should be couched in a pleasant style so as to attract and hold the interest of the respondent. It must anticipate objections and answer them briefly. It is desirable to address the respondent by name.
3. Advance Information: Advance information can be provided to potential
respondents by a telephone call or advance notice in the newsletter of the
concerned organization or by a letter. Such preliminary contact with potential
respondents is more successful than follow up efforts.
4. Incentives: Money, stamps for collection and other incentives are also used to induce respondents to complete and return the mail questionnaire.
5. Follow-up contacts: In the case of respondents belonging to an organization, they may be approached through someone in that organization known to the researcher.
6. Larger sample size: A large sample may be drawn than the estimated
sample size. For example, if the required sample size is 1000, a sample of
1500 may be drawn. This may help the researcher to secure an effective
sample size closer to the required size.
Advantages of Questionnaires

The advantages of mail surveys are:

1. They are less costly than personal interviews, as the cost of mailing is the same throughout the country, irrespective of distance.
2. They can cover extensive geographical areas.
3. Mailing is useful in contacting persons such as senior business executives who are difficult to reach in any other way.
4. The respondents can complete the questionnaires at their convenience.
5. Mail surveys, being more impersonal, provide more anonymity than personal interviews.
6. Mail surveys are totally free from the interviewer’s bias, as there is no personal contact between the respondents and the investigator.
7. Certain personal and economic data may be given accurately in an unsigned mail questionnaire.

Disadvantages of Questionnaires

The disadvantages of mail surveys are:

1. The scope for mail survey is very limited in a country like India where the
percentage of literacy is very low.
2. The response rate of mail survey is low. Hence, the resulting sample will not
be a representative one.

Q 2. In processing data, what is the difference between measures of central tendency and measures of dispersion? What is the most important measure of central tendency and dispersion?

Ans.

Variance and standard deviation are the most familiar measures of dispersion. Variance is the arithmetic mean (average) of the squares of the differences between the value of each observation and the arithmetic mean of all observations. It is also referred to as the second moment about the mean. The formal definition of variance is:

Variance = Σ(x − mean)² / N

For computation purposes, the formula can be used in the form shown below, which allows the variance to be derived without first calculating the mean:

Variance = Σx² / N − (Σx / N)²
Standard Deviation: Standard deviation is the square root of the variance.

Normalized Standard Deviation: It is often useful to express the difference between the mean and a given value in units of the standard deviation:

z = (x − mean) / standard deviation

The normalized standard deviation is often referred to as z. Probability tables for the normal distribution are usually based on z.

Mean Absolute Deviation (MAD)

A weakness of the standard deviation as a measure of dispersion is its sensitivity to anomalous values, which are a feature of real-life data. This is a result of squaring the difference between a value and the mean: squaring conveniently gets rid of negative values, but at the expense of increasing the significance of extreme ones. An alternative is based on the absolute value of the difference between a given value and the mean:

MAD = Σ|x − mean| / N

The downside is that the use of absolute values makes the analytical treatment of such functions difficult, but this is a small price to pay.

In situations where the median is a more stable measure of central tendency, it is used in place of the mean.

The example below compares the standard deviation and the MAD for a small
sample which contains an anomalous extreme value. The measures of central
tendency for the sample are:

Mean 1.7        Median 1.5

Value     |x - Mean|    |x - Median|    (x - Mean)^2
 1.2          0.5            0.3             0.25
 1.4          0.3            0.1             0.09
 1.5          0.2            0.0             0.04
 1.6          0.1            0.1             0.01
 2.8          1.1            1.3             1.21
Totals        2.2            1.8             1.60

Mean Absolute Deviation (2.2 / 5)               0.44
Median Absolute Deviation (1.8 / 5)             0.36
Standard Deviation (square root of 1.60 / 5)    0.57
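The following short Python sketch is not part of the original assignment; it simply recomputes the figures in the table above, which makes it easy to see how each measure reacts to the extreme value 2.8:

from math import sqrt
from statistics import mean, median

sample = [1.2, 1.4, 1.5, 1.6, 2.8]

m = mean(sample)      # 1.7
md = median(sample)   # 1.5

# Population variance: average squared deviation from the mean.
variance = sum((x - m) ** 2 for x in sample) / len(sample)
std_dev = sqrt(variance)

# Mean absolute deviation: average absolute deviation from the mean.
mad_mean = sum(abs(x - m) for x in sample) / len(sample)

# "Median absolute deviation" as used in the table above:
# the average absolute deviation from the median.
mad_median = sum(abs(x - md) for x in sample) / len(sample)

print(round(mad_mean, 2))    # 0.44
print(round(mad_median, 2))  # 0.36
print(round(std_dev, 2))     # 0.57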

The MAD statistics are less sensitive to extreme anomalous values; however, it is important to use the statistic which is best suited for a given analysis.

Easily telling people about your data

Collecting data can be easy and fun. But sometimes it can be hard to tell other
people about what you have found. That’s why we use statistics. Two kinds of
statistics are frequently used to describe data. They are measures of central
tendency and dispersion. These are often called descriptive statistics because they
can help you describe your data.

Mean, median and mode

These are all measures of central tendency. They help summarize a bunch of
scores with a single number. Suppose you want to describe a bunch of data that
you collected to a friend for a particular variable like height of students in your
class. One way would be to read each height you recorded to your friend. Your friend would listen to all of the heights and then come to a conclusion about how tall students generally are in your class. But this would take too much time, especially if you are in a class of 200 or 300 students! Another way to
communicate with your friend would be to use measures of central tendency like
the mean, median and mode. They help you summarize bunches of numbers with
one or just a few numbers. They make telling people about your data easy.

Range, variance and standard deviation

These are all measures of dispersion. These help you to know the spread of scores
within a bunch of scores. Are the scores really close together or are they really far
apart? For example, if you were describing the heights of students in your class to
a friend, they might want to know how much the heights vary. Are all the men
about 5 feet 11 inches, give or take a few centimeters? Or is there a lot of variation, where some men are 5 feet and others are 6 feet 5 inches? Measures of dispersion
like the range, variance and standard deviation tell you about the spread of scores
in a data set. Like central tendency, they help you summarize a bunch of numbers
with one or just a few numbers.
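As a quick illustration (the heights below are invented for the example, not taken from any actual survey), a few lines of Python show how the range and the standard deviation summarize the spread of such data:

from statistics import pstdev

heights_cm = [152, 160, 165, 170, 180, 196]   # hypothetical student heights in cm

height_range = max(heights_cm) - min(heights_cm)   # highest minus lowest
height_sd = pstdev(heights_cm)                     # population standard deviation

print(height_range)          # 44
print(round(height_sd, 1))   # 14.3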
Q 3. What are the characteristics of a good research design? Explain how
the research design for exploratory studies is different from the research
design for descriptive and diagnostic studies.

Ans.

Design research investigates the process of designing in all its many fields. It is
thus related to Design methods in general or for particular disciplines. A primary
interpretation of design research is that it is concerned with undertaking research
into the design process. Secondary interpretations would refer to undertaking
research within the process of design. The overall intention is to better understand
and to improve the design process.

Throughout the design construction task, it is important to have in mind some endpoint, some criteria which we should try to achieve before finally accepting a design strategy.
design strategy. The criteria discussed below are only meant to be suggestive of
the characteristics found in good research design. It is worth noting that all of
these criteria point to the need to individually tailor research designs rather than
accepting standard textbook strategies as is.

Theory-Grounded. Good research strategies reflect the theories which are being investigated. Where specific theoretical expectations can be hypothesized, these are incorporated into the
design. For example, where theory predicts a specific treatment effect on one
measure but not on another, the inclusion of both in the design improves
discriminant validity and demonstrates the predictive power of the theory.

Situational. Good research designs reflect the settings of the investigation. This
was illustrated above where a particular need of teachers and administrators was
explicitly addressed in the design strategy. Similarly, intergroup rivalry,
demoralization, and competition might be assessed through the use of additional
comparison groups who are not in direct contact with the original group.

Feasible. Good designs can be implemented. The sequence and timing of events are
carefully thought out. Potential problems in measurement, adherence to assignment,
database construction and the like, are anticipated. Where needed, additional groups or
measurements are included in the design to explicitly correct for such problems.

Redundant. Good research designs have some flexibility built into them. Often,
this flexibility results from duplication of essential design features. For example,
multiple replications of a treatment help to insure that failure to implement the
treatment in one setting will not invalidate the entire study.

Efficient. Good designs strike a balance between redundancy and the tendency to
overdesign. Where it is reasonable, other, less costly, strategies for ruling out
potential threats to validity are utilized.

This is by no means an exhaustive list of the criteria by which we can judge good
research design. Nevertheless, goals of this sort help to guide the researcher
toward a final design choice and emphasize important components which should be
included.
The development of a theory of research methodology for the social sciences has
largely occurred over the past half century and most intensively within the past two
decades. It is not surprising, in such a relatively recent effort, that an emphasis on
a few standard research designs has occurred. Nevertheless, by moving away from
the notion of "design selection" and towards an emphasis on design construction,
there is much to be gained in our understanding of design principles and in the
quality of our research.

Exploratory research provides insights into and comprehension of an issue or situation. It should draw definitive conclusions only with extreme caution.
Exploratory research is a type of research conducted because a problem has not
been clearly defined. Exploratory research helps determine the best research
design, data collection method and selection of subjects. Given its fundamental
nature, exploratory research often concludes that a perceived problem does not
actually exist.

Exploratory research often relies on secondary research such as reviewing available literature and/or data, or qualitative approaches such as informal discussions with
consumers, employees, management or competitors, and more formal approaches
through in-depth interviews, focus groups, projective methods, case studies or pilot
studies. The Internet allows for research methods that are more interactive in
nature: E.g., RSS feeds efficiently supply researchers with up-to-date information;
major search engine search results may be sent by email to researchers by
services such as Google Alerts; comprehensive search results are tracked over
lengthy periods of time by services such as Google Trends; and Web sites may be
created to attract worldwide feedback on any subject.

The results of exploratory research are not usually useful for decision-making by
themselves, but they can provide significant insight into a given situation. Although
the results of qualitative research can give some indication as to the "why", "how"
and "when" something occurs, it cannot tell us "how often" or "how many."

Exploratory research is not typically generalizable to the population at large.

Applied research in administration is often exploratory because there is a need for flexibility in approaching the problem. In addition, there are often data limitations
and a need to make a decision within a short time period. Qualitative research
methods such as case study or field research are often used in Exploratory
research.

There are three types of objectives in a marketing research project: Exploratory Research (or Formulative Research), Descriptive Research and Causal Research.

The objective of exploratory research is to gather preliminary information that will help define problems and suggest hypotheses.

The objective of descriptive research is to describe things, such as the market potential for a product or the demographics and attitudes of consumers who buy
the product.
Q 4. How is the Case Study method useful in Business Research? Give two
specific examples of how the case study method can be applied to
business research.

Ans.

A case study is a research methodology common in social science. It is based on an in-depth investigation of a single individual, group, or event. Case studies may be
descriptive or explanatory. The latter type is used to explore causation in order to
find underlying principles.

Rather than using samples and following a rigid protocol (strict set of rules) to
examine limited number of variables, case study methods involve an in-depth,
longitudinal (over a long period of time) examination of a single instance or event:
a case. They provide a systematic way of looking at events, collecting data,
analyzing information, and reporting the results. As a result the researcher may
gain a sharpened understanding of why the instance happened as it did, and what
might become important to look at more extensively in future research. Case
studies lend themselves to both generating and testing hypotheses.

Another suggestion is that the case study should be defined as a research strategy, an empirical inquiry that investigates a phenomenon within its real-life context. Case study research can involve single or multiple case studies, can include quantitative evidence, relies on multiple sources of evidence and benefits from the prior development of theoretical propositions. Case studies should not be confused with
qualitative research and they can be based on any mix of quantitative and
qualitative evidence. Single-subject research provides the statistical framework for
making inferences from quantitative case-study data. This is also supported and
well-formulated in (Lamnek, 2005): "The case study is a research approach,
situated between concrete data taking techniques and methodologic paradigms."

When selecting a case for a case study, researchers often use information-oriented
sampling, as opposed to random sampling. This is because an average case is
often not the richest in information. Extreme or atypical cases reveal more
information because they activate more basic mechanisms and more actors in the
situation studied. In addition, from both an understanding-oriented and an action-
oriented perspective, it is often more important to clarify the deeper causes behind
a given problem and its consequences than to describe the symptoms of the
problem and how frequently they occur. Random samples emphasizing representativeness will seldom be able to produce this kind of insight; it is often more appropriate to select a few cases chosen for their validity, though this is not always the case.

Three types of information-oriented cases may be distinguished:

Critical cases, Extreme or deviant cases and Paradigmatic cases


Yin (2005) suggested that researchers should decide whether to do single-case or multiple-case studies and choose whether to keep the case holistic or to have embedded sub-
cases. This two-by-two combination can produce four basic designs for case
studies.

In business research, case study research excels at bringing us to an understanding of a complex issue or object and can extend experience or add
strength to what is already known through previous research. Case studies
emphasize detailed contextual analysis of a limited number of events or conditions
and their relationships. Researchers have used the case study research method for
many years across a variety of disciplines. Social scientists, in particular, have
made wide use of this qualitative research method to examine contemporary real-
life situations and provide the basis for the application of ideas and extension of
methods. Researcher Robert K. Yin defines the case study research method as an
empirical inquiry that investigates a contemporary phenomenon within its real-life
context; when the boundaries between phenomenon and context are not clearly
evident; and in which multiple sources of evidence are used.

Critics of the case study method believe that the study of a small number of cases
can offer no grounds for establishing reliability or generality of findings. Others feel
that the intense exposure to study of the case biases the findings. Some dismiss
case study research as useful only as an exploratory tool. Yet researchers continue
to use the case study research method with success in carefully planned and
crafted studies of real-life situations, issues, and problems. Reports on case studies
from many disciplines are widely available in the literature.

This approach can be illustrated by applying the method to an example case study project designed to examine how one set of users, non-profit organizations, make use of an electronic community network. The study examines the issue of whether or not the electronic community network is beneficial in some way to non-profit organizations and what those benefits might be.

Many well-known case study researchers such as Robert E. Stake, Helen Simons,
and Robert K. Yin have written about case study research and suggested
techniques for organizing and conducting the research successfully. This
introduction to case study research draws upon their work and proposes six steps that should be followed:

Determine and define the research questions

Select the cases and determine data gathering and analysis techniques

Prepare to collect the data

Collect data in the field

Evaluate and analyze the data


Prepare the report

Q5. What are the differences between observation and interviewing as methods of data collection? Give two specific examples of situations where either observation or interviewing would be more appropriate.

Ans.

The most complete form of the sociological datum, after all, is the form in which the participant observer gathers it: an observation of some social event, the events which precede and follow it, and explanations of its meaning by participants and spectators, before, during, and after its occurrence. Such a datum gives us more information about the event under study than data gathered by any other sociological method. Participant observation can thus provide us with a yardstick against which to measure the completeness of data gathered in other ways, a model which can serve to let us know what orders of information escape us when we use other methods.

By participant observation we mean that method in which the observer participates in the daily life of the people under study, either openly in the role of researcher or covertly in some disguised role, observing things that happen, listening to what is said, and questioning people, over some length of time. We want to compare the results of intensive field work with what might be regarded as the first step in the other direction along this continuum: the detailed and conversational interview (often referred to as the unstructured or undirected interview). In this kind of interview, the interviewer explores many facets of his interviewee's concerns, treating subjects as they come up in conversation, pursuing interesting leads, allowing his imagination and ingenuity full rein as he tries to develop new hypotheses and test them in the course of the interview.

In the course of our current participant observation among medical students, we have thought a good deal about the kinds of things we were discovering which might ordinarily be missed or misunderstood in such an interview. We have no intention of denigrating the interview or even such less precise modes of data gathering as the questionnaire, for there can always be good reasons of practicality, economy, or research design for their use. We simply wish to make explicit the difference in the data gathered by one or the other method and to suggest the differing uses to which they can legitimately be put. In general, the shortcomings we attribute to the interview exist when it is used as a source of information about events that have occurred elsewhere and are described to us by informants. Our criticisms are not relevant when analysis is restricted to interpretation of the interviewee's conduct during the interview, in which case the researcher has in fact observed the behavior he is talking about.

The differences we consider between the two methods involve two interacting factors: the kinds of words and acts of the people under study that the researcher has access to, and the kind of sensitivity to problems and data produced in him. Our comparison may prove useful by suggesting areas in which interviewing (the more widely used method at present and likely to continue so) can improve its accuracy by taking account of suggestions made from the perspective of the participant observer. We begin by considering some concrete problems: learning the native language, or the problem of the degree to which the interviewer really understands what is said to him; matters interviewees are unable or unwilling to talk about; and getting information on matters people see through distorting lenses. We then consider some more general differences between the two methods.

Observation as a method of data collection has certain characteristics.

1. It is both a physical and a mental activity: The observing eye catches many
things that are present. But attention is focused on data that are pertinent to
the given study.
2. Observation is selective: A researcher does not observe anything and everything, but selects the range of things to be observed on the basis of the nature, scope and objectives of his study. For example, suppose a researcher desires to study the causes of city road accidents and has formulated a tentative hypothesis that accidents are caused by violation of traffic rules and over-speeding. When he observes the movements of vehicles, details such as the persons sitting in them or their hair styles, which are not relevant to his study, are ignored, and only over-speeding and traffic violations are keenly observed.
3. Observation is purposive and not casual: It is made for the specific purpose of noting things relevant to the study. It captures the natural social context in which a person's behavior occurs. It grasps the significant events and occurrences that affect the social relations of the participants.
4. Observation should be exact and based on standardized tools of research, such as an observation schedule, socio-metric scale, etc., and precision instruments, if any.

Observation has the following advantages:

• The main virtue of observation is its directness: it makes it possible to study behavior as it occurs. The researcher need not ask people about their behavior and interactions; he can simply watch what they do and say.
• Data collected by observation describe the observed phenomena as they occur in their natural settings. Other methods introduce an element of artificiality into the research situation; for instance, in an interview the respondent may not behave in a natural way. There is no such artificiality in observational studies, especially when the observed persons are not aware of being observed.
• Observation is more suitable for studying subjects who are unable to articulate meaningfully, e.g. studies of children, tribals, animals, birds, etc.
• Observation improves the opportunities for analyzing the contextual background of behavior. Furthermore, verbal reports can be validated and compared with actual behavior through observation. The validity of what men of position and authority say can be verified by observing what they actually do.

Observation is less demanding of the subjects and has less biasing effect on their
conduct than questioning.
Interview method

Interviewing is one of the prominent methods of data collection. It may be defined as a two-way systematic conversation between an investigator and an informant, initiated for obtaining information relevant to a specific study. It involves not only conversation, but also learning from the respondent's gestures, facial expressions and pauses, and his environment. Interviewing requires face-to-face contact or contact over telephone and calls for interviewing skills. It is done by using a structured schedule or an unstructured guide.

Interviewing may be used either as a main method or as a supplementary one in studies of persons. Interviewing is the only suitable method for gathering information from illiterate or less educated respondents. It is useful for collecting a wide range of data, from factual demographic data to highly personal and intimate information relating to a person's opinions, attitudes, values, beliefs, past experience and future intentions. When qualitative information is required, or probing is necessary to draw out responses fully, interviewing is required. Where the area to be covered by the survey is compact, or when a sufficient number of qualified interviewers are available, personal interviewing is feasible.

Interviewing is often superior to other data-gathering methods. People are usually more willing to talk than to write. Once rapport is established, even confidential information may be obtained. It permits probing into the context and reasons for answers to questions.

Interviewing can add flesh to statistical information. It enables the investigator to grasp the behavioral context of the data furnished by the respondents.

There are several advantages to personal interviewing.

First, the greatest value of this method is the depth and detail of information that can be secured. When used with well-conceived schedules, an interview can obtain a great deal of information. It far exceeds mail surveys in the amount and quality of data that can be secured.

Second, the interviewer can do more to improve the percentage of responses and the quality of information received than any other method. He can note the conditions
of the interview situation, and adopt appropriate approaches to overcome such
problems as the respondent’s unwillingness, incorrect understanding of questions,
suspicion, etc.

Third, the interviewer can gather other supplemental information like economic level, living conditions, etc. through observation of the respondent's environment.

Fourth, the interviewer can use special scoring devices, visual materials and the
like in order to improve the quality of interviewing.

Fifth, the accuracy and dependability of the answers given by the respondent can
be checked by observation and probing.

Last, interview is flexible and adaptable to individual situations. Even more, control
can be exercised over the interview situation.

Demerits of interview method

Interviewing is not free from limitations. Its greatest drawback is that it is costly, both in money and time. Second, the interview results are often adversely affected by the interviewer's mode of asking questions and interactions, and incorrect recording, and also by the respondent's faulty perception, faulty memory, inability to articulate, etc.

Third, certain types of personal and financial information may be refused in face-to-face interviews. Such information might be supplied more willingly on mail
questionnaires, especially if they are to be unsigned.

Fourth, the interview poses the problem of recording information obtained from the respondents. No foolproof system is available. Note-taking is invariably distracting to both the respondent and the interviewer and affects the thread of the conversation.

Last, interviewing calls for highly skilled interviewers. The availability of such persons is limited and the training of interviewers is often a long and costly process.

Situation where observation is appropriate:

Observation makes it possible to capture the whole event as it occurs. For example, only observation can provide an insight into all the aspects of the process of negotiation between union and management representatives.

Situation where the interview method is appropriate: to study the reading habits of newspaper/magazine readers.

Q 6. Case study: You are engaged to carry out a market survey on behalf of a leading newspaper that is keen to increase its circulation in Bangalore city, in order to ascertain reader habits and interests. What type of research report would be most appropriate? Develop an outline of the research report with the main sections.

Ans.

A popular report would be most appropriate for this study.

In a popular report, the reader is less interested in the methodological details and more interested in the findings of the study. Complicated statistics are avoided and pictorial devices are used. After a brief introduction to the problem and the objectives of the study, an abstract of the findings, the conclusions and the recommendations are presented. More headlines, underlining, pictures and graphs may be used. Sentences and paragraphs should be short.

Title of the study: A study of Reader’s Choice of Topics on the front page
and depth of coverage of various categories of news.

Objectives of the study/ Questions to be answered by the study:

• To understand what the readers wish to see on the front page of the
newspaper.
• To understand how much depth of the news is to be covered.
• To understand the kind of images the readers like.
• To understand what proportion of politics, sports, cinema, health, etc. is to be covered in the newspaper.
• To understand what readers want to read in the newspaper other than the current topics.

An outline of the research report is given below:

Prefatory items

Title page
Declaration
Preface/acknowledgement
Table of contents
List of tables
List of graphs/figures/ charts
Abstract or synopsis
Body of the report
• Introduction
• Theoretical background of the topic
• Statement of the problem
• Review of literature
• The scope of the study
• The objectives of the study
• Hypothesis to be tested
• Definition of the concepts
• Models if any
• Design of the study
• Methodology
• Method of data collection
• Sources of data
• Sampling plan
• Data collection instruments
• Field work
• Data processing and analysis plan
• Overview of the report
• Limitation of the study
• Results: findings and discussions
• Summary, conclusions and recommendations.
