
RESEARCH METHODOLOGY


COURSE OVERVIEW

Management research is increasingly an important tool in all areas of management activity, and it supports the business decision-making process. This course aims to comprehensively equip students in all areas related to the design, analysis and solution of management research problems. Specifically, it will enable students to select and define a research problem, develop an appropriate research plan, write a research proposal, collect data, carry out analysis and submit a report of the main findings.
The course provides exposure to different analytical techniques, including quantitative multivariate techniques of data analysis as well as qualitative techniques. It covers the foundations of research, sampling, data collection, data analysis and presentation of main findings.
On completion of the course, students will develop the following skills and competencies:
A. Writing of research proposal
B. Preparing research design
C. Techniques of data collection & sampling
D. Data analysis using software package SPSS
E. Report writing


SYLLABUS

Unit: Research Methodology


Unit value: 4 Credits
Unit level: S2
Unit code: MBA-208

Description of unit
The aim of this unit is to introduce students to the practical aspects of management research. The focus of this unit will be to integrate techniques and concepts learnt at a theoretical level in other modules with their practical application to management research problems. Research is increasingly an important tool in all areas of management activity, and this course aims to comprehensively equip students in all areas related to the design, analysis and presentation of management research. Specifically, it will enable students to select and define a research problem, develop an appropriate research plan, write a research proposal, collect data, carry out analysis and submit a report of main findings. The course will provide exposure to different analytical techniques, including quantitative multivariate techniques of data analysis as well as qualitative techniques, and will cover the foundations of research, sampling, data collection, data analysis and presentation of main findings.

Summary of outcomes
To achieve this unit a student must:
• Investigate the process of research design for business decision-making
• Determine appropriate techniques of data collection and sampling for managerial research
• Investigate application of methods of data analysis and multivariate techniques for research
• Undertake a piece of research making effective use of research methods for business and management

Content
1. Fundamentals of research process
Role of research in business decision making: types of research in business decision making, quantitative and qualitative techniques in research, limitations of quantitative techniques in business research. Steps in the research process: selection of problem, literature survey, formulation of hypotheses, research design, analysis and report writing.
Designing the research: defining the research objective; research design: exploratory, descriptive, causal.
Write-up: writing a research proposal and report; formatting: title page, abstract, body, introduction, methods, sample, measures, design, procedures, results, conclusions; references: reference citations in the text of your paper, reference list in the reference section; tables, figures and appendices; presentation of results.

2. Techniques of data collection and sampling
Sources of marketing data: retail audits, consumer panels, scanner services and single-source systems, diary method, the internet as a source of data: primary and secondary.
Qualitative research techniques: depth interviews, focus groups, projective techniques, limitations of qualitative methods, observation, case study.
Survey design: data sources: primary and secondary; questionnaire layout: structured, unstructured; types of information wanted; sequencing; types of questions: paired comparison, semantic differential; bias in questions; pilot testing; types of questionnaires: personal interviews, telephone interviews, mail surveys.
Measurement of attitudes: scales of measurement: nominal, ordinal, interval and ratio; attitude scales, e.g. Thurstone scale, Likert scale, semantic differential.
Sampling issues in research: definition of universe, determining sampling units, determining sampling frame, determining sample size; errors in sampling: sampling error, non-sampling error, problem of non-response; selecting samples: probability vs non-probability methods.

Probability sampling: simple random sampling, use of random number tables, stratified sampling, cluster sampling, systematic sampling.
Non-probability methods: convenience sampling, judgement sampling, quota sampling, snowball sampling.

3. Data analysis
Tabulation: coding, simple tabulation, cross tabulation, weighting.
Applications of hypotheses testing to research situations: factors influencing choice of statistical technique, tests of hypotheses and significance for small samples, paired sample t tests, tests of hypotheses and significance for large samples, tests of proportions, choice of level of significance, power of a test, interpretation of p-values in computer output, non-parametric tests of hypotheses, bivariate analysis: chi-square test, cross tabulations and chi-square test, analysis of variance: one-way ANOVA, F statistic, Latin square design.

4. Multivariate analysis
Correlation and multiple regression: correlation coefficient; multiple regression: when to use, scatter plots, fitting the least squares model, interpretation of parameter estimates, mean square error (MSE), testing significance of independent variables, goodness of fit, coefficient of determination, multiple regression model, adjusted R2, stepwise multiple regression, problems when using regression analysis, interpretation of computer output for multiple regression.
Factor and cluster analysis: application areas, methods, factor interpretation, interpretation of computer output.
Multidimensional scaling: application areas, methods, interpretation of computer output.
Discriminant analysis: application areas, methods, interpretation of computer output.
Cluster analysis: application areas, methods, interpretation of computer output.
Conjoint analysis: application areas, methods, interpretation of results.
Computer skills: use of statistical packages such as SPSS for carrying out data analysis.

Outcomes and assessment criteria
To achieve each outcome a student must demonstrate the ability to:
1. Investigate the process of research design for business decision-making
• Describe the importance of management research in business
• Understand steps in the process of design and implementation of a research proposal
• Demonstrate knowledge and skill in report writing
2. Determine appropriate techniques of data collection and sampling for managerial research
• Assess different techniques of data collection, including quantitative and qualitative data
• Prepare a sample questionnaire to collect data
• Determine survey sample size and composition based on principles of sampling
3. Investigate application of methods of data analysis and multivariate techniques of analysis for research
• Apply techniques of hypotheses testing to data collected
• Propose use of multivariate techniques of analysis for sample data
4. Undertake a piece of research making effective use of research methods for business and management
• Specify research objectives and write a research proposal
• Collect and analyse data using different techniques of data analysis studied in this module
• Prepare a report and presentation of main findings

Guidance
Generating evidence
Assessment – component weighting:
In-module coursework: 40%
End-of-module examination: 60%
The in-module component will require the application of research methodology to a management problem and will be completed in small groups of four or five students. Students will be required to define the research task and objectives; identify the target population and sampling frame; describe the research methodology to be used; conduct the research survey; and use an appropriate computer package to analyse the data. Student progress will be monitored by a series of five meetings alongside student logs, with marks awarded to individuals for each section completed (10% of the component assessment). Each group will present their findings in a group report and presentation, with an equal mark allocated to all (20% of the component assessment). A report will be submitted individually on a topic of a strategic nature reviewing the available literature; this component will carry the remaining 10% of the marks.
The end-of-module examination component will consist of a 3-hour practical in which students will be required to define research issues, apply statistical and quantitative techniques, analyse a data set and prepare a short report on their findings.

Links
The unit is intended to give a good understanding of issues impacting business organisations. It is part of the MBA management pathway and links with the finance, mathematics and statistics, operations, HRM, marketing, strategy, accounting, and business modules.

Resources
The business press can be a significant source of information. Companies such as Video Arts produce a variety of videos, which may be useful in covering international finance topics. However, one of the best sources of information is the World Wide Web, whose sites can be used for providing information and case studies (e.g. http://www.bized.ac.uk/, which provides business case studies appropriate for educational purposes). Others are http://www.businesscases.org/, www.businesscase.com/, http://www.icongrouponline.com, www.3.ibm.com/e-business, and www.5.ibm.com/e-business/uk/case_studies. The library, for secondary research, and CD-ROM databases are to be specially targeted. Some are listed for ready reference: www.faust.information.com, www.un.org/Pubs/whatsnew, www.un.org/Pub/whatsnew/electron.htm, and www.businessmonitor.com; Global CD-ROM Directories gives business listings on CD-ROM, and CD-ROM software is available at www.sba.gov/bi/bics/biccdrom.html.
Important journals available for consultation are: British Journal of Management, European Finance Review, Global Finance Journal, Journal of International Business Studies, International Journal of Human Resource, International Journal of Intercultural Relations, International Trade Journal, Journal of World Business, Journal of International Economic Law, Journal of International Economics, Thunderbird International Business Review, and Law and Policy in International Business. Other relevant material is found in the financial press, the Financial Times, Investors' Chronicle, the financial pages of quality newspapers, and annual reports of organisations.

Suggested Reading
There are a large number of textbooks available covering the areas contained within the unit. Examples are:
Kothari C R – Quantitative Techniques (Vikas Publishing House, 3rd ed.)
Levin R I & Rubin D S – Statistics for Management (Prentice Hall of India, 2002)
Aaker D A, Kumar V & Day G S – Marketing Research (John Wiley & Sons Inc, 6th ed.)
Nargundkar R – Marketing Research: Text and Cases (Tata McGraw-Hill, 2002)
Bell J – Doing Your Research Project (OU Press, 1993)
Diamantopoulos A and Schlegelmilch B – Taking the Fear out of Data Analysis (Dryden Press, 1997)
Easterby-Smith M et al – Management Research: An Introduction (Sage Publications, 1991)
Miller D C – Handbook of Research Design and Social Measurement (Sage Publications, 1991)
Trochim W M K – Research Methods (Atomic Dog, 2003)

Delivery
A mixture of lectures, to cover basic theory, and case studies for class discussion will be used to reinforce key concepts. Much of the module is statistical, and students will be expected to understand the underlying concepts, but emphasis will be placed on the ability to use statistical software on a PC (e.g. Minitab or SPSS) to analyse data. Throughout the module, creativity and communication skills will be emphasised.

CONTENT
Lesson Plan

Lesson 1: Role of Research in Business Decision Making
Lesson 2: Steps in Research Process
Lesson 3: Research Proposal
Lesson 4: Tutorials
Lesson 5: Research Design and Experimental Designs
Lesson 6: Tutorials
Lesson 7: Writing the Research
Lesson 8: Techniques of Data Collection
Lesson 9: Tutorial
Lesson 10: Questionnaire Design
Lesson 11: Issues in Questionnaire
Lesson 12: Measurement and Scaling
Lesson 13: Sampling Issues in Research
Lesson 14: Designing Sample
Lesson 15: Applications of Market Research
Lesson 16: Data Coding and Analysis
Lesson 17: Principles of Statistical Inference and Confidence Intervals
Lesson 18: Statistical Inferences and Sampling Distribution
Lesson 19: Model Building and Decision Making
Lesson 20: Principle of Hypothesis Testing
Lesson 21: Testing of Hypothesis – Large Samples
Lesson 22: Tutorial
Lesson 23: Tests of Hypotheses – Small Samples
Lesson 24: Non-Parametric Tests
Lesson 25: Chi-square Test
Lesson 26: Analysis of Variance (ANOVA)
Lesson 27: Applications of ANOVA
Lesson 28: Application of Correlation Technique in Research Methodology

Lesson 29: Multicollinearity in Multiple Regression
Lesson 30: Multiple Regression
Lesson 31: Making Inferences about Population Parameters
Lesson 32: Multicollinearity in Multiple Regression
Lesson 33: Applications of Regression Analysis in Research
Lesson 34: Regression Analysis using SPSS Package
Lesson 35: Factor Analysis
Lesson 36: Principal Component Analysis
Lesson 37: Multidimensional Scaling
Lesson 38: Further Applications and Theory of Multidimensional Scaling using Statistical Software
Lesson 39: Conjoint Analysis
Lesson 40: Discriminant Analysis
Lesson 41: Cluster Analysis
Lesson 42: Interpolation and Extrapolation
Lesson 43: Case Study
Lesson 44: Case Study

RAI UNIVERSITY
RAI BUSINESS SCHOOL

LESSON PLAN
Program: MBA — Year 2, Semester 3
Subject Title: Research Methodology
(For each lesson, the plan records the topic as given in the RU syllabus, the lecture and practical sessions scheduled, assignments, and the book/author and course-pack page references.)

Unit I: Fundamentals of Research Process
Lesson 1: Role of Research in Business Decision Making
Lesson 2: Steps in Research Process
Lesson 3: Research Proposal
Lesson 4: Tutorial
Lesson 5: Research Design & Experimental Designs
Lesson 6: Tutorial
Lesson 7: Writing the Research

Unit II: Techniques of Data Collection and Sampling
Lesson 8: Techniques of Data Collection
Lesson 9: Tutorial
Lesson 10: Questionnaire Design
Lesson 11: Issues in Questionnaire
Lesson 12: Measurement & Scaling
Lesson 13: Sampling Issues in Research
Lesson 14: Designing Sample
Lesson 15: Application to Market Research

Unit III: Data Analysis
Lesson 16: Data Coding & Analysis
Lesson 17: Principles of Statistical Inference and Confidence Intervals
Lesson 18: Statistical Inferences & Sampling Distribution
Lesson 19: Modeling & Decision Making
Lesson 20: Principles of Hypotheses Testing
Lesson 21: Testing of Hypotheses – Large Samples
Lesson 22: Tutorial
Lesson 23: Testing of Hypotheses – Small Samples
Lesson 24: Non-Parametric Tests
Lesson 25: Chi-square Test
Lesson 26: Analysis of Variance (ANOVA)
Lesson 27: Applications of ANOVA

Unit IV: Multivariate Analysis
Lesson 28: Application of Correlation in Research Methodology
Lesson 29: Introduction to Regression Analysis
Lesson 30: Multiple Regression
Lesson 31: Making Inferences about Population Parameters
Lesson 32: Multicollinearity in Multiple Regression
Lesson 33: Applications of Regression Analysis in Research
Lesson 34: Regression Analysis using SPSS Package
Lesson 35: Factor Analysis
Lesson 36: Principal Component Analysis
Lesson 37: Multidimensional Scaling
Lesson 38: Application of Multidimensional Scaling using Statistical Software
Lesson 39: Conjoint Analysis
Lesson 40: Discriminant Analysis
Lesson 41: Cluster Analysis
Lesson 42: Interpolation & Extrapolation
Lesson 43: Case Studies

UNIT I: FUNDAMENTALS OF RESEARCH PROCESS
LESSON 1: ROLE OF RESEARCH IN BUSINESS DECISION MAKING

When research is used for decision-making, it means we are applying the methods of science to the art of management. Every organization operates under some degree of uncertainty. This uncertainty cannot be eliminated completely, although it can be minimized with the help of research methodology. Research is particularly important in the decision-making process of business organizations, which must choose the best line of action in the light of growing competition and increasing uncertainty.
Research in its common context refers to a search for knowledge. It can also be defined as a scientific and systematic search for information and knowledge on a specific topic or phenomenon.
In management, research is extensively used in various areas. For example, we all know that marketing is the process of planning and executing the conception, pricing, promotion and distribution of ideas, goods and services to create exchanges that satisfy individual and organizational objectives. Thus, we can say that the marketing concept requires customer satisfaction, rather than profit maximization, to be the goal of an organization. The organization should be consumer oriented and should try to understand consumers' requirements and satisfy them quickly and efficiently, in ways that are beneficial to both the consumer and the organization.
This means that any organization should try to obtain information on consumer needs and gather market intelligence to help satisfy these needs efficiently. This can only be done by research.
In this lecture we will discuss the role of research in management and its key ingredients. But first let us understand the meaning of research. It will become clear after going through some important definitions of research.

Definition
Some of the definitions of research are:
1. Redman and Mory define research as a "systematized effort to gain new knowledge".
2. Some people consider research as a movement from the known to the unknown. It is actually a voyage of discovery.
3. According to Clifford Woody, "Research comprises defining and redefining problems, formulating hypotheses or suggested solutions; making deductions and reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the formulated hypothesis".
On evaluating these definitions we can conclude that research refers to a systematic method consisting of:
• Enunciating the problem,
• Formulating a hypothesis,
• Collecting the facts or data,
• Analyzing the facts, and
• Reaching certain conclusions, either in the form of solutions to the concerned problem or as generalisations for some theoretical formulation.
Market research has become an important part of management decision-making. Marketing research is a critical part of a market intelligence system; it helps to improve management decision-making by providing relevant, accurate and timely information. Every decision poses unique needs for information gathered through marketing research.
Thus, we can say that marketing research is the function that links the consumer, the customer and the public to the marketer through information. This information is used to:
• Identify and define marketing opportunities and problems;
• Generate, refine and evaluate marketing actions;
• Monitor marketing performance; and
• Improve understanding of marketing as a process.
In a nutshell, we see that marketing research:
• Specifies the information required to address these issues,
• Designs the method for collecting the information,
• Manages and implements the data collection process,
• Analyses the results, and
• Communicates the findings and their implications.
Intelligent use of research tools is the key to business achievement.

I hope the meaning of research is clear. Research provides a base for sound business decision-making.
There are three parts involved in any systematic finding:
1. An implicit question posed
2. An explicit answer proposed
3. Collection, analysis and interpretation of the information leading from the question to the answer.

Illustration
Consider the statement: "We recommend that Model X-240 of the music system be priced at Rs. 10,000." The Marketing Research Manager forwarded this recommendation to the Marketing Vice-President.
Implicit question: What should be the selling price of Model X-240?
Explicit answer: The explicit answer is Rs. 10,000.
The third part deals with the collection, analysis and interpretation of the information leading from the question to the answer of Rs. 10,000.
Research is a systematic approach to gathering the information required for sound management decisions. Research is not synonymous with common sense; the difference lies in the methods and procedures adopted to reach a conclusion.

Characteristics of Research
a. Systematic approach – Each step of your investigation must be so planned that it leads to the next step. Planning and organization are part of this approach. Planned and organized research saves your time and money.
b. Objectivity – It implies that true research should attempt to find an unbiased answer to the decision-making problem.
c. Reproducibility – A reproducible research procedure is one which an equally competent researcher could duplicate and from which he would deduce approximately the same results. Precise information regarding samples, methods, collection, etc., should be specified.
d. Relevancy – It fulfils three important tasks:
• It avoids the collection of irrelevant information and saves time and money.
• It compares the information to be collected with the researcher's criteria for action.
• It enables you to see whether the research is proceeding in the right direction.
e. Control – Research is affected not only by the factors one is investigating but also by other extraneous factors. It is impossible to control all the factors, but all the factors that we think may affect the study have to be controlled and accounted for.
For example, suppose we are studying the relationship between income and shopping behaviour. Doing so without controlling for education and age would be the height of folly, since our findings might reflect the effect of education and age rather than income.
Control must consider the following:
• All the factors which are under control must be varied as the study demands.
• All those variables beyond control should be recorded.

Structure of Research
Most research projects share the same general structure. You might think of this structure as following the shape of an hourglass.
The research process usually starts with a broad area of interest, the initial problem that the researcher wishes to study. For instance, the researcher could be interested in how to use computers to improve the performance of students in mathematics. But this initial interest is far too broad to study in any single research project (it might not even be addressable in a lifetime of research). The researcher has to narrow the question down to one that can reasonably be studied in a research project. This might involve formulating a hypothesis or a focus question. For instance, the researcher might hypothesize that a particular method of computer instruction in math will improve the ability of elementary school students in a specific district. At the narrowest point of the research hourglass, the researcher is engaged in direct measurement or observation of the question of interest.
Once the basic data is collected, the researcher begins to try to understand it, usually by analyzing it in a variety of ways.

Even for a single hypothesis there are a number of analyses a researcher might typically conduct. At this point, the researcher begins to formulate some initial conclusions about what happened as a result of the computerized math program. Finally, the researcher often will attempt to address the original broad question of interest by generalizing from the results of this specific study to other related situations. For instance, on the basis of strong results indicating that the math program had a positive effect on student performance, the researcher might conclude that other school districts similar to the one in the study might expect similar results.

Importance of Research in Management Decision
The role of research has greatly increased in the field of business and the economy as a whole. The study of research methods provides you with the knowledge and skills you need to solve problems and meet the challenges of today's pace of development. Three factors stimulate interest in a scientific approach to decision making:
i. The manager's increased need for more and better information.
ii. The availability of improved techniques and tools to meet this need.
iii. The resulting information overload.
The usefulness and contribution of research in assisting marketing decisions is so crucial that it has given rise to the opening of a new field altogether, called 'marketing research'. Market research is basically the systematic gathering, recording and analyzing of facts about business problems with a view to investigating the structure and development of a market for the purpose of formulating efficient policies for purchasing, production and sales. Research with regard to demand and market factors has great utility in business. Market analysis has become an integral tool of business policy. Once sales forecasting is done, the Master Production Schedule (MPS) and Material Requirement Planning (MRP) can be carried out efficiently within the limits of the projected capacity based on the MPS. Budgetary control can be made more efficient, thus replacing subjective business decisions with more logical and scientific decisions.
Modern industry with its large-scale operations tends to create a gulf between the customer and the manufacturer. Particularly when business is too big and operations are too far-flung, one cannot depend upon casual contacts and personal impressions. Research methodology has been developed as the tool by which business executives keep in touch with their customers. If an entrepreneur has to make sound decisions, he must know who his customers are and what they want. To a certain extent he relies on his salesmen and his dealers to supply him with market information, but in recent years more and more firms and executives have turned to research methodology as a medium of communication between the customer and the company.
Marketing research is the link between the manufacturer and the consumer and the means of providing consumer orientation in all aspects of the marketing function. It is the instrument for obtaining knowledge about the market and the consumer through objective methods, which guard against the manufacturer's subjective bias.
Many researchers define marketing research as the gathering, recording and analyzing of all facts about problems relating to the transfer and sale of goods and services from producer to consumer.
Research methodology is an essential prerequisite for consumer-oriented marketing. It is necessary for developing the marketing strategy, wherein factors under the control of the organization, viz., product, distribution system, advertising, promotion and price, can be utilized so as to obtain maximum results in the context of factors outside the control of the organization, viz., the economic environment, competitors and laws of the land.
I hope you now have a clear picture of the functions of the manager in an organization and the role of research in decision-making.
On the basis of these functions we can state some of the general objectives of managerial research:
• Decision-making objectives
• Economic and business objectives
• Policy objectives
• Product development
• Profit objectives
• Human Resource Development objectives
• Market objectives:
  i. Innovation objectives
  ii. Customer satisfaction objectives
• Promotional objectives
• Corporate change objectives

Role of Research in Important Areas
Through research, an executive can quickly get a synopsis of the current scenario, which improves his information base for making sound decisions affecting future operations of the enterprise. The following are the major areas in which research plays a key role in making effective decisions.

Marketing
Marketing research is undertaken to assist the marketing function. Marketing research stimulates the flow of marketing data from the consumer and his environment to the marketing information system of the enterprise. Market research involves the process of:
• Systematic collection
• Compilation

• Analysis
• Interpretation of relevant data for marketing decisions
This information goes to the executive in the form of data. On the basis of this data the executive develops plans and programmes.
Advertising research, packaging research, performance evaluation research, sales analysis, distribution channel research, etc., may also be considered part of management research.
Research tools are applied effectively for studies involving:
1. Demand forecasting
2. Consumer buying behaviour
3. Measuring advertising effectiveness
4. Media selection for advertising
5. Test marketing
6. Product positioning
7. Product potential

Marketing Research
i. Product Research: Assessment of the suitability of goods with respect to design and price.
ii. Market Characteristics Research (Qualitative): Who uses the product? The relationship between buyer and user, buying motive, how a product is used, analysis of consumption rates, units in which the product is purchased, customs and habits affecting the use of a product, consumer attitudes, shopping habits of consumers, brand loyalty, research on special consumer groups, surveys of local markets, basic economic analysis of the consumer market, etc.
iii. Size of Market (Quantitative): Market potential, total sales quota, territorial sales quota, quota for individuals, concentration of sales and advertising efforts, appraisal of efficiency, etc.
iv. Competitive Position and Trends Research
v. Sales Research: Analysis of sales records.
vi. Distribution Research: Channels of distribution, distribution costs.
vii. Advertising and Promotion Research: Testing and evaluating advertising and promotion.
viii. New Product Launching and Product Positioning.

Production
Research helps an enterprise decide, in the field of production:
• What to produce
• How much to produce
• When to produce
• For whom to produce
Some of the areas where you can apply research are:
• Product development
• Cost reduction
• Work simplification
• Profitability improvement
• Inventory control

Materials
The materials department uses research to frame suitable policies regarding:
• Where to buy
• How much to buy
• When to buy
• At what prices to buy

Human Resource Development
You must be aware that the Human Resource Development department uses research to study wage rates, incentive schemes, cost of living, employee turnover rates, employment trends, and performance appraisal. It also uses research effectively for its most important activity, namely manpower planning.

Solving Various Operational and Planning Problems of Business and Industry
Various types of research, e.g., market research, operations research and motivational research, when combined together, help in solving various complex problems of business and industry in a number of ways. These techniques help in replacing intuitive business decisions with more logical and scientific decisions.

Government and Economic System
Research helps a decision maker in a number of ways; for example, it can help in examining the consequences of each alternative and in bringing out the effect on economic conditions. Various examples can be quoted, such as the problems of big and small industries due to various factors – upgradation of technology

and its impact on labour and supervisory deployment, the effect of the government's liberal policy, WTO and its new guidelines, ISO 9000/14000 standards and their impact on our exports, allocation of national resources on a national priority basis, etc. Research lays the foundation for all government policies in our economic system.
We are all aware of the fact that research is applied in bringing out the union finance budget and the railway budget every year. Government also uses research for economic planning and optimum utilization of resources for the development of the country. For the systematic collection of information on the economic and social structure of the country you need research. Such information indicates what is happening to the national economy and what changes are taking place.

Social Relationships
Research in social sciences is concerned with both knowledge for its own sake and knowledge for helping to solve immediate problems of human relations. It is a sort of formal training which helps an individual in a better way, e.g.:
• It helps professionals to earn their livelihood.
• It helps students to know how to write up and report various findings.
• It helps philosophers and thinkers with new thinking and ideas.
• It helps in developing new styles for creative work.
• It may help researchers, in general, to generalize new theories.

Activity
Now let us do an activity. List out the uses of research in the field of:
• Hospital Management
• Railway
• Temple Management
• Traffic Control

Types of Research
On the basis of the fundamental objectives of the research, we can classify research into two types: exploratory and conclusive.

Exploratory Research
Many times a decision maker is grappling with broad and poorly defined problems. If you attempt to secure better definitions by analytic thinking, it may be the wrong approach and may even be counterproductive – counterproductive in the sense that this approach may lead to a definitive answer to the wrong question.
Exploratory research uses a less formal approach. It pursues several possibilities simultaneously and, in a sense, it is not quite sure of its objective. Exploratory research is designed to provide a background, to familiarize and, as the word implies, just "explore" the general subject. A part of exploratory research is the investigation of relationships among variables without knowing why they are studied. It borders on an idle-curiosity approach, differing from it only in that the investigator thinks there may be a payoff in application somewhere in the forest of questions.
Three typical approaches in exploratory research are:
a. The literature survey,
b. The experience survey, and
c. The analysis of "insight-stimulating" examples.
The literature search is a fast, economical way to develop a better understanding of a problem area in which you are investigating and have limited experience and knowledge. It also familiarizes you with past research results, data sources and the types of data available.
The experience survey concentrates on persons who are particularly knowledgeable in the specific area. Here representative samples are not desired; a coverage of widely divergent views is better. Researchers are not looking for conclusions; they are looking for ideas.
The analysis of specific examples is a sort of case study approach but, again, researchers are looking for fresh and possibly divergent views.

Conclusive Research
Exploratory research gives rise to several hypotheses, which you will have to test in order to draw definite conclusions. These conclusions, when tested for validity, lay the structure for your decision-making. Conclusive research is used for the purpose of testing the hypotheses generated by exploratory research. Conclusive research can further be classified as:
• Descriptive
• Experimental

Descriptive Research
Descriptive research, as the name suggests, is designed to describe something – for example, the characteristics of users of a given product; the degree to which product use varies with income, age, sex or other characteristics; or the number who saw a specific television commercial.
To be of maximum benefit, a descriptive study must collect data only for a definite purpose. Your objective and understanding should be clear and specific. Descriptive studies vary in the degree to which a specific hypothesis is the guide; they allow both implicit and explicit hypotheses to be tested, depending on the research problem.
For example, a cereal company may find its sales declining. On the basis of market feedback the company may hypothesise that teenage children do not eat its cereal for breakfast. A descriptive study can then be designed to test this hypothesis.

Experimental Research
Experimentation refers to that process of research in which one or more variables are manipulated under conditions which permit the collection of data that show the effects. Experiments create situations so that you as a researcher can obtain the particular data needed and can measure the data accurately. Experiments are artificial in the sense that the situations are usually created for testing purposes. This artificiality is the essence of the experimental method, since it gives you more control over the factors you are studying. If you can control the factors which are present in a given situation, you can obtain more conclusive evidence of cause and effect relationships between any two of them.
Thus, the ability to set up a situation for the express purpose of observing and recording accurately the effect on one factor when another is deliberately changed permits you to accept or reject hypotheses beyond reasonable doubt. If the objective is to validate in a resounding manner the cause and effect relationships among variables, then undoubtedly experiments are much more effective than descriptive techniques.
Thus, we can conclude that:
• Research methodology minimizes the degree of uncertainty involved in management decisions. Research lays the structure for decision-making.
• Research is not synonymous with common sense.
• Research is characterized by a systematic approach, objectivity, reproducibility, relevance and control.

• The role of research in the important areas of management has been briefly covered. The areas include marketing, production, banking, materials, human resource development and government.
• The research process involves five important steps: problem definition, research design, data collection, data analysis, and interpretation of results. All these steps have been explained in detail with their key elements.
• We have dichotomized the types of research into exploratory and conclusive.
• While exploratory research enables the researcher to generate hypotheses, these are tested for validity by conclusive research.
• Conclusive research can be further divided into descriptive and experimental.
• While descriptive procedures merely test the hypotheses, experimental research establishes in a more effective manner the cause and effect relationships among variables.

Activity
Research can be classified into:

Activity
You are Product Manager for brand EXCELLENCE of vanaspati, a nationally distributed brand. For the last four consecutive months, brand EXCELLENCE has shown a declining trend in sales. You ask the research department to do a study to determine why sales have declined.
Is this exploratory or conclusive research? Explain your reasons.

Self-Assessment

1. "Creative management, whether in public administration or private industry, depends on methods of inquiry that maintain objectivity, clarity, accuracy and consistency." Discuss this statement and examine the significance of research.
2. It is often said that there is not a proper link between some of the activities under way in the world of academics and in most businesses in our country. Account for this state of affairs and give suggestions for improvement.
3. Discuss with examples “Exploratory Research”, “Descriptive
Research” and “Experimental Research”
4. Briefly explain the meaning and importance of each of the
following in research
• Systematic
• Objectivity
• Relevance
• Reproducible
5. Analyse, criticize, and explain:
Research "control" of the environment introduces artificial conditions. Objective research is best achieved by recording what happens without "disturbing" the environment.

Notes -

LESSON 2: STEPS IN RESEARCH PROCESS

Students, in our introduction to the subject we covered the areas where research is used as a tool for decision-making. We also know that all businesses operate in a world of uncertainty. Research methodology minimizes the degree of uncertainty involved in management decisions; research lays the structure for decision-making.
Let us recapitulate what we studied in the last lecture. We saw that research plays a dominant role in the fields of:
• Marketing
• Production
• Banking
• Materials
• Human Resource Development
• Government
You can classify research into one of three categories:
1. Exploratory research
2. Descriptive research
3. Causal research
These classifications are made according to the objective of the research. In some cases the research will fall into one of these categories, but in other cases different phases of the same research project will fall into different categories. Now we will discuss these categories in detail.

Exploratory Research
Exploratory research has the goal of formulating problems more precisely, clarifying concepts, gathering explanations, gaining insight, eliminating impractical ideas, and forming hypotheses. Exploratory research can be performed using a literature search, surveys of certain people about their experiences, focus groups, and case studies.
When you survey people, exploratory research studies do not try to acquire a representative sample but, rather, seek to interview those who are knowledgeable and who might be able to provide insight concerning the relationships among variables.
Case studies can include contrasting situations or benchmarking against an organization known for its excellence. Exploratory research may develop hypotheses, but it does not seek to test them. Exploratory research is characterized by its flexibility.

Descriptive Research
Descriptive research is more rigid than exploratory research and seeks to describe the users of a product, determine the proportion of the population that uses a product, predict future demand for a product, or describe the occurrence of a certain phenomenon.
As opposed to exploratory research, if you are doing descriptive research you should define the questions, the people surveyed, and the method of analysis prior to beginning data collection. In other words, the who, what, where, when, why and how aspects of the research should be defined. Such preparation gives you the opportunity to make any required changes before the costly process of data collection has begun.
There are two basic types of descriptive research:

Longitudinal Studies
Longitudinal studies are time series analyses that make repeated measurements of the same individuals, thus allowing you to monitor behaviour such as brand switching. However, longitudinal studies are not necessarily representative, since many people may refuse to participate because of the commitment required.

Cross-sectional Studies
Cross-sectional studies sample the population to make measurements at a specific point in time. A special type of cross-sectional analysis is a cohort analysis, which tracks an aggregate of individuals who experience the same event within the same time interval over time. You can use cohort analyses for long-term forecasting of product demand.
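To make the idea of repeated measurements on the same panel concrete, here is a minimal sketch in Python using the pandas library; the respondents, brands and wave labels are hypothetical, invented purely for illustration.

import pandas as pd

# Hypothetical consumer panel: the same five respondents are measured
# in two successive waves of a longitudinal study.
panel = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5],
    "brand_wave1": ["A", "A", "B", "B", "C"],
    "brand_wave2": ["A", "B", "B", "C", "C"],
})

# Because the same individuals appear in both waves, brand switching can be
# observed directly by comparing each respondent's two answers.
switched = panel["brand_wave1"] != panel["brand_wave2"]
print(panel.assign(switched=switched))
print(f"Share of panel members who switched brands: {switched.mean():.0%}")

A cross-sectional study, by contrast, would only give the brand shares at a single point in time and could not show which individuals moved between brands.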
Causal Research
Causal research seeks to find cause and effect relationships between variables. It accomplishes this goal through laboratory and field experiments.
The research process involves the following important steps:
• Problem definition
• Research proposal
• Research design
• Data collection
• Data analysis and interpretation
• Report writing
• Interpretation of research
Refer to the diagram below to understand each step clearly.

The Research Process (flow diagram)
Discover the management dilemma → Define the management question → Define the research question(s) → Refine the research question(s) (exploration) → Research proposal → Research design: design strategy (type, purpose, time frame, scope, environment) and sampling design → Question and instrument testing → Instrument revision → Data collection and preparation → Data analysis and interpretation → Research reporting → Management decision.
These stages group into research planning, data gathering, and analysis, interpretation and reporting.

Problem Definition
First of all, one should be clear about what exactly the problem is. If you are working in a company, the problem is assigned by the top management; usually you get broad ideas regarding the problem. Then, with the broad concept from the top management, you define the specific problem statement. If you are doing your own research project, you have to identify the problem statement yourself. Remember that your problem statement should be specific.
Let us suppose that we want to know which of two methods should be employed: Method I or Method II. The researcher should be clear about the alternative choices he has. Should other methods also be considered? Let us assume that the alternative choices are clearly specified: either Method I or Method II will be employed. We have identified the alternative choices but not completely specified the problem. The complete problem is concerned with the criterion that will determine the superiority of the two methods. The criterion could be:
• Cost
• Efficiency of materials
• Availability of resources, etc.
There are three aspects of a research problem:
a. The specification of the units to be studied
b. The identification of the particular units within the scope of study
c. The specification of the kind of information to be sought.
What would you like to know if information were free and without error? A complete answer to this question defines the initial research problem. It can be redefined later if some difficulty arises.

Research Proposal
Research proposals are necessary for all business research, whether the proposal is internal or external; a research proposal is, however, not required in the case of research studies for a Ph.D. or for paper presentations. A proposal is also known as a work plan, prospectus, outline, statement of intent, or draft plan. The proposal tells us what will be done, why, how, where, and for whom.
The purpose of a research proposal is:
1. To present the management question to be researched and its importance.
2. To discuss the research efforts of others who have worked on related management questions.
3. To suggest the data necessary for solving the management question and how the data will be gathered, treated, and interpreted.
We will discuss the research proposal in detail in Lesson 3.

Research Design

Data Collection – Types and Sources
Once the researcher has decided on the research design, the next job is data collection. For data to be useful, our observations need to be organized so that we can identify patterns and come to logical conclusions. Statistical investigation requires systematic collection of data, so that all relevant groups are represented in the data.
To determine the potential market for a new product, for example, the researcher might study 500 consumers in a certain geographical area. It must be ascertained that the group contains people representing variables such as income level, race, education and neighbourhood. The quality of data will greatly affect the conclusions; hence, utmost importance must be given to this process and every possible precaution should be taken to ensure accuracy while gathering and collecting data.
Depending upon the sources utilized – whether the data has come from actual observations or from records kept for normal purposes – statistical data can be classified into two categories: primary and secondary.

Primary Data
Primary data is data collected by the investigator himself for the purpose of a specific inquiry or study. Such data is original in character and is generated by surveys conducted by individuals or research institutions.
Some common types of primary data are:
• Demographic and socioeconomic characteristics
• Psychological and lifestyle characteristics
• Attitudes and opinions
• Awareness and knowledge – for example, brand awareness
• Intentions – for example, purchase intentions. While useful, intentions are not a reliable indication of actual future behaviour.
• Motivation – a person's motives are more stable than his or her behaviour, so motive is a better predictor of future behaviour than is past behaviour.
• Behaviour
Primary data can be obtained by communication or observation.
Communication involves questioning respondents either verbally or in writing. This method is versatile, since you need only ask for the information; however, the response may not be accurate. Communication usually is quicker and cheaper than observation.
Observation involves the recording of actions and is performed by either a person or some mechanical or electronic device. Observation is less versatile than communication, since some attributes of a person may not be readily observable, such as attitudes, awareness, knowledge, intentions, and motivation. Observation also might take longer, since observers may have to wait for appropriate events to occur, though observation using scanner data might be quicker and more cost effective. Observation typically is more accurate than communication.

Personal interviews have an interviewer bias that mail-in questionnaires do not have. For example, in a personal interview the respondent's perception of the interviewer may affect the responses.
Questionnaire – The questionnaire is an important tool for gathering primary data. Poorly constructed questions can result in large errors and invalidate the research data, so significant effort should be put into questionnaire design.

Secondary Data
When an investigator uses data which has already been collected by others, such data is called secondary data. This data is primary data for the agency that collects it and becomes secondary data for someone else who uses it for his own purposes. Secondary data can be obtained from journals, reports, government publications, publications of professional and research organizations, and so on. For example, if a researcher desires to analyze the weather conditions of different regions, he can get the required information or data from the records of the meteorology department.
There are several criteria that you should use to evaluate secondary data:
• Whether the data is useful in the research study.
• How current the data is and whether it applies to the time period of interest.
• Errors and accuracy – whether the data is dependable and can be verified.
• Presence of bias in the data.
• Specifications and methodologies used, including the data collection method, response rate, quality and analysis of the data, sample size and sampling technique, and questionnaire design.
• Objective of the original data collection.
• Nature of the data, including definition of variables, units of measure, categories used, and relationships examined.

Data Collection Procedure for Primary Data

Planning the Study
Since the quality of results gained from statistical data depends upon the quality of information collected, it is important that a sound investigative process be established to ensure that the data is highly representative and unbiased. This requires a high degree of skill, and certain precautionary measures may also have to be taken.

Modes of Data Collection
The following are widely used methods for the collection of primary data:
• Observation
• Experimentation
• Questionnaire
• Interviewing
• Case study method

Observation Process
Observing the process at work collects information. The following are a few examples.
i. Service stations: Pose as a customer, go to a service station and observe.
ii. To evaluate the effectiveness of a display of Dunlop cushions in a departmental store, an observer notes:
• How many pass by?
• How many stop to look at the display?
• How many decide to buy?
iii. Supermarket: What is the best location on the shelf? Hidden cameras are used.
iv. A concealed tape recorder carried by the investigator helps to determine typical sales arguments and find out the sales enthusiasm shown by various salesmen. By this method, response bias is eliminated.
The method can be used to study sales techniques, customer movements, customer response, etc. However, the customer's or consumer's state of mind, buying motives and images are not revealed; their income and education are also not known. It also takes time for the investigator to wait for particular actions to take place.

Experimental Method
Many of the important decisions facing marketing executives cannot be settled by secondary research, observation, or surveying the opinions of customers or experts. The experimental method may be used in the following situations:
i. What is the best method for training salesmen?
ii. What is the best remuneration plan for salesmen?
iii. What is the best shelf arrangement for displaying a product?
iv. What is the effectiveness of a point-of-purchase display?
v. What package design should be used?
vi. Which copy is the most effective?
vii. Which media are the most effective?
viii. Which version of a product would consumers like best?
In a marketing experiment, the experimental units may be consumers, stores, sales territories, etc. Factors or marketing variables under the control of the researcher which can be studied are price, packaging, display, sales incentive plan, flavour, colour and shape. Competitors' actions, weather changes, uncooperative dealers, etc. are environmental factors.
To study the effect of the marketing variables in the presence of environmental factors, a sufficiently large sample should be used, or sometimes a control group is set up. A control group is a group equivalent to the experimental group, differing only in not receiving any treatment.
The result or response of a marketing experiment will be in the form of sales, attitudes or behaviour.
• Experimentation form of sales, attitudes or behaviour.
• Questionnaire Data Analysis–Preliminary Steps
• Interviewing • Before the analysis can be performed, raw data must be
• Case Study Method transformed into the right format.
Observation Process • Then, it must be edited so that errors can be corrected or
Observing the process at work collects information. The omitted.
following are a few examples.

Data Analysis - Preliminary Steps
• Before the analysis can be performed, raw data must be transformed into the right format.
• Then, it must be edited so that errors can be corrected or omitted.
• The data must then be coded; this procedure converts the edited raw data into numbers or symbols. A codebook is created to document how the data was coded.
• Finally, the data is tabulated to count the number of samples falling into various categories.
Simple tabulations count the occurrences of each variable independently of the other variables. Cross tabulations, also known as contingency tables or cross tabs, treat two or more variables simultaneously. However, since the variables are displayed in a two-dimensional table, cross tabbing more than two variables is difficult to visualize, as more than two dimensions would be required. Cross tabulation can be performed for nominal and ordinal variables. It is the most commonly utilized data analysis method in marketing research, and many studies take the analysis no further than cross tabulation. The technique divides the sample into sub-groups to show how the dependent variable varies from one subgroup to another. A third variable can be introduced to uncover a relationship that initially was not evident.
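The following minimal sketch shows what a simple tabulation and a cross tabulation look like in practice. The respondent data, the variable names and the use of the pandas library are assumptions made purely for illustration.

```python
# Hypothetical coded survey data: each row is one respondent.
import pandas as pd

data = pd.DataFrame({
    "age_group": ["18-25", "26-40", "26-40", "41-60", "18-25", "41-60", "26-40", "18-25"],
    "preferred_brand": ["A", "B", "A", "B", "A", "B", "B", "A"],
})

# Simple tabulation: occurrences of each category of a single variable.
print(data["preferred_brand"].value_counts())

# Cross tabulation: two variables treated simultaneously, showing how
# brand preference varies from one age sub-group to another.
print(pd.crosstab(data["age_group"], data["preferred_brand"], margins=True))
```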
Conjoint Analysis
Conjoint analysis is a powerful technique for determining consumer preferences for product attributes.
Hypothesis Testing
A basic fact about testing hypotheses is that a hypothesis may be rejected but can never be unconditionally accepted until all possible evidence is evaluated. In the case of sampled data, the information set cannot be complete; so if a test using such data does not reject a hypothesis, the conclusion is not necessarily that the hypothesis should be accepted.
The null hypothesis in an experiment is the hypothesis that the independent variable has no effect on the dependent variable. The null hypothesis is expressed as H0. This hypothesis is assumed to be true unless proven otherwise. The alternative to the null hypothesis is the hypothesis that the independent variable does have an effect on the dependent variable. This hypothesis is known as the alternative, research, or experimental hypothesis and is expressed as H1. The alternative hypothesis states that the relationship observed between the variables cannot be explained by chance alone.
There are two types of errors in evaluating hypotheses:
• Type I error: occurs when one rejects the null hypothesis and accepts the alternative, when in fact the null hypothesis is true.
• Type II error: occurs when one accepts the null hypothesis when in fact the null hypothesis is false.
Because their names are not very descriptive, these two types of errors are sometimes confused; some people jokingly define a Type III error as confusing Type I and Type II. To illustrate the difference, it is useful to consider a trial by jury in which the null hypothesis is that the defendant is innocent. If the jury convicts a truly innocent defendant, a Type I error has occurred. If, on the other hand, the jury declares a truly guilty defendant to be innocent, a Type II error has occurred.
Hypothesis testing involves the following steps:
• Formulate the null and alternative hypotheses.
• Choose the appropriate test.
• Choose a level of significance (alpha) and determine the rejection region.
• Gather the data and calculate the test statistic.
• Determine the probability of the observed value of the test statistic under the null hypothesis, given the sampling distribution that applies to the chosen test.
• Compare the value of the test statistic to the rejection threshold.
• Based on the comparison, reject or do not reject the null hypothesis.
• Make the marketing research conclusion.
In order to analyze whether research results are statistically significant or simply due to chance, a test of statistical significance can be run.
chi-squared as:
variable does have an effect on the dependent variable. This
?2 = SOi - Ei )2 / Ei
hypothesis is known as the alternative, research, or experimental
hypothesis and is expressed as H1. This alternative hypothesis Where
states that the relationship observed between the variables Oi = the number of observed cases in category i,
cannot be explained by chance alone. E i = the number of expected cases in category i,
There are two types of errors in evaluating hypotheses: k = the number of categories,
the summation runs from i = 1 to i = k.

Before calculating the chi-square value, one needs to determine the expected frequency for each cell. When equal proportions are expected, this is done by dividing the number of samples by the number of cells in the table.
To use the output of the chi-square function, one consults a chi-square table. To do so, one needs to know the number of degrees of freedom (df). For chi-square applied to cross-tabulated data, the number of degrees of freedom is equal to (number of columns - 1) × (number of rows - 1). For a goodness-of-fit test on a single variable, it is equal to the number of categories minus one. The conventional critical level of 0.05 is normally used. If the calculated value is greater than the chi-square look-up table value, the null hypothesis is rejected.
Anova
Another test of significance is the Analysis of Variance (ANOVA) test. The primary purpose of ANOVA is to test for differences between multiple means. Whereas the t-test can be used to compare two means, ANOVA is needed to compare three or more means. If multiple t-tests were applied instead, the probability of a Type I error (rejecting a true null hypothesis) would increase as the number of comparisons increases.
One-way ANOVA examines whether multiple means differ. The test is called an F-test. ANOVA calculates the ratio of the variation between groups to the variation within groups (the F ratio). While ANOVA was designed for comparing several means, it can also be used to compare two means. Two-way ANOVA allows for a second independent variable and addresses interaction.
To run a one-way ANOVA, use the following steps:
• Identify the independent and dependent variables.
• Describe the variation by breaking it into three parts: the total variation, the portion that is within groups, and the portion that is between groups (or among groups for more than two groups).
The total variation (SStotal) is the sum of the squares of the differences between each value and the grand mean of all the values in all the groups. The within-group variation (SSwithin) is the sum of the squares of the differences between each element's value and its group mean. The variation between group means (SSbetween) is the total variation minus the within-group variation (SStotal - SSwithin).
1. Measure the difference between each group's mean and the grand mean.
2. Perform a significance test on the differences.
3. Interpret the results.
This F-test assumes that the group variances are approximately equal and that the observations are independent. It also assumes normally distributed data; however, since this is a test on means, the Central Limit Theorem holds as long as the sample size is not too small.
ANOVA is efficient for analyzing data using relatively few observations and can be used with categorical variables. Note that regression can perform a similar analysis to that of ANOVA.
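A minimal one-way ANOVA sketch follows, comparing mean sales under three shelf arrangements. The groups, the values and the scipy f_oneway call are illustrative assumptions, not data from the text.

```python
# Hypothetical one-way ANOVA: do mean sales differ across three shelf
# arrangements?  The F ratio compares between-group to within-group variation.
from scipy import stats

shelf_a = [52, 48, 57, 61, 49]
shelf_b = [45, 47, 44, 50, 43]
shelf_c = [58, 62, 55, 60, 63]

f_stat, p_value = stats.f_oneway(shelf_a, shelf_b, shelf_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one shelf arrangement produces a different mean level of sales.")
```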
Discriminant Analysis
Although analysis of the difference in means between groups provides information about individual variables, it is not useful for determining their individual impacts when the variables are used in combination. Since some variables will not be independent of one another, one needs a test that can consider them simultaneously in order to take their interrelationship into account. One such approach is to construct a linear combination, essentially a weighted sum of the variables. To determine which variables discriminate between two or more naturally occurring groups, discriminant analysis is used. Discriminant analysis can determine which variables are the best predictors of group membership. It determines which groups differ with respect to the mean of a variable, and then uses that variable to predict group membership for new cases. Essentially, the discriminant function problem is a one-way ANOVA problem, in that one can determine whether multiple groups are significantly different from one another with respect to the mean of a particular variable.
A discriminant analysis consists of the following steps:
1. Formulate the problem.
2. Determine the discriminant function coefficients that result in the highest ratio of between-group variation to within-group variation.
3. Test the significance of the discriminant function.
4. Interpret the results.
5. Determine the validity of the analysis.
Discriminant analysis analyzes the dependency relationship, whereas factor analysis and cluster analysis address the interdependency among variables.
Factor Analysis
Factor analysis is a very popular technique for analyzing interdependence. Factor analysis studies the entire set of interrelationships without defining variables to be dependent or independent, and it combines variables to create a smaller set of factors.
Mathematically, a factor is a linear combination of variables. A factor is not directly observable; it is inferred from the variables. The technique identifies underlying structure among the variables, reducing the number of variables to a more manageable set. Factor analysis groups variables according to their correlation.
The factor loading can be defined as the correlation between a factor and its underlying variable. A factor-loading matrix is a key output of the factor analysis. An example layout is shown below.

                            Factor 1    Factor 2    Factor 3
Variable 1
Variable 2
Variable 3
Column's sum of squares:

Each cell in the matrix represents the correlation between the variable and the factor associated with that cell. The square of this correlation represents the proportion of the variation in the variable explained by the factor.

The sum of the squares of the factor loadings in each column is called an eigenvalue. An eigenvalue represents the amount of variance in the original variables that is associated with that factor. The communality is the amount of a variable's variance explained by the common factors.
A rule of thumb for deciding on the number of factors is that each included factor must explain at least as much variance as does an average variable. In other words, only factors for which the eigenvalue is greater than one are used. Other criteria for determining the number of factors include the scree plot criterion and the percentage-of-variance criterion.
To facilitate interpretation, the axes can be rotated. Rotation of the axes is equivalent to forming linear combinations of the factors. A commonly used rotation strategy is the varimax rotation. Varimax attempts to force the column entries to be either close to zero or close to one.
Cluster Analysis
Market segmentation is usually based not on one factor but on multiple factors. Initially, each variable represents its own cluster. The challenge is to find a way to combine variables so that relatively homogeneous clusters can be formed. Such clusters should be internally homogeneous and externally heterogeneous. Cluster analysis is one way to accomplish this goal. Rather than being a statistical test, it is more a collection of algorithms for grouping objects, or, in the case of marketing research, grouping people. Cluster analysis is useful in the exploratory phase of research when there are no a priori hypotheses.
Cluster Analysis Steps
1. Formulate the problem, collecting data and choosing the variables to analyze.
2. Choose a distance measure. The most common is the Euclidean distance. Other possibilities include the squared Euclidean distance, city-block (Manhattan) distance, Chebychev distance, power distance, and percent disagreement.
3. Choose a clustering procedure (linkage, nodal, or factor procedures).
4. Determine the number of clusters. They should be well separated, and ideally they should be distinct enough to be given descriptive names such as professionals, buffs, etc.
5. Profile the clusters.
6. Assess the validity of the clustering.
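A minimal clustering sketch follows, grouping respondents on two segmentation variables using k-means with the Euclidean distance. The variables, the three-segment structure and all the data are invented purely to show the mechanics of the steps above.

```python
# Hypothetical cluster analysis: spending score and monthly store visits
# for 60 respondents, grouped into three segments with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
respondents = np.vstack([
    rng.normal([20, 2], 3, size=(20, 2)),    # a low-spend segment
    rng.normal([50, 6], 3, size=(20, 2)),    # a mid-spend segment
    rng.normal([80, 12], 3, size=(20, 2)),   # a high-spend segment
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(respondents)
print("Cluster sizes:", np.bincount(kmeans.labels_))
print("Cluster centres (profiles):")
print(np.round(kmeans.cluster_centers_, 1))
```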
The format of the marketing research report varies with the needs of the organization. The report often contains the following sections:
• Authorization letter for the research
• Table of contents
• List of illustrations
• Executive summary
• Research objectives
• Methodology
• Results
• Limitations
• Conclusions and recommendations
• Appendices containing copies of the questionnaires, etc.
The data are then processed in order to summarise the results. The analysis seeks to determine how the units covered in the research project respond to the items under investigation. It could be:
• Univariate
• Bivariate
• Multivariate
Interpretation of Results
Interpretation is the "so what?" of research. Research is wasted if it is not used in decision making or in influencing actions. Not only should the results be interpreted into action recommendations, but the recommendations must also be communicated in an understandable manner. Results should be presented in as simple a manner as possible.
Basic Requirements
1. The researcher must understand the proper interpretation of research results and the assumptions embodied in them.
2. The researcher should understand the kinds of questions research can handle and the type of structure required to make a problem "researchable".
3. The researcher must be capable of appraising the feasibility of research proposals.
Special Case
In management, the greatest amount of research is done in the field of marketing. We conceptualize the marketing research process as consisting of six steps. Each step will be taken up in detail later; first, let us discuss them briefly.
Step 1: Problem Definition
The first step in any marketing research project is to define the problem. In defining the problem, you should take into account:
• The purpose of the study,
• The relevant background information,
• The information needed, and
• How it will be used in decision-making.

Problem definition involves:
• Discussion with the decision makers,
• Interviews with industry experts,
• Analysis of secondary data, and
• Perhaps some qualitative research, such as focus groups.
Once the problem of investigation has been precisely defined, the research can be designed and conducted properly.
Step 2: Development of an Approach to the Problem
Development of an approach to the problem includes:
• Formulating an objective or theoretical framework,
• Analytical models,
• Research questions, and
• Hypotheses, and identifying the information needed.
This process is guided by:
• Discussions with management and industry experts,
• Analysis of secondary data,
• Qualitative research, and
• Pragmatic considerations.
Step 3: Research Design Formulation
A research design is a framework or blueprint for conducting the marketing research project. It details the procedures necessary for obtaining the required information, and its purpose is to design a study that will test the hypotheses of interest, determine possible answers to the research questions, and provide the information needed for decision making. Conducting exploratory research, precisely defining the variables, and designing appropriate scales to measure them are also part of the research design. The issue of how the data should be obtained from the respondents (for example, by conducting a survey or an experiment) must be addressed. It is also necessary to design a questionnaire and a sampling plan to select respondents for the study.
More formally, formulating the research design involves the following steps:
1. Definition of the information needed
2. Secondary data analysis
3. Qualitative research
4. Methods of collecting quantitative data (survey, observation, and experimentation)
5. Measurement and scaling procedures
6. Questionnaire design
7. Sampling process and sample size
8. Plan of data analysis
Step 4: Fieldwork or Data Collection
Data collection involves a field force or staff that operates either in the field, as in the case of personal interviewing (in-home, mall intercept, or computer-assisted personal interviewing), from an office by telephone (telephone or computer-assisted telephone interviewing), through mail (traditional mail and mail panel surveys with pre-recruited households), or electronically (e-mail or Internet). Proper selection, training, supervision, and evaluation of the field force help minimize data-collection errors.
Step 5: Data Preparation and Analysis
Data preparation includes the editing, coding, transcription and verification of data. Each questionnaire or observation form is inspected or edited and, if necessary, corrected. Number or letter codes are assigned to represent each response to each question in the questionnaire. The data from the questionnaires are transcribed or key-punched onto magnetic tape or disks, or input directly into the computer. The data are then analyzed to derive information related to the components of the marketing research problem and, thus, provide input into the management decision problem.
Step 6: Report Preparation and Presentation
The entire project should be documented in a written report that addresses the specific research questions identified; describes the approach, the research design, data collection, and data analysis procedures adopted; and presents the results and the major findings. The findings should be presented in a comprehensible format so that management can readily use them in the decision-making process. In addition, an oral presentation should be made to management using tables, figures, and graphs to enhance clarity and impact.
References and Further Readings
1. Boyd, Westfall and Stasch, "Marketing Research: Text and Cases", All India Traveller Bookseller, New Delhi.
2. Brown, F.E., "Marketing Research: A Structure for Decision Making", Addison-Wesley Publishing Company.
3. Kothari, C.R., "Research Methodology: Methods and Techniques", Wiley Eastern Ltd.
4. Stockton and Clark, "Introduction to Business and Economic Statistics", D.B. Taraporevala Sons and Co. Private Limited, Bombay.
5. Dunn, Olive Jean and Virginia A. Clark, "Applied Statistics", John Wiley and Sons.
6. Green, Paul E. and Donald S. Tull, "Research for Marketing Decisions", Prentice Hall of India, New Delhi.
Activity 1
What does the definition of the problem comprise?

Activity 2
Explain the importance of interpretation of results.

Activity 3
Name and briefly discuss the five steps of the research process.

Activity 4
Explain, with the help of a suitable example, the need for introducing two types of environmental conditions in a research problem.
Clue: The environmental conditions specified in the research problem are of two types:
(i) Those beyond the firm's control
(ii) Those within the firm's control

Activity 5
A local supermarket has experienced a decline in unit sales and little change in rupee value sales. Profits have almost vanished. The chief executive, searching for ways to revitalize the operation, was advised to increase the number of hours the market is open for business. He comes to you for advice in structuring a research problem that will provide relevant information for decision-making.
Define the problem, taking care to:
a. State the relevant question,
b. Enumerate the alternative answers,
c. Clearly define the units of analysis and characteristics of interest.
What are the relevant "states of nature" which would lead to the selection of each alternative answer?

Activity 6
A shampoo manufacturing company wishes to test two makes of its product in order to determine which is the best one.
a. Propose and defend a precise definition of "best".
b. What is the set of hypotheses that should be tested?
c. What action would be associated with each hypothesis?

Activity 7
Define and state the research problem for the following case: "Why is productivity in Japan so much higher than in India?"
Think about the problem in a broader sense and then narrow down the research problem.


LESSON 3:
RESEARCH PROPOSAL

Dear friends, after completion of this lesson you will be able to- text. Also, the measuring instrument and project management
• Prepare internal research proposal modules are not required. Managers will typically leave this detail
for others.
• Prepare external research proposal
Depending on the type of project, the sponsoring individual or External Proposals
institution, and the cost of the project, different levels of An external proposal is either solicited or unsolicited. A solicited
complexity are required for a proposal to be judged complete. proposal is often in response to a request for proposal (RFP).
The proposal is likely competing against several others for a
For example the government agencies demand the most
contract or grant. An unsolicited proposal has the advantage of
complex proposals for their funding analyses. On the other
not competing against others but the disadvantage of having to
extreme, an exploratory study done within a manager’s depart-
speculate on the ramifications of a problem facing the firm’s
ment may need merely a one-to three-page memo outlining the
management.
objectives, approach, and time allotted to the project.
Even more difficult, the writer of an unso-licited proposal
In general, business proposals can be divided between those
must decide to whom the document should be sent.
generated internally and externally. An internal proposal is
done for the corporation by staff special-ists or by the research The most important sections of the external proposal are the
department of the firm. objectives, design, qualifications, schedule, and budget. The
executive summary of an external proposal may be included
External proposals are either solicited or unsolicited. Sponsors
within the letter of transmittal.
can be university grant committees, government agencies,
government contractors, corporations, and so forth. With few As the complexity of the project in-creases, more information is
exceptions, the larger the project, the more complex is the required about project management and the facilities and special
proposal. In public sector work, the complexity is generally resources. In contract research, the results and objectives sections
greater than in a comparable private sector proposal. are the standards against which the completed project is
measured. As we move toward government-sponsored
There are three general levels of complexity. The exploratory
research, particular attention must be paid to each specifica-tion
study is the first, most simple business proposal. More
in the RFP.
complex and common in business is the small-scale study either
an internal study or an external contract research project Contents of Research Proposal
Now let us discuss difference Internal proposal & External 1. Executive Summary
proposal The executive summary allows a busy manager or sponsor to
Internal Proposals understand quickly the thrust of the proposal. It is essentially
Internal proposals are a memo from the researcher to manage- an informative abstract, giving executives the chance to grasp the
ment outlining the prob-lem statement, study objectives, essentials of the proposal without having to read the details.
research design, and schedule is enough to start an exploratory The goal of the summary is to secure a positive evaluation by
study. the executive who will pass the proposal on to the staff for a
full evaluation. As such, the executive summary should include
Privately and publicly held firms are concerned with how to brief statements of the management dilemma and manage-
solve a particular problem, make a decision, or improve an ment ques-tion, the research objectives/research questions(s),
aspect of their business. Seldom do businesses begin research and the benefits of your approach. If the proposal is unsolic-
studies for other reasons. ited, a brief description of your qualifications is also
In the small-scale proposal, the literature review and bibliogra- appro-priate.
phy are consequently not stressed and can often be stated briefly
2. Problem Statement
in the research design.
This section needs to convince the sponsor to continue reading
Since management insists on brevity, an executive summary is the proposal. You should capture the reader’s attention by
mandatory for all but the most simple of propos-als (projects stating the management dilemma, its back-ground, and
that can be proposed in a two-page memo do not need an consequences, and the resulting management question. The
executive summary). Schedules and budgets are necessary for management question starts the research task. The importance
funds to be committed. For the smaller-scale projects, descrip- of re-searching the management question should be empha-
tions are not required for facilities and special re-sources, nor is sized here if a separate module on the importance/ benefits of
there a need for a glossary. Since small projects are sponsored by study is not included later in the proposal.
managers familiar with the problem, the associated jargon,
In addition, this section should include any restrictions or areas
requirements, and defi-nitions should be included directly in the
of the management question that will not be addressed.

Problem statements too broadly defined cannot be addressed 5. Importance/ Benefits of the Study

adequately in one study. It is important that the management This section allows you to describe explicit benefits that will
question be distinct from related problems and that the accrue from your study. The importance of “doing the study
sponsor see the delimitations clearly. Be sure your problem now” should be emphasized. Usually, this sec-tion is not more
statement is clear without the use of idioms or clinches. than a few paragraphs. If you find it difficult to write, then you
3. Research Objectives have probably not understood the problem adequately. Return
to the analysis of the prob-lem and ensure, through additional
This module addresses the purpose of the investigation. It is
discussions with your sponsor or your research team, or by a
here that you layout ex-actly what is being planned by the
reexamination of the literature, that you have captured the
proposed research. In a descriptive study, the ob-jectives can be
essence of the problem.
stated as the research question. Recall that the research question
can be further broken down into investigative questions. If the This section also requires you to understand what is most
proposal is for a causal study, then the objectives can be restated troubling to your sponsor. If it is a potential union activity, you
as a hypothesis. cannot promise that an employee sur-vey will prevent unioniza-
tion. You can, however, show the importance of this
The objectives module flows naturally from the problem
infor-mation and its implications. This benefit may allow
statement, giving the sponsor specific, concrete, and achievable
management to respond to employee concerns and forge a
goals. It is best to list the objectives either in order of impor-
linkage between those concerns and unionization.
tance or in general terms first, moving to specific terms (i.e.,
re-search question followed by underlying investigative ques- The importance/benefits section is particularly important to the
tions). The research ques-tions (or hypotheses, if appropriate) unsolicited ex-ternal proposal. You must convince the sponsor-
should be set off from the flow of the text so they can be ing organization that your plan will meet its needs.
found easily. 6. Research Design
The research objectives section is the basis for judging the Up to now, you have told the sponsor what the problem is,
remainder of the pro-posal and, ultimately, the final report. what your study goals are, and why it is important for you to
Verify the consistency of the proposal by checking to see that do the study. The proposal has presented the study’s value and
each objective is discussed in the research design, data analysis, benefits. The design module describes what you are going to do
and results sections. in technical terms. This section should include as many
4. Literature Review subsections as needed to show the phases of the project.
The literature review section examines recent (or historically Provide information on your proposed design for tasks such as
significant) research studies, company data, or industry reports sample selection and size, data collection method, instrumenta-
that act as a basis for the proposed study. Begin your discussion tion, procedures, and ethical requirements. When more than
of the related literature and relevant secondary data from a one way exists to approach the design, discuss the methods you
comprehensive perspective, moving to more specific studies rejected and why your selected approach is superior.
that are associated with your problem. If the problem has a 7. Data Analysis
historical background, begin with the earliest references. A brief section on the methods used for analyzing the data is
Avoid the extraneous details of the literature; do a brief review appropriate for large-scale contract research projects and doctoral
of the informa-tion, not a comprehensive report. Always refer theses. With smaller projects, the pro-posed data analysis would
to the original source. If you find something of interest in a be included within the research design section. Describe your
quotation, find the original publication and ensure you un- proposed treatment and the theoretical basis for using the
derstand it. In this way, you will avoid any errors of selected techniques. The object of this section is to assure the
interpretation or transcription. sponsor you are following correct assumptions and using
theoretically sound data analysis procedures.
Emphasize the important results and conclusions of other
studies, the relevant data and trends from previous research, This is often an arduous section to write. By use of sample
and particular methods or designs that could be duplicated or charts and dummy tables, you can make it easier to understand
should be avoided. Discuss how the literature applies to the your data analysis. This will make the section easier to write and
study you are proposing; show the weaknesses or faults in the easier to read. The data analysis section is important enough to
design, discussing how you would avoid similar problems. If contract research that you should contact an expert to review the
your proposal deals solely with secondary data, discuss the latest techniques available for your use. If there is no statistical
relevance of the data and the bias or lack of bias inherent in it. or analytical expertise within your company, be prepared to hire
a professional to help with this activity,
The literature review may also explain the need for the proposed
work to ap-praise the shortcomings and informational gaps in 8. Nature and Form of Results
secondary data sources. This analysis may go beyond scrutiniz- Upon finishing this section, the sponsor should be able to go
ing the availability or conclusions of past studies and their data, back to the problem statement and research objectives and
to examining the accuracy of secondary sources, the credibility discover that each goal of the study has been covered. One
of these sources and the appropriateness of earlier studies. should also specify the types of data to be obtained and the
interpreta-tions that will be made in the analysis. If the data are
to be turned over to the sponsor for proprietary reasons, make

sure this is reflected. Alternatively, if the report will go to more The budget statement in an internal research proposal is based

than one sponsor, that should be noted. on employee and overhead costs. The budget presented by an
This section also contains the contractual statement telling the external research organization is not just the wages or salaries of
sponsor exactly what types of information will be received. their employees but the person-hour price that the contract-ing
Statistical conclusions, applied findings, recommendations, firm charges.
action plans, models, strategic plans, and so forth are examples The detail presented may vary depending on both the sponsors’
of the forms of results. requirements and the contracting research company’s policy.
9. Qualifications of Researchers One reason why external research agencies avoid giving detailed
This section should begin with the principal investigator. It is budgets is the possibility that disclosures of their costing
also customary to begin qualifications with the highest academic practices will make their calculations public knowledge, reducing
degree held. Experience in carrying out previous research is their negotiating flexibility. Since budget statements embody a
important, especially in the corporate marketplace, so a con-cise financial work strategy that could be used by the recipient of the
description of similar projects should be included. Also bid to develop an independent work plan, vendors are often
important to business sponsors is experience as an executive or doubly careful.
employee of an organization involved in a related field. Often The budget section of an external agency’s proposal states the
businesses are reluctant to hire individuals to solve operational total fee payable for the assignment. When it is accompanied by
problems if they do not have practical experience. Finally, a proposed schedule of payment, this is frequently detailed in a
relevant business and technical societies to which the researcher purchase order. Unlike most product sale environments, re-
belongs can be included where this infor-mation is particularly search payments can be divided and paid at stages of
relevant to the research project. completion. Sometimes a re-tainer is scheduled for the begin-
The entire curriculum vitae of each researcher should not be ning of the contract, then a percentage at an intermediate stage,
included unless re-quired by the RFP. Instead, refer to the and the balance on completion of the project. .
relevant areas of experience and expertise that make the It is extremely important that you retain all information you use
researchers the best selection for the task. to generate your budget. If you use quotes from external
10. Budget contractors, get the quotation in writing for your file. If you
The budget should be presented in the form the sponsor estimate time for interviews, keep explicit notes on how you
requests. For example, some organizations require secretarial made the estimate. When the time comes to do the work, you
assistance to be individually budgeted, whereas oth-ers insist it should know exactly how much money is budgeted for each
be included in the research director’s fees or the overhead of the particular task.
opera-tion. In addition, limitations on travel, per diem rates, Some costs are more elusive than others. Do not forget to build
and capital equipment purchases can change the way in which the cost of pro-posal writing into your fee. Publication and
you prepare a budget delivery of final reports can be a last -minute expense that can
Typically, the budget should be no more than one to two pages. easily be overlooked in preliminary budgets.
Diagram below shows a format that can be used for small
contract research projects. Additional in-formation, backup
details, quotes from vendors, and hourly time and payment
calcu-lations should be put into an appendix if required or kept
in the researcher’s file for future reference.

Figure: Example CPM network for a research project schedule. Nodes 1 to 9 mark the milestones between Start and End, and the arrows carry activities A to M with their durations in days (for example, A = 6, C = 10, H = 8, K = 8).
11. Schedule Examples of management and technical reports


Your schedule should include the major phases of the project, • Research team relationship with the sponsor
their timetables, and the milestones that signify completion of
• Financial and legal responsibility
a phase.
• Management competence
For example, major phases may be (I) exploratory interviews,
(2) final research proposal, (3) questionnaire revision, (4) field Tables and charts are most helpful in presenting the master
interviews, (5) editing and coding, (6) data analysis, and (7) plan. The relation-ships between researchers and assistants need
report generation. Each of these phases should have an to be shown when several researchers are part of the team.
estimated time schedule and people assigned to the work. Sponsors must know that the director is an individual capable
of leading the team and being a useful liaison to the sponsor.
It may be helpful to you and your sponsor if you chart your
In addition, procedures for information processing, record
schedule. You can use a Gantt chart. Alternatively, if the project
control, and expense control are critical to large oper-ations and
is large and complex, a critical path method (CPM) of schedul-
should be shown as part of the management procedures.
ing may be included. In a CPM chart the nodes represent major
milestones, and the arrows suggest the work needed to get to The type and frequency of progress reports should be recorded
the milestone. More than one arrow pointing to a node indi- so the sponsor can expect to be kept up-to-date and the
cates all those tasks must be completed before the milestone researchers can expect to be left alone to do research. The
has been met. sponsor’s limits on control during the process should be
delineated. Details such as printing facilities, clerical help, or
Usual1y a number is placed along the arrow showing the
information-processing capa-bilities that are to be provided by
number of days or weeks required for that task to be com-
the sponsor are discussed. In addition, right’s to the data, the
pleted. The pathway from start to end that takes the longest
results, and authority to speak for the researcher and for the
time to complete is called the critical path, because any delay in
sponsor are in-cluded. Payment frequency and timing are also
an activity along that path will delay the end of the entire
covered in the master plan. Finally, proof of financial responsi-
project. An example of a CPM chart is shown below.
bility and overall management competence are provided.
Critical path: S - 1 - 3 - 4 - 7 - 8 - 9 - E. Time to complete: 40 days.
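As an illustration of how the critical path of such a network can be computed, here is a minimal sketch. The activity list, node labels and durations are invented for the example and do not reproduce the exact network shown in the chart above.

```python
# Minimal critical-path sketch for a small, hypothetical activity network.
from functools import lru_cache

# activity name: (start node, end node, duration in days)
activities = {
    "A": ("S", "1", 6), "F": ("1", "3", 3), "H": ("3", "4", 8),
    "K": ("4", "7", 8), "B": ("S", "2", 5), "I": ("2", "5", 3),
    "J": ("5", "7", 2), "M": ("7", "E", 9),
}

@lru_cache(maxsize=None)
def longest_path(node):
    """Length of the longest (critical) path from `node` to the end node E."""
    if node == "E":
        return 0
    return max(duration + longest_path(end)
               for start, end, duration in activities.values() if start == node)

# Any delay on an activity along this longest path delays the whole project.
print("Critical path length from Start:", longest_path("S"), "days")
```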
14 Bibliography
12 Facilities and Special Resources For all projects that require literature review, a bibliography is
Often, projects will require special facilities or resources that necessary. Use the bibliographic format required by the sponsor.
should be described in detail. For example, a contract explor- If none is specified, a standard style manual (e.g., Kate L.
atory study may need specialized facilities for focus group Turabian, A Manual for Writers of Term Papers, Theses, and
sessions. Computer-assisted telephone or other interviewing Dissertations; Joseph Gibaldi and Walter S. Achtert, MIA
facilities may be required. Handbook for Writers of Research Papers; or the Publication
13 Project Management Manual of the American Psychological Associ-ation) will
The purpose of the project management section is to show provide the details necessary to prepare the bibliography. Many
the sponsor that the research team is organized in a way to do of these sources also make suggestions for successful proposal
the project efficiently. A master plan is required for complex writing.
projects to show how the phases will all be brought together. 15 Appendices
The plan includes Glossary A glossary of terms should be included whenever
• The research team’s organization there are many words unique to the research topic and not
• Management procedures and controls for executing the understood by the general management commu-nity. This is a
research plan. simple section consisting of terms and definitions. Also, define
any acronyms that you use, even if they are defined within the
text.

16 Measurement Instrument Small-scale contracts are more prone to informal evaluation.

For large projects, it is appropriate to include samples of the With informal evalua-tion, the project needs, and thus the
measurement instruments if they are available when you criteria, are well understood but not usually well documented.
assemble the proposal. This allows the sponsor to discuss In contrast to the formal method, a system of points is not
particular changes in one or more of the instru-ments. If used and the criteria are not ranked. The process is more
exploratory work precedes the selection of the measurement qualitative and impressionistic. In practice, many items contrib-
instruments you will not use this appendix section. ute to a proposal’s acceptance and funding. Pri-marily, the
Other Any detail that reinforces the body of the proposal can be content discussed above must be included to the level of detail
included in an ap-pendix. This includes researcher vitae, budget required by the sponsor. Beyond the required modules, there
details, and lengthy descriptions of special facilities or resources. are factors that can quickly elim-inate a proposal from consider-
ation and factors that improve the sponsor’s reception of the
Evaluating The Research Proposal proposal.
Proposals are subjected to formal and informal reviews. The
First, the proposal must be neatly presented. Although a
formal method has some variations, but its essence is described
proposal produced on a word processor and bound with an
as follows. Before the proposal is re-ceived, criteria are estab-
expensive cover will not overcome design or analysis deficien-
lished and each is given weights or points. The proposal is
cies, a poorly presented, unclear, or disorganized proposal will
evaluated with a checklist of criteria in hand. Points are recorded
not get serious attention from the reviewing sponsors. Second,
for each category reflecting the sponsor’s assessment of how
the proposal’s major top-ics should be easily found and logically
well the proposal meets the category’s established criteria. Several
organized. The reviewer should be able to page through the
people, each of whom is assigned to a particular sec-tion,
proposal to any section of interest. The proposal also must
typically review long and complex proposals. After the review,
meet specific guidelines set by1he sponsoring company or
the category scores are added to provide a cumulative total. The
agency. These include budgetary restrictions and schedule
proposal with the highest num-ber of points will win the
deadlines.
contract. The formal method is most likely to be used for
competitive government, university, or public sector grants and A fourth important aspect is the technical writing style of the
also for large-scale contracts. proposal. The problem statement must be easily understood.
The research design should be clearly outlined and the method-
ology explained. The importance/befits of the study must
allow the sponsor to see why the research should be funded.

Table: Proposal modules required by proposal type. The columns cover internal proposals (exploratory, small-scale and large-scale studies), external proposals (exploratory, small-scale and large-scale contracts), government contracts, and student proposals (term paper, master's thesis, doctoral thesis). The rows list the modules discussed above: executive summary, problem statement, research objectives, literature review, importance/benefits of the study, research design, data analysis, nature and form of results, qualification of researchers, budget, schedule, facilities and special resources, project management, bibliography, appendices/glossary of terms, and measurement instrument. A check mark indicates that the module is normally included for that proposal type; the problem statement and research objectives are required in every case, while modules such as data analysis and project management are needed only for a few of the most complex projects.
Points to Ponder We will also discover the importance of types of product failure

• A proposal is an offer to produce a product or render a on customer satisfaction levels.
service to the potential buyer or sponsor. Importance/Benefits
• The research proposal presents a problem, discusses related High levels of user satisfaction translate into positive word of-
research ef-forts, outlines the data needed for solving the mouth product endorsements. These endorsements influence
problem, and shows the design used to gather and analyze the purchase outcomes for (1) friends and relatives and (2)
the data. business associates.
• Proposals are valuable to both the research sponsor and the Critical incidents, such as product failures, have the potential to
researcher. The sponsor uses the proposal to evaluate a either undermine existing satisfaction levels or preserve and
research idea. even increase the resulting levels of product satisfaction. The
• The proposal is also a useful tool to ensure the sponsor and outcome of the episode depends on the quality of the
investigator agrees on the research question. manufacturer’s response.
• In addition, the completed proposal provides a logical guide An extraordinary response by the manufacturer to such
for the investigation. incidents will preserve and enhance user satisfaction levels to the
point that direct and indirect benefits derived from such
• Two types of proposals: internal and external. Internal and
programs will justify their costs.
external proposals have a problem-solving orientation. The
staff of a company generates in-ternal proposals. This research has the potential for connecting to ongoing ABC
customer satisfaction programs and measuring the long-term
• External proposals are prepared by an outside firm to obtain
effects of ‘CompleteCare’ (and product failure incidents) on
con-tract research. External proposals emphasize
customer satisfaction.
qualifications of the researcher, special facilities and resources,
and project management aspects such as budgets and sched- Research Design
ules. Exploration - Qualitative We will augment our knowledge of
‘CompleteCare’ by interviewing the service manager, the call
center manager, and the independent package company’s
Case Study: Research Proposal account executive. Based on a thorough inventory of
Repair Process Satisfaction Proposal CompleteCare’s internal, external processes, we propose to
ABC Corporation ‘CompleteCare’ Program develop a mail survey.
Problem Statement Questionnaire Design - A self-administered questionnaire
ABC Corporation has recently created a service and repair (postcard size) offers the most cost-effective method for
program, ‘CompleteCare’, for its portable/laptop/note-book securing feedback on the effectiveness of CompleteCare. The
computers. This program promises to provide a rapid response introduction on the postcard will be a variation of ABC current
to customers’ service problems. advertising campaign.
ABC is currently experiencing a shortage obtained technical Some questions for this instrument will be based on the’
operators in its telephone center. The package courier, contracted investigative questions we presented to you previously, and
to pick up and deliver customers’ machines to ‘CompleteCare’, others will be drawn from the executive interviews. We
has provided irregular execution ABC has also experienced parts anticipate a maximum of 10 questions. A new five-point
availability problems for some machine types. expectation scale, compatible with your exiting customer
Recent phone logs at the call center show complaints about satisfaction scales, is being designed.
‘CompleteCare’ it is unknown how representative these Although we are not convinced that open-ended questions are
complaints are and what implications they may have for appropriate for postcard questionnaires, we understand that
satisfaction with ABC products. Management desires informa- you and Mr. Malraison like them. A comments/suggestions
tion on the program’s effectiveness and its impact on customer question will be included. In addition, we will work out a code
satisfaction to determine what should-be done to improve the block that captures the call center’s reference number, model,
‘CompleteCare’ program for ABC product repair and servicing. and item(s) serviced.
Research Objectives Logistics - The postal arrangements are: box rental, permit, and
The purpose of this research is to discover the level of satisfac- “business reply” privileges to be arranged in a few days. The
tion with the ‘CompleteCare’ service program. Specifically, we approval for a reduced postage rate will take one to two weeks.
intend to identify the component and overall levels of satisfac- The budget section itemizes these costs.
tion with ‘CompleteCare’. Components of the repair process Pilot Test - We will test the questionnaire with a small sample
are important targets for investigation because they reveal: of customers using your tech-line operators. This will contain
I. How customer tolerance levels for repair performance affect your costs. We will then revise the questions and forward them
overall satisfaction, and to our graphics designer for layout. The instrument will then be
submitted to you for final approval.
2. Which process components should be immediately
improved to elevate the overall satisfaction of those ABC Evaluation of Non response bias - a random sample of 100
customers experiencing product failures. names will be secured from the list of customers who do not

return the questionnaire. Call center records will be used for establishing the sampling frame. Non-responders will be interviewed on the telephone and their responses compared statistically to those of the responders.
Data Analysis
We will review the postcards returned and send you a weekly report listing customers who are dissatisfied (score of "1" or "2") with any item of the questionnaire or who submit a negative comment. This will improve your timeliness in resolving customer complaints. Each month, we will provide you with a report consisting of frequencies and category percentages for each question. Visual displays of the data will be in bar chart/histogram form. We propose to include at least one question dealing with overall satisfaction (with CompleteCare and/or MindWriter). This overall question would be regressed on the individual items to determine each item's importance. A performance grid will identify items needing improvement, with an evaluation of priority. Other analyses can be prepared on a time-and-materials basis.
The open-ended questions will be summarized and reported by model code. If you wish, we can also provide content analysis for these questions.
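To show what the proposed item-importance analysis could look like, here is a hedged sketch that regresses an overall-satisfaction score on the individual service items. The item names, the scores and the ordinary-least-squares approach are invented for illustration and are not part of the proposal itself.

```python
# Hypothetical item-importance analysis: regress overall satisfaction on
# the individual service items; the fitted weights indicate importance.
import numpy as np

rng = np.random.default_rng(2)
n = 120                                              # returned postcards (hypothetical)
items = rng.integers(1, 6, size=(n, 3))              # e.g. pickup, repair, delivery (1-5 scale)
overall = (0.3 * items[:, 0] + 0.5 * items[:, 1]
           + 0.2 * items[:, 2] + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), items])             # add an intercept column
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)  # ordinary least squares
print("Intercept and item weights:", np.round(coefs, 2))
```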
Results: Deliverables
1. Development and production of a postcard survey. ABC employees will package the questionnaire with the returned merchandise.
2. Weekly exception reports (transmitted electronically) listing customers who met the dissatisfied-customer criteria.
3. Monthly reports as described in the data analysis section.
4. An ASCII diskette with each month's data, shipped to Austin by the fifth working day of each month.
Budget
Card Layout and Printing: Based on your card estimate, our designer will lay out and print 2,000 cards in the first run ($500). The specifications are as follows: 7-point Williamsburg offset hi-bulk with one-over-one black ink. A gray-scale layer with an ABC or CompleteCare logo can be positioned under the printed material at a nominal charge. The two-sided cards measure 4 1/4 by 5 1/2, which allows us to print four cards per page. The opposite side will have the business reply logo, postage-paid symbol, and address.
Cost Summary
Interviews                      $ 1,550.00
Travel costs                      2,500.00
Questionnaire development         1,850.00
Equipment/supplies                1,325.00
Graphics design                     800.00
Permit fee (annual)                  75.00
Business reply fee (annual)         185.00
Box rental (annual)                  35.00
Printing costs                      500.00
Data entry (monthly)                430.00
Monthly data files (each)            50.00
Monthly reports (each)            1,850.00
Total start-up costs           $ 11,150.00
Monthly run costs               $ 1,030.00
* An additional fee of $0.21 per card will be assessed by the post office for business reply mail. At approximately a 30 percent return rate, we estimate this monthly cost to be less than $50.

LESSON 4:
TUTORIALS

1. What, if any, are the differences between solicited and unsolicited proposals?
2. Select a research report from a management journal. Outline
a proposal for the research as if it had not yet been
performed. Make estimates of time and costs.
3. You are the new manager of market intelligence in a rapidly expanding software firm. Many product managers and corporate officers have requested market surveys from you on various products. Design a form for a research proposal that can be completed easily by your research staff and the sponsoring manager. Discuss how your form improves communication of the research objectives between the manager and the researcher.
4. What modules would you suggest be included in a proposal for each of the following cases?
a. The president of your company has asked for a study of the company's health benefits plan and for a comparison of it to other firms' plans.
b. You are competing for a university-sponsored student research grant, awarded to seniors and graduate students.
c. A bank is interested in understanding the population trends by location so that it can plan its new branch locations for the next five years. They contacted you for a proposal.
d. You are interested in starting a new research service, providing monthly information about the use of recyclable items in your state. The proposal will go to several city and county planning agencies, independent waste service providers, and independent and government landfill providers.
5. Consider the new trends in desktop publishing, multimedia computer authoring and display capabilities, and inexpensive videotaping and playback possibilities. How might these be used to enhance research proposals? Give several examples of appropriate use.
6. You are the manager of a research department in a large department store chain. Develop a list of criteria for evaluating the types of research activities listed below. Include a point scale and weighting algorithm.
a. Market research
b. Advertising effectiveness
c. Employee opinion surveys
d. Credit card operations
e. Computer service effectiveness at the individual store level


LESSON 5:
RESEARCH DESIGN AND EXPERIMENTAL DESIGNS

Objective
In this lesson, you will learn how to design your research project. Research design is an important step in any research project; it ensures the systematic and timely completion of your project.
After completion of this lesson you will be able to –
1. Design a plan for the collection of data
2. Design a plan for measurement
3. Design a plan for the analysis of data

Meaning of Research Design
The decisions regarding what, where, when, how much, and by what means concerning a research project constitute a research design. "A research design is the arrangement of conditions for collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure."
In fact, the research design is the conceptual structure within which research is conducted; it constitutes the blueprint for the collection, measurement and analysis of data. As such the design includes an outline of what the researcher will do, from writing the hypothesis and its operational implications to the final analysis of data. More explicitly, the design decisions happen to be in respect of:
• What is the study about?
• Why is the study being made?
• Where will the study be carried out?
• What type of data is required?
• Where can the required data be found?
• What periods of time will the study include?
• What will be the sample design?
• What techniques of data collection will be used?
• How will the data be analysed?
• In what style will the report be prepared?
Keeping in view the above-stated design decisions, we may split the overall research design into the following parts:
• Sampling design – which deals with the method of selecting the items to be observed for the given study;
• Observational design – which relates to the conditions under which the observations are to be made;
• Statistical design – which concerns the question of how many items are to be observed and how the information and data gathered are to be analysed;
• Operational design – which deals with the techniques by which the procedures specified in the sampling, statistical and observational designs can be carried out.

Essentials of Research Designs
We can state the important features of a research design as under:
• The design is an activity- and time-based plan
• The design is always based on the research question
• The design guides the selection of sources and types of information
• The design is a framework for specifying the relationships among the study's variables
• The design outlines the procedure for every research activity

Need for Research Design (Why is a Research Design Required?)
Research design is needed because it facilitates the smooth sailing of the various research operations, thereby making research as efficient as possible, yielding maximal information with minimal expenditure of effort, time and money. Just as, for the economical and attractive construction of a house, we need a blueprint (or what is commonly called the map of the house) well thought out and prepared by an expert architect, similarly we need a research design or plan in advance of data collection and analysis for our research project. Research design stands for advance planning of the methods to be adopted for collecting the relevant data and the techniques to be used in their analysis.

Different Research Designs
Different research designs can be conveniently described if we categorize them as:
1. Research design in case of exploratory research studies;
2. Research design in case of descriptive and diagnostic research studies; and
3. Research design in case of hypothesis-testing research studies.
We take up each category separately.

1. Research Design in Case of Exploratory Research Studies
As you know from previous lessons, exploratory research studies are also termed formulative research studies. The main purpose of such studies is that of formulating a problem for more precise investigation or of developing working hypotheses from an operational point of view. The major emphasis in such studies is on the discovery of ideas and insights. The research design appropriate for such studies must be flexible enough to provide opportunity for considering different aspects of the problem under study.

Inbuilt flexibility in research design is needed because the research problem, broadly defined initially, is transformed into one with more precise meaning in exploratory studies, which fact may necessitate changes in the research procedure for gathering relevant data. Generally, the following three methods are talked about in the context of research design for such studies:
a. the survey of concerning literature;
b. the experience survey; and
c. the analysis of 'insight-stimulating' examples.
Let us discuss each of these methods.
a. Survey of concerning literature – This happens to be the most simple and fruitful method of formulating the research problem precisely or developing a hypothesis. Hypotheses stated by earlier workers may be reviewed and their usefulness evaluated as a basis for further research. It may also be considered whether the already stated hypotheses suggest new hypotheses. In this way the researcher should review and build upon the work already done by others; but in cases where hypotheses have not yet been formulated, his task is to review the available material for deriving the relevant hypotheses from it. Besides, a bibliographical survey of studies already made in one's area of interest may as well be made by the researcher for precisely formulating the problem. He should also make an attempt to apply concepts and theories developed in different research contexts to the area in which he is himself working. Sometimes the works of creative writers also provide a fertile ground for hypothesis formulation and as such may be looked into by the researcher.
b. Experience survey – This means the survey of people who have had practical experience with the problem to be studied. The object of such a survey is to obtain insight into the relationships between variables and new ideas relating to the research problem. For such a survey, people who are competent and can contribute new ideas may be carefully selected as respondents, to ensure a representation of different types of experience. The investigator may then interview the respondents so selected. The researcher must prepare an interview schedule for the systematic questioning of informants. But the interview must ensure flexibility, in the sense that the respondents should be allowed to raise issues and questions that the investigator has not previously considered. Generally, the experience-collecting interview is likely to be long and may last for a few hours. Hence, it is often considered desirable to send a copy of the questions to be discussed to the respondents well in advance. Thus, an experience survey may enable the researcher to define the problem more concisely and help in the formulation of the research hypothesis. This survey may as well provide information about the practical possibilities for doing different types of research.
c. Analysis of 'insight-stimulating' examples – This is also a fruitful method of suggesting hypotheses for research. It is particularly suitable in areas where there is little experience to serve as a guide. This method consists of the intensive study of selected instances of the phenomenon in which one is interested. For this purpose the existing records, if any, may be examined, unstructured interviewing may take place, or some other approach may be adopted. The attitude of the investigator, the intensity of the study and the ability of the researcher to draw together diverse information into a unified interpretation are the main features which make this method an appropriate procedure for evoking insights.
Now, what sort of examples are to be selected and studied? There is no clear-cut answer to this. Experience indicates that for particular problems certain types of instances are more appropriate than others. One can mention a few examples of 'insight-stimulating' cases, such as the reactions of strangers, the reactions of marginal individuals, the study of individuals who are in transition from one stage to another, the reactions of individuals from different social strata, and the like.
Thus, in an exploratory or formulative research study which merely leads to insights or hypotheses, whichever method or research design outlined above is adopted, the only thing essential is that it must continue to remain flexible, so that many different facets of the problem may be considered as and when they arise and come to the notice of the researcher.

2. Research Design in Case of Descriptive and Diagnostic Research Studies
Descriptive research studies are those studies which are concerned with describing the characteristics of a particular individual or of a group, whereas diagnostic research studies determine the frequency with which something occurs or its association with something else. Studies concerned with whether certain variables are associated are examples of diagnostic research studies; as against this, studies concerned with specific predictions, or with the narration of facts and characteristics concerning individuals, groups or situations, are all examples of descriptive research studies. Most social research comes under this category.
From the point of view of the research design, descriptive as well as diagnostic studies share common requirements, and as such we may group these two types of research studies together. In descriptive as well as in diagnostic studies, the researcher must be able to define clearly what he wants to measure and must find adequate methods for measuring it, along with a clear-cut definition of the population he wants to study. Since the aim is to obtain complete and accurate information in such studies, the procedure to be used must be carefully planned. The research design must make enough provision for protection against bias and must maximize reliability, with due concern for the economical completion of the research study. The design in such studies must be rigid and not flexible, and must focus attention on the following:
a. Formulating the objective of the study
b. Designing the methods of data collection
c. Selecting the sample (how much material will be needed?)
d. Collecting the data (where can the required data be found, and to what time period should the data relate?)
e. Processing and analysing the data
f. Reporting the findings
3. Research Design in Case of Hypothesis-Testing Research Studies
Hypothesis-testing research studies (generally known as experimental studies) are those where the researcher tests hypotheses of causal relationships between variables. Such studies require procedures that will not only reduce bias and increase reliability, but will also permit drawing inferences about causality. Usually experiments meet this requirement. Hence, when we talk of research design in such studies, we often mean the design of experiments.
Professor R.A. Fisher's name is associated with experimental designs. The study of experimental designs has its origin in agricultural research. Professor Fisher found that by dividing agricultural fields or plots into different blocks and then conducting experiments in each of these blocks, whatever information is collected and whatever inferences are drawn from it happen to be more reliable. This fact inspired him to develop certain experimental designs for testing hypotheses concerning scientific investigations. Today, experimental designs are used in research relating to phenomena in several disciplines. Now let us discuss the basic principles of experimental designs.

Basic Principles of Experimental Designs
There are three principles of experimental designs:
1. Principle of Replication
2. Principle of Randomization
3. Principle of Local Control
Let us now discuss each one of these.
Principle of Replication – According to this principle, the experiment should be repeated more than once. Thus, each treatment is applied to many experimental units instead of one. By doing so the statistical accuracy of the experiment is increased. For example, suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts and grow one variety in one part and the other variety in the other part. We can then compare the yield of the two parts and draw a conclusion on that basis. But if we are to apply the principle of replication to this experiment, then we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect the yield data for the two varieties and draw a conclusion by comparing them. The result so obtained will be more reliable in comparison to the conclusion we would draw without applying the principle of replication. The entire experiment can even be repeated several times for better results.
Conceptually, replication does not present any difficulty, but computationally it does. For example, if an experiment requiring a two-way analysis of variance is replicated, it will then require a three-way analysis of variance, since replication itself may be a source of variation in the data. However, it should be remembered that replication is introduced in order to increase the precision of a study; that is to say, to increase the accuracy with which the main effects and interactions can be estimated.
Principle of Randomization – This principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of "chance." For example, if we grow one variety of rice in the first half of the parts of a field and the other variety in the other half, it is just possible that the soil fertility may be different in the first half in comparison to the other half. If this is so, our results would not be realistic. In such a situation, we may assign the variety of rice to be grown in the different parts of the field on the basis of some random sampling technique; i.e., we may apply the randomization principle and thus protect ourselves against the effects of the extraneous factors (soil fertility differences in the given case).
Principle of Local Control – This is another important principle of experimental designs. Under it the extraneous factor, the known source of variability, is made to vary deliberately over as wide a range as necessary, and this needs to be done in such a way that the variability it causes can be measured and hence eliminated from the experimental error. This means that we should plan the experiment in a manner that allows us to perform a two-way analysis of variance, in which the total variability of the data is divided into three components attributed to treatments (varieties of rice in our case), the extraneous factor (soil fertility in our case) and experimental error. In other words, according to the principle of local control, we first divide the field into several homogeneous parts, known as blocks, and then each such block is divided into parts equal to the number of treatments. The treatments are then randomly assigned to these parts of a block.
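To make these three principles concrete, here is a minimal Python sketch. It is only an illustration, not part of the original lesson: the block names, plot counts and the use of Python's standard random module are assumptions made for the example, but the logic – every treatment replicated, and treatments assigned at random within homogeneous blocks – is exactly what the principles of replication, randomization and local control describe.

import random

# Illustrative only: lay out a two-variety rice experiment so that each
# variety is replicated, and is assigned at random to plots inside
# homogeneous blocks of the field (local control).
varieties = ["Variety A", "Variety B"]                  # treatments
blocks = ["Block 1", "Block 2", "Block 3", "Block 4"]   # homogeneous strips of the field
plots_per_block = 4                                     # a multiple of the number of treatments

random.seed(42)  # fixed seed so the layout can be reproduced

layout = {}
for block in blocks:
    # equal replication of each treatment within the block
    assignment = varieties * (plots_per_block // len(varieties))
    random.shuffle(assignment)                          # randomization within the block
    layout[block] = assignment

for block, plots in layout.items():
    print(block, "->", plots)

Because every block contains every treatment, the block-to-block (soil fertility) variation can later be separated from the treatment effect in a two-way analysis of variance, as the text goes on to explain.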

Important Experimental Designs
Experimental design refers to the framework or structure of an experiment, and as such there are several experimental designs. We can classify experimental designs into two broad categories, viz., informal experimental designs and formal experimental designs. Informal experimental designs are designs that normally use a less sophisticated form of analysis based on differences in magnitudes, whereas formal experimental designs offer relatively more control and use precise statistical procedures for analysis. Important experimental designs are as follows:
a. Informal experimental designs:
I. Before-and-after without control design
II. After-only with control design
III. Before-and-after with control design
b. Formal experimental designs:
I. Completely randomized design (C.R. design)
II. Randomized block design (R.B. design)
III. Latin square design (L.S. design)
IV. Factorial designs
We may briefly discuss each of the above-stated informal as well as formal experimental designs.
1. Before-and-after without control design – In such a design a single test group or area is selected and the dependent variable is measured before the introduction of the treatment. The treatment is then introduced, and the dependent variable is measured again after the treatment has been introduced. The effect of the treatment is equal to the level of the phenomenon after the treatment minus the level of the phenomenon before the treatment. The design can be represented thus:

Test Area:   Level of phenomenon before treatment (X) -> Treatment introduced -> Level of phenomenon after treatment (Y)
Treatment effect = (Y) - (X)

The main difficulty of such a design is that, with the passage of time, considerable extraneous variation may enter into the treatment effect.
2. After-only with control design – In this design two groups or areas (a test area and a control area) are selected and the treatment is introduced into the test area only. The dependent variable is then measured in both areas at the same time. Treatment impact is assessed by subtracting the value of the dependent variable in the control area from its value in the test area. This can be exhibited in the following form:

Test Area:     Treatment introduced -> Level of phenomenon after treatment (Y)
Control Area:  Level of phenomenon without treatment (Z)
Treatment effect = (Y) - (Z)

The basic assumption in such a design is that the two areas are comparable in their behaviour towards the phenomenon considered. If this assumption is not true, there is the possibility of extraneous variation entering into the treatment effect. However, data can be collected in such a design without the problems introduced by the passage of time. In this respect this design is superior to the before-and-after without control design.
3. Before-and-after with control design – In this design two areas are selected and the dependent variable is measured in both areas for an identical time period before the treatment. The treatment is then introduced into the test area only, and the dependent variable is measured in both areas for an identical time period after the introduction of the treatment. The treatment effect is determined by subtracting the change in the dependent variable in the control area from the change in the dependent variable in the test area. This design can be shown in this way:

                Time Period I                               Time Period II
Test Area:      Level of phenomenon before treatment (X)    Treatment introduced; level after treatment (Y)
Control Area:   Level of phenomenon (A)                     Level of phenomenon without treatment (Z)
Treatment effect = (Y - X) - (Z - A)

This design is superior to the above two designs for the simple reason that it avoids extraneous variation resulting both from the passage of time and from non-comparability of the test and control areas. But at times, due to lack of historical data, time or a comparable control area, we may have to select one of the first two informal designs stated above.
4. Completely randomized design (C.R. design) – This design involves only two principles, viz., the principle of replication and the principle of randomization. It is the simplest possible design and its procedure of analysis is also easy. The essential characteristic of the design is that subjects are randomly assigned to experimental treatments (or vice versa). For example, if we have 10 subjects and wish to test 5 under treatment A and 5 under treatment B, the randomization process gives every possible group of 5 subjects selected from the set of 10 an equal opportunity of being assigned to treatment A or treatment B. One-way analysis of variance (one-way ANOVA) is used to analyse such a design. This design is generally used when the experimental areas happen to be homogeneous. Technically, when all the variations due to uncontrolled extraneous factors are included under the heading of chance variation, we refer to the design of the experiment as a C.R. design.
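The text states that a completely randomized design is analysed with one-way ANOVA. The sketch below is a hypothetical Python illustration of that analysis – the three treatments and all the response values are invented, and the scipy package is assumed to be available – it is not an example taken from the lesson.

from scipy import stats

# Hypothetical C.R. design: subjects were randomly assigned to three
# treatments; the responses below are invented for illustration only.
treatment_a = [20.1, 22.3, 19.8, 21.5]
treatment_b = [23.0, 24.1, 22.7, 23.8]
treatment_c = [19.5, 20.0, 18.9, 19.7]

# One-way ANOVA: do the treatment means differ by more than the
# chance variation expected under the C.R. design?
f_stat, p_value = stats.f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

A small p-value would indicate that at least one treatment mean differs from the others; with only two treatments the same test reduces to a familiar two-sample comparison.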

We can present a brief description of the two forms of such a design below.
Two-group simple randomized design – In a two-group simple randomized design, first of all the population is defined and then a sample is selected randomly from that population. A further requirement of this design is that the items, after being selected randomly from the population, be randomly assigned to the experimental and control groups (such random assignment of items to two groups is technically described as the principle of randomization). Thus, this design yields two groups as representatives of the population.
[Figure: the original page shows a diagram in which the population is randomly sampled and the selected subjects are randomly assigned either to an experimental group receiving Treatment A or to a control group receiving Treatment B of the independent variable.]
Since in the simple randomized design the elements constituting the sample are randomly drawn from the same population and randomly assigned to the experimental and control groups, it becomes possible to draw conclusions on the basis of the samples about the different treatments of the independent variable. This design of experiment is quite common in research studies concerning the behavioural sciences. The merit of such a design is that it is simple and randomizes the differences among the sample items. But its limitation is that the individual differences among those conducting the treatments are not eliminated; i.e., it does not control the extraneous variable, and as such the result of the experiment may not depict a correct picture. This can be illustrated by taking an example.
Example – Suppose the researcher wants to compare two groups of students who have been randomly selected and randomly assigned. Two different treatments, viz., the usual training and a specialised training, are given to the two groups. The researcher hypothesises greater gains for the group receiving the specialised training. To determine this, he tests each group before and after the training, and then compares the amount of gain for the two groups to accept or reject his hypothesis.
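To see what this comparison looks like in practice, here is a small hypothetical Python sketch. Every score is invented and the variable names are my own, not the lesson's; it simply computes each group's gains and compares the mean gains of the two randomly assigned groups.

import numpy as np
from scipy import stats

# Invented pre- and post-training scores for the two randomly
# assigned groups in the example above.
usual_pre    = np.array([52, 47, 55, 50, 49])
usual_post   = np.array([58, 53, 60, 57, 55])
special_pre  = np.array([51, 48, 53, 50, 52])
special_post = np.array([64, 61, 66, 62, 65])

# Gain scores: the improvement of each student after training.
usual_gain   = usual_post - usual_pre
special_gain = special_post - special_pre

# Compare the mean gains of the two groups (with two groups this is
# equivalent to the one-way ANOVA mentioned earlier).
t_stat, p_value = stats.ttest_ind(special_gain, usual_gain)
print("mean gain, usual training:      ", usual_gain.mean())
print("mean gain, specialised training:", special_gain.mean())
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")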

Random replication design – The limitation of the two-group randomized design is usually eliminated within the random replication design. In the example we just discussed, the teacher differences on the dependent variable were ignored, i.e., the extraneous variable was not controlled. But in a random replications design the effect of such differences is minimised (or reduced) by providing a number of repetitions for each treatment. Each repetition is technically called a 'replication'. The random replication design serves two purposes: it provides controls for the differential effects of the extraneous independent variables and, secondly, it randomizes any individual differences among those conducting the treatments. Diagrammatically the random replications design can be illustrated with a figure (the diagram is not reproduced in this extract). From the diagram it is clear that there are two populations in the replication design. The sample is taken randomly from the population available for study and is randomly assigned to, say, four experimental and four control groups. Similarly, a sample is taken randomly from the population available to conduct the experiments (because there are eight groups, eight such individuals are selected), and the eight individuals so selected are randomly assigned to the eight groups. Generally, an equal number of items is put in each group so that the size of the group is not likely to affect the results of the study. Variables relating to both population characteristics are assumed to be randomly distributed among the two groups. Thus, the random replication design is, in fact, an extension of the two-group simple randomized design.
5. Randomized block design (R.B. design) – The R.B. design is an improvement over the C.R. design. In the R.B. design the principle of local control can be applied along with the other two principles of experimental designs. Subjects are first divided into groups, known as blocks, such that within each group the subjects are relatively homogeneous in respect of some selected variable. The variable selected for grouping the subjects is one that is believed to be related to the measures to be obtained in respect of the dependent variable. The number of subjects in a given block is equal to the number of treatments, and one subject in each block is randomly assigned to each treatment. The R.B. design is analysed by the two-way analysis of variance (two-way ANOVA) technique.
Let us understand the R.B. design with the help of an example. Suppose four different forms of a standardized test in statistics were given to each of five students (selected one from each of five I.Q. blocks):

          Very low I.Q.   Low I.Q.    Average I.Q.   High I.Q.   Very high I.Q.
          Student A       Student B   Student C      Student D   Student E
Form 1    82              67          57             71          73
Form 2    90              68          54             70          81
Form 3    86              73          51             69          84
Form 4    93              75          60             68          75

If each student separately randomized the order in which he or she took the four tests (by using random numbers or some similar device), we refer to the design of this experiment as an R.B. design. The purpose of this randomization is to take care of such possible extraneous factors as, say, fatigue, or the experience gained from repeatedly taking the test.
6. Latin square design (L.S. design) – The L.S. design is an experimental design very frequently used in agricultural research. The conditions under which agricultural investigations are carried out are different from those in other studies, for nature plays an important role in agriculture. For example, suppose an experiment has to be made through which the effects of five different varieties of fertilizer on the yield of a certain crop, say wheat, are to be judged. In such a case the varying fertility of the soil in the different blocks in which the experiment is to be performed must be taken into consideration; otherwise the results obtained may not be very dependable, because the output happens to be the effect not only of the fertilizers but also of the fertility of the soil. Similarly, there may be the impact of varying seeds on the yield. To overcome such difficulties, the L.S. design is used when there are two major extraneous factors, such as the varying soil fertility and the varying seeds.
The Latin square design is one wherein each fertilizer, in our example, appears five times but is used only once in each row and in each column of the design. In other words, the treatments in an L.S. design are so allocated among the plots that no treatment occurs more than once in any one row or any one column. The two blocking factors may be represented through rows and columns (one through rows and the other through columns). The following is a diagrammatic form of such a design in respect of, say, five types of fertilizer, viz., A, B, C, D and E, and the two blocking factors, viz., the varying soil fertility (columns I to V) and the varying seeds (rows X1 to X5):

                       Soil fertility
                       I    II   III   IV   V
Seed differences  X1   A    B    C     D    E
                  X2   B    C    D     E    A
                  X3   C    D    E     A    B
                  X4   D    E    A     B    C
                  X5   E    A    B     C    D

The above diagram clearly shows that in an L.S. design the field is divided into as many blocks as there are varieties of fertilizer, and then each block is again divided into as many parts as there are varieties of fertilizer, in such a way that each fertilizer variety is used in each block (whether column-wise or row-wise) only once. The analysis of the L.S. design is very similar to the two-way ANOVA technique.
The merit of this experimental design is that it enables differences in fertility gradients in the field to be eliminated in comparison to the effects of the different varieties of fertilizer on the yield of the crop. But the design suffers from one limitation: although each row and each column represents all fertilizer varieties equally, there may be considerable difference in the row and column means both up and across the field. This, in other words, means that in an L.S. design we must assume that there is no interaction between treatments and blocking factors. This defect can, however, be removed by adjusting the results so that the means of rows and columns equal the field mean. Another limitation of this design is that it requires the number of rows, columns and treatments to be equal, which reduces the utility of the design. In the case of a (2 x 2) L.S. design there are no degrees of freedom available for the mean square error, and hence the design cannot be used. If there are 10 or more treatments, then each row and each column will be large in size, so that rows and columns may not be homogeneous; this may make the application of the principle of local control ineffective. Therefore, L.S. designs of orders (5 x 5) to (9 x 9) are generally used.
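Both the R.B. design and the L.S. design are analysed with two-way ANOVA. As an illustration only – the column names and the use of the pandas and statsmodels packages are my own choices, not part of the lesson – the sketch below runs a two-way ANOVA (treatments plus blocks, no interaction) on the statistics-test scores from the R.B. example above.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Scores from the R.B. design example: four test forms (treatments)
# given to five students, one drawn from each I.Q. block.
data = pd.DataFrame({
    "form":  ["Form 1"] * 5 + ["Form 2"] * 5 + ["Form 3"] * 5 + ["Form 4"] * 5,
    "block": ["very low", "low", "average", "high", "very high"] * 4,
    "score": [82, 67, 57, 71, 73,
              90, 68, 54, 70, 81,
              86, 73, 51, 69, 84,
              93, 75, 60, 68, 75],
})

# Two-way ANOVA: total variability is split into treatment (form),
# block (I.Q. level) and error components, as the principle of local
# control requires.
model = smf.ols("score ~ C(form) + C(block)", data=data).fit()
print(anova_lm(model, typ=2))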

7. Factorial designs – Factorial designs are used in experiments where the effects of varying more than one factor are to be determined. They are specially important in several economic and social phenomena where usually a large number of factors affect a particular problem. Factorial designs can be of two types:
i. simple factorial designs, and
ii. complex factorial designs.
We take them up separately.
i. Simple factorial designs – In the case of simple factorial designs, we consider the effects of varying two factors on the dependent variable; when an experiment is done with more than two factors, we use complex factorial designs. A simple factorial design is also termed a 'two-factor factorial design', whereas a complex factorial design is known as a 'multi-factor factorial design'. A simple factorial design may be a 2 x 2 simple factorial design, or it may be, say, a 3 x 4 or a 5 x 3 or a similar type of simple factorial design. We can illustrate with an example.
Example (2 x 2 simple factorial design) – A 2 x 2 simple factorial design can be depicted graphically as follows:

                        Experimental Variable
Control Variable        Treatment A       Treatment B
Level 1                 Cell 1            Cell 2
Level 2                 Cell 3            Cell 4

In this design one factor is the experimental variable, with two treatments, and the other is the control variable, with two levels of the control variable. As such there are four cells into which the sample is divided. Each of the four combinations would provide one treatment or experimental condition. Subjects are assigned at random to each treatment in the same manner as in a randomized group design.
The means for the different cells may be obtained, along with the means for the different rows and columns. The means of the different cells represent the mean scores for the dependent variable, and the column means in the given design are termed the main effect for treatments, without taking into account any differential effect that is due to the level of the control variable. Similarly, the row means in the said design are termed the main effects for levels, without regard to treatment. Thus, through this design we can study the main effects of treatments as well as the main effects of levels.
ii. Complex factorial designs – Experiments with more than two factors at a time involve the use of complex factorial designs. A design which considers three or more independent variables simultaneously is called a complex factorial design. In the case of three factors – one experimental variable having two treatments and two control variables each having two levels – the design used will be termed a 2 x 2 x 2 complex factorial design, which will contain a total of eight cells, as shown below.

2 x 2 x 2 Complex Factorial Design
                                     Experimental Variable
                           Treatment A                        Treatment B
                     Control Var. 2   Control Var. 2    Control Var. 2   Control Var. 2
                     Level I          Level II          Level I          Level II
Control    Level I   Cell 1           Cell 3            Cell 5           Cell 7
Variable 1 Level II  Cell 2           Cell 4            Cell 6           Cell 8

You can understand this design better using the 3-D representation given in the original lesson (the figure is not reproduced here).

From this design it is possible to determine the main effects for all three variables, i.e., one experimental and two control variables. The researcher can also determine the interactions between each possible pair of variables (such interactions are called 'first-order interactions') and the interaction between the variables taken in triplets (such interactions are called 'second-order interactions'). In the case of a 2 x 2 x 2 design, the possible first-order interactions are those of the experimental variable with control variable 1, the experimental variable with control variable 2, and control variable 1 with control variable 2.
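As a hypothetical illustration of how main effects and a first-order interaction are estimated together, the sketch below fits a 2 x 2 factorial model in Python. All scores, factor names and the use of the statsmodels package are assumptions made for the example, not material from the lesson.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented 2 x 2 simple factorial data: an experimental variable with
# two treatments crossed with a control variable with two levels.
data = pd.DataFrame({
    "treatment": (["A"] * 4 + ["B"] * 4) * 2,
    "level":     ["Level 1"] * 8 + ["Level 2"] * 8,
    "score":     [12, 14, 13, 15, 18, 17, 19, 20,
                  11, 10, 12, 13, 22, 24, 23, 25],
})

# Cell, row and column means correspond to the main effects described
# in the text.
print(data.groupby(["treatment", "level"])["score"].mean())

# Fit the main effects plus the treatment-by-level (first-order) interaction.
model = smf.ols("score ~ C(treatment) * C(level)", data=data).fit()
print(anova_lm(model, typ=2))

A significant interaction term would mean that the effect of the treatment differs across the levels of the control variable – the kind of first-order interaction the paragraph above refers to.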
Points to Ponder
• There are several research designs and the researcher must
decide in advance of collection and analysis of data as to
which design would prove to be more appropriate for his
research project.
• Consideration of the following activities is essential for the
execution of a well-planned experiment:
• Select relevant variable for testing
• Specify the level of treatment
• Control the environment and extraneous factors
• Choose an experimental design suited to the
hypothesis
• Select and assign subjects to groups
• Pilot-test, revise, and conduct the final test
• Analyze the data
• The researcher must also give due weight to various points such as –
• Type of universe and its nature,
• The objective of his study,
• The source list or the sampling frame,
• Desired standard of accuracy

LESSON 6:
TUTORIAL

Q1. Compare the advantages of experiments with the advantages of survey and observational methods.
Q2. A lighting company seeks to study the percentages of
defective glass shells being manufactured. Theoretically, the
percentages of defectives are dependent on temperature,
humidity, and the level of artisan expertise. Complete
historical data are available for the following variables on a
daily basis for a year:
A. Temperature ( high, normal, low)
B. Humidity (high, normal, low)
C. Artisan expertise level ( expert, average, mediocre)
Some experts feel that defectives also depend on production
supervisors. However, data on supervisors in charge are
available for only 242 of the 365 days. How should this study
be conducted?
Q3. A pharmaceuticals manufacturer is testing a drug developed
to treat cancer. During the final stages of development the
drug’s effectiveness is being tested on individuals for
different (1) dosage conditions and (2) age groups. One of
the problems is patient mortality during experimentation.
Justify your design recommendations through a comparison
of alternatives and in terms of external and internal validity.
a. Recommend the appropriate design for the
experiment
b. Explain the use of control groups, blinds, and double-blinds if you recommend them.
Q4. Describe how you would operationalize variables for experimental testing in the following research question: what are the performance differences between 10 microcomputers connected in a local area network (LAN) and one minicomputer with 10 terminals?
Q5.What type of experimental design would you recommend
in each of the following cases?
A. A test of three methods of compensation of factory
workers. The methods are hourly wage, incentive pay, and
weekly salary. The dependent variable is direct labor cost per
unit of output.
B. A study of the effects of various levels of advertising effort
and price reduction on the sale of specific branded grocery
products by a retail grocery chain.
C. A study to determine whether it is true that the use of fast-paced music played over a store's public address system will speed up shopping.

LESSON 7:
WRITING THE RESEARCH REPORT

Students, before we start our topic for the day, I would like to give you a brief recap from our last class. We had centered our discussion on the various steps involved in a research process. These were identified as:
• Problem Definition: Before we actually initiate the investigation, we should be clear about the problem we are facing.
• Research Design: As I had also highlighted in the last class, this provides the blueprint of the investigation. It gives you a broad idea about how to proceed further in getting information regarding the relevant variables from the units under consideration.
• Data Collection: Once your design is developed you, as a researcher, would be required to start collecting information from the units under study. However, bear in mind that none of the variables should be over- or under-stated.
• Data Analysis: Your next step would be to process the data. Here you would try to investigate how the various units respond to the variables or characteristics under study. Such data analysis that you may carry out could be:
• Uni-variate
• Bi-variate
• Multi-variate
Interpretation
Literally speaking, interpretation is the 'so what' of a research process. If you carry out a research or an investigation which is not used in influencing any action anywhere, then it is a sheer waste of time and resources. Therefore, your research results must be consistent with the decisions that you have to make. This is not the end of your task. It is equally important that you should be able to communicate these findings and recommendations in an understandable and concise manner to the decision makers. Your report should clearly highlight that the recommendation or suggestion is justified.
From this we derive the essence of our discussion today – 'Writing the Research Report'.
Being asked to write a report can fill people with horror! However, writing reports correctly is an essential skill that you will need not only today as a student, but also tomorrow as a budding manager. I am sure you would agree when I say that report writing is common to both academic and managerial situations. In academics, you would be required to prepare reports to facilitate comprehensive and application-oriented learning. Such reports of yours would be called term papers, project reports, theses and dissertations, depending upon the nature of the report, the time and effort expected of you as a student, and your curriculum design. Further, if you were a researcher, you would put out your initial findings in a research report, paper or monograph, which would later be condensed into an article or expanded into a series of articles or a book. When you join the corporate world tomorrow, you would realize that report writing there forms the basis for decision-making. Such reports would be expected to be brief but comprehensive and to clearly reflect your thinking as the manager, the management committee, or the consulting group that has been given the terms of reference for fact finding or decision making.
We will start our lesson today with a brief classification of the various types of reports.

Categories of Reports
Can any of you think of the various forms a report might take? No? Never mind. Let me explain it to you. Broadly, any report would fall into one of the following three major categories:
1. Information oriented
2. Decision oriented
3. Research oriented
As these names suggest, it is the substance and focus of the content that determines the category. However, a report that you make may contain characteristics of more than one category.

Information Reports
They are the first step to understanding the existing situation (for instance, the business, economic, technological, labour market or research scenario) or what has been discussed or decided (minutes of a meeting). They, you should remember, form the foundation of subsequent decision reports and research reports. In describing any person, object, situation or concept, the following seven questions will help you to convey a comprehensive picture:

Subject / Object      Action                       Reason
Who? or Whom?         What? When? Where? How?      Why?

Therefore, you can check the comprehensiveness of an information or descriptive report by iteratively asking: WHO does WHAT to WHOM? WHEN, WHERE, HOW and WHY?

Decision Reports
As you would be able to make out from the name itself, decision reports adopt the problem-solving approach. Such reports have to follow the steps mentioned below.
• Identifying the problem
The problem is the beginning and the end of decision-making. If you start with a wrong problem, a wrong hypothesis or a wrong assumption, you will only end up solving a non-existent problem or might even create a new one. Therefore you should carefully define the problem, keeping in mind each of the following elements:
• What is the situation, and what should it be?
• What are the symptoms and what are the causes?
• What is the central issue and what are the subordinate issues?
• What are the decision areas – short, medium and long term?
• Constructing the criteria
In order to achieve your end objective of bringing the existing situation to what it should be, you would require yardsticks to evaluate options. Criteria link the 'problem definition' with 'option generation and evaluation'. In constructing the criteria, your knowledge of SWOT analysis could be very useful.
• Generating and evaluating the options
In generating options it is your creativity that is put to the test.
• Sometimes the options may be obvious, but you should look beyond the obvious.
• Once a set of options has been generated, you should short-list them and rank them by priority or by their probability of meeting your end objectives.
• As the decision maker, you should then evaluate them against the criteria and the possible implications in implementation. However, all this while, you should not lose track of the main objective of what the situation should be.
• Your next job is to present the evaluation. Make sure that it is structured by criteria or by options, depending upon which structure is easier to understand.
• Making a decision
Your recommendations would naturally flow out of the evaluation of the options, provided that your thinking process so far has been logical. Make sure that the decision is an adequate response to the problem.
• Drawing up an action plan
Action steps and their consequences should be visualized to avoid being caught unaware. Be clear about WHO does WHAT, WHEN, WHERE and HOW, for even the best analysis can go to waste if attention is not paid to the action plan.
• Working out a contingency plan
Managers thrive on optimism in getting things done. Yet, if something can go wrong, it is likely to go wrong. You should therefore be ready with parachutes to bail you out. Your contingency plan must emerge from the action plan you have already prepared. There is a need to think of how to achieve the second-best objective if the first one is not feasible.
• Conclusion
A good decision report should not only be structured sequentially but also reflect comprehensively your iterative thinking process as the decision maker.

Research Reports
As you would all know, research reports contribute to the growth of the subject literature. They pave the way for new information, significant hypotheses, and innovative and rigorous methods of research and measurement. Students, while preparing them you should broadly follow this pattern:
• Undertake a literature survey to find gaps in knowledge
• Next, clearly identify the nature and scope of the study, the hypothesis to be tested, and the significance and utility of the study
• The methodology for collecting data, conducting the experiment and analyzing the data should follow
• Then lay out the description and analysis of the experiment and data
• Try to identify your findings after that
• Come to a conclusion
• Draw up your recommendations
• Plug in suggestions for further research
• End your report with back-up evidence and data

Steps of Report Writing
Preparing the Draft
Preparation of reports is time consuming and expensive. Therefore, while writing your report you should ensure that it is very sharply focused in purpose, content and readership. To control the final outcome of your product – whether it is a research report, a committee/consulting/administrative report or a student report – I advise that you precede it with a proposal/draft and its acceptance or modification, and with periodic interim reports and their acceptance or modification by your sponsor.

We can split the writing process into stages:
[Flow chart: Getting in the Mood -> Writing the First Draft -> Revising, Revising, Revising -> Finishing]
Your proposal should provide information on the following items:
• Descriptive title of your study
• Your name as the author and your background
• Nature of your study
• Problem to be examined
• Need for the study
• Background information available
• Scope of the study
• To whom it will be useful
• Hypothesis, if any, to be tested
• Data
• Sources
• Collection procedure
• Methodology for analysis
• Equipment and facilities required
• Schedule – target dates for completing
• Library research
• Primary research
• Data analysis
• Outline of the report
• First draft
• Final draft
• Likely product or tentative outline
• Bibliography

Reviewing the Draft
To err is human. Therefore, after you have prepared your draft report, it should be thoroughly reviewed and edited before the final report is submitted. Let us now try to make a checklist that will help you in reviewing the draft:
• Your purpose as the author?
• Reader's profile?
• Content?
• Language and tone?
• Length?
• Appearance?

Author's Purpose
The lack of clarity and explicitness in the communication process leads to two major problems:
• Confusion in determining the mix of content, language and tone
• Misinterpretation of the message
Therefore, try to use a simple, easy-to-read style and presentation that will help your reader to understand the content easily.

Reader's Profile
Readership may consist of one or more persons or groups. You would therefore need to check whether all of them are on the same wavelength. If not, common interest areas will need to be segregated from the special interest areas. Then you will need to decide on the types and parts of the report that can satisfy the various reader groups. The major discriminating features of the reader's profile are culture, religion, ideologies, age, education and economic background.

Content
Please pay attention to the content's focus, its organization, and the accuracy of facts and logic of arguments.
• You should clarify the focus right in the first few paragraphs to attract the reader's attention and hold it.
• If any material is added or deleted in the text, recheck the focus to see whether you need to make any changes in the foundation.
• Keep in mind that you may lose credibility if you fail to check the accuracy of the facts, for a reader can easily test the internal consistency of the report by comparing information across pages and sections.
• Not all the data that is required to make the report may be available. Sometimes you may need to make assumptions to fill the gaps.
• What is good in one situation may not hold for another. Therefore, please list and arrange the elements and the actors of a situation to understand its dynamics.

Language and Tone
Since the purpose of communication is to make the reader understand the message, use vocabulary and sentence structure which the reader understands. Abstract phrases are difficult to comprehend, while concrete phrases are easy to understand. Finally, the tone of the language also matters. It can make the reader receive, ignore or reject the message.

Length
This is a matter that needs to be judged by you as the author, keeping in mind the purpose, the subject and the reader's interest. Usually, the shorter the content, the more attractive it is to the reader. However, it should not be so brief as to miss the essential points and linkages in the flow of arguments and force the reader to ask for more information.
Let us now try to work out a few tips to save words. Can you think of any?

• Cut out repetitions, unless they are needed to sharpen the message
• Take out redundancies
• Use the active voice
• Use shorter and more direct verbs
You have done quite a good job of this. Can you also give me some examples for the above? Hey! That's nice. You've covered most of the tips. I'll just add a few more to complete the list.
• Eliminate weighty expressions
• Make adjectives concrete
• Use abbreviations which are more familiar than their expanded form

Appearance
Looks matter! Don't you all agree with this? This therefore also holds true for your report. The novelty of presentation is as important as the originality of ideas. Both are products of creativity. Presentation attracts readers and content holds their attention. Hence pay complete attention to both the product and its packaging.
[Figure: "Style is the way you communicate the content to the audience" (Peterson, 1987) – style shown as a combination of structure, language and illustration applied to the words of the report.]

Proof Reading
Whoever proofreads your report – you or another person – should have the accuracy to pinpoint all the mistakes, the clarity to give instructions to the printer, and the speed to meet the printer's deadline.
• Make sure that you indicate correction marks at two places:
• within the line where the correction is to be carried out, and
• in the margin against the corresponding line, giving the instruction
• Please, never give instructions at the place of correction
• You should mark the proof preferably with a red ball point
• To catch as many errors as possible, read it over and over again
• One last point: always remember that proofs are meant to be corrected, not edited

Final Printing
Phew! At last your job is almost over. Once you have thoroughly proofread your report, you should:
• Return it to the printer according to the agreed schedule
• Also return the manuscript along with it
• Upon printing, your final document is ready for reference

Format of a Report
No matter which category your report falls into, when you make one, make sure that it contains each of the following parts:
• A cover and title page
• Introductory pages
• Foreword
• Preface
• Acknowledgement
• Table of contents
• List of tables and illustrations
• Summary
• Text
• Headings
• Quotations
• Footnotes
• Exhibits
• Reference section
• Appendices
• Bibliography
• Glossary (if required)
We will now discuss each of these at length.

Cover and the Title Page
I am sure you would all know what details this page needs to contain. However, let's try to list them down again:
• Title of the subject or project
• Presented to whom
• On what date
• For what purpose
• Written by whom
If there is any restriction on the circulation of the report that you have made, you should indicate it on the top right corner of the cover and title page.

Sample
For Official Use Only
Working Capital Requirements
Of
Xyz Private Limited
Presented To
Managing Director
Xyz Private Limited
On
November 26, 2003
By
Ms. ABC
And
Ms. DEF
oughly proof read your report, you should:

Rai Business School
New Delhi Campus
………………………………………………………………………………………………

Introductory Pages
Every time you open any book, the introduction is the first thing that you will come across. While writing such pages for your report, number them in lower-case Roman numerals (i, ii, iii…); use Arabic numerals (1, 2, 3…) from the first page of the introduction onwards. Make sure that your introductory pages contain the following.
Foreword
This is not numbered but is counted among the introductory pages. It would be written by someone other than you, usually an authority on the subject or the sponsor of the research or the book. At the end of the foreword, the writer's name appears on the right side; on the left come the address, place of writing and date, which are put in italics.
Preface
It has to be written by you to indicate how the subject was chosen, its importance and need, and the focus of the book's/research paper's content, purpose and audience. Your name will appear at the end of the preface on the right side. On the left would be your address, place of writing and date, which you should put in italics.
Acknowledgement
As a courtesy, you should give due credit to anyone else whose efforts were instrumental in your writing the report. Such recognition forms the acknowledgement. If it is short, I suggest that you treat it as a part of the preface; if not, you may put it in a separate section. At the end of the acknowledgement, obviously, only your name would appear, on the right side and in italics.
Table of Contents
The contents sheet of your report acts as both a summary of and a guide to the various segments of your report. You should ensure that it covers all the essential parts of the book/report and yet is brief enough to be clear and attractive. It should list the sections/chapters/main heads and give their corresponding page numbers. Have a look at the sample that I have prepared below for better understanding.
………………………………………………………………………………………………
SAMPLE
CONTENTS
Foreword                        v
Preface                         vii
Acknowledgement                 ix
SECTION A                       1
1. Chapter Title                3
   A. Center Head               10
      i. Center Side Head       17
SECTION B                       25
SECTION C                       30
Summary and Conclusions         32
APPENDICES                      37
   a. Questionnaire             39
   b. Interview                 45
BIBLIOGRAPHY                    51
GLOSSARY                        55
………………………………………………………………………………………………

List of Tables and Illustrations
After your table of contents, you should give a list that mentions the details and page numbers of the various tables and illustrations that you may have used to support your report. Each list should start on a separate page. You should number the tables and illustrations continuously, in serial order, throughout the book/report, usually in Arabic numerals or decimal form.
Summary
The executive summary that you write in the initial pages is usually of great help to a busy reader. The summary should highlight the following essential information:
• What is the study about?
• What is the extent and limitation of the coverage?
• What is the significance and need for the study?
• What kind of data has been used?
• What research methodology has been used?
• What are the findings and conclusions?
• What are the incidental findings, if any?
• How can the conclusions be used and by whom?
• What are the recommendations and the suggested action plan?
Text
The subject matter of the text of your report should be divided into the following.
Headings
This, I am sure, is very simple for you to understand. You all would have been using this classification right from your secondary school days. Just as a refresher, I am mentioning the classifications once again:
• Center head
• Center sub-head
• Side head
• Paragraph head
Which combination of headings you would use depends on the number of classifications or divisions that the chapters of your report have.
Quotations
There may be times when you feel that you need to reproduce a portion of the work of another author to add value to your own report. This is what I mean by a quotation. Quotation marks must necessarily be used for:
• a directly quoted passage or word
• a word or phrase to be emphasized
• titles of articles
While quoting, be very careful that all quotations correspond exactly to the original in wording, spelling and punctuation. You may allow quotations of up to three typewritten lines to run into the text. Direct quotations over this limit have to be set in indented paragraphs.

Footnotes
When you insert quotations, it is important that you indicate the source of the reference. This is what you may do using footnotes. Also, there may be times when you might want to provide an explanation that is not important enough to be included in the text; again, the footnote would be of use to you for this. Please ensure that explanatory footnotes are put at the bottom of the page and are linked to the text with a footnote number. But you must incorporate source references within the text and supplement them with a bibliographical note at the end of the chapter, book or report. Footnotes help the reader to check the accuracy of the interpretation of the source by going to the source if they want to. They are also a form of acknowledgement of your indebtedness to the source, and they help the reader distinguish between your contribution as the author of the report and the work of others.
Exhibits
Writing just theory about any subject matter would never be sufficient. You will need to supplement it with exhibits for better and faster understanding by the reader. I am sure you would all agree that such pictorial representations also help in ensuring a longer retention period. They may take the form of either a table or an illustration.
• Table: Before you introduce a table, make sure that it is referred to in the text. It is meant only to expand, clarify or give visual explanation, rather than stand by itself. The text should highlight the table's focus and conclusions.
…………………………………………………………………………………
SAMPLE
Table 10: Mean Information Test Scores of Employees Receiving Communication through Different Media (from Dalhe, 11, p. 245)

Medium                       No. of Employees    Mean Test Score*
Combined Oral and Written    102                 7.7
Oral Only                    94                  6.17
Written Only                 109                 4.91
Bulletin Board               115                 3.72
Grapevine Only               108                 3.56

* All differences are significant at the 5% level or better, except that between the last two means in the column.
…………………………………………………………………………………
• Illustrations: They cover charts, graphs, diagrams and maps. Most of the instructions that I have listed for tables hold good for illustrations as well.
Reference Section
This section follows the text. First write out the appendices section, then the bibliography and finally the glossary. Students, please ensure that a divider page, on which only the word APPENDICES, BIBLIOGRAPHY or GLOSSARY appears in all capital letters, separates each section.
Appendices
They help you, as the author of the report, to authenticate the thesis and help your reader to check the data. Let us now try to list the material that you would usually put in the appendices:
• Original data
• Long tables
• Long quotations
• Supportive legal decisions, laws and documents
• Illustrative material
• Extensive computations
• Questionnaires and letters
• Schedules or forms that you might have used in collecting data
• Case studies
• Transcripts of interviews
Bibliography
It follows the appendices; make sure that it is listed as a major section in your table of contents. It should contain the source of every reference cited in the footnotes and any other relevant work that you have consulted. This gives the reader an idea of the literature available on the subject that has influenced or aided your study. If you look up the bibliographical section of any book or report, you will see that the following information is given for each reference:
• Name of the author
• Title of the work
• Place of publication
• Name of the publisher
• Date of publication
• Number of pages
Glossary
Finally we come to a short dictionary giving definitions and examples of terms and phrases which are technical, used by you in a special connotation, unfamiliar to the reader, or foreign to the language in which the book is written. I hope you know that this too is listed as a major section in the table of contents.
I hope you enjoyed today's session. It was something very general and away from the usual theory. However, it was necessary to formally list down the steps of report writing because, as we mentioned, these reports are very critical in decision-making – whether in academics (for performance review), in research (as a base for further reference) or in an organization (to decide the future course of action).
Before we call it a day, let's just look back and recapitulate all that we covered in the class today. In this lesson we have discussed the steps involved in the preparation of a proposal for a report. I explained three categories of reports, namely information reports, decision reports and research reports. The steps involved in writing reports were also highlighted. I am summarizing the same with the following flow chart.
[Flow chart – Report writing: glimpsing the process. Strategic thinking and action plan (what? why? who? how? when?) → planning the report → gathering information → analysing information → writing the report / putting something on paper → review and teacher's feedback → final draft → proof reading and submission → Eureka!]

We further divided a report into various parts – title page, introductory pages, text and reference section. I hope you have all understood each of these heads. We concluded the unit by explaining that before you submit your final report, it should be thoroughly reviewed and edited.

Write-up
So now that you've completed the research project, what do you do? In fact, this final stage – writing up your research – may be one of the most difficult. Developing a good, effective and concise report is an art form in itself. And, in many research projects, you will need to write multiple reports that present the results at different levels of detail for different audiences. There are several general considerations to keep in mind when generating a report.

The Audience
Who is going to read the report? Reports will differ considerably depending on whether the audience will want or require technical detail, whether they are looking for a summary of results, or whether they are about to examine your research in a Ph.D. exam. I believe that every research project has at least one major "story" in it.

The Story
Sometimes the story centers on a specific research finding. Sometimes it is based on a methodological problem or challenge. When you write your report, you should attempt to tell the "story" to your reader. Even in very formal journal articles, where you will be required to be concise and detailed at the same time, a good "storyline" can help make an otherwise very dull report interesting to the reader.
The hardest part of telling the story in your research is finding the story in the first place. Usually, when you come to writing up your research you have been steeped in the details for weeks or months (and sometimes even for years). You've been worrying about sampling response, struggling with operationalizing your measures, dealing with the details of design, and wrestling with the data analysis. You're a bit like the ostrich that has its head in the sand. To find the story in your research, you have to pull your head out of the sand and look at the big picture. You have to try to view your research from your audience's perspective. You may have to let go of some of the details that you obsessed so much about and leave them out of the write-up, or bury them in technical appendices or tables.

Formatting Considerations
Are you writing a research report that you will submit for publication in a journal? If so, you should be aware that every journal requires that articles follow specific formatting guidelines. Thinking of writing a book? Again, every publisher will require specific formatting. Writing a term paper? Most faculties will require that you follow specific guidelines. Doing your thesis or dissertation? Every university I know of has very strict policies about formatting and style. There are legendary stories that circulate among graduate students about the dissertation that was rejected because the page margins were a quarter inch off or the figures weren't labeled correctly.
To illustrate what a set of research report specifications might include, I present in this section general guidelines for the formatting of a research write-up for a class term paper. These guidelines are very similar to the types of specifications you might be required to follow for a journal article. However, you need to check the specific formatting guidelines for the report you are writing – the ones presented here are likely to differ in some ways from any other guidelines that may be required in other contexts.
I've also included a sample research paper write-up that illustrates these guidelines. This sample paper is for a "make-believe" research project, but it illustrates how a final research report might look using the guidelines given here.

Key Elements
Introduction
Statement of the problem
The general problem area is stated clearly and unambiguously. The importance and significance of the problem area is discussed.
Statement of causal relationship
The cause-effect relationship to be studied is stated clearly and is sensibly related to the problem area.
Statement of constructs
Each key construct in the research/evaluation project is explained (minimally, both the cause and the effect). The explanations are readily understandable (i.e., jargon-free) to an intelligent reader.
Literature citations and review
The literature cited is from reputable and appropriate sources (e.g., professional journals and books, not Time, Newsweek, etc.) and you have a minimum of five references. The literature is condensed in an intelligent fashion, with only the most relevant information included. Citations are in the correct format (see APA format sheets).
Statement of hypothesis
The hypothesis (or hypotheses) is clearly stated and is specific about what is predicted. The relationship of the hypothesis to
both the problem statement and the literature review is readily understood from reading the text.

Methods
Sample section
Sampling procedure specifications
The procedure for selecting units (e.g., subjects, records) for the study is described and is appropriate. The author states which sampling method is used and why. The population and sampling frame are described. In an evaluation, the program participants are frequently self-selected (i.e., volunteers) and, if so, should be described as such.
Sample description
The sample is described accurately and is appropriate. Problems in contacting and measuring the sample are anticipated.
External validity considerations
Generalizability from the sample to the sampling frame and population is considered.

Measurement Section
Measures
Each outcome measurement construct is described briefly (a minimum of two outcome constructs is required). For each construct, the measure or measures are described briefly and an appropriate citation and reference is included (unless you created the measure). You describe briefly any measure you constructed yourself and provide the entire measure in an Appendix. The measures used are relevant to the hypotheses of the study and are included in those hypotheses. Wherever possible, multiple measures of the same construct are used.
Construction of measures
For questionnaires, tests and interviews: questions are clearly worded, specific, appropriate for the population, and follow in a logical fashion. The standards for good questions are followed. For archival data: original data collection procedures are adequately described and indices (i.e., combinations of individual measures) are constructed correctly. For scales, you must describe briefly which scaling procedure you used and how you implemented it. For qualitative measures, the procedures for collecting the measures are described in detail.
Reliability and validity
You must address both the reliability and validity of all of your measures. For reliability, you must specify what estimation procedure(s) you used. For validity, you must explain how you assessed construct validity; wherever possible, you should minimally address both convergent and discriminant validity. The procedures used to examine reliability and validity are appropriate for the measures.
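The checklist above asks you to name the reliability estimation procedure you used, without prescribing one. As an illustration only, here is a minimal Python sketch of one widely used estimate, Cronbach's alpha, for a multi-item scale; the scores below are hypothetical.

```python
# Minimal sketch of Cronbach's alpha for a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: five respondents answering a four-item attitude scale.
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 3, 4],
                   [1, 2, 2, 1]])
print(round(cronbach_alpha(scores), 2))
```

Whatever procedure you choose, the point of the checklist item is simply that it must be named and justified in the report.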
Design and Procedures Section
Design
The design is clearly presented in both notational and text form. The design is appropriate for the problem and addresses the hypothesis.
Internal validity
Threats to internal validity and how they are addressed by the design are discussed. Any threats to internal validity that are not well controlled are also considered.
Description of procedures
An overview of how the study will be conducted is included. The sequence of events is described and is appropriate to the design. Sufficient information is included so that a reader could replicate the essential features of the study.

Results
Statement of results
The results are stated concisely and are plausible for the research described.
Tables
The table(s) is correctly formatted and accurately and concisely presents part of the analysis.
Figures
The figure(s) is clearly designed and accurately describes a relevant aspect of the results.

Conclusions, Abstract and Reference Sections
Implications of the study
Assuming the expected results are obtained, the implications of these results are discussed. The author mentions briefly any remaining problems that are anticipated in the study.
Abstract
The Abstract is 125 words or less and presents a concise picture of the proposed research. Major constructs and hypotheses are included. The Abstract is the first section of the paper. See the format sheet for more details.
References
All citations are included in the correct format and are appropriate for the study described.

Stylistic Elements
Professional Writing
First-person and sex-stereotyped forms are avoided. Material is presented in an unbiased and unemotional (e.g., no "feelings" about things), but not necessarily uninteresting, fashion.
Parallel Construction
Tense is kept parallel within and between sentences (as appropriate).
Sentence Structure
Sentence structure and punctuation are correct. Incomplete and run-on sentences are avoided.
Spelling and Word Usage
Spelling and use of words are appropriate. Words are capitalized and abbreviated correctly.
General Style
The document is neatly produced and reads well. The format for the document has been correctly followed.
The Formatting
Booklet Format and Printing Procedures
Print the survey booklet on 8½ x 11 inch paper. Place no questions on the front or back pages. The survey pages should be printed using a high-quality laser printer on white or off-white paper.

Ordering the Questions
Questions are ordered according to social usefulness or importance: those which people are most likely to see as useful come first and those seen as least useful come last. Group questions that are similar in content. Establish a flow of responding from one question to the next. Questions in any topic area that are most likely to be objectionable to respondents should be positioned after the less objectionable ones. Demographic questions are usually placed at the beginning or at the end.
The first question is the most important. It should be clearly related to the survey topic and should be easy to answer. It should convey a sense of neutrality, and it should be clearly applicable and interesting to everyone.

Formatting the Pages
Use lower-case letters for questions and upper case for answers. Identify answer categories on the left with numbers; this allows pre-coding of responses. Establish a vertical flow. The purpose of vertical flow is to prevent inadvertent omissions, something that occurs often when respondents are required to move back and forth across a page with their answers. Vertical flow also prevents the common error of checking the space on the wrong side of the answers when answer categories are placed beside one another. Also, vertical flow enhances feelings of accomplishment.
The need to provide clear directions is extremely important. Use the same marking procedure throughout the survey. Directions for answering are always distinguished from the questions by putting them in parentheses.

Items in a Series
Repeat the scale for each item. Ask one question at a time; the respondent should only be asked to do one thing at a time. The problem with asking two questions at once is that each request interferes with the other.

Use Words for Answer Choices
Show a connection between items and answers. Use a multiple-column technique to conserve space. Show how to skip screening questions. Make questions fit each page. Use transitions for continuity – for example, when a new line of questioning starts, when a new page starts, or to break up the monotony of a long series of questions on a single topic. Transitions must also fit the situation. It is also useful to distinguish between major and minor transitions.

Designing the Covers
The front cover receives the greatest attention and contains:
• A study title,
• A graphic illustration,
• Any needed directions, and
• The name and address of the study sponsor.
The title should sound interesting, and subtitles are often useful. Use graphic illustrations. The return address does not include the name of the researcher; the goal is to have the respondent view the researcher as an intermediary between the respondent and the accomplishment of the study. The back cover should consist of an invitation to make additional comments, a thank-you and plenty of white space.

Why do a Pilot-test?
The pilot-test is useful for demonstrating instrument reliability, the practicality of procedures, the availability of volunteers, the variability of observed events as a basis for power tests, participants' capabilities and the investigator's skills. The pilot test is a good way to determine the necessary sample size for experimental designs: from the findings of the pilot test, the researcher can estimate the expected difference in group means as well as the error variance. Even a modest pilot test conducted informally can reveal flaws in the research design or methodology beforehand.
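As a hedged illustration of the sample-size use of the pilot test just described, the sketch below turns pilot group means and a pooled standard deviation into an effect size and solves for the per-group sample size of the main study. It assumes the statsmodels package is available; all figures are hypothetical, not taken from the text.

```python
# Sketch: use pilot-test estimates to size the groups of the main experiment.
from statsmodels.stats.power import TTestIndPower

pilot_mean_a, pilot_mean_b = 7.4, 6.1   # pilot group means (hypothetical)
pooled_sd = 2.8                          # pilot pooled standard deviation (hypothetical)
effect_size = (pilot_mean_a - pilot_mean_b) / pooled_sd  # standardised difference (Cohen's d)

# Solve for the number of cases per group needed for 80% power at alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative='two-sided')
print(f"Estimated d = {effect_size:.2f}; need about {n_per_group:.0f} cases per group")
```

The same logic applies whatever power routine you use: the pilot supplies the expected difference and the error variance, and the power calculation supplies the sample size.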
Any surveys that have not been used in the past, or that have been modified in any way, should always be pilot-tested. Any procedures that require complex instructions should be pilot-tested. Any methodology requiring time estimates should be pilot-tested.
Pilot testing allows you to answer the following questions:
• Is each of the questions valid?
• Are all the words understood?
• Do all respondents interpret questions similarly?
• Does each closed-response question have an answer that applies to each respondent?
• Does the questionnaire create a positive impression, one that motivates people to answer it?
• Are questions answered correctly?
• Does any aspect of the questionnaire suggest bias on the part of the researcher?

Selecting the Pilot Test Sample
The sample for the pilot test should be as close as possible to the actual sample that will be drawn for the main project. When this is not possible, you should try to get a sample with similar characteristics. Depending upon the availability of people, you may need to save as many participants as you can for the main survey, in which case you don't want to include them in the pilot test.
Some researchers will do a pilot test on a subset of their sample and then include them as part of the main sample. That is like mixing apples and oranges. If you make any change whatsoever to your study as a consequence of the pilot-test, then the participants in the pilot-test will have experienced something different from those in the main study. Additionally, one of the purposes of doing a pilot test is to debrief the participants after the study by asking questions about the methods, instruments, and procedures.

Information to be Collected
The pilot test should be run exactly as if it were the actual study. The exception here is that you will be collecting data on how
long procedures take, what actions facilitate or inhibit the operation of the study, whether instructions are understood, and whether the data you obtain are in the form expected.
It may be necessary to have more than one pilot test, especially in situations where instructional materials or methods have been developed. In the case of instructional materials or methods, you would do a formative evaluation of the materials and methods. Unlike a pilot test, where the researcher may not interact with participants, you would be asking questions of the participants as they read, watch, or listen to the instruction and when they are quizzed on what they have learned.

Participant Debriefing
If your study involves questionnaires or interviews with people, you should have a debriefing session at the completion of the pilot test. Ask the participants if they understood all of the instructions, if they had any particular problem with any of the questions asked, if they understood the intent of the study, and if they have any recommendations on how to improve the study.

Activities
Role Playing
1. You are the research director for a major bank. You are to recruit a junior analyst who would be responsible for collecting and analyzing secondary data (data already collected by other agencies that are relevant to your operations). With a fellow student playing the role of an applicant for this position, conduct the interview. Does this applicant have the necessary background and skills? Reverse the roles and repeat the exercise.
2. You are a project director working for a major research supplier. You have just received a telephone call from an irate respondent who believes that an interviewer has violated her privacy by calling at an inconvenient time. The respondent expresses several ethical concerns. Ask a fellow student to play the role of this respondent. Address the respondent's concerns and pacify her.

Presentations
You have recently read a book and your friends want you to make a brief presentation about it. How would you go about the preparation and handling of audio-visual materials?

Fieldwork
1. Using your local newspaper and national newspapers such as USA Today, the Wall Street Journal, or the New York Times, compile a list of career opportunities in marketing research.
2. Interview someone who works for a marketing research supplier. What is this person's opinion about career opportunities in marketing research? Write a report of your interview.
3. Interview someone who works in the marketing research department of a major corporation. What is this person's opinion about career opportunities available in marketing research? Write a report of your interview.
4. Take a report of some organization and check whether the problem-solving approach or the descriptive approach has been used. If you were to rewrite the report, what would be your outline and what steps would you take to improve the report?

Group Discussion
As a small group of four or five, discuss the following issues.
1. What type of institutional structure is best for a marketing research department in a large business firm?
2. What is the ideal educational background for someone seeking a career in marketing research? Is it possible to acquire such a background?
3. Can ethical standards be enforced in marketing research? If so, how?

Self Assessment Exercises
1. Take a report of any organization and check whether the problem-solving or descriptive approach has been used. If you were to rewrite the report, what would be your contents outline and what steps would you follow to improve the report?
2. Describe an incident that has recently occurred and check whether your description answers all the conditions indicated under descriptive reporting.
3. Prepare a sample title / cover page.
4. Pick up a report that you have recently prepared. Examine whether the introductory pages contain all the sections indicated in this unit. If not, add these sections if they are necessary for the report.
5. Examine the appendices to any report. Are all of them essential for understanding the theme of the report? Can they be pruned?
6. Edit a report using the copy-reading and proof-reading symbols.

References and Further Readings
• Gallagher, J. William, Report Writing for Management, Addison-Wesley.
• Golen, P. Stevan, Report Writing for Business and Industry, Business Communication Service.
• Sharma, R.C. and Krishna Mohan, Business Correspondence and Report Writing, Tata McGraw-Hill Book Company.
• Course Design MS 95, Unit IV – "Report Writing and Presentation", IGNOU.
• Wright, C., Report Writing, Witherby & Co., England.
• Kepner, H. Charles and Benjamin B. Tregoe, The Rational Manager, McGraw-Hill Book Company.
• Abrams, Mark, Social Surveys and Social Action, London: William Heinemann Ltd., 1951.
• Anderson, R. and Zelditch, Morris Jr., A Basic Course in Statistics with Sociological Application, New York: Holt, Rinehart and Winston Inc., 1975.
• Best, John, Research in Education, New Delhi: Prentice Hall of India Pvt. Ltd., 1963.
• Blalock, Herbert M. Jr. and Blalock, Ann B., Methodology in Social Research, New York: McGraw-Hill Book Company, 1968.
• Borg, Walter R., Educational Research: An Introduction, New York: David McKay Company, 1976.
UNIT II: TECHNIQUES OF DATA COLLECTION AND SAMPLING
LESSON 8: TECHNIQUES OF DATA COLLECTION
Students, so far we have studied the various research processes and the writing of the report; the write-up part we have covered in detail. Now we will discuss each step of research in detail. After the research problem is framed, along with the hypothesis, the next step is the collection of the required information and data.
In this class we will focus especially on the collection of data for market research. The techniques and sources of data we will cover are:
Sources of Market Data
• Retail Audit
• Consumer Panel
• TV Meters
• Diary Method
• Internet as a Source of Data
• Secondary Data
Sources of Secondary Data
• RBI
• Economic Survey
• CSO
• Investment Data
• Foreign Trade
• Survey Data
• Types of Survey Techniques
Now let us discuss these in detail.

Retail Audit
Retail audit is a common term in marketing research.
During the 1990s, it became increasingly important to develop a strong brand image. It is not just the product that needs to be sold, but also the brand, charged with values such as ethics, quality, feelings and identity that put over a positive message to consumers.
Today, many companies are moving their production from their home countries to nations where manufacturing costs are considerably lower. However, the role of the company extends beyond just financial issues; every organisation has a social responsibility. Consumer and pressure groups are increasingly concerned about the social conditions to which workers in developed and developing countries are subjected. They expect companies to accept their responsibilities and to conduct their activities in accordance with the ethical and moral values accepted in the country in which their product is sold. Forced labour, child labour, low pay, poor conditions and dangerous working environments are all areas of serious concern to the reputable retailer or brand owner.
The audit process includes an opening meeting, a factory tour, a document review, interviews with employees and a closing meeting.
The key parameters examined when carrying out retail audits are:
• In-store availability of product/brand;
• Types of outlets (by owner, location, specialty);
• Sales volume cross-tabbed with type and location;
• Pricing of product/brand cross-tabbed with type/location of outlet;
• Display value;
• Customer demand;
• Resulting market share and rank/position of product/brand.
It must be noted that there are no readily available retail universe data, so the design of a retail audit is critical to the success of the project. The data obtained from a retail audit are useful for:
• Identification of market opportunities
• Trend analyses and forecasting
• Studying market structure
• Prioritisation of markets
• Conducting analyses of competitors
• Product portfolio analysis
• Understanding changes in distribution
• Pricing trend analyses

Product Categories Covered
A retail audit of this kind covers more than 100 product categories, including:
• Baby products (oil, powder, diapers, milk food, weaning food)
• Beverages (coffee, soup mix, squash and juice, syrup, tea, concentrated drinks)
• Contraceptives
• Cosmetics (colognes, deodorant, perfume, lipstick, nail polish)
• Environmental hygiene (air freshener, floor cleaner, floor polish, etc.)
• Fabric care (fabric bleach, washing powder, liquid, whitener, soap, detergent)
• Food products (butter, margarine, salt, packaged food, etc.)
• General toiletries (mouthwash, talcum powder, toilet soap, toothpaste, toothbrush, sanitary napkins)
• Hair care (conditioner, dye, oil, shampoo)
• Health products and OTC (analgesics, digestives, medicated dressings, etc.)
• Liquor (beer, brandy, gin, rum, vodka, whisky, wine, liquor)
• Milk products (milk, condensed milk, milk powder, cheese)
• Semi-durable products (batteries, bulbs, lubricants, paint, tube lights, etc.)
• Shaving products (after-shaves, blades, razors, etc.)
• Skin care (cream, cold cream, lotion, face-wash, etc.)
• Snack foods and soft drinks (biscuits, chocolates, confectionery, etc.)

Measures
• Market size in terms of units sold, volume and value
• Market share by volume and value
• Numeric distribution
• Weighted distribution
• Share among handlers
• Out-of-stock retailers
• Per-dealer off-take
• Purchases by retailers
• Stock levels with retailers
• Stock turnover ratio
• Trends for market, company, brand and SKU – for size and shares
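To make two of these measures concrete, here is a small illustrative sketch (not part of any audit supplier's own tooling) computing numeric distribution, weighted distribution and volume share for one brand from hypothetical store-level audit records.

```python
# Sketch: standard retail-audit distribution and share measures from store records.
import pandas as pd

audit = pd.DataFrame({
    "store":          ["S1", "S2", "S3", "S4", "S5"],
    "stocks_brand":   [True, True, False, True, False],
    "brand_sales":    [120,   80,    0,   40,    0],   # units of the brand sold
    "category_sales": [400,  300,  250,  150,  200],   # units sold in the whole category
})

numeric_dist  = audit["stocks_brand"].mean() * 100           # % of stores stocking the brand
weighted_dist = (audit.loc[audit["stocks_brand"], "category_sales"].sum()
                 / audit["category_sales"].sum() * 100)      # % of category volume in those stores
share_volume  = audit["brand_sales"].sum() / audit["category_sales"].sum() * 100

print(f"Numeric distribution : {numeric_dist:.0f}%")
print(f"Weighted distribution: {weighted_dist:.1f}%")
print(f"Market share (volume): {share_volume:.1f}%")
```

The gap between numeric and weighted distribution shows whether the brand is present in the outlets that matter for category volume, which is one reason both measures are reported.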
The following steps can be followed. We never assume that our clients all mean the same thing by "retail audit". We always strive to define exactly the specific knowledge needs, and design the approach, methodology and sample accordingly. Our experience has taught us that there can be no long-term representative samples: each new project requires a revision of the existing sample size and structure in order to achieve credible results.
1. Draft the research plan and schedule, indicating:
   • Scope and goals;
   • Optimal sample size, methods of collecting quantitative and qualitative data, etc.;
   • Deadlines;
   • Structure and format of reports.
3. Fine-tuning and approval of the research approach.
4. Design and production of customized research tools.
5. Launch and management of field research. As a rule, we use the following field research methods:
   • Observation;
   • Face-to-face POS interviews;
   • Mystery shopping.
   Note: do not expect data on opening stock/deliveries/closing stock, bar-code data (scanning databases), audit code levels, etc. They are mostly non-existent.
6. Data collection.
7. Usually the sources can be broken down into three basic groups:
   a. "White area": from official statistical sources to fully legal retail;
   b. "Grey area": medium and small wholesale, and kiosks (partial reporting) – original, but locally unauthorized, product;
   c. "Black area": private entrepreneurs operating without a licence, ad-hoc open-air markets, van sales, babushkas, etc.
8. Analysis and report writing. After verification, the data are punched in (software and formats to be determined based on client needs), structured, analyzed, and presented as text, graphics, customized databases, or a combination of these.

Consumer Panel
"There's nothing (consumer) panel data can tell us that we don't already know from scanner data."
Consumer panels are a unique tool that can enable a clever researcher to examine dynamic longitudinal changes in behaviors, attitudes, and perceptions. Consumer panels can also be an overly costly, excessive generator of unused data.

What are Consumer Panels?
There are two basic kinds of consumer panels.
In the first kind, respondents report essentially the same information repeatedly over some period of time. The chief examples of this kind are the syndicated purchase panels using store and home data, termed continuous panels.
The second kind of panel consists of samples of pre-screened respondents who report over time on a broad range of different topics, termed discontinuous access panels.
Both kinds of panels come in many different forms. Panel studies can involve data collection at widely different intervals, varying anywhere from a day to several years between waves of interviews. Panel operators are continuously faced with the decision about how often panel members should be contacted and asked to report: contacting the panel either too frequently or too infrequently may lead to reduced cooperation.

The Benefits of Continuous Consumer Panels
1. The effect of a special offer can be measured through a before-and-after design using a panel approach. Thus, a sample of families might be interviewed initially to gather information on their purchases of soft drinks, possibly over several weeks, to obtain a good idea of their "steady state" purchasing patterns. A special deal for a particular brand is then introduced, and the purchases of the same sample are monitored, perhaps every week for three months. In this way, sampling variation is minimized and both the short-term and long-term effects of the deal are obtained.
2. A static consumer panel of families with young children might be set up to monitor the acceptance of a new line of toys. In this case no experimental treatment is involved. Rather, information is obtained, say, every month on the toy
purchases of the families. In this way, data are compiled on the types of families that are buying any of the new toys, how soon the toys are purchased after they have been placed on the market, and how many of the toys are purchased by each family.
3. A dynamic consumer panel might be used to keep track of the purchases of frozen foods of one brand in relation to other brands. By obtaining such data every week for several years, very detailed information can be obtained on what sorts of families are purchasing each major brand and on the change in market shares of the different brands over time among different groups of consumers. Estimates can also be derived of the extent to which purchasers remain loyal to different brands.
4. A continuous consumer panel may be used to obtain more detailed and reliable information on different types of behavior. It has been demonstrated that data on consumer financial holdings are obtained much more reliably if this information is sought over a period of time, allowing the respondent to build up confidence in the validity and trustworthiness of the study. Similarly, information on medical care events is obtained much more accurately from panels than from one-time surveys.
5. A continuous consumer panel is the only means of obtaining information on a series of events extended through time. For example, reactions to the weekly episodes of a television program are best obtained by monitoring the viewing of the same family and at the same time getting their reactions to the different programs. In this way it becomes possible to measure changes in program acceptance and to relate attitudes and behavior at one time to viewing and attitudes toward earlier episodes.
6. Only through continuous consumer panels is it possible to monitor changes in the behavior of particular cohorts. For example, the purchase habits of teenagers might be monitored over a number of years to ascertain how these purchase habits change as the subjects move into a different stage of life. By monitoring the behavior of their peers at the same time, it becomes possible to distinguish effects due to history (i.e., changes in economic and social conditions) from effects due to the aging process. It is possible to use a series of demographically identical discontinuous access panels for the purposes of continuous tracking: demographically identical samples containing different panelists are selected at predetermined intervals across time, and these different groups of panelists are then used separately in the separate waves of the panel. Since the sample is not static, traditional static panel analytics, such as measures of trial and repeat and brand switching, are lost. What is gained, however, is a means of obtaining other insights at a lower cost than maintaining a continuous, full-time panel.

The Benefits of Discontinuous Consumer Access Panels
The benefits of discontinuous consumer access panels are primarily related to reductions in the cost and time required to obtain market research information. Although these kinds of panels are used in a wide variety of ways, three uses are especially common:
(1) Screening for special populations (especially rare special populations),
(2) Evaluation of new product concepts and formulations, and
(3) Marketing and advertising experimentation.
The following examples illustrate some uses of discontinuous consumer access panels:
1. A manufacturer of tennis racquets is considering alternative shapes for a new racquet that would make it easier to handle. Initially, sheets with pictures and a description of the new racquets might be sent by mail or e-mail to pre-screened samples of respondents who play tennis. Any one respondent would receive only one of the alternatives, but the manufacturer could determine which racquet was preferred from the different samples. Alternatively, respondents might receive pictures of two racquets, with the order of the pictures randomized, and be asked for their preference between the two. At a later stage, respondents might receive the actual racquets for use testing.
2. Instead of a new product, a marketer might be considering a new advertising campaign for an existing product, and might wish to choose between several alternatives that had been proposed by the advertising agency. Again, samples of each of the alternatives would be sent to relevant panel members for their evaluation. As above, they might be asked to evaluate a single advertisement or to choose from among multiple advertisements. The testing could also be done by the advertising agency before the recommendation is made to the manufacturer. Similarly, panel members could be asked to evaluate different designs or layouts for a web page or a brochure. In all cases, the objective is to screen different ideas or executions inexpensively by having a panel evaluate them singly or side-by-side. It is obvious that similar information could be obtained from one-time surveys, but with greater difficulty and at greater expense. Two reasons for using discontinuous panels are that they can provide greater relevance and better quality. They are more relevant because respondents can easily be screened on the basis of prior questions (e.g., pet owners, users of denture cream, recent car purchasers). They are often better quality because respondents are experienced and can easily be pre-qualified as panel members on the basis of the quality of their previous survey responses. In the next section we discuss problems with discontinuous consumer panels that sometimes make one-time surveys the better alternative.

Challenges of Panels
Essentially, a continuous consumer panel operation poses four problems:
(1) Gaining and maintaining cooperation,
(2) Information validity and reliability,
(3) Panel conditioning, and
(4) Record maintenance.

1. Gaining and Maintaining Cooperation
Even when panels are recruited by personal methods, the initial rate of cooperation can be as low as 50%, and it may be much lower if recruiting is done by mail, the Internet, or the World Wide Web. Those who cooperate are more likely to be better educated, in professional or clerical occupations, in middle or upper-middle
income levels, and in the younger and middle age brackets. As a result, a panel at the beginning of the operation may not be representative of the population from which it was selected.
This is only the beginning of the problem, however. Attrition can be substantial: about half the people who consent to participate may drop out after the first two or three rounds, especially if they are asked to keep extensive written records. As a result, a panel operation can become increasingly unrepresentative of the population from which it originally came, something that could be a problem even for a static panel. Of course, panel operators take steps to reduce this non-representativeness through a variety of methods such as selective recruiting and weighting the panel.
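One common form of the weighting mentioned above is cell weighting: each panel member in a demographic cell receives the ratio of the cell's population share to its panel share, so over-represented groups are weighted down and under-represented ones weighted up. The sketch below illustrates the idea with hypothetical population shares; it does not describe any particular operator's method.

```python
# Sketch: cell (post-stratification) weighting of a small panel by age group.
import pandas as pd

panel = pd.DataFrame({
    "member_id": range(1, 9),
    "age_group": ["18-34", "18-34", "18-34", "35-54", "35-54",
                  "35-54", "35-54", "55+"],
})
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}   # hypothetical

panel_share = panel["age_group"].value_counts(normalize=True)   # share of panel in each cell
panel["weight"] = panel["age_group"].map(
    lambda g: population_share[g] / panel_share[g])              # population share / panel share

print(panel)
print("Weighted shares:",
      (panel.groupby("age_group")["weight"].sum() / panel["weight"].sum()).to_dict())
```

Weighting restores the demographic profile, but, as the text goes on to argue, it cannot by itself repair biases in who agrees to cooperate in the first place.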
2. Information Validity and Reliability
This problem of sample representativeness is also the key problem for discontinuous consumer panels. Almost all of these panels are recruited by mail, with initial cooperation rates usually below 5%. As with continuous panels, there are also very high initial dropout rates. Operators of discontinuous panels make initial efforts to balance their samples for major demographic characteristics by selectively recruiting respondents from groups least likely to cooperate and by dropping from their panels respondents from groups that are over-represented. It is often claimed by operators of such panels that the response rate to an individual survey is 70 percent or higher, but this refers to respondents who had already previously agreed to participate. As with continuous panels, the data from individual surveys are weighted to further control for major demographic biases.
Sample representativeness is also a concern when recruiting on-line panels. Critics argue that because on-line panels necessitate computer literacy and the means to access the Internet, they are biased against low-income groups and technological laggards. These concerns about representativeness may be exacerbated depending on how the panel is recruited: surfers who inadvertently stumble on to a site, or who are attracted by the lure of a lottery, may be even less representative than those recruited through more deliberate or personal means. Yet, just as efforts are made to make off-line panels more representative, so are many of these same efforts being used to make on-line research more representative. It is important to realize that "purpose defines precision": the representativeness and precision required to determine which of six package designs is most appealing are different from (and perhaps less important than) those needed to estimate the impact of a price change on market share.
Independent of the population being sampled is the reliability of the information obtained from panel members. For discontinuous panels, these problems are identical to those of one-time surveys. Since many of the uses of the panel relate to attitudes and buying intentions, the only way of ultimately verifying the quality of responses is to observe marketplace results. The extensive and increasing use of discontinuous panels suggests that the responses obtained from these panels do provide information that is sufficiently accurate for making marketing decisions. For continuous panels, which more often measure behavior, shipment data can validate results, particularly at an aggregate level. Certainly the introduction of portable household scanner equipment and electronic meters for television viewing has increased validity, although even such equipment and meters do not prevent errors caused by respondents forgetting to use them. Many panels still rely on diaries. Although diaries significantly reduce reporting error as compared to recall, reporting errors do occur if panel members forget to make their entries, or attempt to recall and record earlier behavior at the end of the recording period instead of at the time that it occurred. This is especially a problem for behaviors that are infrequent and of low salience to respondents.

3. Panel Conditioning
The third major problem, which affects continuous panels but probably not discontinuous ones, is the danger of conditioning effects: the possibility that the behavior or attitudes of panel members will be influenced or contaminated by their participation in the panel. For example, respondents who keep diaries about visits to restaurants may become aware of the large amount of money they are spending on restaurant meals and either reduce the frequency of their visits to restaurants or switch to lower-cost ones. In a similar fashion, a family asked month after month about ownership of savings accounts may decide to open a savings account, even though they originally had no such intention.
Panel conditioning effects are erratic rather than pervasive: they exist in some sorts of studies, but do not seem to exist in others. As with panel mortality, methods exist for detecting and correcting such effects.

4. Record Maintenance
The fourth major problem of a continuous panel study is not so much methodological as of the researcher's own making. This is the need for some systematic means of keeping easily accessible records on the activities of panel members and on changes in the characteristics of these panel members over time. Since members of a panel are usually households or families, rather than single individuals, there is the problem in a long-term panel study of keeping track of changes in the composition of these households and changes in their characteristics. A family member may leave the household, another may be born or move into it, a household may be dissolved, or a new household may be formed. In addition, the employment status and other characteristics of the individual members will change over time. All of these changes have to be recorded so that the data can be used for analytical purposes when required. The attitudes and behavior of the panel members also have to be recorded in such a way that the data are readily accessible, particularly so that analyses can be made on either a cross-sectional or a longitudinal basis. Some idea of the magnitude of the problem can be obtained from the fact that in many panel studies a single round of data collection may provide information on 500 to 1,000 variables. If a panel has 10,000 families, which is not unusually high, and information is obtained every month for, say, five years, the number of pieces of information could be as high as 600,000,000. Fortunately, the capacity of computers has expanded so rapidly that the computers themselves are no longer the problem. The major problem was and remains the designing of computer systems
that make storing and accessing the data straightforward. It is especially important for syndicated services to design systems that make client access to the data fast and easy.

Worldwide Major Panel Operators
Country / Firm / Started / Phone / URL
1. USA AC Nielsen 1933 203-961-3330 www.acnielsen.com
2. USA NFO Worldwide 1946 203-629-8888 www.nfow.com
3. USA Maritz Marketing Research, Inc. 1973 636-827-1610 www.maritz.com
4. USA Market Facts, Inc. 1946 847-590-7000 www.marketfacts.com
5. USA NPD Group, Inc. 1953 516-625-0700 www.npd.com
6. USA Opinion Research Corporation Intl. 1938 908-281-5100 www.opinionresearch.com
7. USA Taylor Nelson Sofres Intersearch 1960 215-442-9000
8. USA Roper Starch Worldwide, Inc. 1923 914-698-0800 www.roper.com
9. USA Burke, Inc. 1931 513-241-5663 www.burke.com
10. USA MORPACE International, Inc. 1940 248-737-5300 www.morpace.com
11. USA Creative and Response Rsch Services, Inc. 1960 312-828-9200 www.crresearch.com
12. USA Harris Interactive, Inc. 1956 716-272-9020 www.harrisinteractive.com
13. USA Lieberman Research Worldwide 1973 310-553-0550 www.lrw.com
14. USA Ziment 1976 212-647-7200
15. UK BJM Research and Consultancy Ltd 1973 44-20-7891.1200
16. UK BMRB International 1933 44-20-8566.5000 www.bmrb.co.uk
17. UK The Gallup Organization 1937 44-208-939.7000 www.gallup.com
18. UK GfK Marketing Services Ltd 1992 44-870-603.8100 www.gfkms.co.uk
19. UK Harris Research 1965 44-20-8332.9898
20. UK Information Resources 1992 44-1344-746000 www.unfores.com
21. UK INFRATEST BURKE GROUP LTD. 1974 44-208-782.3000
22. UK Ipsos-RSL Ltd. 1946 44-20-8861.8000 www.ipsos.rslmedia.co.uk
23. UK Isis Research plc 1973 44-20-8788.8819 www.isisresearch.com
24. UK Martin Hamblin 1969 44-20-7222.8181 www.martinhamblin.co.uk
25. UK Millward Brown UK Ltd 1973 44-1926-452233 www.millwardbrown.com
26. UK MORI (Market & Opinion Rsch Intl) 1969 44-20-7222.0232 www.mori.com
27. UK MVA 1968 44-1483-728051 www.mva-research.com
28. UK ORC International 1938 44-20-7675.1000 www.opinionresearch.com
29. UK The Research Business International 1981 44-20-7923.6000 www.trbi.co.uk
30. UK Research International 1962 44-20-7656.5000
31. UK Research Resources Ltd 1986 44-20-7656.5555
32. Canada Angus Reid Group, Inc. 1979 1-613-241.5802 www.angusreid.com
33. Canada CF Group Inc. (ARC, Canadian Facts, Burke International Research) 1932 1-416-924.5751
1. France B.V.A. 1970 33-1-30.84.88.00 www.bva.fr
2. France CSA (CSA TMO Group) 1983 33-1-41.86.22.00 www.csa-fr.com
3. France IFOP 1938 33-1-45.84.14.44 www.ifop.com
4. France Ipsos France 1975 33-1-53.68.28.28
5. France IRI-SECODIP 1993 33-1-30.06.22.00
6. France MEDIAMETRIE 1985 33-1-47.58.97.58 www.mediametrie.fr
7. France Research International 1952 33-1-44.06.65.65 www.researchint.com
8. France SECODIP / Groupe SOFRES 1969 33-1-30.74.80.80 www.secodip.com
9. Brazil IBOPE GROUP IBOPE Ad Hoc 1942 55-11-3066.1587 www.ibope.com.br
10. Brazil INDICATOR Pesquisa de Mercado Ltda. 1987 55-11-3365.3000 www.indicator.com.br
11. Brazil Instituto de Pesquisas Datafolha 1983 55-11-224.3933
12. Brazil MARPLAN BRASIL Pesquisas Ltda. 1958 55-11-3361.2033 www.marplan.com.br
13. India Indian Market Research Bureau (IMRB) 1970 91-22-432.3636 www.imrbint.com
14. India Indica Research Pvt. Ltd. 1994 91-22-265.1741 www.indica.com
15. India MBL Rsch. & Consultancy Group Pvt. Ltd. 1987 91-40-335.5433 www.mblindia.com
16. India ORG-MARG Research Ltd. 1961 91-22-218.6922
17. Indonesia PT AMI Indonesia 1996 62-21-521.3420
18. Japan Marketing Intelligence Corporation (MiC) (K.K. Shakai-Chosa Kenkyusho) 1960 81-424.76.5164 www.micjapan.com
1. Japan Video Research Ltd. 1962 81-3-5541.6506 www.videor.jp
2. Korea Hyundai Research Institute 1986 82-2-737.2685
3. Turkey Procon GfK Research Services 1997 91-212-216.21.91 www.procongfk.com
4. Japan NIKKEI RESEARCH INC. 1970 81-3-5281.2891 www.nikkei-r.co.jp
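Returning to the record-keeping problem described just before this list of operators: one simple way to keep panel records accessible both cross-sectionally and longitudinally is a "long" table with one row per household, wave and variable. The schema, table name and data below are purely illustrative assumptions, not a description of any operator's system.

```python
# Sketch: a long-format panel store that supports both kinds of query.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE panel_obs (
                   household_id INTEGER,
                   wave         TEXT,    -- e.g. '2003-05'
                   variable     TEXT,    -- e.g. 'brand_bought'
                   value        TEXT)""")
con.executemany("INSERT INTO panel_obs VALUES (?,?,?,?)",
                [(101, "2003-05", "brand_bought", "A"),
                 (101, "2003-06", "brand_bought", "B"),
                 (102, "2003-05", "brand_bought", "A")])

# Cross-sectional query: what did every household buy in May 2003?
print(con.execute("SELECT household_id, value FROM panel_obs "
                  "WHERE wave='2003-05' AND variable='brand_bought'").fetchall())
# Longitudinal query: household 101's purchases over time.
print(con.execute("SELECT wave, value FROM panel_obs "
                  "WHERE household_id=101 ORDER BY wave").fetchall())
```

Whatever storage is used, the design goal stated above is the same: changes in household composition and characteristics must be recorded as they happen, and both row-at-a-time (longitudinal) and wave-at-a-time (cross-sectional) retrieval must stay fast.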
TV Meters
It's easy to get lost in the details of audience research, so today I'd like to begin with a very simple model for assessing the quality of television research. Specifically, I believe that an acceptable research service has to be thought of as having four quality components. If you're trying to decide whether to use a new source of research, or if you're trying to influence the priorities of an existing service, you've got to be able to address these four issues.
Good research always requires:
• A sample of the right people...
• In sufficient quantity for your purpose...
• Who provide accurate data about their behavior...
• To a supplier with high-quality process controls.
Take away any one of those, and you don't have usable research. The "who" questions – getting a sample of the right people to participate in the research – often get the lion's share of our attention. Sometimes I think we go overboard with meter panels especially, sacrificing our knowledge of the other quality issues while putting samples under the microscope. But there's no question that sampling issues are critical. We have to start off well, with a good knowledge of the population we're measuring. And of course, there are response rates, the golden yardsticks of survey research.
I only wish that other quality concerns had such appealing summary measures; maybe then we'd pay them more attention. Some people think that if your sample appears to mirror the population demographically, then your panel is "representative." Panels and surveys are frequently judged more on composition than on cooperation, and even getting appropriate data about cooperation can sometimes be a challenge. But that's not enough. We have to insist on open disclosure of cooperation data for all major segments of the population. There are many shortcuts to a balanced panel, and we have to consider not just whether we have "enough," but whether the ones we have are representative of their population segments.
The importance of sample size doesn't need much additional stress from me. But I'd like to remind you quickly that the overall reliability of surveys can have multiple components, so be wary of simplistic assumptions about sampling error. As for overall sample size: frankly, there's never enough sample, with meters, or diaries, or anything else. We'd always like to have more stable data.
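To see why "simplistic assumptions about sampling error" can mislead, the small sketch below compares the simple-random-sampling margin of error with the same sample under a design effect (from clustering, weighting or unequal cooperation). The figures are hypothetical.

```python
# Sketch: margin of error for a proportion, with and without a design effect.
import math

def margin_of_error(p: float, n: float, deff: float = 1.0, z: float = 1.96) -> float:
    """Approximate 95% margin of error for proportion p with effective n = n / deff."""
    effective_n = n / deff
    return z * math.sqrt(p * (1 - p) / effective_n)

p, n = 0.25, 1000                       # e.g. a 25% audience estimate from 1,000 homes
print(f"SRS assumption   : +/- {margin_of_error(p, n) * 100:.1f} points")
print(f"Design effect 2.0: +/- {margin_of_error(p, n, deff=2.0) * 100:.1f} points")
```

A larger but biased or heavily weighted sample can therefore be less reliable in practice than the headline sample size suggests, which is the point the next paragraph takes up.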
The real issue, though, is reducing the total error in our surveys. Having stable data isn't very helpful if all you've done is make a bias more consistent. Here's the quality factor that I wish were easier to measure, because in my mind it's at least equal in importance to the other three: as important as it is to have a good sample of adequate size, it's just as important to collect good data from that sample. I place a number of distinct issues under this heading. Obviously, we have the respondent's ability and willingness to provide accurate data:
• Can they answer the question, and does the question make sense?
• Will they push their buttons, and what do they mean when they do so?
• And just how much of this task will they tolerate, anyway?
Beyond the capabilities of the respondent lie the technical capabilities of any equipment used in the survey or panel. There are tremendous variations in meter equipment around the world, each with its own strengths and weaknesses, and more variations are coming.
For example, BBM in Canada has rolled out an intriguing new people meter design for use nationally and in local markets. BBM has licensed a newly designed people meter from Taylor Nelson AGB, which uses picture matching as an alternative means of identifying the channels tuned. One of the strengths of the BBM system was their open approach to testing this new meter. This is the way to pursue changes in technology – with an unusually rigorous test design, with an industry steering committee representing all parts of the industry, and conducted openly, with provision for two independent audits. It doesn't get much better than that, and it's often much less. Which leads nicely into the "how" issues.
The methodology issues of who, how many, and what data can only be considered in the applied context of how the work is actually being executed. That's why it's critical to have a transparent, verifiable system in place, with full disclosure of defined, objective policies and procedures.
Obviously, research quality and usability can be very difficult to assess. The details can quickly overwhelm you. But in the end, you have to ask whether you're really going to receive:
• A sample of the right people...
• In sufficient quantity for your purpose...
• Who provide accurate data about their behavior...
• To a supplier with high-quality process controls.
I would argue that you need to have knowledge, and an opinion, on each of those four dimensions in order to make a good decision. Those are some of my core beliefs about media research. Let me segue from that to a few observations specific to meter measurement.
Under the heading of "getting the right people," television measurement is challenged in at least two ways, one obvious and one not so obvious.

Response Rate
It would be easy to simply decry the fact that television ratings have response rates that are painfully low; they are low, and we're honestly not sure how much that affects us. It would also be easy to make excuses: there do appear to be major factors beyond the direct control of individual suppliers, causing an accelerating overall trend toward lower cooperation. But that's too easy on us. There are fixes to the declining response rate problem, but most of them cost money. Cash incentives, for example, are still the most potent way to affect response rates, and they work universally. But they also directly affect the unit costs of a research supplier. For a supplier to invest in response rates, they have to believe that it's a priority with customers.
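The arithmetic behind a response rate is simple; the short sketch below uses one common convention (completed interviews divided by all eligible sample members), which is an assumption for illustration rather than a formula quoted by the text. All counts are hypothetical.

```python
# Sketch: a basic response-rate calculation for a recruited sample.
completes, refusals, non_contacts, other_eligible = 620, 410, 240, 30

response_rate = completes / (completes + refusals + non_contacts + other_eligible)
print(f"Response rate: {response_rate:.1%}")   # about 47.7% with these figures
```

What matters for the argument above is less the formula than that the rate, however defined, be disclosed consistently for all major population segments.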
Ability to Estimate the Universe
Another sampling challenge concerns our ability to estimate the universe to which we project our numbers. Television measurement is dependent on many non-government measures of the population; cable and satellite penetration and multi-set distribution are just two of them. To at least some users, those population estimates are of growing concern, too. One of the greatest concerns is the extent to which the population can change rapidly without us knowing about it.
Under the heading of "what" – of trying to collect better-quality data from our samples – let me focus today just on meter measurement of set tuning. Internationally, there's a lot of effort directed at better measurement of set tuning with meters. New media technologies pose significant near-term threats to the current generation of television set meters. The next challenge is direct-to-home satellite, and digital television in general. That's not measured today.

Statistical Sampling
This is the same technique that pollsters use to predict the outcome of elections. A "sample audience" is created, and the researcher counts how many in that audience view each program, then extrapolates from the sample to estimate the number of viewers in the entire population watching the show. That is a simple way of explaining what is a complicated, extensive process. The researcher relies mainly on information collected from the TV set meters it installs, and then combines this information with huge databases of the programs that appear on each TV station and cable channel.
To find out who is watching TV and what they are watching, the research company gets around 5,000 households to agree to be a part of the representative sample for the national ratings estimates. Then TVs, homes, programs, and people are measured in a variety of ways.
To find out what people are watching, meters installed in the selected sample of homes track when TV sets are on and what channels they are tuned to. A "black box," which is just a computer and modem, gathers and sends all this information to the company's central computer every night. Then, by monitoring what is on TV at any given time, the company is able to keep track of how many people watch which program. Small boxes placed near the TV sets of those in the national sample measure who is watching by giving each member of the household a button to turn on and off to show when he or she begins and ends viewing. This information is also collected each night.
The national TV ratings largely rely on these meters. To ensure reasonably accurate results, the company uses audits and quality checks and regularly compares the ratings it gets from different samples and measurement methods.
This research is very costly. Advertisers pay to air their commercials on TV programs using rates that are based on these data. Programmers also use these data to decide which shows to keep and which to cancel. A show that has several million viewers may seem popular to us, but a network may need millions more watching that program to make it a financial success. That's why some shows with a loyal following still get canceled.
To find out who is watching TV and what they are watching, much about quality of life, social and economic well being and
the research company gets around 5,000 households to agree to patterns of leisure and work. The ‘time-budget’ involved
be a part of the representative sample for the national ratings respondents keeping a detailed log of how they allocated their
estimates. Then TVs, homes, programs, and people are time during the day. More qualitative studies have used a
measured in a variety of ways. “standard day” diary, which focuses on a typical day in the life of
To find out what people are watching, meters installed in the an individual from a particular group or community.
selected sample of homes track when TV sets are on and what One of the most fruitful time-budget endeavors, initiated in
channels they are tuned to. A “black box,” which is just a the mid 60s, has been the Multinational Time Budget Time Use
computer and modem, gathers and sends all this information Project. Its aim was to provide a set of procedures and guidance
to the company’s central computer every night. Then by on how to collect and analyse time-use data so that valid cross-
monitoring what is on TV at any given time, the company is national comparisons could be made
able to keep track of how many people watch which program. Two other major areas where diaries are often used are:
Small boxes, placed near the TV sets of those in the national • Consumer expenditure and
sample, measure who is watching by giving each member of the • Transport planning research.
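The arithmetic behind projecting a metered panel onto the whole viewing population can be shown with a short Python sketch. The 5,000-home panel size echoes the figure above, but the universe estimate and the number of tuned-in homes are invented for illustration, and real ratings services apply far more elaborate weighting and adjustment than this.

import math

# Illustrative figures only: a 5,000-home metered panel and an
# assumed universe of 100 million TV households.
panel_size = 5_000
universe_households = 100_000_000

# Suppose 600 metered homes were tuned to a given programme.
tuned_homes = 600
rating = tuned_homes / panel_size                   # sample proportion (a 12% "rating")
estimated_audience = rating * universe_households   # projection onto the universe

# A rough 95% confidence interval, assuming simple random sampling.
se = math.sqrt(rating * (1 - rating) / panel_size)
low = (rating - 1.96 * se) * universe_households
high = (rating + 1.96 * se) * universe_households

print(f"Rating: {rating:.1%}")
print(f"Estimated audience: {estimated_audience:,.0f} households")
print(f"Approximate 95% interval: {low:,.0f} to {high:,.0f} households")

Even with 5,000 metered homes the interval spans close to a million households either way, which is one reason the audits and cross-checks described above matter.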
Using Diaries in Social Research
Biographers, historians and literary scholars have long considered diary documents to be of major importance for telling history. More recently, sociologists have taken seriously the idea of using personal documents to construct pictures of social reality from the actors' perspective (see Plummer's 1983 book Documents of Life). In contrast to these 'journal' types of accounts, diaries are used as research instruments to collect detailed information about behaviour, events and other aspects of individuals' daily lives.
Self-completion diaries have a number of advantages over other data collection methods. First, diaries can provide a reliable alternative to the traditional interview method for events that are difficult to recall accurately or that are easily forgotten. Second, like other self-completion methods, diaries can help to overcome the problems associated with collecting sensitive information by personal interview. Finally, they can be used to supplement interview data to provide a rich source of information on respondents' behaviour and experiences on a daily basis. The 'diary interview method', where the diary-keeping period is followed by an interview asking detailed questions about the diary entries, is considered to be one of the most reliable methods of obtaining information.
The Subject Matter of Diary Surveys
A popular topic of investigation for economists, market researchers and, more recently, sociologists has been the way in which people spend their time. Accounts of time use can tell us much about quality of life, social and economic well-being and patterns of leisure and work. The 'time-budget' involves respondents keeping a detailed log of how they allocated their time during the day. More qualitative studies have used a "standard day" diary, which focuses on a typical day in the life of an individual from a particular group or community.
One of the most fruitful time-budget endeavours, initiated in the mid-1960s, has been the Multinational Time Budget Time Use Project. Its aim was to provide a set of procedures and guidance on how to collect and analyse time-use data so that valid cross-national comparisons could be made.
Two other major areas where diaries are often used are:
• Consumer expenditure and
• Transport planning research.
Other topics covered using diary methods are social networks; health, illness and associated behaviour; diet and nutrition; social work and other areas of social policy; clinical psychology and family therapy; crime; alcohol consumption and drug usage; and sexual behaviour. Diaries are also increasingly being used in market research.
Using Diaries in Surveys
Diary surveys often use a personal interview to collect additional background information about the household and sometimes about behaviour or events of interest that the diary will not capture (such as large items of expenditure for consumer expenditure surveys). A placing interview is important for explaining the diary-keeping procedures to the respondent, and a concluding interview may be used to check on the completeness of the recorded entries. Often retrospective estimates of the behaviour occurring over the diary period are collected at the final interview.
Diary Design and Format
Diaries may be open format, allowing respondents to record activities and events in their own words, or they can be highly structured, where all activities are pre-categorized. An obvious advantage of the free format is that it allows greater opportunity to recode and analyse the data. However, the labour-intensive work required to prepare and make sense of the data may render it unrealistic for projects lacking time and resources, or where the sample is large. Although the design of a diary will depend on the detailed requirements of the topic under study, there are certain design aspects which are common to most. Below is a set of guidelines recommended for anyone thinking about designing a diary. Furthermore, the amount of piloting required to perfect the diary format should not be under-estimated.
1. An A4 booklet of about 5 to 20 pages is desirable, depending on the nature of the diary. Disappointing as it might seem, most respondents do not carry their diaries around with them.
2. The inside cover page should contain a clear set of instructions on how to complete the diary. This should stress the importance of recording events as soon as possible after they occur and how the respondent should try not to let the diary keeping influence their behaviour.
3. A model example of a correctly completed diary should feature on the second page.
4. Depending on how long a period the diary will cover, each page should denote either a week, a day of the week or a 24-hour period or less. Pages should be clearly ruled up as a calendar with prominent headings and enough space to enter all the desired information (such as what the respondent was doing, at what time, where, who with and how they felt at the time, and so on).
5. Checklists of the items, events or behaviour to help jog the diary keeper's memory should be printed somewhere fairly prominent. Very long lists should be avoided since they may be off-putting and confusing to respondents. For a structured time-budget diary, an exhaustive list of all possible relevant activities should be listed together with the appropriate codes. Where more than one type of activity is to be entered, that is, primary and secondary (or background) activities, guidance should be given on how to deal with "competing" or multiple activities.
6. There should be an explanation of what is meant by the unit of observation, such as a "session", an "event" or a "fixed time block". Where respondents are given more freedom in naming their activities and the activities are to be coded later, it is important to give strict guidelines on what type of behaviour to include, what definitely to exclude and the level of detail required. Time-budget diaries without fixed time blocks should include columns for start and finish times for activities.
7. Appropriate terminology or lists of activities should be designed to meet the needs of the sample under study, and if necessary, different versions of the diary should be used for different groups.
8. Following the diary pages it is useful to include a simple set of questions for the respondent to complete, asking, among other things, whether the diary-keeping period was atypical in any way compared to usual daily life. It is also good practice to include a page at the end asking for the respondents' own comments and clarifications of any peculiarities relating to their entries. Even if these remarks will not be systematically analysed, they may prove helpful at the editing or coding stage.
Data Quality and Response Rates
In addition to the types of errors encountered in all survey methods, diaries are especially prone to errors arising from respondent conditioning, incomplete recording of information and under-reporting, inadequate recall, insufficient cooperation and sample selection bias.
Diary Keeping Period
The period over which a diary is to be kept needs to be long enough to capture the behaviour or events of interest without jeopardizing successful completion by imposing an overly burdensome task; for collecting time-use data, anything from one-day to three-day diaries may be used. Household expenditure surveys usually place diaries on specific days to ensure an even coverage across the week, and distribute their fieldwork over the year to ensure seasonal variation in earnings and spending is captured.
Reporting Errors
In household expenditure surveys it is routinely found that the first day and first week of diary keeping show higher reporting of expenditure than the following days. This is also observed for other types of behaviour, and the effects are generally termed "first day effects". They may be due to respondents changing their behaviour as a result of keeping the diary (conditioning), or becoming less conscientious than when they started the diary. Recall errors may also extend to 'tomorrow' diaries. Respondents often write down their entries at the end of a day, and only a small minority are diligent (and perhaps obsessive!) diary keepers who carry their diary with them at all times. Expenditure surveys find that an intermediate visit from an interviewer during the diary-keeping period helps preserve 'good' diary keeping to the end of the period.
Literacy
All methods that involve self-completion of information demand that the respondent has a reasonable standard of literacy. Thus the diary sample and the data may be biased towards the population of competent diary keepers.
Participation
The best response rates for diary surveys are achieved when diary keepers are recruited on a face-to-face basis, rather than by post. Personal collection of diaries also allows any problems in the completed diary to be sorted out on the spot. Success may also depend on the quality of the interviewing staff, who should be highly motivated, competent and well briefed. Appealing to respondents' altruistic nature, reassuring them of confidentiality and offering incentives are thought to influence co-operation in diary surveys. One research company gives a 10-pound postal order for completion of its fourteen-day diary; other surveys offer lottery tickets or small promotional items.
Coding, Editing and Processing
The amount of work required to process a diary depends largely on how structured it is. For many large-scale diary surveys, the interviewer does part of the editing and coding while still in the field. Following this is an intensive editing procedure, which includes checking entries against information collected in the personal interview. For unstructured diaries, involving coding of verbatim entries, the processing can be very labour intensive, in much the same way as it is for processing qualitative interview transcripts. Using highly trained coders and a rigorous, unambiguous coding scheme is very important, particularly where there is no clear demarcation of events or behaviour in the diary entries. Clearly, a well-designed diary with a coherent pre-coding system should cut down on the degree of editing and coding.
Relative Cost of Diary Surveys
The diary method is generally more expensive than the personal interview, and personal placement and pick-up visits are more costly than postal administration. The interviewers usually make at least two visits and are often expected to spend time checking the diary with the respondent. If the diary is unstructured, intensive editing and coding will push up the costs. However, these costs must be balanced against the superiority of the diary method in obtaining more accurate data, particularly where the recall method gives poor results. The ratio of costs for diaries compared with recall time budgets is of the order of three or four to one.
Computer Software for Processing and Analysis
Probably the least developed area relating to the diary method is the computer storage and analysis of diary data. One of the problems of developing software for processing and manipulating diary data is the complexity and bulk of the information collected. Although computer-assisted methods may help to reduce the amount of manual preparatory work, there are few packages and most of them are custom built to suit the specifics of a particular project. Time-budget researchers are probably the most advanced group of users of machine-readable diary data, and the structure of these data allows them to use traditional statistical packages for analysis. More recently, methods of analysis based on algorithms for searching for patterns of behaviour in diary data are being used. Software development is certainly an area which merits future attention. For textual diaries, qualitative software packages such as THE ETHNOGRAPH can be used to code them in the same way as interview transcripts.
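To make the idea of machine-readable, pre-coded diary data concrete, here is a minimal Python sketch that tallies minutes per activity from a few structured time-budget rows. The activity codes and entries are invented; a real project would read such rows from its own data files and pass the totals to a statistical package.

from collections import defaultdict
from datetime import datetime

# Invented diary rows: (start time, finish time, activity code).
# A structured time-budget diary with fixed columns reduces to rows like these.
entries = [
    ("07:30", "08:00", "MEAL"),
    ("08:00", "08:45", "TRAVEL"),
    ("08:45", "12:30", "PAID WORK"),
    ("12:30", "13:00", "MEAL"),
    ("13:00", "17:30", "PAID WORK"),
]

def minutes(start, finish):
    fmt = "%H:%M"
    return (datetime.strptime(finish, fmt) - datetime.strptime(start, fmt)).seconds // 60

# The basic time-budget summary: total minutes recorded per activity.
totals = defaultdict(int)
for start, finish, code in entries:
    totals[code] += minutes(start, finish)

for code, mins in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{code:10s} {mins:4d} minutes")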
Archiving Diary Data
In spite of the abundance of data derived from diary surveys across a wide range of disciplines, little is available to other researchers for secondary analysis (further analysis of data already collected). This is perhaps not surprising, given that the budget for many diary surveys does not extend to systematic processing of the data. Many diary surveys are small-scale investigative studies that have been carried out with very specific aims in mind. For these less structured diaries, for which a common coding scheme is neither feasible nor possibly desirable, an answer to public access is to deposit the original survey documents in an archive. This kind of data bank gives the researcher access to original diary documents, allowing them to make use of the data in ways that suit their own research strategy. However, the ethics of making personal documents public (even if only in the limited academic sense) have to be considered.
Internet as a Source of Data
The expansion of the Internet over the past decade has provided the researcher with a range of new opportunities for finding information, networking, conducting research, and disseminating research results.
Through the use of tools such as online focus groups, electronic mail, and online questionnaires, the Internet opens up new possibilities for conducting research. It offers, for example:
1. Shorter timeframes for collecting and recording data: e-mail messages can be saved and analyzed in qualitative data packages, for example, while online surveys can be captured directly into a database (a minimal sketch of such capture follows this list).
2. The possibility of conducting interviews and focus groups by e-mail, with related savings in costs and time.
3. New "communities" to serve as the object of social scientific enquiry.
4. Opportunities for including mixed multiple media in questionnaires.
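As a minimal sketch of the first point, the snippet below stores incoming online questionnaire responses directly in a local SQLite database; the table layout, respondent code and sample answers are invented for illustration.

import sqlite3

# Invented schema: a respondent code plus two questionnaire items.
conn = sqlite3.connect("survey_responses.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS responses ("
    "respondent_code TEXT, q1_opinion TEXT, q2_rating INTEGER)"
)

# In a live web survey each submitted form would arrive as a record like this.
submission = {"respondent_code": "R0001", "q1_opinion": "Agree", "q2_rating": 4}

conn.execute(
    "INSERT INTO responses VALUES (:respondent_code, :q1_opinion, :q2_rating)",
    submission,
)
conn.commit()

# Responses are immediately available for analysis, with no transcription step.
for row in conn.execute("SELECT respondent_code, q1_opinion, q2_rating FROM responses"):
    print(row)
conn.close()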
On the other hand, these opportunities also raise new challenges for the researcher, such as:
• Problems of sampling
• The ethics of conducting research into online communities
• Physical access and skills required to use the technologies involved
• Accuracy and reliability of information obtained from online sources
• The changed chronology of interaction resulting from asynchronous communication
The Internet is a useful medium for obtaining valuable information and the results of various surveys. Access to computer-held data comes in handy when investigating many complex questions related to the marketplace. The 10 'C's outlined here provide criteria to be considered while evaluating Internet resources:
1. Content
What is the intent of the content? Are the title and author identified? Is the content "juried"? Is the content "popular" or "scholarly", satiric or serious? What is the date of the document or article? Is the "edition" current? Do you have the latest version? (Is this important?) How do you know?
2. Credibility
Is the author identifiable and reliable? Is the content credible? Authoritative? Should it be? What is the purpose of the information, that is, is it serious, satiric, humorous? Is the URL extension .edu, .com, .gov or .org? What does this tell you about the "publisher"?
3. Critical Thinking
How can you apply critical thinking skills, including previous knowledge and experience, to evaluate Internet resources? Can you identify the author, publisher, edition, etc. as you would with a "traditionally" published resource? What criteria do you use to evaluate Internet resources?
4. Copyright
Even if the copyright notice does not appear prominently, someone wrote, or is responsible for, the creation of a document, graphic, sound or image, and the material falls under the copyright conventions. "Fair use" applies to short, cited excerpts, usually as an example for commentary or research. Materials are in the "public domain" if this is explicitly stated. Internet users, as users of print media, must respect copyright.
5. Citation
Internet resources should be cited to identify sources used, both to give credit to the author and to provide the reader with avenues for further research. Standard style manuals (print and online) provide some examples of how to cite Internet documents, although these standards are not uniform.
6. Continuity
Will the Internet site be maintained and updated? Is it now and will it continue to be free? Can you rely on this source over time to provide up-to-date information? Some good .edu sites have moved to .com, with possible cost implications. Other sites offer partial use for free, and charge fees for continued or in-depth use.
7. Censorship
Is your discussion list "moderated"? What does this mean? Does your search engine or index look for all words or are some words excluded? Is this censorship? Does your institution, based on its mission, parent organization or space limitations, apply some restrictions to Internet use? Consider censorship and privacy issues when using the Internet.
8. Connectivity
If more than one user will need to access a site, consider each user's access and "functionality". How do users connect to the Internet and what kind of connection does the assigned resource require? Does access to the resource require a graphical user interface? If it is a popular (busy) resource, will it be accessible in the time frame needed? Is it accessible by more than one Internet tool? Do users have access to the same Internet tools and applications? Are users familiar with the tools and applications? Is the site "viewable" by all Web browsers?
9. Comparability
Does the Internet resource have an identified comparable print or CD-ROM data set or source? Does the Internet site contain comparable and complete information? (For example, some newspapers have partial but not full text information on the Internet.) Do you need to compare data or statistics over time? Can you identify sources for comparable earlier or later data? Comparability of data may or may not be important, depending on your project.
10. Context
What is the context for your research? Can you find "anything" on your topic, that is, commentary, opinion, narrative, statistics, and your quest will be satisfied? Are you looking for current or historical information? Definitions? Research studies or articles? How does Internet information fit in the overall information context of your subject? Before you start searching, define the research context and research needs and decide what sources might be best to use to successfully fill information needs without data overload.
Why use the networks?
International computer networks provide a very cheap and effective vehicle for collaboration and communication. Long distances and time zones do not disrupt the process and your colleagues in other institutions or other parts of the world can work at the times that suit them. It is easy to share resources of all kinds amongst a group of researchers and to make results available to as many or as few people as you desire. Once you get used to map reading on the Internet you will find that you have access to a huge variety of resources including the expertise of the many other network users, the power of many computers and programs and the stored information from millions of documents.
Getting connected
If you are working in a higher education institution you should have easy access to a network connection of some sort. Since the institution's connection and use of it are paid for en bloc, you will not incur any charges for using the network, though to start with you may have to buy an add-on such as an Ethernet card for your machine. If you are a researcher connected with, or working for, a higher education institution but you do the majority of your work at home, then you should talk to your computer support staff about connecting to the institution via a modem, using your home telephone. Most institutions will allow this, and once you have made the initial connection you should be able to get "out" onto the Internet without incurring any charges other than the local phone call between your home and the institution.
If you are an independent researcher with no links to an academic institution, then you must use a commercial service to access the Internet. A list of such commercial services appears below. There are various levels of service available from such providers. Make sure that the service you get is the service you need - if you want to be able to use telnet and FTP (described below) then don't subscribe to an e-mail only service.
Etiquette - a sense of timing
Most services on the Internet are made available through volunteer effort. Many services, particularly in the USA, are very busy during their working day. Please try to use US-based services in the morning, before their day starts. You will find that access times are much improved and that you get fewer "system busy" messages.
Interactive Access (telnet and PAD>)
Sometimes known as Terminal Access, this is the process in which you make a connection between the machine on your desk and another computer (a "remote host"). The information stored on the remote computer appears on your screen. In this way you can read news items and bulletin boards, search through library catalogues and data archives, browse through articles and sometimes books and, if you find them useful, ask the remote computer to e-mail them to you. There are many thousands of computers which freely allow public access in this way.
If you have never used interactive access before, try accessing the NISS Gateway, a sort of information supermarket from whose menus you can choose a variety of other services.
From a PAD> prompt type:
call uk.ac.niss
or make a telnet call to:
niss.ac.uk
If you don't know how to make either of these calls please ask your computer services.
The NISS Gateway is an example of a centrally funded, publicly accessible service. Other services only permit access once you or your institution has paid a subscription. Probably the best known such service in the UK is the BIDS ISI service run from the University of Bath. If your academic institution subscribes to BIDS, you can obtain a username and password from your library or computing service. This allows you to search the Social Science Citation Index looking for citations (references) quoted in articles from several thousand journals, published from 1981 onwards. BIDS is a service provided for the academic community, but there are also commercial services which require subscription and/or make access charges. This article concentrates on using freely accessible resources on the Internet with no charges attached.
The program which allows you to interactively access other computers on the Internet is called telnet. You may be able to run telnet directly from the machine on your desk, or alternatively you may be able to run it from an account you have on a larger local computer, perhaps the one that handles your e-mail.
Many sites are not yet able to use telnet directly; if you are in this position you will probably access the UK JANET via a PAD> prompt. The NISS Gateway at Bath runs a Guest Telnet Service to allow you to use telnet to access the rest of the world via the Internet. If you have a telnet name of a service you wish to call, at your PAD> prompt type:
call uk.ac.niss
then from the menus choose:
General Services
and then:
NSF-Relay Guest Telnet Service
You will then be prompted for the name or the number of the service you wish to call using telnet.
All services on the Internet have a unique number, which is called the IP address (Internet Protocol). Most also have a name (the Domain Name), which is a lot easier to remember. For example, the NISS Gateway's domain name is:
niss.ac.uk
and its IP address is:
196.63.76.1
When you are using telnet you can usually use either of these; they will get you through to the same place. But sometimes, if you are away from your home site, particularly abroad, or if the machine which works out ("resolves") which number matches which name has gone wrong, then you will need to know the number. If you are relying on reading your e-mail when you are away from home it is always a good idea to have a note of the IP address (the number) as well as the name.
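The name-to-number lookup described above can be reproduced in a few lines from any Python prompt. The hostname below is the one quoted in the text; whether it still resolves today is not guaranteed.

import socket

hostname = "niss.ac.uk"   # the host quoted in the text; substitute any host you use
try:
    print(hostname, "resolves to", socket.gethostbyname(hostname))
except socket.gaierror:
    print(hostname, "could not be resolved; keep a note of the IP address instead")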
E-mail and more
Many academics have discovered the advantages of using e-mail. Your correspondent doesn't have to be available to take your call - she may be in a meeting or, if she's on the other side of the world, still asleep - yet your message will be waiting for her as soon as she switches on her machine. E-mail allows quick question and answer sessions, rapid revisions and corrections to documents, and is an ideal medium for collaboration and supervision.
Where e-mail really scores is in group communication. A group of geographically disparate researchers can continually consult and keep in contact using the discussion list facilities available over e-mail. Your one message can be circulated to the whole group, and any replies or comments are also seen by the whole group in a matter of minutes. Messages are archived for future reference, and some systems also allow important files to be stored and retrieved by list members. There is a general discussion list for UK sociologists. To join this list, send an e-mail message to:
socbb-request@soc.surrey.ac.uk
There are also many other academic discussion lists on the UK service Mailbase, and many thousands of lists on other systems worldwide. A few lists that may be of interest to sociologists are mentioned at the end of this article.
File transfer
Anything that can be stored as a file on a computer can be transferred over the networks from one computer to another. This includes word-processed documents, datasets, software and graphics. File transfer over the Internet is known as FTP (file transfer protocol). Many sites on the Internet have set up repositories of files which are freely available for you to transfer. These repositories are known as FTP archives. The process of transfer is known as Anonymous FTP because you don't need to identify yourself with a password in order to use such systems. If you can use telnet from your machine then you can probably use ftp too.
Example of a file transfer session
When the location of a file available for anonymous ftp is quoted it will probably look something like this:
Filename: README
Site: ftp.bris.ac.uk
Directory: pub/info/networks/generic
To transfer this file to your local machine you would follow this procedure. At the system prompt type:
ftp ftp.bris.ac.uk
This will connect you to the ftp archive at the University of Bristol. You will be prompted for your username; type:
anonymous
You will then be asked for your password; it is polite to enter your e-mail address, e.g.:
nicky.ferguson@bristol.ac.uk
You should then see on your screen the prompt:
ftp>
Now type:
dir
This will give you a directory listing of the contents of the root or base directory. Now type:
cd pub/info/networks/generic
You have now changed into the appropriate directory; use dir again to look at the contents. You will see that one of the files is called README (note the capital letters - case is important in such systems). Now type:
hash
This asks the system to put hash symbols (#) on your screen while it is working. This is reassuring - sometimes the process takes several minutes. Now type:
get README
While the process is working the hash symbols will appear on your screen. When the transfer is complete you will get a message saying so. Leave the system by typing:
bye
and look for the file on your home system. It will explain to you the rest of the contents of the directory pub/info/networks/generic. The directory contains a set of practical exercises designed for social scientists to explore the networks. You may want to print them out and investigate them at your leisure.
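The same session can be scripted rather than typed. The sketch below uses Python's standard ftplib module to repeat the steps just described: connect, log in anonymously with an e-mail address as the password, change directory, list the contents and fetch README. The host and directory are the ones quoted above; whether that archive still exists is not guaranteed.

from ftplib import FTP

HOST = "ftp.bris.ac.uk"                      # site quoted in the example above
DIRECTORY = "pub/info/networks/generic"      # directory quoted in the example above

ftp = FTP(HOST)                              # equivalent of: ftp ftp.bris.ac.uk
ftp.login(user="anonymous",                  # username: anonymous
          passwd="your.name@example.ac.uk")  # password: your e-mail address, as a courtesy
ftp.cwd(DIRECTORY)                           # cd pub/info/networks/generic
ftp.retrlines("LIST")                        # dir
with open("README", "wb") as local_file:     # get README
    ftp.retrbinary("RETR README", local_file.write)
ftp.quit()                                   # bye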
Resources for Sociologists
CoombsQuest - Social Science Research Data Bank
* Collection of 21 databases on material specific to the Pacific region and South and North East Asia. Includes papers, bibliographies, directories and abstracts of theses. Server points to other social science information sites around the world.
Access
* telnet info.anu.edu.au login: info
Many files from the Coombs archives are available for file transfer, using ftp:
sitename: unix.hensa.ac.uk
directory: pub/uunet/doc/papers/coombspapers
BIRON - Bibliographic Information Retrieval Online
* Online bibliographic details of the ESRC catalogue and subject index. Contains over 3000 datasets including the General Household Survey and Census of Great Britain.
Access
* PAD> call uk.ac.essex.solb1 Login: biron Password: norib
or Telnet dasun.essex.ac.uk Login: biron Password: norib
PenPages
* An American information server concerning all aspects of rural life, it contains several databases including MAPP - the national Co-operative Extension family database. This is provided by the Dept of Agricultural Economics and Rural Sociology at Pennsylvania University and contains research briefs, bibliographies, census data, reference materials, publications, etc. Some material is of a local nature but there is also general interest material available. The server also hosts the Senior Series Database and the 4-H Youth Development Database.
Access
* Telnet: psupen.psu.edu
Login: world
RAPID - Research Activities and Publications Information Database
* Information on the ESRC research awards and the surrounding publications resulting from these awards (journals, audio-visual material, books, databases, software, etc.)
Access
* PAD> call uk.ac.ed.ercvax Username: rapid Password: rapid
or telnet ercvax.ed.ac.uk Username: rapid Password: rapid
Discussion Lists
When you join an e-mail discussion list it is often called subscribing (but you don't have to pay). You will then receive all the messages that are sent to that list. To join a list you send a specific message to the machine (or sometimes the person) that runs the list, NOT to the list itself. Messages sent to the list itself are distributed to ALL the list members. The lists below each appear with their subscription address and the correct text of the e-mail message to send. In each case replace firstname lastname with your own names.
Social-Theory
* This list aims to provide a forum for the discussion of social theory within the social sciences. Particular emphasis will be placed upon the relationship between psychology and sociology, with special reference to the individual and social processes.
Subscription address: mailbase@mailbase.ac.uk
Message text: subscribe social-theory firstname lastname
List details:
POR - Public Opinion Research
* Unmoderated discussion list for academics and professionals interested in public opinion research, useful to researchers currently conducting survey research projects.
Subscription address: listserv@unc.edu
Message text: subscribe por firstname lastname
PSN - Progressive Sociologists Network
* International discussion list for sociologists concerned with progressive issues and values such as civil rights struggles, women's rights, community development, etc. Membership is mostly from the US, Canada and Western Europe.
Subscription address: listserv@csf.colorado.edu
Message text: subscribe psn firstname lastname
SOCORG-K - Social Organisation of Knowledge Discussion
* Moderated discussion list for sociologists.
Subscription address: listserv@vm.utcc.utoronto.ca
Message text: subscribe socorg-k firstname lastname
SOS-DATA - Social Science Data Discussion List
* Provides a forum for discussion of any topic related to social science data. The list is used most frequently to ask for references to sources of data on some particular subject but it also includes announcements of conferences and new data resources.
Subscription address: listserv@unc.edu
Message text: subscribe sos-data firstname lastname
Electronic Journal
Psycoloquy
* Refereed electronic journal intended to implement peer review over the network. The journal is primarily for psychologists but it is interdisciplinary in the topics covered and includes articles, book reviews, queries and announcements, etc.
Subscription address: listserv@pucc.princeton.edu
Message text: subscribe psych firstname lastname
Many files from the Psycoloquy journal are available for file transfer, using ftp:
sitename: princeton.edu (128.112.128.1)
directory: /pub harnad
Secondary Analysis
Secondary analysis involves the use of existing data, collected for the purposes of a prior study, in order to pursue a research interest which is distinct from that of the original work. In this respect, secondary analysis differs from systematic reviews and meta-analyses of qualitative studies, which aim instead to compile and assess the evidence relating to a common concern or area of practice. The approach may either be employed by researchers to re-use their own data or by independent analysts using previously established qualitative data sets.
Why do Secondary Analysis?
It has been contended that the approach can be used to generate new knowledge, new hypotheses, or support for existing theories; that it reduces the burden placed on respondents by negating the need to recruit further subjects; and that it allows wider use of data from rare or inaccessible respondents. In addition, it has been suggested that secondary analysis is a more convenient approach for particular researchers, notably students. It should also be noted that use of the approach does not necessarily preclude the possibility of collecting primary data. This may, for example, be required to obtain additional data or to pursue in a more controlled way the findings emerging from the initial analysis. There may also be a need to consult the primary researchers in order to investigate the circumstances of the original data generation and processing.
Despite the interest in and arguments for developing secondary analysis of qualitative data, the approach has not been widely adopted to date. This raises questions about the desirability and feasibility of particular strategies for secondary analysis of qualitative data, discussed below.
Methodological and Ethical Considerations
Before highlighting some of the key practical and ethical issues which have been discussed in the literature, there are two fundamental methodological issues to be considered.
1. Tenable
The first is whether secondary analysis of qualitative studies is tenable, given that it is often thought to involve an inter-subjective relationship between the researcher and the researched. In response, it may be argued that even where primary data is gathered via interviews or observation in qualitative studies, there may be more than one researcher involved. Hence within the research team the data still has to be contextualised and interpreted by those who were not present. A more radical response is to argue that the design, conduct and analysis of both qualitative and quantitative research are always contingent upon the contextualisation and interpretation of subjects' situation and responses. Thus, secondary analysis is no more problematic than other forms of empirical inquiry, all of which, at some stage, depend on the researcher's ability to form critical insights based on inter-subjective understanding.
2. Origin
The second issue concerns the problem of where primary analysis stops and secondary analysis starts. Qualitative research is an iterative process and grounded theory in particular requires that questions undergo a process of formulation and refinement over time. For primary researchers re-using their own data it may be difficult to determine whether the research is part of the original enquiry or sufficiently new and distinct from it to qualify as secondary analysis. For independent analysts re-using other researchers' data there are also related professional issues about the degree of overlap between their respective works. There is no easy solution to these problems except to say that greater awareness of secondary analysis might enable researchers to more appropriately recognise and define their work as such.
Compatibility of the data with secondary analysis
Are the data amenable to secondary analysis?
This will depend on the 'fit' between the purpose of the analysis and the nature and quality of the original data. Scope for additional in-depth analysis will vary depending on the nature of the data; for example, while tightly structured interviews tend to limit the range of responses, designs using semi-structured schedules may produce more rich and varied data. A check for the extent of missing data relevant to the secondary analysis but irrelevant to the original study may also be required; for example, where semi-structured interviews involved the discretionary use of probes. More generally, the quality of the original data will also need to be assessed.
Position of the secondary analyst
Was the analyst part of the original research team?
This will influence the decision over whether to undertake secondary analysis and, if so, the procedures to be followed. Secondary analysts require access to the original data, including tapes and field notes, in order to re-examine the data with the new focus in mind. This is likely to be easier if they were part of the original research team. If not, then ideally they should also be able to consult with the primary researchers in order to assess the quality of the original work and to contextualise the material (rather than rely on field notes alone). Further consultation may also be helpful in terms of cross-checking the results of the secondary analysis. Finally, whether conducting secondary analysis in an independent capacity or not, some form of contractual agreement may have to be negotiated between the secondary analyst and the primary researchers, data archive managers, and colleagues involved in the primary research but not in the secondary analysis.
Reporting of Original and Secondary Data Analysis
Such is the complexity of secondary analysis that it is particularly important that the study design, methods and issues involved are reported in full. Ideally this should include an outline of the original study and data collection procedures, together with a description of the processes involved in categorising and summarising the data for the secondary analysis, as well as an account of how methodological and ethical considerations were addressed.
Ethical Issues
How was consent obtained in the original study?
Where sensitive data is involved, informed consent cannot be presumed. Given that it is usually not feasible to seek additional consent, a professional judgment may have to be made about whether re-use of the data violates the contract made between subjects and the primary researchers. Growing interest in re-using data makes it imperative that researchers in general now consider obtaining consent which covers the possibility of secondary analysis as well as the research in hand; this is consistent with professional guidelines on ethical practice.
Developing the Approach
To see if the potential of secondary analysis can be realised in practice, developmental work still needs to be undertaken:
• First, there should be a more comprehensive review of the literature on secondary analysis and of studies which have explicitly (and perhaps implicitly) used this approach. This could include examination of the methods used, as well as the quality, value and impact of this work.
• Secondly, further work on the protocols for conducting secondary analysis of qualitative data, particularly with regard to the re-use of other researchers' data, should be carried out.
• Thirdly, there should be greater consideration of the issues involved in the secondary analysis of single, multiple and mixed data sets.
• Finally, some more specific guidelines are needed for researchers about the ethical issues to be considered when undertaking qualitative work that may be re-used in the future.
Conclusion
Despite growing interest in the re-use of qualitative data, secondary analysis remains an under-developed and ill-defined approach. Various methodological and ethical considerations pose a challenge for the would-be secondary analyst, particularly those who were not part of the primary research team. Further work to develop this approach is required to see if the potential benefits can actually be realised in practice.
Survey
This is an "information society." That is, our major problems and tasks no longer mainly center on the production of the goods and services necessary for survival and comfort. Our "society," thus, requires a prompt and accurate flow of information on preferences, needs, and behavior. It is in response to this critical need for information on the part of government, business, and social institutions that so much reliance is placed on surveys.
Then, What is a Survey?
Today the word "survey" is used most often to describe a method of gathering information from a sample of individuals. This "sample" is usually just a fraction of the population being studied.
For example:
• A sample of voters is questioned in advance of an election to determine how the public perceives the candidates and the issues.
• A manufacturer does a survey of the potential market before introducing a new product.
• A government entity commissions a survey to gather the factual information it needs to evaluate existing legislation or to draft proposed new legislation.
Not only do surveys have a wide variety of purposes, they also can be conducted in many ways — including over the telephone, by mail, or in person. Nonetheless, all surveys do have certain characteristics in common.
Unlike a census, where all members of the population are studied, surveys gather information from only a portion of a population of interest — the size of the sample depending on the purpose of the study.
In a bona fide survey, the sample is not selected haphazardly or only from persons who volunteer to participate. It is scientifically chosen so that each person in the population will have a measurable chance of selection. This way, the results can be reliably projected from the sample to the larger population.
Information is collected by means of standardized procedures so that every individual is asked the same questions in more or less the same way. The survey's intent is not to describe the particular individuals who, by chance, are part of the sample but to obtain a composite profile of the population.
The industry standard for all reputable survey organizations is that individual respondents should never be identified in reporting survey findings.
All of the survey's results should be presented in completely anonymous summaries, such as statistical tables and charts.
How Large Must The Sample Size Be?
The sample size required for a survey partly depends on the statistical quality needed for the survey findings; this, in turn, relates to how the results will be used.
Even so, there is no simple rule for sample size that can be used for all surveys. Much depends on the professional and financial resources available. Analysts, though, often find that a moderate sample size is sufficient statistically and operationally. For example, the well-known national polls frequently use samples of about 1,000 persons to get reasonable information about national attitudes and opinions.
When it is realized that a properly selected sample of only 1,000 individuals can reflect various characteristics of the total population, it is easy to appreciate the value of using surveys to make informed decisions in a complex society such as ours. Surveys provide a speedy and economical means of determining facts about our economy and about people's knowledge, attitudes, beliefs, expectations, and behaviors.
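The figure of "about 1,000 persons" comes out of a standard calculation: the sample size needed to estimate a proportion to within a chosen margin of error. The sketch below uses the usual formula n = z^2 p(1-p) / e^2, with an optional finite-population correction; the 95% confidence level and 3% margin are illustrative choices, not fixed rules.

import math

def sample_size(margin_of_error, z=1.96, p=0.5, population=None):
    """Sample size needed to estimate a proportion within +/- margin_of_error.

    p = 0.5 is the most conservative assumption (it maximizes the variance).
    If a finite population size is supplied, the usual correction is applied.
    """
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# Just over 1,000 respondents suffice for a 95% confidence level and a
# 3% margin of error, and the answer barely changes whether the population
# is one million or one hundred million -- which is why national polls can
# manage with samples of roughly 1,000 people.
print(sample_size(0.03))
print(sample_size(0.03, population=1_000_000))
print(sample_size(0.03, population=100_000_000))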
Who Conducts Surveys?
We all know about the public opinion surveys or "polls" that are reported by the press and broadcast media. They conduct surveys on national public opinion on a wide range of current issues. State polls and metropolitan area polls, often supported by a local newspaper or TV station, are reported regularly in many localities. The major broadcasting networks and national news magazines also conduct polls and report their findings.
The great majority of surveys, though, are not public opinion polls. Most are directed to a specific administrative, commercial, or scientific purpose. The wide variety of issues with which surveys deal is illustrated by the following listing of actual uses:
• Major TV networks rely on surveys to tell them how many and what types of people are watching their programs
• Statistics Canada conducts continuing panel surveys of children (and their families) to study educational and other needs
• Auto manufacturers use surveys to find out how satisfied people are with their cars
• The U.S. Bureau of the Census conducts a survey each month to obtain information on employment and unemployment in the nation
• The U.S. Agency for Health Care Policy and Research sponsors a periodic survey to determine how much money people are spending for different types of medical care
• Local transportation authorities conduct surveys to acquire information on commuting and travel habits
• Magazine and trade journals use surveys to find out what their subscribers are reading
• Surveys are conducted to ascertain who uses our national parks and other recreation facilities.
Surveys provide an important source of basic scientific knowledge. Economists, psychologists, health professionals, political scientists, and sociologists conduct surveys to study such matters as income and expenditure patterns among households, the roots of ethnic or racial prejudice, the implications of health problems on people's lives, comparative voting behavior, and the effects on family life of women working outside the home.
What are Some Common Survey Methods?
Surveys can be classified in many ways.
Size and type of sample
Surveys also can be used to study either human or non-human populations (e.g., animate or inanimate objects — animals, soils, housing, etc.). While many of the principles are the same for all surveys, the focus here will be on methods for surveying individuals.
Many surveys study all persons living in a defined area, but others might focus on special population groups — children, physicians, community leaders, the unemployed, or users of a particular product or service. Surveys may also be conducted with national, state, or local samples.
Method of data collection
Surveys can be classified by their method of data collection. Mail, telephone interview, and in-person interview surveys are the most common. Extracting data from samples of medical and other records is also frequently done. In newer methods of data collection, information is entered directly into computers either by a trained interviewer or, increasingly, by the respondent. One well-known example is the measurement of TV audiences carried out by devices attached to a sample of TV sets that automatically record the channels being watched.
Mail surveys can be relatively low in cost. As with any other survey, problems exist in their use when insufficient attention is given to getting high levels of cooperation. Mail surveys can be most effective when directed at particular groups, such as subscribers to a specialized magazine or members of a professional association.
Telephone interviews are an efficient method of collecting some types of data and are being increasingly used. They lend themselves particularly well to situations where timeliness is a factor and the length of the survey is limited.
In-person interviews in a respondent's home or office are much more expensive than mail or telephone surveys. They may be necessary, however, especially when complex information is to be collected.
Some surveys combine various methods. For instance, a survey worker may use the telephone to "screen" or locate eligible respondents (e.g., to locate older individuals eligible for Medicare) and then make appointments for an in-person interview.
What Survey Questions Do You Ask?
You can further classify surveys by their content. Surveys are concerned with:
• Opinions and attitudes (such as a pre-election survey of voters),
• Factual characteristics or behaviors (such as people's health, housing, consumer spending, or transportation habits).
Many surveys combine questions of both types. Respondents may be asked if they have heard or read about an issue ... what they know about it ... their opinion ... how strongly they feel and why ... their interest in the issue ... past experience with it ... and certain factual information that will help the survey analyst classify their responses (such as age, gender, marital status, occupation, and place of residence).
Questions may be open-ended ("Why do you feel that way?") or closed ("Do you approve or disapprove?"). Survey takers may ask respondents to rate a political candidate or a product on some type of scale, or they may ask for a ranking of various alternatives.
The manner in which a question is asked can greatly affect the results of a survey. For example, an NBC/Wall Street Journal poll asked two very similar questions and obtained very different results:
(1) Do you favor cutting programs such as social security, medicare, medicaid, and farm subsidies to reduce the budget deficit? The results: 23% favor; 66% oppose; 11% no opinion.
(2) Do you favor cutting government entitlements to reduce the budget deficit? The results: 61% favor; 25% oppose; 14% no opinion.
The questionnaire may be very brief — a few questions, taking five minutes or less — or it can be quite long — requiring an hour or more of the respondent's time. Since it is inefficient to identify and approach a large national sample for only a few items of information, there are "omnibus" surveys that combine the interests of several clients into a single interview. In these surveys, respondents will be asked a dozen questions on one subject, a half dozen more on another subject, and so on.
Because changes in attitudes or behavior cannot be reliably ascertained from a single interview, some surveys employ a "panel design," in which the same respondents are interviewed on two or more occasions. Such surveys are often used during an election campaign or to chart a family's health or purchasing pattern over a period of time.
Who Works on Surveys?
The survey worker best known to the public is the interviewer who calls on the telephone, appears at the door, or stops people at a shopping mall.
Traditionally, survey interviewing, although occasionally requiring long days in the field, was mainly part-time work and, thus, well suited for individuals not wanting full-time employment or just wishing to supplement their regular income. Changes in the labor market and in the level of survey automation have begun to alter this pattern — with more and more survey takers seeking to work full time. Experience is not usually required for an interviewing job, although basic computer skills have become increasingly important for applicants.
Most research organizations provide their own training for the interview task. The main requirements for interviewing are an ability to approach strangers (in person or on the phone), to persuade them to participate in the survey, and to collect the data needed in exact accordance with instructions.
Less visible, but equally important, are the in-house research staffs who, among other things, plan the survey, choose the sample, develop the questionnaire, supervise the interviews, process the data collected, analyze the data, and report the survey's findings.
In most survey research organizations, the senior staff will have taken courses in survey methods at the graduate level and will hold advanced degrees in sociology, statistics, marketing, or psychology, or they will have the equivalent in experience.
Middle-level supervisors and research associates frequently have academic backgrounds similar to the senior staff, or they have advanced out of the ranks of clerks, interviewers, or coders on the basis of their competence and experience.
What About Confidentiality and Integrity?
The confidentiality of the data supplied by respondents is of prime concern to all reputable survey organizations.
Several professional organizations dealing with survey methods have codes of ethics that prescribe rules for keeping survey responses confidential. The recommended policy for survey organizations to safeguard such confidentiality includes:
• Using only number codes to link the respondent to a questionnaire and storing the name-to-code linkage information separately from the questionnaires (a minimal sketch of this practice follows the list below)
• Refusing to give the names and addresses of survey respondents to anyone outside the survey organization, including clients
• Destroying questionnaires and identifying information about respondents after the responses have been entered into the computer
• Omitting the names and addresses of survey respondents from computer files used for analysis
• Presenting statistical tabulations by broad enough categories so that individual respondents cannot be singled out.
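A minimal sketch of the first point on this list: each questionnaire is labelled only with a random respondent code, and the table linking codes to names is written to a separate file that never travels with the analysis data. The names, answers and filenames are invented for illustration.

import csv
import secrets

# Invented example data: identifying details plus the substantive answers.
respondents = [
    {"name": "A. Respondent", "address": "12 Example Road", "q1": "Approve", "q2": 4},
    {"name": "B. Respondent", "address": "34 Sample Street", "q1": "Disapprove", "q2": 2},
]

linkage = []   # name-to-code table: stored separately, under restricted access
answers = []   # analysis file: contains only the code and the responses

for person in respondents:
    code = secrets.token_hex(4)   # random respondent code
    linkage.append({"code": code, "name": person["name"], "address": person["address"]})
    answers.append({"code": code, "q1": person["q1"], "q2": person["q2"]})

# Write the two files separately; only responses.csv is released for analysis.
for filename, rows in (("linkage.csv", linkage), ("responses.csv", answers)):
    with open(filename, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)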
What are Other Potential Concerns?
The quality of a survey is largely determined by its purpose and the way it is conducted.
Most call-in TV inquiries (e.g., 900 "polls") or magazine write-in "polls," for example, are highly suspect. These and other "self-selected opinion polls (SLOPS)" may be misleading since participants have not been scientifically selected. Typically, in SLOPS, persons with strong opinions (often negative) are more likely to respond.
Surveys should be carried out solely to develop statistical information about a subject. They should not be designed to produce predetermined results or as a ruse for marketing and similar activities. Anyone asked to respond to a public opinion poll or concerned about the results should first decide whether the questions are fair.
Another important violation of integrity occurs when what appears to be a survey is actually a vehicle for stimulating donations to a cause or for creating a mailing list to do direct marketing.
UNIT I: FUNDAMENTALS OF RESEARCH PROCESS
LESSON 9: TUTORIAL
Q1] Assume that you are a manufacturer of modular office systems and furniture as well as office organization elements (desktop and wall organizers, filing systems, etc.). Your company has been asked to propose an observational study to examine the use of office space by white-collar and managerial workers for a large insurance company. This study will be part of a project to improve office efficiency and paperwork flow. It is expected to involve the redesign of office space and the purchase of new office furniture and organization elements.
A: What are the varieties of information that might be observed?
B: Select a limited number of content areas for study, and operationally define the observation acts that should be measured.
Q2] You wish to analyze the pedestrian traffic that passes a given store in a major shopping center. You are interested in determining how many shoppers pass by this store, and you would like to classify these shoppers on various relevant dimensions. Any information you can secure should be obtainable from observation alone.
A: What other information might you find useful to observe?
B: How would you decide what information to collect?
C: Devise the operational definitions you would need.
D: What would you say in your instructions to the observers you plan to use?
E: How might you sample this shopper traffic?
Q3] In a class project, students developed a brief self-administered questionnaire by which they might quickly evaluate a professor. One student submitted the following instrument. Evaluate the questions asked and the format of the instrument.
Professor Evaluation Form
1: Overall, how would you rate this professor?
Good_________ Fair_______ Poor____
2: Does this professor
a: Have good class delivery?______________________
b: Know the subject?__________________________
c: Have a positive attitude toward the subject?________
d: Grade fairly?_______________________________
e: Have a sense of humor?______________________
f: Use audiovisual, case examples, or other classroom aids?______________________________________
g: Return exams promptly?______________________
3: What is the professor's strongest point?________________
4: What is the professor's weakest point?__________________
5: What kind of class does the professor teach?___________
6: Is this course required?___________________________
LESSON 10: QUESTIONNAIRE DESIGN

Students, today we shall be studying a very important part of data collection, i.e., the questionnaire. As you are going to become the managers of the future, you will face problems relating to decision making and planning; the art of preparing a questionnaire will help you in generating the desired information.
We will also be discussing various issues relating to the questionnaire mode of collecting data, such as its advantages and disadvantages, the criteria of a good questionnaire, types of questions, bias in questions, non-response, etc.
We know that the final step in preparing the survey is developing the data collection instrument. The most common means of collecting data are the interview and the self- or group-administered questionnaire. In the past, the interview has been the most popular data-collecting instrument. Recently, the questionnaire has surpassed the interview in popularity.

The Questionnaire: Pros and Cons
First of all, it is important for you to understand the advantages and disadvantages of the questionnaire as opposed to the personal interview. This knowledge will allow you to maximize the strengths of the questionnaire while minimizing its weaknesses.
The primary advantages of administering a questionnaire instead of conducting an interview are that:
(i) it is economical in terms of money and time
(ii) it gives samples which are more representative of the population
(iii) it generates standardized information
(iv) it provides the respondent the desired privacy
We will now discuss these advantages of the questionnaire technique of collecting primary data.

1. Economical in Money and Time
The questionnaire will save your time and money.
• There is no need to train interviewers, thereby reducing the time of operation; it is thus economical.
• Questionnaires can be sent to a large group and collected simultaneously, whereas in a personal interview the interviewer has to approach each individual separately.
• The questions reach the respondents very efficiently. Finally, the cost of postage should be less than that of travel or telephone expenses.
Recent developments in the science of surveying have led to incorporating computers into the interview process, yielding what is commonly known as computer automated telephone interview (or CATI) surveys. Advances in using this survey technique have dramatically reshaped our traditional views on the time-intensive nature and inherent unreliability of the interview technique.

2. Better Samples
Many surveys are constrained by a limited budget. Since a typical questionnaire usually has a lower cost per respondent, you can send it to more people within a given budget (or time) limit. This will provide you with more representative samples.

3. Standardization
The questionnaire provides you with a standardized data-gathering procedure.
• The effects of potential human errors (for example, altering the pattern of question asking, calling at inconvenient times, and biasing by "explaining") can be minimized by using a well-constructed questionnaire.
• The use of a questionnaire also eliminates any bias introduced by the feelings of the respondents towards the interviewer (or vice versa).

4. Respondent Privacy
• Although the point is debatable, most surveyors believe the respondent will answer a questionnaire more frankly than he would answer an interviewer, because of a greater feeling of anonymity.
• The respondent has no one to impress with his/her answers and need have no fear of anyone hearing them.
• To maximize this feeling of privacy, it is important to guard, and emphasize, the respondent's privacy.

The primary disadvantages of the questionnaire are discussed on the grounds of:
• (i) non-returns
• (ii) misinterpretation
• (iii) validity
We will discuss them in detail.

1. Non-returns
Non-returns are questionnaires or individual questions that are not answered by the people to whom they were sent.
For example, you may be surveying to determine the attitude of a group about a new policy. Some of those opposed to it might be afraid to speak out, and they might comprise the majority of the non-returns. This would introduce non-random (or systematic) bias into your survey results, especially if you found only a small number of the returns were in favour of the policy.
Non-returns cannot be overcome entirely. What we can do is try to minimize them. Techniques to accomplish this will be studied later on.

2. Misinterpretation
Misinterpretation occurs when the respondent does not understand either the survey instructions or the survey questions.
If respondents become confused, they will either give up on the survey (becoming a non-return) or answer questions in terms of the way they understand them, but not necessarily the way you meant. This can sometimes turn out to be more serious than a non-return.
Your questionnaire's instructions and questions must be able to stand on their own, and you must use terms that have commonly understood meanings throughout the population under study. If you are using novel terms, be sure to define them so all respondents understand your meaning.

3. Validity
The third disadvantage of using a questionnaire is the inability to check on the validity of the answers.
Without observing the respondent's reactions (as would be the case with an interview) while the questionnaire is being completed, you have no way of knowing the true answers to the following questions:
• Did the person you wanted to survey give the questionnaire to a friend or complete it personally?
• Did the individual respond indiscriminately?
• Did the respondent deliberately choose answers to mislead the surveyor?

Criteria of a Good Questionnaire
What is the secret of getting all the strengths of the questionnaire while minimizing its weaknesses?
The secret to taking advantage of the strengths of questionnaires (lower costs, more representative samples, standardization, privacy) while minimizing the number of non-returns, misinterpretations, and validity problems lies in the preparation of the survey questionnaire. The key to minimizing the disadvantages of the survey questionnaire lies in the construction of the questionnaire itself. You should remember that:
• A poorly developed questionnaire contains the seeds of its own destruction.
• Each of the three portions of the questionnaire (the cover letter, the instructions, and the questions) must work together to have a positive impact on the success of the survey.

Cover letter
The cover letter should explain to the respondent the purpose of your survey, and it should motivate him to reply truthfully and quickly.
• If possible, it should explain why the survey is important to him, how he was chosen to participate, and who is sponsoring the survey (the higher the level of sponsorship, the better).
• You should also strongly stress the confidentiality of the results.
• A well-written cover letter will help in minimizing both non-return and validity problems.

Instructions
• The cover letter should be followed by a clear set of instructions explaining how to complete the survey and where to return it.
• If the respondents do not understand the mechanical procedures necessary to respond to the questions, their answers will be meaningless.
• If you do not want respondents to provide their names, say so explicitly in the instructions, and tell them to leave the NAME column blank.

Set of questions
The third and final part of the questionnaire is the set of questions.
• Since the questions are the means by which you are going to collect your data, they should be consistent with your survey plan.
• They should not be ambiguous or encourage feelings of frustration or anger that will lead to non-returns or validity problems.

Types of Questions
Before investigating the art of question writing, it will be useful to examine the various types of questions. Cantelou (1964; p 57) identifies four types of questions used in surveying.
• According to him, the background question is used to obtain demographic characteristics of the group being studied, such as age, sex, grade, level of assignment, and so forth. This information is used when you are categorizing your results by various subdivisions such as age or grade. Therefore, these questions should be consistent with your data analysis plan.
• The second and most common type of question is the multiple choice or closed-end question. It is used to determine feelings or opinions on certain issues by allowing the respondent to choose an answer from a list you have provided.
• The intensity question, a special form of the multiple-choice question, is used to measure the intensity of the respondent's feelings on a subject. These questions provide answers that cover a range of feelings.
• The final type of question is the free response or open-end question. This type requires respondents to answer the question in their own words. It can be used to gather opinions or to measure the intensity of feelings.
Multiple-choice questions are the most frequently used type of question in surveying today. It is prudent, therefore, to concentrate primarily on factors relating to their application.
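To tie the four question types together, here is a minimal sketch (our own illustration, not from the text; the item wordings and field names are hypothetical) of how each type might be represented when planning a questionnaire:

# Hypothetical representation of the four question types named above.
questions = [
    {"type": "background",      "text": "What is your age group?",
     "options": ["under 25", "26-39", "40-59", "over 60"]},
    {"type": "multiple_choice", "text": "Why did you choose this store?",
     "options": ["Convenient", "Good service", "Quality", "Other"]},
    {"type": "intensity",       "text": "The store's hours are convenient.",
     "options": ["Strongly Disagree", "Disagree", "Undecided", "Agree", "Strongly Agree"]},
    {"type": "open_end",        "text": "What could we improve?",
     "options": None},            # free response: no fixed answer list
]
for q in questions:
    print(q["type"], "-", q["text"])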

Questionnaire Construction
The complex art of question writing has been investigated by many researchers. From their experiences, they offer valuable advice. Below are some helpful hints typical of those that appear most often in texts on question construction.
• Keep the language simple.
Analyze your audience and write at their level. Avoid the use of technical terms. An appropriate corollary to Murphy's Law in this case would be: "If someone can misunderstand something, they will."
• Keep the questions short.
Long questions tend to become ambiguous and confusing. A respondent, in trying to comprehend a long question, may leave out a clause and thus change the meaning of the question.
• Keep the number of questions to a minimum.
There is no commonly agreed maximum number of questions that should be asked, but research suggests that higher return rates correlate with shorter surveys. Ask only questions that will contribute to your survey. Apply the "So what?" and "Who cares?" tests to each question. "Nice-to-know" questions only add to the size of the questionnaire. Having said this, keep in mind that you should not leave out questions that would yield necessary data simply because doing so will shorten your survey. If the information is necessary, ask the question.
• Limit each question to one idea or concept.
A question consisting of more than one idea may confuse the respondent and lead to a meaningless answer. Consider this question: "Are you in favour of raising pay and lowering benefits?" What would a yes (or no) answer mean?
• Do not ask leading questions.
These questions are worded in a manner that suggests an answer. Some respondents may give the answer you are looking for whether or not they think it is right. Such questions can alienate the respondent and may open your questionnaire to criticism. A properly worded question gives no clue as to which answer you believe to be the correct one.
• Use subjective terms such as good, fair, and bad sparingly, if at all.
These terms mean different things to different people. One person's "fair" may be another person's "bad." How much is "often" and how little is "seldom"?
• Allow for all possible answers.
Respondents who cannot find their answer among your list will be forced to give an invalid reply or, possibly, become frustrated and refuse to complete the survey. Wording the question to reduce the number of possible answers is the first step. Avoid dichotomous (two-answer) questions (except for obvious demographic questions such as gender). If you cannot avoid them, add a third option, such as no opinion, don't know, or other. These may not get the answers you need, but they will minimize the number of invalid responses. A great number of "don't know" answers to a question in a fact-finding survey can be a useful piece of information, but a majority of such answers may mean you have a poor question, and you should perhaps be cautious when analyzing the results.
• Avoid emotional or morally charged questions.
The respondent may feel your survey is getting a bit too personal!
• Understand the should-would question.
Respondents answer "should" questions from a social or moral point of view, while answering "would" questions in terms of personal preference.
• Formulate your questions and answers to obtain exact information and to minimize confusion.
For example, does "How old are you?" mean on your last or your nearest birthday? By including instructions like "Answer all questions as of (a certain date)", you can alleviate many such conflicts.
• Include a few questions that can serve as checks on the accuracy and consistency of the answers as a whole.
Have some questions that are worded differently, but are soliciting the same information, in different parts of the questionnaire. These questions should be designed to identify the respondents who are just marking answers randomly or who are trying to game the survey (giving answers they think you want to hear). If you find a respondent who answers these questions differently, you have reason to doubt the validity of their entire set of responses. For this reason, you may decide to exclude their response sheet(s) from the analysis. (A short sketch of this check appears after this list.)
• Organize the pattern of the questions:
• Place demographic questions at the end of the questionnaire.
• Have your opening questions arouse interest.
• Ask easier questions first.
• To minimize conditioning, have general questions precede specific ones.
• Group similar questions together.
• If you must use personal or emotional questions, place them at the end of the questionnaire.
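The consistency check described in the hints above is easy to automate once responses are coded. A minimal sketch, assuming hypothetical respondent IDs and a pair of reworded items scored on the same 1-5 scale (none of these names come from the text):

# Flag respondents whose answers to two differently worded versions of the
# same question point in opposite directions.
answers = [                     # (respondent id, item A score, reworded item B score)
    ("R1", 5, 4),
    ("R2", 2, 2),
    ("R3", 5, 1),               # answers the paired items in opposite directions
]
flagged = [rid for rid, a, b in answers if abs(a - b) >= 3]
print(flagged)                  # -> ['R3']: doubt the validity of this response sheet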

Pretest (Pilot Test) the Questionnaire
This is the most important step in preparing your questionnaire. The purpose of the pretest is to see just how well your cover letter motivates your respondents and how clear your instructions, questions, and answers are.
• You should choose a small group of people (from three to
ten should be sufficient) you feel are representative of the
group you plan to survey.
• After explaining the purpose of the pretest, let them read
and answer the questions without interruption.
• When they are through, ask them to critique the cover letter,
instructions, and each of the questions and answers.
Don’t be satisfied with learning only what confused or alienated
them.
• Question them to make sure that what they thought
something meant was really what you intended it to mean.
• Use the above 12 hints as a checklist, and go through them
with your pilot test group to get their reactions on how well
the questionnaire satisfies these points.
• Finally, redo any parts of the questionnaire that are weak.
Have your questionnaire neatly produced on quality paper. A
professional looking product will increase your return
rate. A poorly designed survey that contains poorly written
questions will yield useless data regardless of how “pretty” it
looks.
Finally, make your survey interesting.
Let us now summarise what we have studied today.
• The questionnaire is the means for collecting your survey
data.
• It should be designed with your data collection plan in
mind.
• Each of its three parts (the cover letter, the instructions, and the questions) should take advantage of the strengths of questionnaires while minimizing their weaknesses.
• Each of the different kinds of questions is useful for
eliciting different types of data, but each should be
constructed carefully with well- developed construction
guidelines in mind.
• Properly constructed questions and well-followed survey
procedures will allow you to obtain the data needed to check
your hypothesis and, at the same time, minimize the chance
that one of the many types of bias will invalidate your
survey results.
The types of bias which you will encounter when you prepare and execute a questionnaire will be studied in the next lecture.

LESSON 11: ISSUES IN QUESTIONNAIRE

Student’s today we will be continuing with the questionnaire Strongly Disagree (1) Disagree (2) Undecided (3)
design. Today we will be studying the Intensity Questions , bias Agree (4) Strongly Agree (5)
in questions, bias in volunteer samples and levels of measure- 2. On the whole, judges are honest.
ment , reliability and validity of findings. Strongly Disagree (1) Disagree (2) Undecided (3) Agree
We have studied in the last lecture that the questionnaire is the (4) Strongly Agree (5)
means for collecting your survey data. It should be designed The weights (shown by the numbers below the answers) are
with your data collection plan in mind. not shown on the actual questionnaire and, therefore, are not
Each of its three parts should take advantage of the strengths seen by the respondents.
of questionnaires while minimizing their weaknesses. Each of A person who feels that laws are unjust would score lower than
the different kinds of questions is useful for eliciting different one who feels that they are just. The stronger the feeling, the
types of data, but each should be constructed carefully with higher (or lower) the score.
well- developed construction guidelines in mind.
The scoring is consistent with the attitude being measured.
Properly constructed questions and well-followed survey Whether “agree” or “disagree” gets the higher weight actually
procedures will allow you to obtain the data needed to check makes no difference.
your hypothesis and, at the same time, minimize the chance
that one of the many types of bias will invalidate your survey But for ease in interpreting the results of the questionnaire, the
results. weighting scheme should remain consistent throughout the
survey.
Before studying the types of bias first we will deal with the
Intensity quest One procedure for constructing Likert-type questions is as
follows (adapted from Selltiz, et al., 1963; pp 367-368):
Intensity Questions and the Likert Scale 1. You being an investigator collect a large number of
As I have told you before , the intensity question is used to
definitive statements relevant to the attitude being
measure the strength of a respondent’s feeling or attitude on a
investigated.
particular topic.
2. You conduct and score a pretest of your survey. The most
Such questions allow you to obtain more quantitative informa- favourable response to the attitude gets the highest score for
tion about the survey subject. each question. The respondent’s total score is the sum of the
Instead of a finding that 80 percent of the respondents favour scores on all questions.
a particular proposal or issue, you can obtain results that show 5 3. If you are investigating more than one attitude on your
percent of them are strongly in favour whereas 75 percent are survey, intermix the questions for each attitude. In this
mildly in favour. manner, the respondent will be less able to guess what you
These findings are similar, but the second type of response are doing and thus more likely to answer honestly.
supplies more useful information. 4. Randomly select some questions and flip-flop the Strongly
The most common and easily used intensity (or scaled) Agree — Strongly Disagree scale to prevent the respondent
question involves the use of the Likert-type answer scale. from getting into a pattern of answering (often called a
Likert-type answer scale response set).
It allows the respondent to choose one of several (usually five) The intensity question, with its scaled answers and average
degrees of feeling about a statement from strong approval to scores, can supply quantitative information about your respon-
strong disapproval. The “questions” are in the form of dents’ attitudes toward the subject of your survey.
statements that seem either definitely favourable or definitely A number of studies have been conducted over the years
unfavourable toward the matter under consideration. The attempting to determine the limits of a person’s ability to
answers are given scores (or weights) ranging from one to the discriminate between words typically found on rating or
number of available answers, with the highest weight going to intensity scales. The results of this research can be of consider-
the answer showing the most favorable attitude toward the able value when trying to decide on the right set of phrases to
subject of the survey. use in your rating or intensity scale. When selecting phrases for a
Illustration 4-, 5-, 7-, or 9-point .
The following questions designed to measure the amount of Likert scale, you should choose phrases that are far enough apart
“anti- law” feelings this : from one another to be easily discriminated, while, at the same
1. Almost anything can be fixed up in the courts if you have time, keeping them close enough that you don’t lose potential
enough money. information.
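The weighting and reverse-scoring procedure described above is easy to express in code. A minimal sketch, assuming hypothetical item names and answers (the choice of which item to reverse-code is ours, for illustration only):

# Likert scoring: weights 1-5, reverse-coding for flip-flopped items,
# and a total score per respondent.
WEIGHTS = {"Strongly Disagree": 1, "Disagree": 2, "Undecided": 3,
           "Agree": 4, "Strongly Agree": 5}

def item_score(answer, reverse=False):
    """Convert a verbal answer to its weight; reverse-code flip-flopped items."""
    w = WEIGHTS[answer]
    return 6 - w if reverse else w

def total_score(responses, reversed_items=()):
    """Sum the item weights for one respondent (higher = stronger measured attitude)."""
    return sum(item_score(ans, item in reversed_items)
               for item, ans in responses.items())

# Hypothetical respondent on the two "anti-law" items above, with the second
# item ("judges are honest") scored in the reverse direction for consistency.
respondent = {"q1": "Agree", "q2": "Disagree"}
print(total_score(respondent, reversed_items={"q2"}))   # -> 8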

A number of studies have been conducted over the years attempting to determine the limits of a person's ability to discriminate between the words typically found on rating or intensity scales. The results of this research can be of considerable value when trying to decide on the right set of phrases to use in your rating or intensity scale. When selecting phrases for a 4-, 5-, 7-, or 9-point Likert scale, you should choose phrases that are far enough apart from one another to be easily discriminated while, at the same time, keeping them close enough that you do not lose potential information.
You should also try to gauge whether the phrases you are using are commonly understood, so that different respondents will interpret the meaning of the phrases in the same way.
An obvious example is shown with the following three phrases: Strongly Agree, Neutral, Strongly Disagree.
• These are easily discriminated, but the gap between each choice is very large.
• How would a person respond on this three-point scale if they only agreed with the question being asked?
• There is no middle ground between Strongly Agree and Neutral.
• The same thing is true for someone who wants to respond with a mere "disagree".
• Your scales must have enough choices to allow respondents to express a reasonable range of attitudes on the topic in question, but there must not be so many choices that most respondents will be unable to consistently discriminate between them.

Bias and How to Combat It
Like any scientist or experimenter, surveyors must be aware of the ways their surveys might become biased and of the available means for combatting bias.
The main sources of bias in a questionnaire are:
• a non-representative sample, leading questions and non-returns
• question misinterpretation and untruthful answers.
Now we will discuss these in detail.

Non-representative sample
Surveyors can expose themselves to possible non-representative sample bias in two ways.
1. The first is to actually choose a non-representative sample. This bias can be eliminated by careful choice of the sample.
2. The second way is to have a large number of non-returns.
• The non-return bias (also called non-respondent bias) can affect both the sample survey and the complete survey.
• The bias stems from the fact that the returned questionnaires are not necessarily evenly distributed throughout the sample. The opinions or attitudes expressed by those who returned the survey may or may not represent the attitudes or opinions of those who did not return the survey.
• It is impossible to determine which is true, since the non-respondents remain an unknown quantity.

Illustration
A survey shows that 60 percent of those returning questionnaires favour a certain policy. If the survey had a 70 percent response rate (a fairly high rate as voluntary surveys go), then
• the favourable replies are actually only 42 percent of those questioned (60 percent of the 70 percent who replied), which is less than 50 percent! (A short sketch of this arithmetic appears at the end of this discussion of bias.)
• Since little can be done to estimate the feelings of the non-returnees, especially in a confidential survey, the only solution is to minimize the number of non-returns.
Non-returns can be minimized in the following ways:
• Use follow-up letters.
These letters are sent to the non-respondents after a couple of weeks, asking them again to fill out and return the questionnaire. The content of this letter is similar to that of the cover letter.
• Use high-level sponsorship.
People tend to reply to surveys sponsored by organizations they know or respect. If possible, use the letterhead of the sponsor (the organization sponsoring the research) on the cover letter.
• Make your questionnaire attractive, simple to fill out, and easy to read.
A professional product usually gets professional results.
• Keep the questionnaire as short as possible.
You are asking for a person's time, so make your request as small as possible.
• Use your cover letter to motivate the person to return the questionnaire.
One form of motivation is to have the letter signed by an individual known to be respected by the target audience for your questionnaire. In addition, make sure the individual will be perceived by the audience as having a vested interest in the information needed.
• Use inducements to encourage a reply.
These can range from a small amount of money attached to the survey to an enclosed stamped envelope. A promise to report the results to each respondent can be helpful. If you do promise a report, be sure to send it.
Proper use of these techniques can lower the non-return rate to acceptable levels.

Misinterpretation
The second source of bias is misinterpretation of questions.
• Misinterpretation of questions can be limited by clear instructions, well-constructed questions, and judicious pilot testing of the survey.
• Biased questions can also be eliminated by constructing the questions properly and by using a pilot test.
• Finally, bias introduced by untruthful answers can be controlled by internal checks and a good motivational cover letter.
• Although bias cannot be eliminated totally, proper construction of the questionnaire, a well-chosen sample, follow-up letters, and inducements can help control it.
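To make the non-return arithmetic from the illustration above concrete, here is a minimal sketch using the same figures (the function name is ours, not from the text):

def favourable_share_of_sample(favourable_among_returns, response_rate):
    """Share of the whole sample known to be favourable, counting only returns."""
    return favourable_among_returns * response_rate

# 60% favourable among the 70% who returned the questionnaire:
print(favourable_share_of_sample(0.60, 0.70))   # -> 0.42, i.e. 42%
# The remaining 30% of the sample (the non-returns) are unknown, so the true
# level of support could lie anywhere between 42% and 72%.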

Bias in Volunteer Samples
Now we will illustrate the many diverse, and sometimes powerful, factors that influence survey findings as a result of using volunteers in a survey.
The exclusive use of volunteers in survey research represents another major source of bias to the surveyor, especially the new surveyor.
• Although it may not be immediately evident, it is nonetheless empirically true that volunteers, as a group, possess characteristics quite different from those who do not generally volunteer.
• Unless the surveyor takes these differences into consideration before choosing to use an exclusively volunteer sample, the bias introduced into the data may be so great that the surveyor can no longer confidently generalize the survey's findings to the population at large, which is usually the goal of the survey.

Conclusions Warranting Maximum Confidence
1. Volunteers tend to be better educated than non-volunteers, especially when personal contact between investigator and respondent is not required.
2. Volunteers tend to have higher social-class status than non-volunteers, especially when social class is defined by respondents' own status rather than by parental status.
3. Volunteers tend to be more intelligent than non-volunteers when volunteering is for research in general, but not when volunteering is for somewhat less typical types of research such as hypnosis, sensory isolation, sex research, small-group and personality research.
4. Volunteers tend to be higher in need for social approval than non-volunteers.

Conclusions Warranting Considerable Confidence
1. Volunteers tend to be more arousal-seeking than non-volunteers, especially when volunteering is for studies of stress, sensory isolation, and hypnosis.
2. Volunteers tend to be more unconventional than non-volunteers, especially when volunteering is for studies of sex behavior.
3. Females are more likely than males to volunteer for research in general, and more likely than males to volunteer for physically and emotionally stressful research (e.g., electric shock, high temperature, sensory deprivation, interviews about sex behavior).
4. Volunteers tend to be less authoritarian than non-volunteers.
5. Jews are more likely to volunteer than Protestants, and Protestants are more likely to volunteer than Roman Catholics.
6. Volunteers tend to be less conforming than non-volunteers when volunteering is for research in general, but not when subjects are female and the task is relatively "clinical" (e.g., hypnosis, sleep, or counseling research).

Conclusions Warranting Some Confidence
1. Volunteers tend to be from smaller towns than non-volunteers, especially when volunteering is for questionnaire studies.
2. Volunteers tend to be more interested in religion than non-volunteers, especially when volunteering is for questionnaire studies.
3. Volunteers tend to be more altruistic than non-volunteers.
4. Volunteers tend to be more self-disclosing than non-volunteers.
5. Volunteers tend to be more maladjusted than non-volunteers, especially when volunteering is for potentially unusual situations (e.g., drugs, hypnosis, high temperature, or vaguely described experiments) or for medical research employing clinical rather than psychometric definitions of psychopathology.
6. Volunteers tend to be younger than non-volunteers, especially when volunteering is for laboratory research and especially if they are female.

Conclusions Warranting Minimum Confidence
1. Volunteers tend to be higher in need for achievement than non-volunteers, especially among American samples.
2. Volunteers are more likely to be married than non-volunteers, especially when volunteering is for studies requiring no personal contact between investigator and respondent.
3. Firstborns are more likely than later-borns to volunteer, especially when recruitment is personal and when the research requires group interaction and a low level of stress.
4. Volunteers tend to be more anxious than non-volunteers, especially when volunteering is for standard, unstressful tasks and especially if they are college students.
5. Volunteers tend to be more extraverted than non-volunteers when interaction with others is required by the nature of the research.

Borg and Gall (1979) have suggested how surveyors might use this listing to combat the effects of bias in survey research. For example, they suggest that the degree to which these characteristics of volunteer samples affect research results depends on the specific nature of the investigation.
For example, a study of the level of intelligence of successful workers in different occupations would probably yield spuriously high results if volunteer subjects were studied, since volunteers tend to be more intelligent than non-volunteers. On the other hand, in a study concerned with the cooperative behavior of adults in work-group situations, the tendency for volunteers to be more intelligent may have no effect on the results, but the tendency for volunteers to be more sociable could have a significant effect.
It is apparent that the use of volunteers in research greatly complicates the interpretation of research results and their generalizability to the target population, which includes many individuals who would not volunteer.

APPENDIX A: SAMPLE QUESTIONNAIRES (Market Research)
(Created by the Women's Enterprise Society of BC)
In each of these cases, the business owners gain valuable information to help them make major decisions about their businesses. Remember that if the results of the survey aren't very positive, you need to find out WHY. The questionnaire is used as a guide; it doesn't mean you can't go into business.

A. The first questionnaire is for a select group, the customers of Speedy Photo. The owner conducted the survey during a one-week period, reaching both weekday and weekend customers.

Speedy Photo Survey
In order for us to serve our customers better, we would like to find out what you think of us. Please take a few minutes to answer the following questions while your photographs are being printed. Your honest opinions, comments and suggestions are extremely important to us.
Thank you,
Speedy Photo
1. Do you live/work in the area? (circle one or both)
• Close to home
• Close to work
2. Why did you choose Speedy Photo? (circle all that apply)
• Convenient
• Good service
• Quality
• Full-service photography shop
• Other
3. How did you learn about us? (circle one)
• newspaper
• flyer/coupon
• passing by
• recommended by someone
• other
4. How frequently do you have film printed? (please estimate)
• ______ times per month
• ______ other
5. Which aspect of our photography shop do you think needs improvement?
6. Our operating hours are from 8 am to 5:30 pm weekdays and Saturdays from 9:30 am to 6 pm. We are closed on Sundays and legal holidays. What changes in our operating hours would be better for you?
7. Your age (circle one)
• under 25
• 26-39
• 40-59
• over 60
8. Other comments:

B. This survey was done by a businessman interested in opening public storage buildings. Before he committed any time and money to the project, he sent a questionnaire to consumers within a 15-mile radius of the proposed site.

Public Storage Questionnaire
1. Are you presently renting any public storage space? Yes _____ No _____
If no, then go to question 2. If yes, then continue with 1a.
1a. Where are you currently renting storage space? (name and address)
1b. How many times a month do you visit your storage space? _______
1c. Is your storage space heated? Yes ______ No ______
1d. Approximately how much space are you renting? _________ square feet
1e. Do you think you'll need additional space in the future? Yes ______ No ______
1f. Are there any changes or improvements you would like to see in your present storage space arrangement? If yes, what would you like to see?
2. Are you planning on using any public storage space? Yes ______ No ______
2a. If you are planning to rent public storage space or may rent such space, how far of a distance are you willing to travel to use your space? ______ miles
2b. Approximately what size storage space would you need? ______ square feet
2c. How much monthly rent would you be willing to pay? $______ per square foot/month
2d. Would you require heat for your space?
Name:
Title:
Address:
Thank you very much for your co-operation.

C. This questionnaire was developed by a woman who was interested in selling southwestern jewelry made by Native Indians.

Southwestern Jewelry Questionnaire
1. Have you ever purchased or received southwestern jewelry? Yes ______ No ______
2. Have you ever purchased or received southwestern jewelry made by Native Indians? Yes ______ No ______

If yes, what type of jewelry?
Necklace ____ Ring ____ Bracelet ____ Earrings ____ Other ____
3. Would you be interested in purchasing the above-mentioned jewelry made by Native Indians? Yes ______ No ______
4. Do you know where to shop for such jewelry? Yes ______ No ______
5. When buying jewelry, what do you value the most? On a scale of 1 through 5, list in order according to your preference. One represents your most valued choice.
Craftsmanship ____ Cost ____ Uniqueness ____ Other ____

D. The last questionnaire was developed by a woman who wanted to open a fitness center and offer one-on-one training.

Fitness Center Questionnaire
1. Do you exercise? Yes ______ No ______
If no, please answer the questions in Part A. If yes, please answer the questions in Part B.
A. Please check reasons for not exercising:
____ Lack of time ____ Lack of motivation ____ Cost ____ No convenient fitness centers ____ Medical reasons
B. Check the type of exercise you do:
____ Aerobics ____ Nautilus ____ Free weights ____ Running ____ Swimming ____ Other, please specify
C. Check your age group:
____ under 25 ____ 26-35 ____ over 35
D. Where do you normally exercise?
____ at home ____ fitness center
E. How far do you live from (town of proposed center)?
____ in town ____ 10-15 miles ____ out of town
F. Do you think your town needs a fitness center? Yes _____ No _____
G. Would you be interested in one-on-one training? Yes _____ No _____
H. Please note any other suggestions or comments you might have.

Examples of Good Survey Questions
1. How do you rate the convenience of our location? (ranking)
_____ poor _____ good _____ very good _____ excellent
2. Please rank the following factors in the order of importance to you when making a buying decision for this service (1 being most important, 5 being least important). (multiple choice & ranking)
____ price ____ referral ____ location ____ availability ____ guarantee ____ other
3. Are there any other services you would like to see offered? (open-ended)
4. Do you believe that our competitors' prices are too high? (two-choice)
_____ Yes _____ No
5. What price would you be willing to pay for this product/service? (two-choice) Note: This is an important question to ask because the answer will affect one's sales revenue projections.
____ $10-20 ____ $20-30
6. Which of the following services would you like to see offered? Choose one. (multiple choice)
____ loans program ____ mentoring ____ counselling ____ research ____ other

Examples of Poor Survey Questions
Do you like this hotel?
(This does not give any valuable information, but it could be re-worded: "What do you like about this hotel? What don't you like about this hotel?")
How do you rate the service received?
____ poor ____ fair ____ good ____ very good ____ excellent
(This should have an even number of choices.)
Which of these services would you be interested in?
____ loans ____ mentoring ____ business counselling ____ information referral
(This question should have an "other" category.)
What beverages do you drink?
____ Milk ____ coke ____ non-cola drink ____ coffee ____ tea ____ juice
(This question is too broad. Most of us will have drunk some of these at some time. Is the respondent to check a number of boxes or only one?)
Source: www.Wesbc01companyELibraryMarketResearchSampleQuestionnaires.pdf

LESSON 12: MEASUREMENT AND SCALING

Levels of Measurement
We know that the level of measurement is a scale by which a variable is measured. For 50 years, with few detractors, science has used the Stevens (1951) typology of measurement levels (scales). There are three things you need to remember about this typology: anything that can be measured falls into one of the four types; the higher the level of measurement, the more precision in measurement; and every level up contains all the properties of the previous level. The four levels of measurement, from lowest to highest, are as follows:
• Nominal
• Ordinal
• Interval
• Ratio
(Figure: Levels of measurement)
Advanced statistics require:
• at least interval level measurement, so the researcher always strives for this level,
• accepting ordinal level (which is the most common) only when they have to.
Variables should be conceptually and operationally defined with levels of measurement in mind, since this is going to affect the analysis of the data later.

Types of measurement scales
Ordinal and nominal data are always discrete. Continuous data have to be at either the ratio or the interval level of measurement. Now let us discuss these in detail.

Nominal level of measurement
Nominal variables include demographic characteristics like sex, race, and religion. The nominal level of measurement describes variables that are categorical in nature. The characteristics of the data you're collecting fall into distinct categories:
• If there are a limited number of distinct categories (usually only two), then you're dealing with a dichotomous variable.
• If there are an unlimited or infinite number of distinct categories, then you're dealing with a continuous variable.

Ordinal level of measurement
• The ordinal level of measurement describes variables that can be ordered or ranked in some order of importance.
• It describes most judgments about things, such as big or little, strong or weak.
• Most opinion and attitude scales or indexes in the social sciences are ordinal in nature.

Interval level of measurement
The interval level of measurement describes variables that have more or less equal intervals, or meaningful distances between their ranks. For example, if you were to ask somebody whether they were a first, second, or third generation immigrant, the assumption is that the distance, or number of years, between each generation is the same.

Ratio level of measurement
The ratio level of measurement describes variables that have equal intervals and a fixed zero (or reference) point. It is possible to have zero income, zero education, and no involvement in crime, but rarely do we see ratio level variables in social science, since it is almost impossible to have zero attitudes on things, although "not at all", "often", and "twice as often" might qualify as ratio level measurement.
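As a rough illustration of why the level of measurement matters for analysis, the sketch below (hypothetical data, not from the text) shows which summary statistics are meaningful at each level:

# Summary statistics that each level of measurement supports.
from statistics import mode, median, mean

religion = ["Hindu", "Muslim", "Christian", "Hindu"]   # nominal: counts/mode only
satisfaction = [1, 2, 2, 3, 5]                          # ordinal: mode, median
temperature_c = [21.0, 23.5, 19.0]                      # interval: adds the mean (no true zero)
income = [12000, 0, 45000]                              # ratio: adds meaningful ratios

print(mode(religion))                    # a mode is all a nominal variable supports
print(median(satisfaction))              # ranks support a median
print(mean(temperature_c))               # equal intervals make the mean meaningful
print(income[2] / max(income[0], 1))     # a fixed zero makes "X times as much" meaningful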

Reliability and Validity
For a research study to be accurate, its findings must be both reliable and valid.
Reliability
Reliability means that the findings would be consistently the same if the study were done over again.
Validity
A valid measure is one that provides the information that it was intended to provide. The purpose of a thermometer, for example, is to provide information on the temperature; if it works correctly, it is a valid thermometer.
A study can be reliable but not valid, and it cannot be valid without first being reliable. There are many different threats to validity as well as to reliability, but an important early consideration is to ensure you have internal validity.
(Figure: three targets illustrating a measure that is not reliable (and so not valid), one that is reliable but not valid, and one that is both reliable and valid.)
Ensuring internal validity means that you are using the most appropriate research design for what you're studying (experimental, quasi-experimental, survey, qualitative, or historical), and it also means that you have screened out spurious variables as well as thought out the possible contamination of other variables creeping into your study.
Anything you do to standardize or clarify your measurement instrument to reduce user error will add to your reliability.
It is also important to consider, as early as possible, the time frame that is appropriate for what you are studying. Some social and psychological phenomena (most notably those involving behaviour or action) lend themselves to a snapshot in time. If so, your research need only be carried out for a short period of time, perhaps a few weeks or a couple of months. In such a case, your time frame is referred to as cross-sectional. Cross-sectional research is sometimes criticized as being unable to determine cause and effect. A longer time frame is called for when cross-sectional data fail to depict the cause-effect relationship; such a design is called longitudinal, and it may add years to carrying out your research.
There are many different types of longitudinal research, such as those that involve time series (for example, tracking a third world nation's economic development over four years or so). The general rule is to use longitudinal research the greater the number of variables you have operating in your study and the more confident you want to be about cause and effect.

Methods of Measuring Reliability
Now the question arises: how will you measure the reliability of a particular measure? There are four good methods of measuring reliability:
• Test-retest
• Multiple forms
• Inter-rater
• Split-half

Test-retest
The test-retest (same group) technique is to administer your test, instrument, survey, or measure to the same group of people at different points in time. Most researchers administer what is called a pretest for this, and troubleshoot bugs at the same time. All reliability estimates are usually in the form of a correlation coefficient, so here, all you do is calculate the correlation coefficient between the two sets of scores and report it as your reliability coefficient.

Multiple forms
The multiple forms technique has other names, such as parallel forms and disguised test-retest, but it is simply the scrambling or mixing up of the questions on your survey and, for example, giving it to the same group twice. It is a more rigorous test of reliability.

Inter-rater
Inter-rater reliability is most appropriate when you use assistants to do interviewing or content analysis for you. To calculate this kind of reliability, you report the percentage of agreement on the same subject between your raters, or assistants.

Split-half
Split-half reliability is estimated by taking half of your test, instrument, or survey, and analyzing that half as if it were the whole thing. You then compare the results of this analysis with your overall analysis.
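Since the reliability estimates described above reduce to a correlation between two sets of scores, they are straightforward to compute. A minimal sketch with hypothetical scores (the odd/even item split used for the split-half estimate is one common convention, not the only one):

import numpy as np

def test_retest_reliability(scores_time1, scores_time2):
    """Pearson correlation between the same group's scores at two points in time."""
    return np.corrcoef(scores_time1, scores_time2)[0, 1]

def split_half_reliability(item_matrix):
    """Correlate totals on the odd-numbered items with totals on the even-numbered items."""
    items = np.asarray(item_matrix)           # rows = respondents, columns = items
    odd_total = items[:, 0::2].sum(axis=1)
    even_total = items[:, 1::2].sum(axis=1)
    return np.corrcoef(odd_total, even_total)[0, 1]

# Hypothetical data: five respondents measured twice, and their four item scores.
t1 = [12, 15, 9, 20, 14]
t2 = [13, 14, 10, 19, 15]
answers = [[4, 5, 4, 5], [2, 1, 2, 2], [3, 3, 4, 3], [5, 5, 5, 4], [1, 2, 1, 1]]
print(round(test_retest_reliability(t1, t2), 2))
print(round(split_half_reliability(answers), 2))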

Methods of Measuring Validity
Once you find that your measurement of the variable under study is reliable, you will want to measure its validity. There are four good methods of estimating validity:
• Face
• Content
• Criterion
• Construct

Face validity
Face validity is the least statistical estimate (validity overall is not as easily quantified as reliability), as it is simply an assertion on the researcher's part claiming that they have reasonably measured what they intended to measure. It is essentially a "take my word for it" kind of validity. Usually, a researcher asks a colleague or expert in the field to vouch for the items measuring what they were intended to measure.

Content validity
Content validity goes back to the ideas of conceptualization and operationalization. If the researcher has focused too closely on only one type or narrow dimension of a construct or concept, then it is conceivable that other indicators were overlooked. In such a case, the study lacks content validity. Content validity is making sure you have covered all the conceptual space. There are different ways to estimate it, but one of the most common is a reliability approach where you correlate scores on one domain or dimension of a concept on your pretest with scores on that domain or dimension on the actual test. Another way is to simply look over your inter-item correlations.

Criterion validity
Criterion validity is using some standard or benchmark that is known to be a good indicator. There are different forms of criterion validity:
• Concurrent validity is how well something estimates actual day-by-day behavior;
• Predictive validity is how well something estimates some future event or manifestation that hasn't happened yet. It is commonly found in criminology.

Construct validity
Construct validity is the extent to which your items are tapping into the underlying theory or model of behavior. It is how well the items hang together (convergent validity) or distinguish different people on certain traits or behaviors (discriminant validity). It is the most difficult validity to achieve. You have to either do years and years of research or find a group of people to test that have the exact opposite traits or behaviors you are interested in measuring.

Attitude Measurement
Many of the questions in a marketing research survey are designed to measure attitudes. Attitudes are a person's general evaluation of something. Customer attitude is an important factor for the following reasons:
• Attitude helps to explain how ready one is to do something.
• Attitudes do not change much over time.
• Attitudes produce consistency in behavior.
• Attitudes can be related to preferences.
Attitudes can be measured using the following procedures:
• Self-reporting - subjects are asked directly about their attitudes. Self-reporting is the most common technique used to measure attitude.
• Observation of behaviour - assuming that one's behaviour is a result of one's attitudes, attitudes can be inferred by observing behaviour. For example, one's attitude about an issue can be inferred by whether he/she signs a petition related to it.
• Indirect techniques - use unstructured stimuli such as word association tests.
• Performance of objective tasks - assumes that one's performance depends on attitude. For example, the subject can be asked to memorize the arguments of both sides of an issue. He/she is more likely to do a better job on the arguments that favor his/her stance.
• Physiological reactions - the subject's response to a stimulus is measured using electronic or mechanical means. While the intensity can be measured, it is difficult to know whether the attitude is positive or negative.
• Multiple measures - a mixture of techniques can be used to validate the findings; this is especially worthwhile when self-reporting is used.

There are several types of attitude rating scales:

Equal-appearing interval scaling
In this scale a set of statements is assembled. These statements are selected according to their position on an interval scale of favorableness. Statements are chosen that have a small degree of dispersion. Respondents are then asked to indicate the statements with which they agree.

Likert method of summated ratings
In this scale a statement is made and the respondents indicate their degree of agreement or disagreement on a five-point scale (Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree). It actually extends beyond the simple ordinal choices of "strongly agree", "agree", "disagree", and "strongly disagree". In fact, Likert scaling is initially assigned through a process that calculates the average index score for each item in an index and subsequently ranks them in order of intensity (recall the process for constructing Thurstone scales). Once ordinality has been assigned, the assumption is that a respondent choosing a response weighted with, say, 15 out of 20 on an increasing scale of intensity is placed at that level for the index.

Example of a Likert scale:
How would you rate the following aspects of your food store?
             Extremely important                      Extremely unimportant
Service      1    2    3    4    5    6    7
Check-outs   1    2    3    4    5    6    7
Bakery       1    2    3    4    5    6    7
Deli         1    2    3    4    5    6    7
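A minimal sketch (with hypothetical responses, not from the text) of how ratings collected on the 7-point food-store scale above might be summarized per attribute:

# Mean importance rating per attribute on a 1-7 scale (1 = extremely important).
ratings = {
    "Service":    [1, 2, 2, 3, 1],
    "Check-outs": [4, 5, 3, 4, 4],
    "Bakery":     [2, 2, 1, 3, 2],
    "Deli":       [6, 5, 7, 6, 6],
}
for attribute, scores in ratings.items():
    avg = sum(scores) / len(scores)
    print(f"{attribute}: mean rating = {avg:.1f}")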

opposite “ideas” or concepts. Bobbie’s illustration provides the The entire exercise is really just a way of indicating that the

RESEARCH METHODOLOGY
best illustration of the concept. degree to which a set of responses accurately reflects the scalar
assumptions is an indication of the degree to which the entire
Semantic Differential: Feelings about Musical Selections
set could be recreated from the scale itself. What the above
illustration shows is that if we were to project an imaginary
Very
Much
Some-
what
Neither Some-
what
Very
Much
“sample” from the coefficient of reproducibility of 99.3%, then
Enjoyable Unenjoyable
the projection would reflect the real sample to that degree.
Guttman scaling shows that a well constructed scale can very
Simple
Complex
accurately the profile of a response set. But then, you only know
the coefficient of reproducibility after you have run the
Discordant Harmonic
survey and crunched the numbers so it is not a predictive tool,
Traditional Modern it is a proof of the strength of the scale as a measure.
A brief word on typologies is in order. So far, we have limited
One of the first things that strikes you is the highly interpretative nature of Bobbie's example. Choices such as "enjoyable" and "unenjoyable" simply reflect preference, but the other choices are sufficiently ambiguous as to invite imprecise understanding.

If you are seeking nothing more than attitudinal information about an abstract social artifact such as a piece of music, the semantic differential may be usable. Otherwise, its ambiguity in application remains problematic.

As with the Likert, Bogardus and Thurstone scales, Guttman scaling seeks to place indicators into an ordinal progression from "weak" indicators to "strong" ones (that, after all, is the difference between a scale and an index in the first place). It likewise assumes that a respondent indicating a given level of preference, attitude or belief will also endorse all "weaker" indicators of the same thing.

However, the premise of the Guttman scale extends even further, in that it examines all of the responses to the survey and separates out the response sets that do not exactly reflect the scalar pattern; that is, the response sets that violate the assumption that a respondent choosing one level of response would give the same type of response to all inferior levels.

The number of response sets that violate the scalar pattern is compared to the number that do reflect the pattern, and the result is referred to as the coefficient of reproducibility. Again, Bobbie's illustration provides a very clear understanding.

Guttman Scaling and Coefficient of Reproducibility

                Response   Number     Index    Scale    Total
                Pattern    of Cases   Scores   Scores   Scale Errors
Scale Types     + + +        612        3        3        0
                + + =        448        2        2        0
                + = =         92        1        1        0
                = = =         79        0        0        0
Mixed Types     = + =         15        1        2       15
                + = +          5        2        3        5
                = = +          2        1        0        2
                = + +          5        2        3        5

Coefficient of Reproducibility = 1 - (Number of Errors / Number of Guesses)

In the example = 1 - 27 / (1,258 x 3) = 1 - 27 / 3,774 = .993, or 99.3%
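To make the arithmetic concrete, here is a small illustrative sketch in Python (not part of the original text). It recomputes the coefficient of reproducibility from the patterns in the table above, counting, for each mixed pattern, the fewest responses that would have to change to reach a perfect scale type.

# Illustrative sketch (not from the original text): recomputing the coefficient
# of reproducibility for the table above. 1 stands for "+" and 0 for "=".
patterns = [
    ((1, 1, 1), 612), ((1, 1, 0), 448), ((1, 0, 0), 92), ((0, 0, 0), 79),
    ((0, 1, 0), 15), ((1, 0, 1), 5), ((0, 0, 1), 2), ((0, 1, 1), 5),
]

def min_errors(pattern):
    # fewest responses that must change to reach a perfect scale type
    n = len(pattern)
    scale_types = [tuple([1] * k + [0] * (n - k)) for k in range(n + 1)]
    return min(sum(a != b for a, b in zip(pattern, s)) for s in scale_types)

errors = sum(min_errors(p) * cases for p, cases in patterns)
guesses = sum(cases for _, cases in patterns) * 3
print(errors, guesses, round(1 - errors / guesses, 3))   # 27 3774 0.993

Running it reproduces the 27 errors out of 3,774 responses quoted above, i.e. a coefficient of .993.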
Often we cannot limit ourselves to an examination of unidirectional variables, that is, one thing in one direction (attitudes for or against abortion, etc.). Often relationships are better explained as the function of the intersection of several variables. This is referred to as a typology. Remember what we have noted about making sure that your indices and scales are comprised of single-dimension indicators. Recall that while "religion" can have a strong correlation with "attitudes on abortion", that does not mean that a question on religion belongs in an index or scale of questions on "attitudes on abortion". But if you wish to examine the intersection of the two, you can construct a typology showing, for example, that "Catholics" may be "conservative" on "abortion" but remain "liberal" on "other human rights".

Bobbie warns us that typologies are useful as independent variables ("religion" may be a good causal factor in "attitudes on abortion") but can be problematic as dependent variables (explaining the "why" isn't always clear). Catholics may be more anti-abortion because the church has forbidden it, but what of other groups? You can get onto some very shaky ground using typologies as the "effect" or dependent variable.

Example of semantic differential:
How would you describe Kmart, Target, and Wal-Mart on the following scale?
Clean        ___ ___ ___ ___ ___  Dirty
Bright       ___ ___ ___ ___ ___  Dark
Low quality  ___ ___ ___ ___ ___  High quality
Conservative ___ ___ ___ ___ ___  Innovative

Stapel Scale
It is similar to the semantic differential scale, except that numbers identify the points on the scale, only one statement is used, a negative number is marked if the respondent disagrees, and there are 10 positions instead of seven. This scale does not require that bipolar adjectives be developed, and it can be administered by telephone.

Q-sort Technique
In the Q-sort technique the respondent is forced to construct a normal distribution by placing a specified number of cards in one of 11 stacks according to how desirable he/she finds the characteristics written on the cards. This technique is faster and less tedious for subjects than paired comparison measures. It also forces the subject to conform to quotas at each point of the scale so as to yield a normal or quasi-normal distribution. Thus we can say that the objective of the Q-technique is the intensive study of individuals.
Selection of an appropriate attitude measurement scale:
We have examined a number of different techniques which are available for the measurement of attitudes. Each method has certain strengths and weaknesses. Almost all the techniques can be used for the measurement of any component of attitudes, but not all the techniques are suitable for all purposes. The selection depends upon the stage and size of the research. Generally, the Q-sort and semantic differential scales are preferred in the preliminary stages. The Likert scale is used for item analysis. For specific attributes the semantic differential scale is very appropriate. Overall, the semantic differential is simple in concept and the results obtained are comparable with those of more complex, one-dimensional methods. Hence it is widely used.

Limitations of Attitude Measurement Scales:
The main limitation of these tools is the emphasis on describing attitudes rather than predicting behaviour. This is primarily because of a lack of models that relate attitudes to behaviour.
Tutorial
Prepare a questionnaire on any one of the following objectives:
1. To assess corporate productivity
2. Job analysis / needs and satisfaction level of employees / motivation level of employees / job involvement, etc.
3. Product testing / feedback on after-sales service
LESSON 13:
SAMPLING ISSUES IN RESEARCH

Students, today we shall be taking up various issues in sampling. To understand them better it is necessary that we first cover certain related terms.

In any investigation the interest lies in assessing the general magnitude of, and studying the variation in, one or more characteristics of individuals belonging to a group.

Population
This group of individuals is called the population or universe. Thus we can define a population as any entire collection of people, animals, plants or things on which we may collect data. It is the entire group of interest, which we wish to describe or about which we wish to draw conclusions.
It is often impractical for an investigator to completely enumerate the whole population for a statistical investigation. For example, if we want to have an idea about the average monthly income of people residing in India, we would have to enumerate all the earning individuals in the country, which is a very difficult task.
Also, when the population is large (or infinite), or if units are destroyed during investigation, it is not possible to enumerate or investigate the whole population. Even when the population is finite, 100% inspection is often not possible because of factors like time, money and administrative convenience.

Sampling
Sampling is the selection of part of an aggregate or totality, known as the population, on the basis of which a decision concerning the population is made. Thus, a finite subset of statistical individuals in a population is called a sample, and the number of individuals in a sample is called the sample size.
Sampling Unit: A unit is a person, animal, plant or thing which is actually studied by a researcher; the basic object upon which the study or experiment is executed. For example: a person; a sample of soil; a pot of seedlings; a zip code area; a doctor's practice.

Activity
Define the population and sampling unit in each of the following problems:
1. Popularity of family planning among families having more than two children
____________________________________________
2. Election for a political office with adult franchise
____________________________________________
3. Measurement of the volume of timber available in a forest
____________________________________________
4. Annual yield of apple fruit in a hilly district
____________________________________________
5. Study of the child mortality rate in a district
____________________________________________

Parameter and Statistic
A parameter is an unknown value, and therefore it has to be estimated. Parameters are used to represent a certain population characteristic. For example, the population mean µ is a parameter that is often used to indicate the average value of a quantity.
Within a population, a parameter is a fixed value that does not vary. Each sample drawn from the population has its own value of any statistic that is used to estimate this parameter. For example, the mean of the data in a sample is used to give information about the overall mean µ in the population from which that sample was drawn.
A statistic is a quantity that is calculated from a sample of data. It is used to give information about unknown values in the corresponding population. For example, the average of the data in a sample is used to give information about the overall average in the population from which that sample was drawn.
A statistic is a function of an observable random sample. It is therefore an observable random variable.
Statistics are often assigned Roman letters (e.g. the sample mean x̄ and s), whereas the equivalent unknown values in the population (parameters) are assigned Greek letters (e.g. µ, σ).

Variables
A variable is a characteristic or phenomenon which may take different values, such as weight or gender, these being different from individual to individual. Any object or event which can vary in successive observations, either in quantity or in quality, is called a "variable".
Variables are classified accordingly as quantitative or qualitative. A qualitative variable does not vary in magnitude in successive observations; the values of a qualitative variable are called "attributes". A quantitative variable does vary in magnitude in successive observations; the values of a quantitative variable are called "variates".
Randomness
Randomness means unpredictability. The fascinating fact about inferential statistics is that, although each random observation may not be predictable when taken alone, collectively the observations follow a predictable pattern called a distribution function. For example, the distribution of a sample average is approximately normal for sample sizes over 30. In other words, an extreme value of the sample mean is less likely than an extreme value among a few raw data.

Desirable Characteristics of Sample Statistics
1. Unbiased
The arithmetic mean of the statistic calculated for all possible samples of a given size n exactly equals the population parameter.
2. Sufficient
The statistic summarizes all relevant information about the parent population contained in the sample, while ignoring any sample-specific information.
3. Efficient
The more the statistic values for various samples cluster around the true parameter value, the lower the sampling error and the greater the efficiency. Consider an archer shooting at a target: the archer wants to be accurate, but also wants the arrows to cluster as closely to the centre of the target as possible.
4. Consistent
The larger the sample size, the closer the statistic should be to its parameter value.
Every statistic in a sample might have a different sampling distribution.

Sampling Distribution
The sampling distribution is a hypothetical device that figuratively represents the distribution of a statistic (some number you have obtained from your sample) across an infinite number of samples. You have to remember that your sample is just one of a potentially infinite number of samples that could have been drawn. While it is very likely that any statistic you generate from your sample will be near the centre of the sampling distribution, just by luck of the draw, the researcher normally wants to find out exactly where the centre of this sampling distribution is. That is because the centre of the sampling distribution represents the best estimate of the population average, and the population is what you want to make inferences to.
The average of the sampling distribution is the population parameter, and inference is all about making generalizations from statistics (sample) to parameters (population). You can use some of the information you have collected thus far to calculate the sampling distribution, or more accurately, the sampling error.
In statistics, any standard deviation of a sampling distribution is referred to as the standard error (to keep it separate in our minds from the standard deviation). In sampling, the standard error is referred to as sampling error. The definitions are as follows:
• Standard deviation — the spread of scores around the average in a single sample
• Standard error — the spread of averages around the average of averages in a hypothetical sampling distribution
You never actually see the sampling distribution. All you have to work with is the standard deviation of your sample. The greater your standard deviation, the greater the standard error (and your sampling error).
The standard error (a term first used by Yule, 1897) is the standard deviation of a mean and is computed as:
Standard Error = √(s²/n)
where s² is the sample variance and n is the sample size.
Let us illustrate the sampling distribution with an example; it will make the topic very clear.

Example of the creation of a sampling distribution
Six rocks were collected, and each was weighed, labeled, and put in a bag. This forms the population from which I can draw samples. Suppose I want to construct a sampling distribution of the mean weight of 3 rocks from this population of 6. To do this, I must enumerate all samples of size 3 which can be drawn from a population of size 6 (there are 20 in total) and compute the mean of each. The frequency distribution created from these 20 means is the sampling distribution I want. Below is the table used to create this distribution, and below that is the actual sampling distribution.

Example: Creation of a Sampling Distribution

Rock ID      1      2      3      4      5      6
Weight (g)   11.24  13.48  16.90  24.28  20.89  10.43

(1 = rock included in the sample, 0 = rock not included)

            Rock 1  Rock 2  Rock 3  Rock 4  Rock 5  Rock 6  Sample Mean
Sample 1      1       1       1       0       0       0       13.87
Sample 2      1       1       0       1       0       0       16.33
Sample 3      1       0       1       1       0       0       17.47
Sample 4      0       1       1       1       0       0       18.22
Sample 5      1       1       0       0       1       0       15.20
Sample 6      1       0       1       0       1       0       16.34
Sample 7      0       1       1       0       1       0       17.09
Sample 8      1       0       0       1       1       0       18.80
Sample 9      0       1       0       1       1       0       19.55
Sample 10     0       0       1       1       1       0       20.69
Sample 11     1       1       0       0       0       1       11.72
Sample 12     1       0       1       0       0       1       12.86
Sample 13     0       1       1       0       0       1       13.60
Sample 14     1       0       0       1       0       1       15.32
Sample 15     0       1       0       1       0       1       16.06
Sample 16     0       0       1       1       0       1       17.20
Sample 17     1       0       0       0       1       1       14.19
Sample 18     0       1       0       0       1       1       14.93
Sample 19     0       0       1       0       1       1       16.07
Sample 20     0       0       0       1       1       1       18.53

The Sampling Distribution

Bin         >11, <=13   >13, <=15   >15, <=17   >17, <=19   >19, <=21
Frequency       2           4           6           6           2
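As a cross-check on the table, the same enumeration can be done with Python's standard library. This is only an illustrative sketch, not part of the original lesson; the rock weights are those given above.

# Illustrative sketch (not from the original text): enumerating the 20 samples
# of 3 rocks with the standard library and binning the sample means.
from itertools import combinations
from statistics import mean, pstdev

weights = {1: 11.24, 2: 13.48, 3: 16.90, 4: 24.28, 5: 20.89, 6: 10.43}
means = [mean(weights[r] for r in rocks) for rocks in combinations(weights, 3)]

bins = [(11, 13), (13, 15), (15, 17), (17, 19), (19, 21)]
freq = {f">{lo}, <={hi}": sum(lo < m <= hi for m in means) for lo, hi in bins}

print(len(means))                 # 20 samples in total
print(freq)                       # frequencies 2, 4, 6, 6, 2 as in the table
print(round(pstdev(means), 2))    # spread of the 20 means, i.e. their standard error

The printed frequencies match the bins in the table, and the standard deviation of the 20 means is the spread of this particular sampling distribution.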
Relation between Standard Error and Sample Size
Standard error is also related to sample size: the larger your sample, the smaller the standard error. You are not reducing bias by increasing the sample size, only coming closer to the total number in the population. Validity and sampling error are somewhat similar; however, you can estimate population parameters even from small samples.
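The relationship can be seen numerically. The sketch below is illustrative only; the sample standard deviation s = 12 is an assumed figure.

# Illustrative sketch: the standard error s/sqrt(n) for a fixed sample standard
# deviation s = 12 (an assumed figure) and growing sample sizes.
from math import sqrt

s = 12.0
for n in (10, 40, 160, 640):
    print(n, round(s / sqrt(n), 2))   # 3.79, 1.9, 0.95, 0.47 - quadrupling n halves it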
Principles of sample survey
The theory of sampling is based on the following important principles:
1. Principle of statistical regularity
2. Principle of validity
3. Principle of optimization

1. The principle of statistical regularity stresses the desirability and importance of selecting a sample at random, so that each and every unit in the population has an equal chance of being selected in the sample. An immediate derivation from this principle is the principle of inertia of large numbers, which states that "other things being equal, as the sample size increases, the results tend to be more reliable and accurate." For example, in a coin-tossing experiment the results will be approximately 50% heads and 50% tails, provided we perform the experiment a fairly large number of times.
2. The principle of validity means the sample design should enable us to obtain valid tests and estimates about the parameters of the population. Samples obtained by the technique of probability sampling satisfy this principle.
3. The principle of optimization impresses upon us the need to obtain optimum results in terms of efficiency and cost of the design with the resources at our disposal. The reciprocal of the sampling variance of an estimate provides a measure of its efficiency, while a measure of the cost of the design is provided by the total expenses incurred in terms of money and man-hours. The principle of optimization consists in:
a. achieving a given level of efficiency at minimum cost, or
b. obtaining the maximum possible efficiency at a given level of cost.

Sampling and Non-sampling Error
We can broadly classify the errors involved in the process of research into two heads: sampling errors and non-sampling errors.

Sampling Errors
These have their origin in sampling and arise out of the fact that only a part of the population is used to estimate the population parameters and draw inferences about the population. Therefore, sampling errors are absent in a complete enumeration. Sampling errors arise basically because of the following reasons:
a. Faulty selection of the sample: using a defective technique for selecting a sample, e.g. purposive or judgement sampling, in which the investigator deliberately chooses the sample in order to deduce the desired results. This bias can be overcome by adhering to simple random sampling.
b. Substitution: substituting one unit for another when some difficulty arises in studying the unit originally chosen leads to bias, because the characteristics possessed by the substituted unit will usually differ from those possessed by the unit originally included in the sample.
c. Faulty demarcation of sampling units: this is significant particularly in area surveys, such as agricultural experiments in the field or crop-cutting surveys, etc.
d. Constant error due to an improper choice of the statistic for estimating the population parameter: for example, while estimating the population standard deviation, if we divide the sum of squares by n instead of n-1 we get a biased estimate of the population standard deviation.

Non-sampling Errors
Non-sampling errors primarily arise at the stages of:
• observation,
• ascertainment, and
• processing of data.
They are, therefore, present in both complete enumeration and sample surveys. Non-sampling errors can occur at every stage of planning or execution of a census or sample survey. It is very difficult to prepare an exhaustive list of the sources of non-sampling errors; however, some of the more important ones arise because of the following factors:
1. Faulty planning or definition
2. Response errors
3. Non-response bias
4. Errors in coverage
5. Compiling errors
6. Publication errors
Now we will discuss them in detail.

1. Faulty planning or definition:
As we all know, the foremost step in research is explicitly stating the objectives of the study. These objectives are then translated into:
• a set of definitions of the characteristics for which data are to be collected, and
• a set of specifications for collection, processing and publishing.
Here non-sampling errors may arise due to:
a) data specification being inadequate and inconsistent with respect to the objectives of the study,
b) errors due to the location of the units and the actual measurement of the characteristics, errors in recording the measurements, and errors due to ill-designed questionnaires, and
c) a lack of trained and qualified investigators.

2. Response errors
These arise as a result of the responses furnished by the respondents, because of the following reasons:
• The response error may be accidental; e.g. the respondent may misunderstand a particular question and accordingly furnish improper information unintentionally.
• Prestige bias
• Self-interest
• Bias due to the interviewer
• Failure of the respondent's memory

3. Non-response bias
Non-response biases occur if you do not obtain full information from all the sampling units.

4. Errors in coverage
If the objectives are not stated concisely in a clear-cut manner, it may happen that:
• certain units which should not be included get included, and
• certain units which must be included get excluded.

5. Compiling errors
Various operations of data processing, such as editing and coding of the responses, punching of cards, tabulation and summarizing the original observations made in the study, are potential sources of error. Compilation errors can be controlled through verification, consistency checks, etc.

6. Publication errors
The errors committed during presentation and printing of tabulated results are basically due to two sources:
• the mechanics of publication, such as proofing errors and the like, and
• failure of the survey organization to point out the limitations of the statistics.

Advantages of Sampling over Complete Enumeration
The following are the advantages of, and/or necessities for, sampling in statistical decision-making:
1. Cost: Cost is one of the main arguments in favour of sampling, because often a sample can furnish data of sufficient accuracy at a much lower cost than a census.
2. Accuracy: Much better control over data collection errors is possible with sampling than with a census, because a sample is a smaller-scale undertaking.
3. Timeliness: Another advantage of a sample over a census is that the sample produces information faster. This is important for timely decision making.
4. Amount of information: More detailed information can be obtained from a sample survey than from a census, because it takes less time, is less costly, and allows us to take more care in the data processing stage.
5. Destructive tests: When a test involves the destruction of the item under study, sampling must be used. Statistical methods can be used to determine the optimal sample size within an acceptable cost.

Limitations of Sampling
The advantages of sampling over complete enumeration can be realised only if:
• the sampling units are drawn in a scientific manner,
• the appropriate sampling technique is used, and
• the sample size is adequate.
Sampling theory has its own limitations and problems, which may be briefly outlined as follows:
1. Proper care has to be taken in the planning and execution of the sample survey, otherwise the results obtained might be inaccurate and misleading.
2. Sampling requires trained and efficient personnel and adequate equipment for its planning, execution and analysis; in the absence of these, the results of sampling are not trustworthy.
3. If information is required about each and every unit of the population, you will have to go for complete enumeration only; in that case sampling will not be an appropriate method.

Types of Sampling
The type of enquiry and the nature of the data fundamentally determine the technique or method of selecting a sample. The procedure of selecting a sample may be broadly classified under the following three heads:
• Non-probability sampling methods: subjective or judgement sampling
• Probability sampling
• Mixed sampling
These we will be studying in detail in the next lecture.
Now, briefly tell me what concepts you have studied today?
Yes, we studied various concepts like population, statistic, variables (qualitative and quantitative), randomness, the desirable characteristics of sample statistics, the sampling distribution, the standard error, the principles of sample survey, sampling and non-sampling errors, and the merits and limitations of sampling.
Notes -

LESSON 14:
DESIGNING SAMPLE

Hello students, today we shall be continuing our discussion of issues in sampling. Before proceeding further, let us recapitulate what we studied in the last lecture.

We have learned that a sample is a part of an aggregate selected with a view to obtaining information about the whole group, also known as the population. The population is composed of a number of units; the total number of units in the population and in the sample are known as the population size and the sample size. We also came to know that sampling is the scientific technique of drawing a sample. Any characteristic of a population is called a parameter and that of a sample is called a statistic. Also, the standard deviation of a sampling distribution is called the standard error; the larger the sample size, the lower the standard error. We have also studied various sources of sampling and non-sampling error along with the principles of sampling.

For the process of statistical inference to be valid we must ensure that we take a representative sample of our population. Whatever method of sample selection we use, it is vital that the method is described.

How do we know if the characteristics of a sample we take match the characteristics of the population we are sampling? The short answer is: we don't. We can, however, take steps that make it as likely as possible that the sample will be representative of the population. Two simple and effective methods of doing this are making sure the sample size is large and making sure it is randomly selected.

A large sample is more likely to be representative of a population than a small one. Think of extreme cases: if we want to know the average height of the population and we select just one person and measure their height, it is unlikely to be close to the population average. If we took 1,000,000 people, measured their heights and took the average, this figure would be likely to be close to the population average.

Types of Sampling
The type of enquiry you want to have and the nature of the data that you want to collect fundamentally determine the technique or method of selecting a sample. The procedure of selecting a sample may be broadly classified under the following three heads:
• Non-probability sampling methods: subjective or judgement sampling
• Probability sampling
• Mixed sampling
Now let us discuss these in detail. We will start with non-probability sampling and then move on to probability sampling.

Non-Probability Sampling Methods
The common feature of non-probability sampling methods is that subjective judgements are used to determine which population units are contained in the sample. We classify non-probability sampling into four groups:
A. Convenience samples
B. Judgement samples
C. Quota samples
D. Snowball samples

A. Convenience Samples
• These samples are used primarily for reasons of convenience.
• They are used for exploratory research and in situations requiring speed.
• They are often used for new product formulations or to provide gross sensory evaluations, using employees, students, peers, etc.
Convenience sampling is extensively used in marketing studies and elsewhere. This will be clear from the following examples:
1. Suppose a marketing research study aims at estimating the proportion of pan (betel leaf) shops in Delhi which stock a particular drink, Maaza. It is decided to take a sample of size 150. What the investigator does is visit 150 pan shops near his place of office, as that is very convenient to him, and observe whether each pan shop stocks Maaza or not. This is definitely not a representative sample, as most pan shops in Delhi had no chance of being selected; only those pan shops near the office of the investigator had a chance of being selected.
2. Another example where convenience sampling is often used is test marketing. There might be some cities whose demographic make-up is approximately the same as the national average. While conducting marketing tests for new products, the researcher may take samples of consumers from such cities and obtain consumer evaluations of these products, as these are supposed to represent "national" tastes.
3. A ball pen manufacturing company is interested in knowing opinions about the ball pen it is presently manufacturing (such as smooth flow of ink, resistance to breakage of the cover, etc.) with a view to modifying it to suit customer needs. The job is given to a marketing researcher who visits a college near his place of residence and asks a few students (a convenient sample) their opinion about the ball pen in question.
4. As another example, a researcher might visit a few shops to observe what brand of vegetable oil people are buying, so as to make inferences about the share of a particular brand he is interested in.
B. Judgement Samples
• A judgement sample is one in which the selection criteria are based upon the researcher's personal judgement that the members of the sample are representative of the population under study.
• It is used for most test markets and many product tests conducted in shopping malls.
If personal biases are avoided, then the relevant experience and the acquaintance of the investigator with the population may help to choose a relatively representative sample from the population. It is not possible to make an estimate of the sampling error, as we cannot determine how precise our sample estimates are.
Judgement sampling is used in a number of cases, some of which are:
1. Suppose we have a panel of experts to decide about the launching of a new product in the next year. If for some reason or other a member drops out of the panel, the chairman of the panel may suggest the name of another person whom he thinks has the same expertise and experience to be a member of the said panel. This new member was chosen deliberately - a case of judgement sampling.
2. The method could be used in a study involving the performance of salesmen. The salesmen could be grouped into top-grade and low-grade performers according to certain specified qualities. Having done so, the sales manager may indicate who, in his opinion, would fall into which category. Needless to say, this is a biased method; however, in the absence of any objective data, one might have to resort to this type of sampling.

C. Quota Samples
This is a very commonly used sampling method in marketing research studies. Here the sample is selected on the basis of certain basic parameters, such as age, sex, income and occupation, that describe the nature of the population, so as to make the sample representative of the population. The investigators or field workers are instructed to choose a sample that conforms to these parameters. The field workers are assigned quotas of the number of units satisfying the required characteristics on which data should be collected. However, before collecting data on these units the investigators are supposed to verify that the units qualify for these characteristics.
Suppose we are conducting a survey to study the buying behaviour of a product and it is believed that the buying behaviour is greatly influenced by the income level of the consumers. We assume that it is possible to divide our population into three income strata: a high-income group, a middle-income group and a low-income group. Further, it is known that 20% of the population is in the high-income group, 35% in the middle-income group and 45% in the low-income group. Suppose it is decided to select a sample of size 200 from the population. Therefore, samples of size 40, 70 and 90 should come from the high-income, middle-income and low-income groups respectively. Now the various field workers are assigned quotas to select the sample from each group in such a way that a total sample of 200 is selected in the same proportion as mentioned above. For example, the first field …
• Selection is done by non-probability means and is based upon the researcher's judgement of appropriate demographics.
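The quota arithmetic above can be stated in a couple of lines of code. This is an illustrative sketch only, using the hypothetical figures from the example.

# Illustrative sketch using the figures from the example above: turning the
# income-group proportions into field-worker quotas for a sample of 200.
proportions = {"high income": 0.20, "middle income": 0.35, "low income": 0.45}
total = 200
quotas = {group: round(total * p) for group, p in proportions.items()}
print(quotas)   # {'high income': 40, 'middle income': 70, 'low income': 90}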
D. Snowball Sampling
• These are samples in which the selection of additional respondents (after the first small group of respondents is selected) is based upon referrals from the initial set of respondents.
• It is used to sample low-incidence or rare populations.
• It is done for the efficiency of finding the additional, hard-to-find members of the sample.

Advantages of Non-probability Samples
• It is much cheaper than probability sampling.
• It is acceptable when the level of accuracy of the research results is not of utmost importance.
• Less research time is required than for probability samples.
• It often produces samples quite similar to the population of interest when conducted properly.

Disadvantages of Non-probability Samples
• You cannot calculate the sampling error. Thus, the minimum required sample size cannot be calculated, which means that you (the researcher) may sample too few or too many members of the population of interest.
• You do not know the degree to which the sample is representative of the population from which it was drawn. The research results cannot be projected (generalized) to the total population of interest with any degree of confidence.

Probability Sampling
Probability sampling is the scientific method of selecting samples according to some law of chance, in which each unit in the population has a definite, pre-assigned probability of being selected in the sample. The different types of probability sampling are:
1. Sampling in which each unit has an equal chance of being selected.
2. Sampling in which units have different probabilities of being selected.
3. Sampling in which the probability of selection of a unit is proportional to its size.

Simple Random Sampling
It is the technique of drawing a sample in such a way that each unit of the population has an equal and independent chance of being included in the sample. In this method an equal probability of selection is assigned to each unit of the population at the first draw.
It also implies an equal probability of selecting the remaining units in the subsequent draws. Thus, in a simple random sample from a population of size N, the probability of drawing any unit in the first draw is 1/N, and the probability of drawing a second unit in the second draw is 1/(N-1). The probability of selecting a specified unit of the population at any given draw is equal to the probability of its being selected at the first draw.

Selection of a simple random sample:
As we all know, simple random sampling refers to that method of selecting a sample in which each and every unit of the population is given an independent and equal chance of being included in the sample. But a random sample does not depend only upon the selection of units; it also depends on the size and nature of the population. One procedure may be good and simple for a small sample but may not be good for a large population. Generally, the method of selecting a sample must be independent of the properties of the sampled population. Proper precautions should be taken to ensure that your selected sample is random, although some human bias is inherent in any sampling scheme administered by human beings.
Random selection is best for two reasons: it eliminates bias, and statistical theory is based on the idea of random sampling. We can select a simple random sample through the use of tables of random numbers, a computerized random number generator, or the lottery method. Thus, the main methods of drawing a simple random sample are:
• the lottery method (sealed envelopes, slips or cards),
• mechanical randomisation using tables of random numbers, and
• computerized random number generators.

Lottery Method
This is the simplest method of selecting a random sample. We will illustrate it by means of an example for better understanding. Suppose we want to select "r" candidates out of "n". We assign the numbers 1 to n; that is, to each and every candidate we assign one exclusive number. These numbers are then written on n slips, which are made as homogeneous as possible in shape, size, colour, etc. These slips are then put in a bag and thoroughly shuffled, and then "r" slips are drawn one by one. The "r" candidates corresponding to the numbers on the slips drawn constitute a random sample.
This method of selecting a simple random sample is independent of the properties of the population. Generally, in place of slips you can also use cards: we make one card for each unit of the population by writing on it the number assigned to that particular unit. The pack of cards is then a miniature of the population for sampling purposes. The cards are shuffled a number of times and then a card is drawn at random from them. This is one of the most reliable methods of selecting a random sample.
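The lottery method is easy to mimic on a computer. The sketch below is illustrative only (it is not from the original text); the population of 50 candidates and the sample size of 10 are assumed figures.

# Illustrative sketch: the lottery method done with the standard library instead
# of slips. random.sample draws r units without replacement, each with an equal
# chance of inclusion.
import random

candidates = [f"candidate_{i}" for i in range(1, 51)]   # n = 50 assumed units
sample = random.sample(candidates, k=10)                # draw r = 10 at random
print(sample)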
Mechanical Randomisation or Random Numbers Method
The lottery method explained above is very time consuming and cumbersome to use if the population is very large. Therefore, the most practical and inexpensive method of selecting a random sample consists in the use of random number tables, which have been constructed so that each of the digits 0, 1, 2, ..., 9 appears with approximately the same frequency and independently of the others.
If we have to select a simple random sample from a population of size N ≤ 99, then the digits can be combined two by two to give pairs from 00 to 99. Similarly, if N ≤ 999 or N ≤ 9999, and so on, then combining the digits three by three (or four by four, and so on) we get numbers from 000 to 999, or 0000 to 9999, and so on. Since each of the digits 0 to 9 appears with approximately the same frequency and independently of the others, so does each of the pairs from 00 to 99, the triplets from 000 to 999, the quadruplets from 0000 to 9999, and so on.
Thus, the method of drawing the random sample consists of the following steps:
• Identify the N units in the population with the numbers from 1 to N.
• Select at random any page of the random number tables and pick up the numbers in any row, column or diagonal at random.
• The population units corresponding to the numbers selected in the previous step comprise the random sample.
I will tell you about the different sets of random numbers commonly used in practice. The numbers in these tables have been subjected to various statistical tests for randomness of a series, and their randomness has been well established for all practical purposes.
1. Tippett's (1927) random number tables (Tracts for Computers No. 15, Cambridge University Press) consist of 10,400 four-digit numbers, giving in all 10,400 x 4, i.e. 41,600 digits, selected at random from the British Census Report.
2. Fisher and Yates' (1938) tables (in Statistical Tables for Biological, Agricultural and Medical Research) comprise 15,000 digits arranged in twos. Fisher and Yates obtained these tables by drawing numbers at random from the 10th to 19th digits of A. S. Thomson's 20-figure logarithmic tables.
3. Kendall and Babington Smith's (1939) random tables consist of 1,00,000 digits grouped into 25,000 sets of 4-digit random numbers (Tracts for Computers No. 24, Cambridge University Press).
4. The Rand Corporation (1955) (Free Press, Illinois) random number tables consist of one million random digits, grouped in sets of 5 digits.
5. TI-82: Generating Random Numbers. You can generate random numbers on the TI-82 calculator using the following sequence, where N is the number of different values and S is the minimum number:
int(N*rand + S)
If you have two values (A and B) that you need random numbers between, then you can generate them using the following formulae:
N = B - A + 1
int(N*rand + A)
Notice it is B - A + 1, not B - A. Everyone agrees there are 10 numbers between 1 and 10 (inclusive), but if you take 10 - 1 you get 9, not 10. Also, in the formula above, replace N by the actual number of different values.
Since the calculator remembers the last formula put in, and evaluates it when you hit enter, to generate more random numbers just hit enter again. Each time you hit enter, you will get another random number.
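The same recipe can be expressed in Python. This is an illustrative sketch only; the function name rand_int is not from the text.

# Illustrative sketch: the calculator recipe int(N*rand + S) written in Python.
# rand is uniform on [0, 1), so the result is an integer from S to S + N - 1;
# for a range A to B inclusive, use N = B - A + 1.
import random

def rand_int(s, n):
    return int(n * random.random() + s)

print(rand_int(1, 10))                        # an integer between 1 and 10
print([rand_int(5, 6) for _ in range(8)])     # integers between 5 and 10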
Merits and Limitations of Simple Random Sampling
Merits
1. Since sample units are selected at random, providing an equal chance to each and every unit of the population, the element of subjectivity or personal bias is completely eliminated. Therefore, we can say that a simple random sample is more representative of the population than a purposive or judgement sample.
2. You can ascertain the efficiency of the estimates of the parameters by considering the sampling distribution of the statistic (the estimates). For example, the sample mean is an unbiased estimate of the population mean and becomes a more efficient estimate as the sample size increases.
Limitations
1. The selection of a simple random sample requires an up-to-date frame of the population from which the sample is to be drawn. It may be impossible to have knowledge of each and every unit of the population if the population happens to be very large. This restricts the use of simple random sampling.
2. A simple random sample may result in the selection of sampling units which are widely spread geographically, and in such a case the administrative cost of collecting the data may be high in terms of time and money.
3. Sometimes a simple random sample might give very non-random-looking results, which I will explain with the help of an illustration next.
4. For a given precision, a simple random sample usually requires a larger sample size than stratified random sampling, which we will be studying next.
The limitations of the simple random sample will be clear from the following example. If I were conducting a study looking at two treatments, A and B, then one way I could allocate patients to treatment groups would be by using a table of random numbers. The following set of random numbers came from a popular statistics table (most statistics textbooks have them):
65246356854282020026
I could allocate patients to treatment A if the number were odd and B if it were even. This would result in successive patients being allocated in the sequence:
BABBBAABBABBBBBBBBBB
Randomly selected numbers often seem to have patterns in them, like long runs of the same number. This is not a problem if we are conducting a large study; everything evens out over time. But if the above study had stopped after recruiting 20 patients, then we would have had four patients on treatment A and sixteen on B. This would not be a very good basis for comparing the two treatments.
Therefore, some randomly allocated samples turn out to look very non-random. This type of problem can be reduced by the use of stratified random sampling, in which the population is divided into different strata. Now we will move into the details of stratified random sampling.
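The allocation rule above is easy to verify in code. This illustrative sketch (not from the text) applies the odd/even rule to the quoted digit string.

# Illustrative sketch: applying the odd/even allocation rule to the digit string
# quoted above (odd digit -> treatment A, even digit -> treatment B).
digits = "65246356854282020026"
allocation = "".join("A" if int(d) % 2 else "B" for d in digits)
print(allocation)                                      # BABBBAABBABBBBBBBBBB
print(allocation.count("A"), allocation.count("B"))    # 4 patients on A, 16 on B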
Stratified Random Sampling
We have understood that in simple random sampling the variance of the sample estimate of the population mean is:
a. inversely proportional to the sample size, and
b. directly proportional to the variability of the sampling units in the population.
We also know that precision is defined as the reciprocal of the sampling variance; therefore, as the sample size increases, precision increases. Apart from increasing the sample size or the sampling fraction n/N, the only way of increasing the precision of the sample mean is to devise a sampling technique which will effectively reduce the population heterogeneity, and hence the variance. One such technique is stratified sampling.
Stratification means division into layers. Past data or some other information related to the character under study may be used to divide the population into various groups such that:
(i) units within each group are as homogeneous as possible, and
(ii) the group means are as widely different as possible.
Thus, if we have a population consisting of N sampling units, it is divided into k relatively homogeneous, mutually disjoint (non-overlapping) sub-groups, termed strata, of sizes N1, N2, ..., Nk, such that N = N1 + N2 + ... + Nk. You then draw a simple random sample of size ni (i = 1, 2, ..., k) from each stratum. This technique of drawing a sample is called stratified random sampling, and the resulting sample is called a stratified random sample.
There are two points which you have to keep in mind while drawing a stratified random sample:
• proper classification of the population into various strata, and
• a suitable sample size for each stratum.
Both these points are important to consider, because if your stratification is faulty it cannot be compensated for by taking large samples.

Principal Advantages of Stratified Random Sampling
1. More representative
In a non-stratified random sample some strata may be over-represented, others may be under-represented, while some may be excluded altogether. Stratified sampling ensures any desired representation in the sample of the various strata in the population. It rules out the possibility of any essential group of the population being completely excluded from the sample. Stratified sampling thus provides a more representative cross-section of the population and is frequently regarded as the most efficient system of sampling.
2. Greater accuracy
Stratified sampling provides estimates with increased precision. Moreover, stratified sampling enables us to obtain results of known precision for each stratum.
3. Administrative convenience
As compared with a simple random sample, stratified random samples are more concentrated geographically. Accordingly, the time and money involved in collecting the data and interviewing the individuals may be considerably reduced, and the supervision of the field work can be organized with greater ease and convenience.
4. Sometimes you will notice that sampling problems differ markedly in different parts of the population, for example literates and illiterates, or people living in ordinary homes and people living in institutions, hostels, hospitals, etc. In such cases we deal with the problem through stratified sampling, by regarding the different parts of the population as strata and tackling the problems of the survey within each stratum independently.
Note: You can allocate the sample sizes for the different strata in two ways:
1. Proportional allocation
2. Optimum allocation
In proportional allocation, the allocation of ni, the sample size of each stratum, is called proportional if the sampling fraction is constant for each stratum, i.e.
n1/N1 = n2/N2 = ... = nk/Nk
Optimum allocation is another guiding principle in the determination of the ni. Depending upon the problem, it chooses them so as to:
1. minimise the variance (i.e. maximise the precision) of the estimate for (i) a fixed sample size or (ii) a fixed cost, or
2. minimise the total cost for a fixed desired precision.
2. Minimise the total cost for fixed desired precision
Notations:
Systematic Random Sampling: N —Total number of clusters ;
If you have the complete and up-to-date list of sampling units n —number of sampled :
is available you can also employ a common technique of
selection of sample, which is known as systematic sampling.

© Copy Right: Rai University


88 11.556
n

RESEARCH METHODOLOGY
M! “ Mi is total number of units in the population.
i=1
Mi !Number of sampling units in the ith cluster;
Y ij ! jth obsevation in the ith cluster ( j = 1, 2, 3 - - - - - - - - Mi, i=
1, 2, 3 - - - - -N)
yij ! jth obsevation in the ith sampled cluster ( j = 1, 2, 3 - - - - - -
- - Mi, i= 1, 2, 3 - - - - -n).
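A systematic draw is a one-line calculation once k and the random start are fixed. The sketch below is illustrative only; N = 1000 and n = 50 are assumed figures.

# Illustrative sketch (assumed N = 1000, n = 50): drawing a systematic sample
# with sampling interval k = N/n and a random start i between 1 and k.
import random

N, n = 1000, 50
k = N // n
i = random.randint(1, k)
sample_ids = [i + j * k for j in range(n)]   # i, i+k, i+2k, ..., i+(n-1)k
print(sample_ids[:5], "...", sample_ids[-1])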
Cluster Sampling
In this type of sampling you divide the total population, depending upon the problem under study, into some recognizable sub-divisions, termed clusters, and a simple random sample of n blocks (clusters) is drawn. The individuals selected from these blocks constitute the sample.
Notation:
N - total number of clusters;
n - number of sampled clusters;
M = M1 + M2 + ... + MN - total number of units in the population;
Mi - number of sampling units in the ith cluster;
Yij - jth observation in the ith cluster (j = 1, 2, ..., Mi; i = 1, 2, ..., N);
yij - jth observation in the ith sampled cluster (j = 1, 2, ..., Mi; i = 1, 2, ..., n).
In some situations Mi as well as M are not known.
Notes:
• Clusters should be as small as possible, consistent with the cost and limitations of the survey.
• The number of sampling units in each cluster should be approximately the same.
Thus cluster sampling is not to be recommended if we are sampling areas in cities where there are private residential houses, business and industrial complexes, apartment buildings, etc., with widely varying numbers of persons or households.
Multistage Sampling
A better way of selecting a sample is often to resort to sub-sampling within the clusters, instead of enumerating all the sampling units in the selected clusters. This technique is called two-stage sampling, the clusters being termed primary units and the units within the clusters secondary units. The technique can be generalized to multistage sampling: we regard the population as a number of primary units, each of which is further composed of secondary-stage units, and so on, till we ultimately reach a stage where the desired sampling units are obtained. In multi-stage sampling each stage reduces the sample size.
Merits and Limitations
I. Multistage sampling is more flexible as compared to other methods. It is simple to carry out and results in administrative convenience by permitting the field work to be concentrated while still covering a large area.
II. It saves a lot of operational cost, as we need the second-stage frame only for those units which are selected in the first-stage sample.
III. It is generally less efficient than a suitable single-stage sampling of the same size.
This brings an end to today's discussion on sampling techniques. Thus, in a nutshell, we can say that non-probabilistic sampling methods such as convenience sampling, judgement sampling and quota sampling are sometimes used, although the representativeness of such samples cannot be ensured; whereas probabilistic sampling gives each unit of the population a known, pre-assigned chance of being included in the sample, and in this sense it yields a representative sample of the population.

Points to Ponder
• Sampling is based on two premises. One is that there is enough similarity among the elements in a population that a few of these elements will adequately represent the whole.
LESSON 15:
APPLICATIONS OF MARKET RESEARCH

1. Product Research
2. Price Research
3. Distribution Research
4. Promotion Research

Product Research
The main product decisions that need to be considered are the physical design of the product and its demand potential. Many companies spend millions of rupees on R&D in order to come up with a new product that will satisfy consumer needs. We cover the various information requirements and techniques used for this purpose.
A managerial decision to use a pretest market analysis is justified if sufficiently accurate predictions can be achieved, the timing of the analysis is before large investment commitments are necessary, useful diagnostics for improvement are generated, and the cost of the analysis is reasonable. In these situations failures can be reduced, time-to-market can be shortened, and products can be improved to increase customer satisfaction.

New Product Research: New product development is critical to the life of most organizations, and there will always be uncertainties associated with it. The purpose of marketing research here is to reduce the uncertainties associated with new products. Four stages of new product development can be seen:
• Generating new-product concepts
• Evaluating and developing those concepts
• Evaluating and developing the actual products
• Testing in a marketing programme

Concept Generation: There are two types of concept generation research:
• Need identification research
• Concept identification

Need identification research:
The emphasis in need research is on identifying unfilled needs in the market. Following are some examples:
a) Perceptual maps, in which products are positioned along the dimensions by which users perceive and evaluate them, can suggest gaps into which new products might fit. Perceptual mapping is a graphics technique used by marketers that attempts to visually display the perceptions of customers or potential customers. Typically the position of a product, product line, brand, or company is displayed relative to the competition. Perceptual maps can have any number of dimensions, but the most common is two; any more is a challenge to draw and confusing to interpret. The first perceptual map below shows consumer perceptions of various automobiles on the two dimensions of sportiness/conservative and classy/affordable. This sample of consumers felt Porsche was the sportiest and classiest of the cars in the study (top right corner). They felt Plymouth was most practical and conservative (bottom left corner).

Perceptual Map of Competing Products

Cars that are positioned close to each other are seen as similar on the relevant dimensions by the consumer. For example, consumers see Buick, Chrysler, and Oldsmobile as similar; they are close competitors and form a competitive grouping. A company considering the introduction of a new model will look for an area on the map free from competitors. Some perceptual maps use different-sized circles to indicate the sales volume or market share of the various competing products.
Displaying consumers' perceptions of related products is only half the story. Many perceptual maps also display consumers' ideal points. These points reflect ideal combinations of the two dimensions as seen by a consumer. The next diagram shows a study of consumers' ideal points in the alcohol/spirits product space. Each dot represents one respondent's ideal combination of the two dimensions. Areas where there is a cluster of ideal points (such as A) indicate a market segment. Areas without ideal points are sometimes referred to as demand voids.

Perceptual Map of Ideal Points and Clusters
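For readers who want to draw such a map themselves, here is a small illustrative sketch using matplotlib. The brand coordinates are invented for illustration and are not taken from the studies described above.

# Illustrative sketch (invented coordinates): drawing a simple two-dimensional
# perceptual map with matplotlib, brands placed by assumed ratings on the two
# dimensions discussed above.
import matplotlib.pyplot as plt

brands = {"Porsche": (0.9, 0.9), "BMW": (0.7, 0.8), "Buick": (-0.3, 0.2),
          "Chrysler": (-0.4, 0.1), "Oldsmobile": (-0.35, 0.15), "Plymouth": (-0.8, -0.7)}

fig, ax = plt.subplots()
for name, (x, y) in brands.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), xytext=(4, 4), textcoords="offset points")
ax.axhline(0, linewidth=0.5)
ax.axvline(0, linewidth=0.5)
ax.set_xlabel("conservative <-> sporty")
ax.set_ylabel("affordable <-> classy")
plt.show()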
A company considering introducing a new product will look for areas with a high density of ideal points. It will also look for areas without competitive rivals. This is best done by placing both the ideal points and the competing products on the same map.
Some maps plot ideal vectors instead of ideal points. The map below displays various aspirin products as seen on the dimensions of effectiveness and gentleness. It also shows two ideal vectors. The slope of an ideal vector indicates the preferred ratio of the two dimensions for the consumers within that segment. This study indicates there is one segment that is more concerned with effectiveness than harshness, and another segment that is more interested in gentleness than strength.

Perceptual Map of Competing Products with Ideal Vectors

Perceptual maps need not come from a detailed study. There are also intuitive maps (also called judgemental maps or consensus maps) that are created by marketers based on their understanding of their industry. Management uses its best judgement. It is questionable how valuable this type of map is; often they just give the appearance of credibility to management's preconceptions.
When detailed marketing research studies are done, methodological problems can arise, but at least the information is coming directly from the consumer. There is an assortment of statistical procedures that can be used to convert the raw data collected in a survey into a perceptual map. Preference regression will produce ideal vectors. Multidimensional scaling will produce either ideal points or competitor positions. Factor analysis, discriminant analysis, cluster analysis and logit analysis can also be used. Some techniques are constructed from perceived differences between products, others from perceived similarities. Still others are constructed from cross-price elasticity of demand data from electronic scanners.
b) Social and environmental trends can be analyzed.
c) An approach termed benefit structure analysis has product users identify the benefits desired and the extent to which the product delivers those benefits, for specific applications. The result is an identification of benefits sought that current products do not deliver.
d) Product users might be asked to keep a diary of a relevant portion of their activities. Analysing such diaries can provide an understanding of unsolved problems associated with a particular task.
e) In focus-group interviews, product users might discuss problems associated with product-use situations. A focus group is a form of qualitative research in which a group of people are asked about their attitude towards a product, concept, advertisement, idea, or packaging. Questions are asked in an interactive group setting where participants are free to talk with other group members. In the world of marketing, focus groups are an important tool for acquiring feedback regarding new products.
In particular, focus groups allow companies wishing to develop, package, name, or test-market a new product to discuss, view, and/or test the new product before it is made available to the public. This can provide invaluable information about the potential market acceptance of the product.
In traditional focus groups, a pre-screened (pre-qualified) group of respondents gathers in the same room. They are pre-screened to ensure that group members are part of the relevant target market and that the group is a representative subgroup of this market segment. There are usually 8 to 12 members in the group, and the session usually lasts for 1 to 2 hours. A moderator guides the group through a discussion that probes attitudes about a client's proposed products or services. The discussion is unstructured (or loosely structured), and the moderator encourages the free flow of ideas. Although the moderator is seldom given specific questions to ask, he/she is often given a list of objectives or an anticipated outline.
Client representatives observe the discussion from behind a one-way mirror. Participants cannot see out, but the researchers and their clients can see in. Usually a video camera records the meeting so that it can be seen by others who were not able to travel to the focus group site. Researchers are examining more than the spoken words; they also try to interpret facial expressions, body language, and group dynamics. Transcripts are also created from the video tape.
Respondents often feel a group pressure to conform, and this can contaminate the results. But group dynamics are useful in developing new streams of thought and covering an issue thoroughly.

Types of Focus Groups
Two-way focus group - one focus group watches another focus group and discusses the observed interactions and conclusions.
Dual moderator focus group - one moderator ensures the session progresses smoothly, while another ensures that all the topics are covered.
Dueling moderator focus group - two moderators deliberately take opposite sides on the issue under discussion.
Respondent moderator focus group - one or more of the respondents are asked to act as the moderator temporarily.
Client participant focus groups - one or more client representatives participate in the discussion, either covertly or overtly.
Mini focus groups - groups are comprised of 4 or 5 members.
Telesession (or teleconference) focus groups - a telephone network is used.
On-line focus groups - computers and an internet network are used.
Traditional focus groups can provide accurate information, and are less expensive than other forms of traditional marketing research. There can be significant costs, however: if a product is to be marketed on a nation-wide basis, it would be critical to gather respondents from various locales throughout the country, since attitudes about a new product may vary due to geographical considerations. This would require a considerable expenditure on travel and lodging. Additionally, the site of a traditional focus group may or may not be in a locale convenient to a specific client, so client representatives may have to incur travel and lodging expenses as well.
Online Focus Groups
With the advent of large-scale computer networks, such as the Internet, it is now possible to link respondents electronically. Respondents share images, data, and their responses on their computer screens. This avoids a significant amount of travel expenses. For instance, NFO Research, a large market research company, has a system of on-line focus groups which allows respondents from all over the country to gather, electronically, while avoiding countless logistical headaches. Online groups are usually limited to 6 or 8 participants. The biggest problem with online focus groups is ensuring that the respondents are representative of the broader population (including computer non-users).
While such a system does eliminate some of the logistical headaches and travel expenses associated with conducting focus groups, it still requires one or more representatives from a client to be physically located with the moderator conducting the focus group. Only in this way can questions be added in real time to further probe a particular response. Thus, even the online system incurs some travel expenses, since a client representative will need to travel to a research site or vice versa. Accordingly, there is a need for a system and method of conducting focus groups using remotely located participants, including one or more moderators, one or more clients and one or more respondents, who are all physically remote from each other. In order to do this, such a system must allow for the implementation of at least two separate chat discussions to be conducted simultaneously between the three classes of focus group participants, to provide an electronic analog to a one-way mirror segregating clients from respondents. In addition, such a system must allow and prohibit participation in the different chat discussions based on the class of the participant.
f) In Lead user analysis, instead of just asking users what they have done, their solutions are collected more formally. Lead users are those who face needs early that later will be general in a market place; they are positioned to benefit significantly by solving problems associated with these needs. Once a lead user is identified, the concepts that company or person generates are tested.
Lead users are an extremely valuable cluster of customers and potential customers who can contribute to the identification of future opportunities and the evaluation of emerging concepts. Understanding these users can provide richness of information relatively efficiently.
Eric von Hippel introduced the concept of 'Lead Users' in the mid 1980s. He defined lead users as those users who display the following two characteristics:
• They face needs that will be general in the market place, but face them months or years before the bulk of that marketplace encounters them.
• They are positioned to benefit significantly by obtaining a solution to those needs.
Where a company has experience within a market place, it should be relatively straightforward to identify those customers who demand special solutions, push existing solutions to the limit, or who have customized standard products to satisfy their own desires.
Von Hippel suggests that a key element in identifying lead users is to first identify the underlying trends which result in these users or customers having a leading position. The lead users are those who are at the leading edge of these trends.
Where possible, lead users should not necessarily be sought from within the usual customer base; it can be useful to look beyond existing customers, perhaps to users of complementary or substitute goods or in analogous markets. In addition, lead users may only have an interest in improvements or changes to specific elements or attributes of a product.
There are few industries or product types where there are no lead users who have requirements or demands ahead of the rest. By targeting these clusters, it is possible to identify opportunities for future products and evaluate emerging concepts. Where possible, if lead users are sufficiently interested, then they can be considered as a part of the extended product design team. They may even be prepared to share the burden of investment in order to find a suitable solution. Furthermore, if today's lead users do not find appropriate solutions from existing suppliers, then they could well turn into tomorrow's competitors.
Concept Identification
During a new-product development process there is usually a point where a concept is formed but there is no tangible usable product that can be tested. The concept should be defined well enough so that it is communicable. There may be simply a verbal description, or there may be a rough idea for a name, a package, or an advertising approach. The aim is to determine if the concept warrants further development and to provide guidance on how it might be improved and refined. Conjoint analysis typically is used to obtain an ideal combination of the concept's various features.
Thus, research questions might include:
• Are there any major flaws in the concept?
• What consumer segments might be attracted to it?
• Is there enough interest to warrant developing it further?
• How might it be altered or developed further?
Most concept testing, however, involves exposing people to the concept and getting their reactions. In exposing people to the concept, the market researcher needs to address a series of questions:
• How are the concepts exposed?
• To whom are the concepts exposed?
• To what are they compared?
• What questions are asked?
It is important to make a distinction between the different types of testing applied at different stages of the development process. This helps the development team to understand the purpose of each test and consider how data is to be captured.
Different testing methods will have different objectives, approaches and types of modeling. Four general types of testing are described in more detail:
• Exploratory tests
• Assessment tests
• Validation tests
• Comparison tests
ISO 9000 tests are also briefly summarised.
Exploratory tests
These are carried out early in the development process, during the fuzzy front end, when the problem is still being defined and potential solutions are being considered, preferably once the development team has a good understanding of the user profile and customer needs. The objective of the exploratory test is to examine and explore the potential of preliminary design concepts and answer some basic questions, including:
• What do the users think about using the concept?
• Does the basic functionality have value to the user?
• Is the user interface appropriate and operable?
• How does the user feel about the concept?
• Are our assumptions about customer requirements correct?
• Have we misunderstood any requirements?
This type of early analysis of concepts is potentially the most critical of all types of prototyping and evaluation, for if the development is based on faulty assumptions or misunderstanding about the needs of the users, then problems are almost inevitable later on. Data collection will tend to be qualitative, based on observation, interview and discussion with the target audience. Ideally, the customer should be asked to use the product without training or prompting, to assess the intuitiveness of controls and instructions. Some quantitative measures may be appropriate, such as the time to perform tasks or the number of failures or errors.
Assessment tests
While the exploratory test aims to explore the appropriateness of a number of potentially competing solutions, the assessment test digs into more detail with a preferred solution at a slightly later stage of development. The main aim of an assessment test is to ensure that assumptions remain relevant and that more detailed and specific design choices are appropriate. The assessment test will tend to focus on the usability or level of functionality offered and, in some cases, may be appropriate for evaluating early levels of performance. Assuming that the right concept has been chosen, the assessment test aims to ensure that it has been implemented effectively and to answer more detailed questions, such as:
• Is the concept usable?
• Does the concept satisfy all user needs?
• How does the user use the product, and could it be more effective?
• How will it be assembled and tested, and could this be achieved in a better way?
• Can the user complete all tasks as intended?
Assessment testing typically requires more complex or detailed models than the exploratory test. A combination of analytical models, simulations and working mock-ups (not necessarily with final appearance or full tooling) will be used.
The evaluation process is likely to be relatively informal, including both internal and external stakeholders. Data will typically be qualitative and based on observation, discussion and structured interview. The study should aim to understand why users respond in the way that they do to the concept.
Validation tests
The validation test is normally conducted late in the development process to ensure that all of the product design goals have been met. This may include usability, performance, reliability, maintainability, assembly methods and robustness. Validation tests normally aim to evaluate actual functionality and performance, as expected in the production version, and so activities should be performed in full and not simply walked through.
It is probable that the validation test is the first opportunity to evaluate all of the component elements of the product together, although elements may have been tested individually already. Thus, the product should be as near to representing the final item as possible, including packaging, documentation and production processes. Also included within validation tests will be any formal evaluation required for certification, safety or legislative purposes. Compared to an assessment test, there is a much greater emphasis on experimental rigour and consistency. It may be preferable for evaluation to be carried out independently from the design team, but with team input on developing standards and measurement criteria.
Data from a validation test is likely to be quantitative, based on measurement of performance. Normally, this is carried out against some benchmark of expected performance. Usability issues may be scored in terms of speed, accuracy or rate of use, but should always be quantified. Issues such as desirability may be measured in terms of preference or user ranking. Data should also be formally recorded, with any failures to comply with expected performance logged and appropriate corrective action determined.
Comparison tests
A comparison test may be performed at any stage of the design process, to compare a concept, product or product element against some alternative. This alternative could be an existing
solution, a competitive offering or an alternative design solution. Comparison testing could include the capturing of both performance and preference data for each solution. The comparison test is used to establish a preference, determine superiority or understand the advantages and disadvantages of different designs.
ISO 9000 tests
ISO 9000 defines a number of test activities:
Design review
A design review is a set of activities whose purpose is to evaluate how well the results of a design will meet all quality requirements. During the course of this review, problems must be identified and necessary actions proposed.
Design verification
Design verification is a process whose purpose is to examine design and development outputs and to use objective evidence to confirm that outputs meet design and development input requirements.
Design validation
Design validation is a process whose purpose is to examine resulting products and to use objective evidence to confirm that these products meet user needs.
Product Evaluations and Development
The aim is to predict market response to determine whether or not the product should be carried forward.
Use Testing: This gives users a reasonable time to experience the product and asks for their reactions and their intentions to buy it. Researchers can contact respondents in shopping centers, by personal visits to their homes or offices, or on the telephone.
Limitations
• Due to unclear instructions, a misunderstanding, or lack of cooperation, the respondents may not use the product correctly and may therefore report a negative opinion.
• The fact that they were given a free sample and are participating in a test may distort their impressions.
• Even when repurchase opportunities are made available, such decisions may be quite different from those made in a more realistic store situation.
• The users may not accept the product over a long period of time.
• They may inflate their intention to buy. Consumers may say that they will buy the product but may end up not doing so.
Blind-use Testing: Even though a product may be proved superior in the laboratory, the consumer may not perceive it to be superior. For example, Amul sweets, which the company regarded as superior by all standards, were introduced in the market. They were expected to be a hit during the Diwali season, and advertisements were released to prop up sales. Unfortunately, consumers perceived the product as a premium product and did not substitute it for their purchases from the local halwai.
Predicting Trial Purchase: To predict trial levels of new, frequently purchased consumer products, the ESP (Estimating Sales Potential) model has been developed. Trial levels are predicted on the basis of three variables:
• Product class penetration (PCP) - the percentage of households purchasing at least one item in the product class within one year.
• Promotional expenditures - total consumer-directed promotional expenditures on the product.
• Distribution of the product - the percentage of stores stocking the product (weighted by the store's total sales volume).
Once the model is estimated, it can be applied to other new products. The researcher simply estimates the percentage of households using the product class, the total expenditures planned for the new product, and the expected distribution level. The model will then estimate the trial level that will be obtained.
Trial can also be estimated directly using a controlled shopping experience. A respondent is exposed to the new product promotion and allowed to shop in a simulated store or in an actual store in which the product is placed. The respondents then have an opportunity to make a "trial" or first purchase of the product.
Pretest Marketing: Two approaches are used to predict the new brand's market share:
Preference Judgments: Here the preference data are used to predict the proportion of purchases of the new brand that respondents will make, given that the new brand is in their response set. These estimates for the respondents in the study are coupled with an estimate of the proportion of all people who will have the new brand in their response set, to provide an estimate of market share. This is also used to analyse the concomitant market share losses of other brands. If the firm has other brands in the market, such information can be critical.
Trial and repeat purchase levels: This is based on the respondent's purchase decisions and intentions-to-buy judgments. A trial estimate is based on the percentage of respondents who purchase the product in the laboratory, plus an estimate of the product's distribution, advertising (which will create product awareness), and the number of free samples to be given away. The repeat-purchase rate is based on the proportion of respondents who make a mail-order repurchase of the new brand and the buying-intentions judgments of those who elected not to make a mail-order repurchase. The product of the trial estimate and the repeat-purchase estimate becomes a second estimate of market share.
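The pretest-marketing logic above boils down to simple arithmetic: an estimated trial rate (adjusted for awareness and distribution) multiplied by an estimated repeat rate gives a rough market-share estimate. The short Python sketch below only illustrates this arithmetic; every number in it is hypothetical and is not taken from the text.

```python
# Illustrative pretest-marketing arithmetic; all figures are hypothetical.
lab_purchase_rate = 0.30   # share of respondents who bought the product in the simulated store
awareness = 0.60           # expected awareness from advertising and free samples
distribution = 0.70        # expected share of stores stocking the brand

trial_rate = lab_purchase_rate * awareness * distribution   # adjusted trial estimate
repeat_rate = 0.40         # share of triers expected to repurchase (mail-order repurchases plus intentions)

market_share = trial_rate * repeat_rate                     # second estimate of market share
print(f"trial rate = {trial_rate:.1%}, estimated market share = {market_share:.1%}")
```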
Test Marketing
Test marketing allows the researcher to test the impact of the total marketing program, with all its interdependence, in a market context, as opposed to the artificial context associated with the concept and product tests that have been discussed.
Functions:
• To gain information and experience with the marketing program before making a total commitment to it.
• To predict the program's outcome when it is applied to the total market.
Types of Test Market:
• The sell-in test markets are cities in which the product is sold just as it would be in a national launch. The product has to gain distribution space.
• The controlled-distribution scanner markets (CDSM) are cities for which distribution is pre-arranged and the purchases of a panel of customers are monitored using scanner data.
Certain parameters have to be looked into while deciding on a sell-in test market:
• Representativeness: Ideally, the city should be fairly representative of the country in terms of characteristics that will affect the test outcome, such as product usage, attitudes and demographics.
• Data availability: Information from store audits is helpful in evaluating the test. The selected cities should contain retailers who will cooperate with store audits.
• Media isolation and costs: It is desirable to avoid media spill-over. Using media that "spill out" into nearby cities is wasteful and increases costs. Conversely, "spill-in" media from nearby cities can contaminate a test. Media cost is another consideration.
• Product flow: It may be desirable to use cities that don't have much "product spillage" outside the area.
• Number: A single city can lead to unreliable results because of the variations across cities in both brand sales and consumer response to marketing programs.
• Implementing and controlling: The test should be controlled in such a manner that the marketing program is implemented in the test area so as to reflect the national program. The test itself may tend to encourage those involved to enhance the effectiveness of the marketing program: salespersons may be more aggressive and retailers more cooperative. Competitors may react by deliberately flooding the test areas with free samples or in-store promotions; they can also retaliate or monitor the results themselves.
• Timing: Normally, a test market should be in existence for one year, so that all important seasonal/cultural factors can be observed and estimated.
• Measurement: The basic measure is sales, based on shipments or warehouse withdrawals. Store audit data provide actual sales figures and are not sensitive to inventory fluctuations. They also provide information on distribution, shelf-facings and in-store promotional activity. Measures such as brand awareness, attitude, trial purchase, and repeat purchase are obtained directly from the consumer. This information helps evaluate the marketing program and can help interpret sales data. The most useful information obtained from consumers is whether they bought the product at least once, whether they were satisfied with it, and whether they repurchased it or plan to.
• Costs: Costs which are quantifiable include development and implementation of the marketing program; preparation of test products; and administration of the test and collection of the data associated with it. The costs and risks that may delay the launch of a new product are more difficult to quantify. If a new product launch is delayed, an opportunity to gain a substantial market position might be lost.
Pricing Research
Research may be used to evaluate alternative price approaches for new products before launch, or for proposed changes in products already in the market.
Pricing Approaches
• Gabor and Granger Method (price skimming strategy), where different prices for a product are presented to respondents, who then are asked if they would buy. A "buy-response" curve of different prices, with the corresponding number of affirmative purchase intentions, is produced. The objective is to generate as much profit as possible in the present period.
• Multibrand-choice Method (share penetration strategy), where respondents are shown different sets of brands in the same product category, at different prices, and are asked which they would buy. This allows the respondents to take into account competitors' brands, as they normally would outside such a test. Thus, this technique represents a form of simulation of the point of sale. The objective is to capture an increasingly larger market share by offering a lower price. Pricing research for the two different approaches differs substantially in terms of the information sought.
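The buy-response idea behind the Gabor and Granger method can be illustrated with a minimal Python sketch. The prices and purchase-intention shares below are hypothetical; a real study would use the survey's own buy-response data.

```python
# Hypothetical buy-response data: price -> share of respondents who said they would buy.
buy_response = {40: 0.82, 50: 0.74, 60: 0.55, 70: 0.38, 80: 0.22}

# Expected revenue per respondent at each price point.
expected_revenue = {price: price * share for price, share in buy_response.items()}

best_price = max(expected_revenue, key=expected_revenue.get)
print("price  buy-share  expected revenue")
for price, share in buy_response.items():
    print(f"{price:>5}  {share:>8.0%}  {expected_revenue[price]:>15.1f}")
print(f"Revenue-maximising price in this sketch: {best_price}")
```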
The following questions are generally asked with regard to pricing research:
• At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
• At what price would you consider the product to be priced so low that you would feel the quality couldn't be very good? (Too cheap)
• At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? (Expensive)
• At what price would you consider the product to be a bargain - a great buy for the money? (Cheap)
Research for Skimming Pricing:
This is based on the concept of pricing the product at the point at which profits will be the greatest, until market conditions change or supply costs dictate a price change. Under this strategy, the optimal price is the one that results in the greatest positive difference between total revenues and total costs. This implies that the researcher's major tasks are to forecast the costs and the revenues over the relevant range of alternative prices.
Research for Penetration Pricing:
This is based on the concept that average unit production costs continue to go down as cumulative output increases. Potential profits in the early stages of the product life cycle are sacrificed in the expectation that higher volumes in later periods will generate sufficiently greater profits to result in an overall profit for the product over its life.
The following pricing pattern is adopted to increase market share:
a) Offer a lower price (even below cost) when entering the market.
b) Hold that price constant until unit costs produce a desired percentage markup.
c) Reduce the price as costs fall, to maintain the markup at the same desired percentage.
Despite the ubiquitous nature of the above questions, researchers commonly encounter four limitations when using this approach for pricing research:
I. It provides no competitive information.
II. It relies on price awareness.
III. It is inefficient when evaluating numerous product specifications.
IV. It relies on aggregate-level analysis.
Each limitation is discussed below.
Provides no competitive information
A concept test asks respondents to evaluate how likely they would be to purchase a specific product, without any information about other products that might be available in the market. When shopping, consumers generally have the chance to see a set of competing products and pick one from the set. When presented with a set of products to select from, consumers can make trade-offs between features and price to determine their preferred product. In the absence of this comparative task, respondents may have difficulty answering reliably.
Relies on price awareness
The respondent compares the price presented in the concept to an internal reference price to determine if the price is fair or not. This determination is based on a respondent's awareness of the current pricing in the category.
Inefficient to evaluate various product specifications
Often, a researcher would like to evaluate a small number of specific product variations at the same time price is being evaluated. For instance, there might be an interest in the market's willingness to pay for a specific feature, or in how the inclusion or exclusion of a product characteristic influences purchase likelihood. The concept test can be used to evaluate these various specifications. However, most researchers would suggest that each respondent only evaluate one concept. Therefore, to evaluate various product specifications, the total sample size must grow. To illustrate, if we wished 200 observations per cell, and we are only testing three prices (three cells), we would require 600 respondents. However, if we have three alternative product variations, with each variation at three prices, we now have nine cells and would require 1800 respondents.
Relies on aggregate-level analysis
A concept test will rely on aggregate, or at most subgroup-level, analysis. That is, this approach makes respondents' heterogeneity difficult to detect and measure.
The traditional concept test can be effectively used in pricing research when the product features are already determined, the level of price awareness is high, and the competitive context is such that evaluating a single product is not too limiting.
Distribution Research
Traditionally, the distribution decisions in marketing strategy involve:
• The number and location of salespersons,
• Retail outlets,
• Warehouses, and
• The size of discount to be offered.
The discount to be offered to the members of the channel of distribution usually is determined by what is offered on existing or similar products, and also by whether the firm wants to follow a "push" or a "pull" strategy.
Warehouse and Retail Location Research
Location decisions address questions such as: "What costs and delivery time would result if we choose one location over another?"
The approximate (optimal) location, that will minimize the distance to customers weighted by the quantities purchased, will have to be determined. Chain shops with multiple outlets and franchise operations must decide on the physical location of their outlets. Data about the surrounding residential neighbourhood, income levels, and competitive stores would help in choosing an optimal location.
Number and Location of Sales Representatives
How many sales representatives should there be in a given territory?
Approaches:
• Sales effort approach - used when the product line is first introduced and there is no operating history to provide sales data. This is done by the following steps (a small worked sketch follows this list):
i. Estimating the number of sales calls required to sell to, and to service, prospective customers in an area for a year. This will be the sum of the number of visits required per year to each prospect (customer) in the territory.
ii. Estimating the average number of sales calls per representative that can be made in that territory.
iii. Dividing the estimate in step (i) by the estimate in step (ii) to obtain the number of sales representatives required.
• Statistical analysis approach - used after the sales program is under way. Once a sales history is available from each territory, an analysis can be made to determine whether the appropriate number of sales representatives is being used in each territory. Actual sales can be compared with the market potential for each sales representative, and inferences can be drawn about territories where the average market potential per representative is low (too many sales representatives) and territories where the market potential is high but there are too few sales representatives.
• Field experiment approach - also applicable only after the sales program has begun. Experiments are done with the calls made, to determine the number and location of sales representatives. This is done in two ways:
i. Making more frequent calls on some prospects and less frequent calls on others, in order to see the effect on overall sales, keeping the number of sales representatives unchanged.
ii. Increasing the number of representatives in some territories and decreasing them in others to determine the sales effect.
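As promised above, here is a small numerical sketch of the sales effort approach. The call requirements and call capacity are hypothetical figures chosen only to show the arithmetic of steps (i) to (iii).

```python
import math

# Hypothetical territory data (not from the text).
visits_per_prospect_per_year = {"key accounts": 24, "regular accounts": 6}
prospects = {"key accounts": 40, "regular accounts": 300}

# Step (i): total sales calls required in the territory for a year.
total_calls_needed = sum(visits_per_prospect_per_year[k] * prospects[k] for k in prospects)

# Step (ii): average number of calls one representative can make in that territory per year.
calls_per_rep_per_year = 750

# Step (iii): number of sales representatives required.
reps_required = math.ceil(total_calls_needed / calls_per_rep_per_year)
print(f"calls needed = {total_calls_needed}, representatives required = {reps_required}")
```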
Promotion Research
Here the focus is on the decisions that are commonly made when designing a promotion strategy. The decisions for the promotion part of a marketing strategy can be divided into:
• Advertising decisions, which have long-term effects.
• Sales Promotion decisions, which affect the company in
the short term.
Companies spend more time and resources on advertising
research than on sales promotion research because of the greater
risk and uncertainty in advertising research.
Advertising Research: Advertising decisions are more costly
and risky. Advertising research involves generating information
for making decisions in the:
• Awareness stage
• Recognition stage
• Preference stage, and
• Purchasing stage
Most often, advertising research decisions are about advertising
copy. Marketing research helps to determine how effective the
advertisement will be. Research on media decisions is separate
from advertising research.
The effectiveness of an advertisement depends upon the brand
involved and its advertising objectives. Four categories are used
in advertising research:
• Advertisement recognition
• Recall of the commercial and its contents
• The measure of commercial persuasion, and
• The impact on the purchase behavior.
Advertising Recognition: Respondents are tested on whether they can recognize the advertisement as one they have seen before.
UNIT III
DATA ANALYSIS
LESSON 16:
DATA CODING AND ANALYSIS

Student, today we shall be doing the most crucial step in the research process - data coding and data analysis. The stage that comes after the collection of the desired information is the coding and analysis of the data.
Once the information is tabulated, it is easy to perform various statistical tests of validity, accuracy and significance. This step seems very simple, although it is not so. Gathered information should be presented in such a manner that even a layman understands the what, why, when and how of the information.
Data Entry
It is the process of taking completed questionnaires/surveys and putting them into a form that can readily be analyzed.
A series of options needs to be considered when you enter the information you have gathered. You will first have to decide on a file format and then devise a code for analysis.
Decision on File Format
This comprises decisions regarding:
• The way the data will be organized in a file
• The order of the information collected
• How the subject is referenced
• Constructing individual records
• The history of the 80-column format
• Application to statistics programs
Devise Code for Analysis
The main points you want to remember while devising the code for analysis are:
• It is a set of rules that translates answers into discrete values
• Codes may be alphabetical or numerical, depending on the measurement scale
• Preserve the level of measurement for each item
• General considerations apply to closed questions
Now, we will discuss these in detail for better understanding.
A. First of all you should try to make the coding translation simple
• Coding should be done minimizing effort and the risk of coding errors
• Remember, at the item level: leave numbers as numbers (numbers can be nominal).
• Perform reverse coding/unfolding of complex response formats.
• At the test level: code questions in order of appearance.
• You have to be consistent in assigning values to similar responses.
• You should identify the question groups within the test.
• The coding should help in facilitating data interpretation.
B. How Missing Data are Treated
You should have knowledge about:
i) Non-ascertained information, which has to be recognized: information not obtained because of interviewer or respondent performance.
• Reason for failure to ask the question
• Failure to obtain an appropriate response
• Refusal to answer the question (coded separately)
ii) Inapplicable information: information that does not apply to a particular respondent.
iii) Unknown information: information as to the respondent's claim of awareness (how to treat the "Don't know" option).
C. Entry of Data
You should fix the number of translation steps between the subject's response and a readable data file:
• Computer-assisted techniques: 1
• Digital answer format (Scantron): 3
• Entry by hand: 4
The number of steps affects the ability to check the quality of data entry (accuracy, reliability).
D. Clean Data File
You should examine each data file to ensure each record is complete and in order:
• You should remove non-legal codes
• You should then replace them with information from the original response format
• Proper importance should be given to verification
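To make the coding and cleaning rules in sections A to D concrete, here is a minimal illustrative sketch of a codebook for one closed question. The variable, codes and labels are hypothetical; the point is only to show how answers can be translated into discrete values while keeping missing-data categories distinct from real responses.

```python
# Hypothetical codebook for a single closed question ("Do you own a car?").
ANSWER_CODES = {"yes": 1, "no": 2}
MISSING_CODES = {"refused": 97, "don't know": 98, "not applicable": 99}  # kept distinct from real answers
LEGAL_CODES = set(ANSWER_CODES.values()) | set(MISSING_CODES.values())

def code_response(raw_answer):
    """Translate a raw questionnaire answer into a discrete numeric code."""
    if raw_answer is None:
        return MISSING_CODES["not applicable"]
    answer = raw_answer.strip().lower()
    if answer in ANSWER_CODES:
        return ANSWER_CODES[answer]
    if answer in MISSING_CODES:
        return MISSING_CODES[answer]
    return None  # non-legal value: flag the record for cleaning against the original form

coded = [code_response(a) for a in ["Yes", "no", "don't know", None, "maybe"]]
print(coded)                                               # [1, 2, 98, 99, None]
print(all(c in LEGAL_CODES for c in coded if c is not None))
```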
The problem most decision makers must resolve is how to deal with the uncertainty that is inherent in almost all aspects of their jobs. Raw data provide little, if any, information to decision makers. Thus, they need a means of converting the raw data into useful information. In this lecture note, we will concentrate on some of the frequently used methods of presenting and organizing data.
Frequency Distribution
The easiest method of organizing data is a frequency distribution, which converts raw data into a meaningful pattern for statistical analysis.
The following are the steps in constructing a frequency distribution:
1. Specify the number of class intervals. A class is a group (category) of interest. No totally accepted rule tells us how many intervals are to be used. Between 5 and 15 class
intervals are generally recommended. Note that the classes must be both mutually exclusive and all-inclusive. Mutually exclusive means that classes must be selected such that an item can't fall into two classes, and all-inclusive classes are classes that together contain all the data.
2. When all intervals are to be the same width, the following rule may be used to find the required class interval width:
W = (L - S) / K
where W = class width, L = the largest data value, S = the smallest data value, and K = the number of classes.
Example:
Suppose the ages of a sample of 10 students are:
20.9, 18.1, 18.5, 21.3, 19.4, 25.3, 22.0, 23.1, 23.9, and 22.5
We select K = 4 and W = (25.3 - 18.1)/4 = 1.8, which is rounded up to 2. The frequency table is as follows:

Class Interval    Class Frequency    Relative Frequency
18-U-20                  3                 30%
20-U-22                  2                 20%
22-U-24                  4                 40%
24-U-26                  1                 10%

Note that the sum of all the relative frequencies must always be equal to 1.00 or 100%. In the above example, we see that 40% of all students are younger than 24 years old, but older than 22 years old. Relative frequency may be determined for both quantitative and qualitative data and is a convenient basis for the comparison of similar groups of different size.
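The same construction can be carried out in a few lines of Python. This is only a sketch of the steps above, using the ages from the example; the class boundaries are the ones chosen in the text.

```python
import math

ages = [20.9, 18.1, 18.5, 21.3, 19.4, 25.3, 22.0, 23.1, 23.9, 22.5]
k = 4                                         # chosen number of classes
w = math.ceil((max(ages) - min(ages)) / k)    # W = (L - S) / K, rounded up -> 2
start = 18                                    # lower limit of the first class, as in the text

for i in range(k):
    lo, hi = start + i * w, start + (i + 1) * w
    freq = sum(lo <= age < hi for age in ages)
    print(f"{lo}-U-{hi}: frequency {freq}, relative frequency {freq / len(ages):.0%}")
```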
What Frequency Distribution Tells Us
1. It shows how the observations cluster around a central value; and
2. It shows the degree of difference between observations.
For example, in the above problem we know that no student is younger than 18 and that an age below 24 is most typical. The most common age is between 22 and 24, which from general information we know to be higher than usual for students who enter college right after high school and graduate at about age 22. The students in the sample are generally older. It is possible that the population is made up of night students who work on their degrees on a part-time basis while holding full-time jobs. This descriptive analysis provides us with an image of the student sample which is not available from the raw data. As we will see in lecture number 3, the frequency distribution is the basis for probability theory.
Stated & True Class Limits
True classes are those classes such that the upper true (or real) limit of a class is the same as the lower true limit of the next class.
For comparison, the stated class limits and true (real) class limits are given in the following table:

Stated Limits       True Limits
$600 - $799         $599.50 up to but not including $799.50
$800 - $999         $799.50 up to but not including $999.50

In the first column of the above table the data were rounded to the nearest dollar. For example, $799.50 was rounded up to $800 and tallied in the second class. Any amount over $799 but under $799.50 was rounded down to $799 and included in the first class. Thus, the $600 - $799 class actually includes all data from $599.50 inclusive up to but not including $799.50.
Cumulative Frequency Distribution
When the observations are numerical, cumulative frequency is used. It shows the total number of observations which lie above or below certain key values.
The cumulative frequency for a class = the frequency of that class interval + the frequencies of the preceding intervals. For example, the cumulative frequencies for the above problem are: 3, 5, 9, and 10.
Presenting Data
Graphs, curves, and charts are used to present data. Bar charts are used to graph qualitative data. The bars do not touch, indicating that the attributes are qualitative categories; the variables are discrete and not continuous.
Histograms are used to graph absolute, relative, and cumulative frequencies.
An ogive is also used to graph cumulative frequency. An ogive is constructed by placing a point corresponding to the upper end of each class at a height equal to the cumulative frequency of the class. These points are then connected. An ogive also shows the relative cumulative frequency distribution on the right-side axis.
A less-than ogive shows how many items in the distribution have a value less than the upper limit of each class.
A more-than ogive shows how many items in the distribution have a value greater than or equal to the lower limit of each class.
A less-than cumulative frequency polygon is constructed by using the upper true limits and the cumulative frequencies.
A more-than cumulative frequency polygon is constructed by using the lower true limits and the cumulative frequencies.
A pie chart is often used in newspapers and magazines to depict budgets and other economic information. A complete circle (the pie) represents the total number of measurements. The size of a slice is proportional to the relative frequency of a particular category. For example, since a complete circle is equal to 360 degrees, if the relative frequency for a category is 0.40, the slice assigned to that category is 40% of 360, or (0.40)(360) = 144 degrees.
A Pareto chart is a special case of the bar chart and is often used in quality control. The purpose of this chart is to show the key causes of unacceptable quality. Each bar in the chart shows the degree of the quality problem for each variable measured.
A time series graph is a graph in which the X axis shows time periods and the Y axis shows the values related to these time periods.
Stem-and-leaf plots offer another method for organizing raw data into groups. These types of plots are similar to the histogram except that the actual data are displayed instead of bars. The stem-and-leaf plot is developed by first determining the stem and then adding the leaves. The stem contains the higher-valued digits and the leaf contains the lower-valued digits. For example, the number 78 can be represented by a stem of 7 and a leaf of 8. Thus, the numbers 34, 32, 36, 20, 20, 22, 54, 55, 52, 68, and 63 can be grouped as follows:

Stem    Leaf
2       0 0 2
3       2 4 6
4
5       2 4 5
6       3 8

Steps to Construct a Stem and Leaf Plot
1. Define the stem and leaf that you will use. Choose the units for the stem so that the number of stems in the display is between 5 and 20.
2. Write the stems in a column arranged with the smallest stem at the top and the largest stem at the bottom. Include all stems in the range of the data, even if there are some stems with no corresponding leaves.
3. If the leaves consist of more than one digit, drop the digits after the first. You may round the numbers to be more precise, but this is not necessary for the graphical description to be useful.
4. Record the leaf for each measurement in the row corresponding to its stem. Omit the decimals, and include a key that defines the units of the leaf.
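A short Python sketch of these steps, applied to the numbers listed above, reproduces the display (including the empty stem 4). This is only one way to implement the steps; it assumes two-digit data so that the tens digit is the stem.

```python
from collections import defaultdict

data = [34, 32, 36, 20, 20, 22, 54, 55, 52, 68, 63]

plot = defaultdict(list)
for value in sorted(data):
    stem, leaf = divmod(value, 10)             # tens digit is the stem, units digit the leaf
    plot[stem].append(leaf)

for stem in range(min(plot), max(plot) + 1):   # include stems with no corresponding leaves
    leaves = " ".join(str(leaf) for leaf in plot[stem])
    print(f"{stem} | {leaves}")
```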

LESSON 17:
PRINCIPLES OF STATISTICAL INFERENCE AND CONFIDENCE INTERVALS

In this chapter we shall learn to apply techniques of statistical inference to business situations. I will first introduce the basic idea underlying statistical inference and hypothesis testing. Our focus will be more on understanding key concepts intuitively and less on formulas and calculations, which can be done easily on computers.
A business manager in a typical managerial situation needs to determine whether results based on samples can be generalized to a population. Different management situations require different statistical techniques to carry out tests regarding the applicability of sample statistics to a population.
By the End of this Unit You Should Be Able to
• Understand the nature of statistical inference
• Understand the types of statistical inference
• Understand the theory behind statistical inference
• Apply sampling theory concepts to confidence intervals and estimation
Nature of Statistical Inference
What is Statistical Inference?
We have seen that all managerial/business situations involve decision making with incomplete information. When a particular finding emerges from data analysis, the manager asks whether the empirical findings represent the true picture or have occurred as a result of a sampling accident.
Statistical inference is the process by which we generalize from sample results to the population from which the sample has been drawn. Thus statistical inference is the process by which we extend our knowledge obtained from a random sample, which is only a small part of the population, to the whole population.
Where Do We Use Statistical Inference?
Let us think of a typical managerial situation. Imagine you are a purchase manager. Your basic problem is to ensure that a consignment of aluminum sheets supplied to you by a supplier corresponds to the required specification of .04 inch thickness. How do you go about ensuring this?
One way would be to accept blindly whatever your suppliers claim. Another option would be to audit each and every item. Clearly this would be both very time consuming and expensive, and would result in an unacceptably low level of productivity. Another option is for the manager to choose a random sample of 100 aluminum sheets and measure their thickness. He finds, for example, that the sheets in the sample have an average thickness of .048 inches. On the basis of past experience with the supplier, the manager believes that the sheets come from a population with a standard deviation of .004 inches. On the basis of this data he has to make a decision whether to accept or reject a consignment of 10,000 sheets.
To solve problems such as this we have to learn how to use the characteristics of samples to test an assumption about the population from which the sample comes. This, in effect, is the process of statistical inference.
The issues facing the manager in the above example are:
• Could a sample of 100 aluminum sheets with an average thickness of .048 inches have come from a population with an average thickness of .04 inches?
• Does the sample estimate of thickness differ from the population estimate due to sampling error, or is it because our fundamental assumption about the mean thickness of the underlying population is not correct?
• Suppose he believes it to be the first case and accepts the consignment; what is the risk that he runs that the consignment is flawed and does not conform to the quality standard of .04 inches?
This is just one example of a very typical managerial situation where the principles of statistical inference can be put to use to solve the manager's dilemma.
Types of Statistical Inference
Broadly, statistical inference falls into two major categories: estimation and hypothesis testing. Both are actually two sides of the same coin and can be regarded as representing different aspects of one technique.
Below I briefly explain each of them.
1. Estimation
This is concerned with how we use sample statistics to estimate population parameters. It is not necessary that an estimate be based on statistical data. All managers make quick estimates based on incomplete information, gut feel and intuition. Thus an estimate of sales for the next quarter can be based on gut feel or on an analysis of past sales data for the quarter. Both represent estimates. The difference between an estimate based on intuition and one based on a random sample is that for the latter the principles of probability allow us to calculate the percentage of error variation in an estimate attributable to sampling variation.
The sample mean, for example, can be used as an estimate of the population mean. Similarly, the percentage of occurrence in a sample of an attribute or event can be used to estimate the population proportion. To explain the concept a little more clearly we can look at a few examples of estimation:
• University departments make estimates of next year's enrollments on the basis of last year's enrollments in the same courses.
• Credit managers make estimates about whether a purchaser will pay his bills on the basis of the past behaviour of customers with similar characteristics or their past repayment record.
2. Hypothesis Testing
If we find a difference between two samples, we would like to know: is this a "real" difference (i.e., is it present in the population) or just a "chance" difference (i.e., could it just be the result of random sampling error)?
Hypothesis testing begins with an assumption, called a hypothesis, that we make about a population parameter. We then collect sample data and calculate sample statistics such as the mean and standard deviation to decide how likely it is that our hypothesized population parameter is correct. Essentially the process involves judging whether a difference between a sample value and an assumed population value is significant or not. The smaller the difference, the greater the chance that our hypothesized value for the mean is correct.
Some examples of real-world situations where we might want to test hypotheses:
• A random sample of 100 South Indian families finds that they consume more of a particular brand of Assam tea per family than a random sample of 100 North Indian families. It could be that the observed difference was caused by a sampling accident and that there is actually no difference between the two populations. However, if the results are not caused by pure sampling fluctuations, then we have a case for the firm to take some further marketing action based on the sampling finding.
• Colgate Palmolive have decided that a new TV ad campaign can only be justified if more than 55% of viewers see the ads. In this case the company requests a marketing research company to carry out a survey to assess viewership. The agency comes back with an ad penetration of 50% for a random sample of 1000. It is now the company's problem to assess whether the sample viewing proportion is representative of the hypothesized level of viewership that the company desires, i.e. 55%. Can differences between the two proportions be attributed to sampling error, or is the ad's true viewership actually lower?
In the next section we shall look at the theory behind statistical inference. The basis of inference remains the same irrespective of whether the managerial objective is to obtain a point or interval estimate of a population parameter or to test whether a particular hypothesis is supported by sample data or not.
Activities
1. What is statistical inference? What are the different types of inference?
2. Why do decision makers measure samples rather than entire populations? What are the disadvantages of sampling?
Theory Behind Statistical Inference
We now look at the underlying theoretical basis of statistical inference. The underlying basis of statistical inference is the theory of sampling distributions.
Now we shall briefly review some concepts which have been dealt with in more detail in the earlier chapter on sampling.
What is a Sample?
A sample is a representative subset of the underlying population.
For each sample that is taken from a population we can calculate various sample statistics such as the mean and variance. We can take many such samples from a population and calculate their means and standard deviations. Given the existence of sampling variation, it is likely that there is also going to be some variability in the different estimates of the mean and standard deviation. This can be explained best with the help of an example.
Suppose there is a store which sells CDs. We assume it has a regular customer base. A random sample of 100 customers is taken and we find that the sample mean age of customers is 42 years, with a standard deviation of 5 years. However, this is only one possible sample which could have been taken. A second, different sample may have had a result where the mean was equal to 45 years and the standard deviation to 6 years. To change a sample we need only change one of the customers. We would expect samples taken from a population to generate similar, if not identical, sample means. If we take repeated samples such that all possible samples are taken, then we obtain the sampling distribution of means.
What Does this Distribution Look Like?
Logically we can conceive that there is only one sample which will contain the youngest possible customers, and its mean will be the lowest sample mean. Similarly, there will be another couple of samples containing the 99 youngest customers. These samples will have means which are slightly higher than the lowest mean. A somewhat higher number will contain the 98 youngest customers, and so on.
The majority of the samples will have a cross-section of all age groups, and therefore there would be a clustering of sample means around what is likely to be the true population mean. The distribution of sample means will look like the normal distribution, as shown in figure 1 below.
[Figure 1: Sampling distribution of the sample mean - values clustered around µ]
This result follows from the Central Limit Theorem: if we take random samples of size n from a population, the distribution of sample means will approach that of a normal probability distribution. This approximation is closer the larger n is.
We do not actually know what form our population distribution takes: it could be normal or it could be skewed. However, it doesn't matter, as the sampling distribution will approximate a normal distribution as long as sufficiently large samples are taken.
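This claim is easy to check by simulation. The sketch below builds an artificial, deliberately skewed population of customer ages (the numbers are made up and are not the store data discussed above), draws repeated samples of 100, and shows that the sample means cluster tightly around the population mean.

```python
import random
import statistics

random.seed(1)
# A skewed, artificial population of customer ages (hypothetical data).
population = [18 + random.expovariate(1 / 15) for _ in range(50_000)]

# Draw many samples of size n = 100 and record each sample mean.
sample_means = [statistics.mean(random.sample(population, 100)) for _ in range(1_000)]

print(f"population mean:          {statistics.mean(population):.2f}")
print(f"mean of the sample means: {statistics.mean(sample_means):.2f}")
print(f"population std deviation: {statistics.pstdev(population):.2f}")
print(f"std dev of sample means:  {statistics.pstdev(sample_means):.2f}  (close to sigma / sqrt(n))")
```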

Normal Distribution
We now look briefly at some of the key characteristics of the normal distribution.
The normal sampling distribution can be summarized by its two statistics:
• Mean µ
• Standard deviation σ/√n
Logically we can see that the mean of the sampling distribution should equal the mean of the population. The standard deviation of the sampling distribution is given by σ/√n, where σ is the population standard deviation and n is the sample size. Thus the sampling distribution of the mean can be defined in terms of its mean and standard deviation.
However, We Should Be Clear That We Are Talking About Three Different Statistics

                                     Mean    Standard deviation
Sample                               x̄       s
Population                           µ       σ
Sampling distribution of the mean    µ       σ/√n

The Three Distributions are Illustrated Below
The two distributions are illustrated in figure 2 below. As can be seen, the sampling distribution of the sample mean is far more concentrated than the population distribution. However, both distributions have the same mean µ.
[Figure 2: The sampling distribution of the mean and the population distribution]
Application of Sampling Theory Concepts to Confidence Intervals
Once we have calculated our sample mean we need to know where it lies in the sampling distribution of the mean in relation to the true mean of the sampling distribution, or the population mean. It might be higher than the population mean or lower, or it might be identical with the population mean. While we cannot know for certain where the sample mean lies in relation to the population mean, we can use probability to assess its likely position vis-a-vis the population mean.
From our earlier classes we know that, irrespective of the values of µ and σ, for a normal probability distribution the total area under the normal curve is 1.00. Further, specific portions of the normal curve lie between plus/minus any given number of standard deviations from the mean. These results are summarized below:
• Approximately 68% of all values in a normally distributed population lie within ±1 standard deviation of the mean. Approximately 16% of the area lies outside this range on either side of the population mean. This is illustrated in figure 3.
• Approximately 95.5% of all values in a normally distributed population lie within ±2 standard deviations of the mean. Approximately 2.25% of the area lies outside this range on either side of the population mean. This is illustrated in figure 4.
• Approximately 99.7% of all values in a normally distributed population lie within ±3 standard deviations of the mean. Only .15% of the area under the curve lies outside this range on either side of the mean. This is illustrated in figure 5.
[Figures 3, 4 and 5: areas under the normal curve within ±1, ±2 and ±3 standard deviations of the mean]
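These three percentages can be verified directly from the standard normal distribution. The short sketch below uses only the Python standard library; the area within ±k standard deviations of the mean is erf(k/√2).

```python
import math

def area_within(k):
    """Area under the standard normal curve within +/- k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    inside = area_within(k)
    outside_each_side = (1 - inside) / 2
    print(f"within +/-{k} sd: {inside:.2%}   outside on each side: {outside_each_side:.2%}")
# prints approximately 68.27%, 95.45% and 99.73% for the area inside the range
```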
Standard Normal Distribution
However, we rarely need intervals involving only one, two or three standard deviations. Statistical tables provide the areas under the normal curve that are contained within any number of standard deviations (plus/minus) from the mean. We do this by constructing the standard normal distribution, which is standardized. Thus all normal distributions with mean µ and standard deviation σ can be transformed into a standard normal distribution with µ = 0 and σ = 1. This transformation is done using the z statistic, where
z = (x̄ - µ) / σ
The distribution of the z statistic represents the standard normal distribution with mean µ = 0 and standard deviation σ = 1.
With the normal table we can determine the probability that the sample mean x̄ lies within a certain distance of the population mean. The distance from the mean is defined in terms of the number of standard deviations away from the mean.
How Do we do This?
RESEARCH METHODOLOGY

This follows from the result that, irrespective of the particular values of µ and σ of a normal curve, the area under the curve within a distance of one, two, three or any given number of standard deviations from the mean is the same across all normal curves. Therefore all intervals containing the same number of standard deviations from the mean contain the same proportion of the total area under the curve. Hence we can make use of only one standard normal distribution.
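These areas can be verified directly in software. A minimal sketch, assuming Python with the scipy library is available, computes the area within ±1, ±2 and ±3 standard deviations and the area left in each tail:

from scipy.stats import norm

for k in (1, 2, 3):
    inside = norm.cdf(k) - norm.cdf(-k)          # area within ±k standard deviations
    tail = (1 - inside) / 2                      # area in each tail
    print(k, round(inside, 4), round(tail, 4))   # 0.6827/0.1587, 0.9545/0.0228, 0.9973/0.0013

The output reproduces the 68.3%, 95.5% and 99.7% figures (and the corresponding tail areas) quoted above.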
Using the Standard Normal Probability Distribution
Figure 6 below shows the raw scale and the standard normal transformation. The standardized normal variable is z = (x − µ)/σ. As can be seen from the figure, z actually represents a transformation or change in measurement scale on the horizontal axis. Thus in the raw scale µ = 50; in the standard normal scale this value is transformed to µ = 0.
[Figure 6: The raw scale and the corresponding standard normal (z) scale]
The standard normal probability table is organized in terms of z values. It gives areas for only half of the curve; because the distribution is symmetric, values which hold for one half of the distribution are true for the other.
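In practice the table lookup can also be done in software. The sketch below, again assuming Python with the scipy library, converts a few two-sided confidence levels into the corresponding z values that appear later in this lesson:

from scipy.stats import norm

for level in (0.90, 0.95, 0.955, 0.9944):
    z = norm.ppf(0.5 + level / 2)     # upper cut-off for a two-sided interval
    print(level, round(z, 2))         # 1.64, 1.96, 2.0, 2.77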
So far we have tried to understand the theory of sampling underlying confidence intervals. We now turn to defining what exactly a confidence interval is.

LESSON 18:
STATISTICAL INFERENCES AND SAMPLING DISTRIBUTION

In the last lesson we focused on understanding the theoretical foundations of statistical inference and sampling distributions. In this lesson we will look at the concept of confidence interval estimation in more detail and attempt to understand it intuitively, as well as its applications to management situations. We had also examined the concept of confidence intervals. Managers are always making estimates, and confidence limits help define the limits of these estimates. The concept applies, as we shall see, even when we make a non-statistical estimate. We shall continue with this concept in this lecture and attempt to understand intuitively what we mean by a confidence interval. We shall also spend some time practicing applications of the principles we have learnt in this lesson to practical situations.
By the end of this chapter you should be able to:
• Apply concepts of interval estimates, confidence intervals and confidence levels to business situations
• Learn to interpret a confidence interval
• Determine factors influencing choice of technique
• Examine issues relating to the determination of level of significance
Confidence Intervals
In the earlier sections we saw that the same proportion of area under the normal curve lies within plus/minus any given number of standard deviations, and these proportions can be related to specific probabilities. We can use this property to make a statement about the probability that a particular interval – defined in terms of the number of standard deviations from the mean – contains the population mean.
A confidence interval around a sample mean would be defined as:
x̄ ± z SE
Thus we can see that the concept of a confidence interval provides an interval estimate around x̄ which would essentially represent the extent of sampling error.
Examples of Application of Confidence Interval Estimation
1. From a population with variance of 185, a sample of 64 individuals has an estimate of the mean = 217.
i. Find the SE of the mean.
ii. Establish an interval estimate that should include the population mean 68.3% of the time.
σ² = 185, n = 64, mean = 217
i. SE of the mean = σ/√n = 13.60/√64 = 1.70
ii. x̄ ± SE = 217 ± 1.70 = (215.3, 218.7)
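For readers who prefer to check such arithmetic in software, the example above can be reproduced with a few lines of Python (the language choice here is simply illustrative):

import math

sigma = math.sqrt(185)            # population standard deviation, about 13.6
n, x_bar = 64, 217
se = sigma / math.sqrt(n)         # standard error, about 1.70
print(round(se, 2))                                   # 1.7
print(round(x_bar - se, 1), round(x_bar + se, 1))     # 215.3 and 218.7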
Activities
1. From a population known to have a standard deviation of 1.4, a sample of 60 is taken. The mean is found to be 6.2.
i. Find the SE of the mean.
ii. Establish an interval estimate around the mean, using one standard error of the mean.
Ans: i. .181 ii. (6.019, 6.381)
2. University of Delhi is conducting a study on the average weight of the bricks that make up the university's paths. Workers were sent to dig up and weigh a sample of 421 bricks. The sample mean weight was 14.2 kg. It is a well known fact that the standard deviation of brick weight is .8 kg.
i. Find the SE of the mean.
ii. What is the interval around the sample mean that will include the population mean 95.5% of the time?
Ans: i. .0390 kg ii. (14.122 kg, 14.278 kg)
Understanding Confidence Levels and Confidence Intervals
What Is A Confidence Level?
In statistics the probability associated with an interval estimate is called the confidence level. This probability indicates how confident we are that the interval estimate will include the population parameter. A higher probability means more confidence. The most commonly used confidence levels are 90%, 95% and 99%. These are not the only confidence levels; we can, for example, have a 95.5% or an 80% confidence level.
What is a Confidence Interval?
This is the range of the estimate we are making. For example, we can say that the mean income of the population will lie between Rs 8000 and Rs 24000. Confidence intervals can also be expressed in terms of standard errors rather than numerical values:
Mean + 1.64 SE = upper limit of the 90% confidence interval
Mean − 1.64 SE = lower limit of the 90% confidence interval
What is The Relationship Between Confidence Level and Confidence Interval?
Somehow we have a perception that a high confidence level such as 99% means a high degree of accuracy. In fact it can mean the very opposite, as it will produce a larger confidence interval. Larger confidence intervals mean that estimates are not so precise and there is an element of fuzziness about them.
These concepts are not just used in statistics but have relevance to our day-to-day life, where we frequently set up confidence intervals and try to establish the associated confidence level with that interval. An example can best illustrate the relation between the two.

A customer is interested in getting his washing machine repaired. He discusses the possible time frame for getting his repair done with the maintenance manager of the service center. The table below shows that as the customer sets tighter and tighter confidence intervals, the service center manager hedges by agreeing to a lower and lower level of confidence.
Will my machine be repaired..
Customer's demand      Manager's hedge
..within a year        I am absolutely certain (99%)
..within a month       I am almost positive (95%)
..within a week        I am pretty sure of it (80%)
..by tomorrow          I am not certain that I can manage it (40%)
..immediately          There is little chance of that (1%)
As we can see, as the customer sets tighter and tighter confidence intervals the manager sets progressively lower confidence levels. Finally, when the confidence interval is very narrow (will I get it immediately?) the estimate is associated with so low a level of confidence (1%) as to be useless. Ultimately there are no free lunches when it comes to dealing with confidence intervals and levels. The manager has to weigh the benefit of the high confidence level or certainty associated with a decision against the cost of having to accept a lower level of accuracy.
Interpreting a Confidence Interval
How do we interpret a confidence interval?
The process of sampling is such that we use one sample to estimate a population parameter. When we set up a confidence interval around a sample mean along with an associated confidence level, what exactly do we mean? This concept can be illustrated more clearly with the help of an example:
A company estimates, on the basis of a sample of 100, that the life of the car batteries it manufactures is 36 months. In probability terms we can make the following statements. We can say that we are…
• 68.3% confident that the true life lies in the interval 35.293 to 36.707 months (36 ± 1 standard error)
• 95.5% confident that the true life lies in the interval 34.586 to 37.414 months (36 ± 2 standard errors)
• 99.7% confident that the true life lies in the interval 33.879 to 38.121 months (36 ± 3 standard errors)
For example, we say that we are 95.5% confident that the mean life of a battery lies within 34.586 and 37.414 months.
This does not mean that there is a 95.5% probability that the mean life of all batteries falls within the interval established.
Instead it means that if we select many random samples of the same size and calculate a confidence interval around the mean of each of these samples, then in about 95.5% of these cases the population mean will lie within that interval.
Similarly, the following two statements express a confidence interval:
• 95.5% of all sample means are within ±2 standard errors of the population mean
Or equivalently
• There is a 95.5% probability that µ lies within ±2 standard errors of all sample means.
What do these statements mean?
What we mean is that if we select 1000 samples at random from a given population and then construct a ±2 standard error interval around the mean of each of these samples, we will find that approximately 95.5% of these intervals will include the population mean. Similarly, the probability is 68.3% that the mean of a sample will be within ±1 standard error of the population mean.
Figure 1 below illustrates this graphically. We show the ±2 standard error interval for five different samples. As we can see, only the interval around x̄4 does not contain the population mean.
[Figure 1: ±2 standard error intervals around the means of five different samples]
In other words, we expect that in 95.5% of samples the population mean would be located in an interval ±2 standard errors from the sample mean. Ultimately the confidence level for an estimate is based on the expected results if the sampling process is repeated many times.
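This repeated-sampling interpretation can be checked with a small simulation. The Python sketch below assumes a population with mean 36 and standard deviation 7.07 (chosen so that the standard error for samples of 100 is roughly 0.707, consistent with the battery example); it counts how often the ±2 SE interval built around each sample mean actually contains the population mean:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 36, 7.07, 100, 10_000   # illustrative values only
se = sigma / np.sqrt(n)
hits = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    if abs(sample.mean() - mu) <= 2 * se:      # does the ±2 SE interval cover mu?
        hits += 1
print(hits / trials)                           # close to 0.955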
Solving Problems Based On Confidence Interval Estimation
It is easy to solve problems on confidence intervals if we understand that a confidence level is defined by how many standard errors there are on either side of the mean. This is indicated by the shaded region in a diagram of the normal distribution (see fig 1). The normal tables given in the appendix convert any desired level of confidence into standard errors from the mean of a standard normal distribution. Since we have information regarding one standard error, we can calculate the end points or limits of our confidence interval as the SE multiplied by the appropriate z statistic. If the population standard deviation is not known, we can use the sample standard deviation (s) to estimate the population standard deviation.
Examples
1. Given the following confidence levels, express the lower and upper limits in terms of the sample mean and SE.
- 54%: x̄ ± .74 SE
- 75%: x̄ ± 1.15 SE
- 95%:
- 98%:
2. For a population with known variance of 185, a sample of 64 individuals leads to 217 as an estimate of the mean.
i. Find the SE of the mean.
ii. Establish an interval estimate that should include the population mean 68.3% of the time.
The data is as follows: n = 64, sample mean x̄ = 217, σ² = 185
i. SE = σ/√n
σ = √185 = 13.6
SE = 13.6/8 = 1.7
ii. 68.3% confidence interval
z = 1.00
217 ± 1.7
3. Ena is a frugal undergraduate at a university. She is interested in purchasing a used stereo. She randomly selects 125 newspaper ads and finds that the average price of a used stereo in the sample is Rs 3250. She knows the standard deviation of used stereo prices is Rs 615.
i. Establish an interval estimate of stereo prices so that Ena can be 68.3% certain that the population mean price lies within this interval.
ii. Establish an interval estimate of stereo prices so that Ena can be 95.5% certain that the population mean lies within this interval.
Again we first write our data:
n = 125, x̄ = 3250, σ = 615
SE = σ/√n = 615/√125 = 55.0
i. 68.3% interval
z = 1.0
Rs 3250 ± 55.0
ii. 95.5% interval
z = 2.0
Rs 3250 ± 110.0
4. Steve Kipper has a reputed barber's shop. As each customer enters, Steve yells out the number of minutes that the customer can expect to wait before getting his cut. The only statistician in town is frustrated with what he feels are highly inaccurate point estimates of Steve's. He determines that the actual waiting time for any customer is normally distributed with mean equal to Steve's estimate in minutes and standard deviation equal to 5 minutes divided by the customer's position in the waiting line. Help Steve's customers develop 95 percent probability intervals for the following situations:
a) The customer is second in line and Steve's estimate is 25 minutes.
b) The customer is third in line and Steve's estimate is 15 minutes.
c) The customer is fifth in line and Steve's estimate is 38 minutes.
d) The customer is first in line and Steve's estimate is 20 minutes.
e) How are these intervals different from confidence intervals?
To solve this problem we need to restate it in terms of sampling concepts.
Sample mean = Steve's estimate of waiting time
Standard error of the estimate = 5 minutes / customer's position in the queue
a. To set up a 95% interval for the second customer:
Sample mean = 25, standard deviation = 5/2 = 2.5
Interval = 25 ± 1.96 × 2.5 = 25 ± 4.9 mins
b. 15 ± 3.267 mins
c. 38 ± 1.96 mins
d. 20 ± 9.8 mins
e. These are prediction intervals for the next observations rather than confidence intervals for the population mean based on a sample that has already been taken.
Exercises
1. Define confidence level for an interval estimate.
2. Suppose you wish to use a confidence level of 80%. Define the upper and lower limits in terms of the sample mean and SE.
3. In what way may an estimate be less meaningful because of
a) a high confidence level?
b) a narrow confidence interval?
4. Suppose a sample of 50 is taken from a population with a known standard deviation.
a) Establish an interval estimate for the population mean that is 95.5 percent certain to include the true population mean.
b) Suppose, instead, that the sample size was 5,000. Establish an interval for the population mean that is 95.5 percent certain to include the true population mean.
c) Why might estimate (a) be preferred to estimate (b)? Why might (b) be preferred to (a)?
5. Given the following confidence levels, express the lower and upper limits of the confidence interval for these levels in terms of x̄ and SE.
a) 60 percent
b) 70 percent
c) 92 percent
d) 96 percent
6. Jon Jackobsen, an overzealous graduate student, has just completed a first draft of his 700-page dissertation. Jon has typed his paper himself and is interested in knowing the average number of typographical errors per page, but does not want to read the whole paper. Knowing a little bit about business statistics, Jon selected 40 pages at random to read and found that the average number of typos per page was 4.3 and the sample standard deviation was 1.2 typos per page.
a. Calculate the estimated standard error of the mean.
b. Construct for Jon a 90 percent confidence interval for the true average number of typos per page in his paper.
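The worked examples above all follow the same pattern: compute the standard error, then multiply it by the z value for the desired confidence level. A minimal Python sketch of that pattern, shown here with the figures from the used-stereo example, is:

import math

def mean_interval(x_bar, sigma, n, z):
    # return (lower, upper, se) for the interval x_bar ± z * sigma/sqrt(n)
    se = sigma / math.sqrt(n)
    return x_bar - z * se, x_bar + z * se, se

print(mean_interval(3250, 615, 125, 1.0))   # 68.3% interval, SE about Rs 55
print(mean_interval(3250, 615, 125, 2.0))   # 95.5% interval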

LESSON 19:
MODEL BUILDING AND DECISION MAKING

Friends, after reading this lesson, you should be able to:
• Explain the concepts of model building and decision-making;
• Discuss the need for model building in managerial research;
• Relate the different types of models to different decision-making situations;
• Describe the principles of designing models for different types of managerial research/decision-making situations.
Introduction
Dear friends, you are the future managers, and a manager, whichever type of organisation he or she works in, very often faces situations where a choice has to be made among two or more alternative courses of action. These are called decision-making situations.
An example of such a situation would be the point of time when you took the decision to join this management programme. Possibly, you had a number of alternative management education programmes to choose from. Or, at worst, maybe you had admission in this programme only. Even in that extreme type of situation you had a choice - whether to join the programme or not! You have, depending upon your own decision-making process, made the decision.
The different types of managerial decisions have been categorized in the following manner:
1) Routine/Repetitive/Programmable vs. Nonroutine/Nonprogrammable decisions.
2) Operating vs. Strategic decisions.
The routine/repetitive/programmable decisions are those which can be taken care of by the manager by resorting to standard operating procedures (also called "SOPs" in managerial parlance). Such decisions the manager has to take fairly often and he/she knows the information required to facilitate them. Usually the decision maker has knowledge in the form of "this is what you do" or "this is how you process" for such decision-making situations. Examples of these decisions could be processing a loan application in a financial institution and supplier selection by a materials manager in a manufacturing organization.
The non-repetitive/non-programmable/strategic decisions are those which have a fairly long-term effect in an organisation. Their characteristics are such that no routine methods, in terms of standard operating procedures, can be developed for taking care of them. The element of subjectivity/judgment in such decision-making is fairly high. Since the type of problem faced by the decision maker may vary considerably from one situation to another, the information needs and the processing required to arrive at the decision may also be quite different.
The decision-making process followed may consist, broadly, of some or all of the steps given below:
1) Problem definition;
2) Identifying objectives, criteria and goals;
3) Generation/enumeration of alternative courses of action;
4) Evaluation of alternatives;
5) Selection/choosing the "best" alternative;
6) Implementation of the selected alternative.
All the above steps are critical in decision-making situations. However, in the fourth and fifth steps, i.e., evaluation and selection, models play a fairly important role. In this lesson we will concentrate on model building and decision-making.
Activity 1
Suppose you have recently bought a Sony Colour TV for your house. Describe briefly the decision process you have gone through before making this choice.
Models and Modelling
Many managerial decision-making situations in organizations are quite complex. So, managers often take recourse to models to arrive at decisions.
Model: The dictionary meaning of the word model is "a representation of a thing". It is also defined as the body of information about a system gathered for the purpose of studying the system. It is also stated as the specification of a set of variables and their interrelationships, designed to represent some real system or process in whole or in part.
Modelling: Models can be understood in terms of their structure and purpose. The purpose of modelling for managers is to help them in decision-making. The term 'structure' in models refers to the relationships of the different components of the model.
In the case of large, complex, and untried problem situations the manager is wary of taking decisions based on intuition. A wrong decision can possibly land the organisation in dire straits. Here modelling comes in handy. It is possible for the manager to model the decision-making situation and try out the
alternatives on it to enable him to select the "best" one. This can be compared to non-destructive testing in the case of manufacturing organisations.
Presentation of Models: There are different forms through which models can be presented. They are as follows:
1) Verbal or prose models.
2) Graphical/conceptual models.
3) Mathematical models.
4) Logical flow models.
Let us discuss each model one by one -
Verbal Models: The verbal models use everyday English as the language of representation. An example of such a model from the area of materials management would be as follows:
"The price of materials is related to the quantum of purchases for many items. As the quantum of purchases increases, the unit procurement price exhibits a decrease in a step-wise fashion. However, beyond a particular price level no further discounts are available."
Graphical Models: I think you can easily understand graphical models. The graphical models are more specific than verbal models. They depict the interrelationships between the different variables or parts of the model in diagrammatic or picture form. They improve exposition, facilitate discussion and guide analysis. The development of mathematical models usually follows graphical models.
Mathematical Models: The mathematical models describe the relationships between the variables in terms of mathematical equations or inequalities. Most of these include clearly the objectives, the uncertainties and the variables. These models have the following advantages:
1) They can be used for a wide variety of analyses.
2) They can be translated into computer programs.
The example of a mathematical model that is very often used by materials managers is the Economic Order Quantity (EOQ). It gives the optimal order quantity (Q) for a product in terms of its annual demand (A), the ordering cost per order (Co), the inventory carrying cost per unit (Ci) and the purchase cost per unit (Cp). The model equation is as follows:
Q = √( (2 × A × Co) / (Ci × Cp) )
Logical Flow Models: The logical flow models are a special class of diagrammatic models. Here, the model is expressed in the form of symbols which are usually used in computer programming and software development. These models are very useful for situations which require multiple decision points and alternative paths. These models, once one is familiar with the symbols used, are fairly easy to follow. An example of such a model, for a materials procurement situation with quantity discounts allowed, is given in Fig. 1.
[Fig. 1 flowchart: Start → Is procurement quantity > 1000? If yes, procurement price = 10 and Total Cost = 10 × Q; if no, procurement price = 15 and Total Cost = 15 × Q → Stop]
Fig. 1: A logical flow model for material procurement decisions with quantity discounts allowed.
Activity 2
Mention below a mathematical model which has been used for sales forecasting by your organization, or any organization you know of.
________________________________________________________________________
________________________________________________________________________
Activity 3
Think of a production decision situation and present it diagrammatically using a logical flow model.
________________________________________________________________________
________________________________________________________________________
Role of Modelling in Research in Managerial Decision-making
In the previous sections of this lesson, we have tried to explore the topics of model building and decision-making. However, we confined ourselves to bits and pieces of each concept, and their illustration in a comprehensive decision-making situation has not been attempted. In this section we will look at a managerial decision-making situation in totality and try to understand the type of modelling which may prove of use to the decision maker.
The example we will consider here is the case of a co-operative state level milk and milk products marketing federation. The federation has a number of district level dairies affiliated to it, each having the capacity to process raw milk and convert it into a number of milk products like cheese, butter, milk powders, ghee, shrikhand, etc. The diagrammatic model of the processes in this set up is depicted.
The type of decisions which have to be made in such a set up can be viewed as a combination of short/intermediate term and long-term ones.

The short-term decisions are typically product-mix decisions, like deciding:
1. Where to produce which product, and
2. When to produce it.
The profitability of the organisation depends to a great extent on the ability of the management to make these decisions optimally. The long-term decisions relate to
1. The capacity creation decisions, such as which type of new capacity to create, when, and at which location(s), and
2. Which new products to go in for.
Needless to say, this is a rather complex decision-making situation, and intuitive or experience based decisions may be way off from the optimal ones. Modelling of the decision-making process and the interrelationships here can prove very useful.
In the absence of a large integrated model, a researcher could attempt to model different subsystems in this set up. For instance, time series forecasting based models could prove useful for taking care of the milk procurement subsystem; for the product demand forecasting one could take recourse, again, to time series or regression based models; and for product-mix decisions one could develop linear programming based models.
We have in this section seen a real life, complex managerial decision-making situation and looked at the possible models the researcher could propose to improve the decision-making. Similar models could be built for other decision-making situations.
Types of Models
Models in managerial system studies have been classified in many ways. The dimensions used in describing the models are:
a) Physical vs. Mathematical,
b) Macro vs. Micro,
c) Static vs. Dynamic,
d) Analytical vs. Numerical, and
e) Deterministic vs. Stochastic.
Now let us understand each type and its utility –
Physical Models - In physical models a scaled down replica of the actual system is very often created. Engineers and scientists usually use these models. In managerial research one finds the utilisation of physical models in the realm of marketing, in the testing of alternative packaging concepts.
Mathematical Models - The mathematical models use symbolic notation and equations to represent a decision-making situation. The system attributes are represented by variables and the activities by mathematical functions that interrelate the variables. We have earlier seen the economic order quantity model as an illustration of such a model.
Macro vs. Micro Models - The terms macro and micro in modelling are also referred to as aggregative and disaggregative respectively. The macro models present a holistic picture of a decision-making situation in terms of aggregates. The micro models include explicit representations of the individual components of the system.
Dynamic vs. Static Models - These differ in the consideration of time as an element in the model. Static models assume the system to be in a balanced state and show the values and relations for that state only. Dynamic models, however, follow the changes over time that result from the system activities. Obviously, the dynamic models are more complex and more difficult to build than the static models. At the same time, they are more powerful and more useful for most real life situations.
Analytical vs. Numerical Models - The analytical and the numerical models refer to the procedures used to solve mathematical models. Mathematical models that use analytical techniques (meaning deductive reasoning) can be classified as analytical type models. Those which require a numerical computational technique can be called numerical type mathematical models.
Deterministic vs. Stochastic Models - The final way of classifying models is into the deterministic and the probabilistic/stochastic ones. The stochastic models explicitly take into consideration the uncertainty that is present in the decision-making process being modelled. We have seen this type of situation cropping up in the case of the milk marketing federation decision-making. The demand for the products and the milk procurement in this situation are uncertain. When we explicitly build these uncertainties into our milk federation model, it gets transformed from a deterministic to a stochastic/probabilistic type of model.
Activity 4
Give an example of each of the following models used for decision-making.
a) Macro Model
b) Micro Model
c) Deterministic Model
components of the system.

d) Stochastic Model
Objectives of Modelling
The objectives or purposes which underlie the construction of models may vary from one decision-making situation to another. In one case a model may be used for explanation purposes, whereas in another it may be used to arrive at the optimum course of action. The different purposes for which modelling is attempted can be categorised as follows:
1) Description of the system functioning.
2) Prediction of the future.
3) Helping the decision maker/manager decide what to do.
The first purpose is to describe or explain a system and the processes therein. Such models help the researcher or the manager in understanding complex, interactive systems or processes. The understanding, in many situations, results in improved decision-making.
An example of this can be quoted from consumer behaviour problems in the realm of marketing. Utilising these models the manager can understand the differences in the buying patterns of household groups. This can help him in designing, hopefully, improved marketing strategies.
The second objective of modelling is to predict future events. Sometimes the models developed for description/explanation can be utilised for prediction purposes also. Of course, the assumption made here is that past behaviour is an important indicator of the future. The predictive models provide valuable inputs for managerial decision-making.
The last major objective of modelling is to provide the manager inputs on what he should do in a decision-making situation. The objective of modelling here is to optimize the decision of the manager subject to the constraints within which he is operating.
Example: A materials manager may like to order the materials for his organisation in such a manner that the total annual inventory related costs are minimum, and the working capital never exceeds a limit specified by the top management or a bank. The objective of modelling, in this situation, would be to arrive at the optimal material ordering policies.
Activity 5
You may go through various issues of any management journal(s). It is very likely that you may come across a regression model for estimating sales, advertisement expenditure, price or any other variable. Discuss how the model may be used for the following:
i) Explanation purposes.
ii) Prediction of the future value of the dependent variable.
iii) Helping the decision maker decide what to do to achieve a given objective.
Model Building/Model Development
The approach used for model building or model development for managerial decision making will vary from one situation to another. However, we can enumerate a number of generalized steps which can be considered as being common to most modelling efforts. The steps are:
1) Identifying and formulating the decision problem.
2) Identifying the objective(s) of the decision maker(s).
3) System elements identification and block building.
4) Determining the relevance of different aspects of the system.
5) Choosing and evaluating a model form.
6) Model calibration.
7) Implementation.
The decision problem for which the researcher intends to develop a model needs to be identified and formulated properly. Precise problem formulation can lead one to the right type of solution methodology. This process can require a fair amount of effort. Improper identification of the problem can lead to solutions for problems which either do not exist or are not important enough.
Example: A manager stated that the cause of the bad performance of his company was the costing system being followed. A careful analysis of the situation by a consultant indicated that the actual problem lay elsewhere, i.e., in the improper product-mix being produced by the company. One can easily see here the radically different solutions/models which could emerge from the rather different identifications of the problem.
The problem identification is accompanied by understanding the decision process and the objective(s) of the decision maker(s). Very often, especially in the case of complex problems, one may run into situations of multiple conflicting objectives. Determination of the dominant objective, trading-off between the objectives,

and weighting the objectives could be some ways of taking care of this problem. The typical objectives which could feature in such models can be maximizing profits, minimizing costs, maximizing employment, maximizing welfare, and so on.
The next major step in model building is description of the system in terms of blocks. Each of the blocks is a part of the system which has a few input variables and a few output variables. The decision-making system as a whole can be described in terms of interconnections between blocks and can be represented pictorially as a simple block diagram. For instance, we can represent a typical marketing system in the form of a block diagram (please refer to Figure 4). However, one should continuously question the relevance of the different blocks vis-a-vis the problem definition and the objectives. Inclusion of not so relevant segments in the model increases the model complexity and the solution effort.
A number of alternative modelling forms or structures may be possible. For instance, in modelling marketing decision-making situations, one may ask questions such as whether the model justifies assumptions of linearity, non-linearity (but linearizable) and so on. Depending upon the modelling situation one may recommend the appropriate modelling form.
The model selection should be made considering its appropriateness for the situation. One could evaluate it on criteria like theoretical soundness, goodness of fit to the historical data, and the possibility of producing decisions which are acceptable in the given context.
[Figure 4: Block diagram of a typical marketing system - Marketing Manager, Regional Officials, Depot, Finished goods warehouse, Wholesalers, Dealers, Retailers, Competitors and Consumers, linked by product flows and information flows]
The final steps in the model development process are related to model calibration and implementation. This involves assigning values to the parameters in the model. When sample data are available we can use statistical techniques for calibration. However, in situations where little or no data are available, one has to take recourse to subjective procedures. Model implementation involves training the support personnel and the management on system use procedures. Documentation of the model and procedures for continuous review and modification are also important here.
Activity 6
You are the personnel manager of a construction company. If you were asked to build a model to forecast the manpower requirements of both skilled and non-skilled workers for the next five years, list out the steps you may consider for building the model.
_____________________________________________________________
_____________________________________________________________
Model Validation
When a model of a decision-making situation is ready, a final question about its validity should always be raised. The modeller should check whether the model represents the real life situation and is of use to the decision maker.
A number of criteria have been proposed for model validation. The ones which are considered important for managerial applications are face validity, statistical validity and use validity. In face validity, among other things, we are concerned about the validity of the model structure. One attempts to find whether the model does things which are consistent with managerial experience and intuition. This improves the likelihood of the model actually being used.
In statistical validity we try to evaluate the quality of the relationships being used in the model. The use validity criteria may vary with the intended use of the model. For instance, for descriptive models one would place emphasis on face validity and goodness of fit.
Simulation Models
Simulation models are a distinct class of quantitative models, usually computer based, which are found to be of use for complex decision problems.
The term 'simulation' is used to describe a procedure of establishing a model and deriving a solution numerically. A series of trial and error experiments are conducted on the model to predict the behaviour of the system over a period of time. In this way the operation of the real system can be replicated.
Simulation is also a technique which is used for decision making under conditions of uncertainty. Generally, simulation is used for modelling in conditions where mathematical formulation and solution of the model are not feasible. This methodology has been used in numerous types of decision problems, ranging from queuing and inventory management to energy policy modelling.
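To make the idea of deriving a solution numerically concrete, here is a minimal simulation sketch in Python. All of the numbers (daily milk collection, demand, prices and capacity) are invented purely for illustration and are not taken from the federation example above:

import random

random.seed(7)

def simulate_profit(days=1000, capacity=500):
    # toy stochastic model: uncertain daily milk collection and product demand
    total = 0.0
    for _ in range(days):
        milk = random.gauss(450, 60)       # litres collected (uncertain)
        demand = random.gauss(420, 80)     # litres worth of product demanded (uncertain)
        sold = max(0.0, min(milk, demand, capacity))
        total += sold * 8 - milk * 5       # revenue per litre sold minus procurement cost
    return total / days

print(round(simulate_profit(), 2))         # average daily contribution under uncertainty

Running such a model many times, with different capacities or procurement policies, is exactly the kind of trial and error experimentation on a model that the paragraph above describes.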

Tutorial
1) What do you understand by the term decision-making?
What is the role of models in managerial decision-making?
Explain.
2) Briefly review the different type of models along with their
characteristics.
3) Take a complex decision making situation in your
organisation. Try to identify the blocks in it along with their
interrelationships. Try to prepare a graphical model of this
decision-making situation.
4) What are the purposes of modelling? Discuss.

LESSON 20:
PRINCIPLE OF HYPOTHESIS TESTING

So far we have talked about estimating a confidence interval along with the probability (the confidence level) that the true population statistic lies within this interval under repeated sampling. We now extend the principles of statistical inference to hypothesis testing.
By the End of this Chapter you Should be Able to
• Understand what hypothesis testing is
• Examine issues relating to the determination of the level of significance
• Apply tests of hypotheses to management situations
• Use the SPSS package to carry out hypothesis tests and interpret computer output, including p-values
What is Hypothesis Testing?
What is a hypothesis?
A hypothesis is an assumption that we make about the population parameter. This can be any assumption about a population parameter, not necessarily one based on statistical data. For example, it can also be based on the gut feel of a manager. Managerial hypotheses are based on intuition; the market place decides whether the manager's intuitions were in fact correct.
In fact, managers propose and test hypotheses all the time. For example:
• If a manager says 'if we drop the price of this car model by Rs 15000, we'll increase sales by 25000 units', that is a hypothesis. To test it in reality we have to wait to the end of the year and count sales.
• A manager's estimate that sales per territory will grow on average by 30% in the next quarter is also an assumption or hypothesis.
How would the manager go about testing this assumption? Suppose he has 70 territories under him.
• One option for him is to audit the results of all 70 territories and determine whether the average growth is greater than or less than 30%. This is a time consuming and expensive procedure.
• Another way is to take a sample of territories and audit sales results for them. Once we have our sales growth figure, it is likely that it will differ somewhat from our assumed rate. For example, we may get a sample rate of 27%. The manager is then faced with the problem of determining whether his assumed or hypothesized rate of growth of sales is correct or the sample rate of growth is more representative.
To test the validity of our assumption about the population we collect sample data and determine the sample value of the statistic. We then determine whether the sample data supports our hypothesized assumption regarding the average sales growth.
How is this Done?
If the difference between our hypothesized value and the sample value is small, then it is more likely that our hypothesized value of the mean is correct. The larger the difference, the smaller the probability that the hypothesized value is correct.
In practice, however, very rarely is the difference between the sample mean and the hypothesized population value large enough or small enough for us to be able to accept or reject the hypothesis prima facie. We cannot accept or reject a hypothesis about a parameter simply on intuition; instead we need to use objective criteria based on sampling theory to accept or reject the hypothesis.
Hypothesis testing is the process of making inferences about a population based on a sample. The key question therefore in hypothesis testing is: how likely is it that a population such as the one we have hypothesized would produce a sample such as the one we are looking at?
Hypothesis Testing – The Theory
Null Hypothesis
In testing our hypotheses we must state the assumed or hypothesized value of the population parameter before we begin sampling. The assumption we wish to test is called the null hypothesis and is symbolized by Ho.
For example, if we want to test the hypothesis that the population mean is 500, we would write it as:
Ho: µ = 500
If we use the hypothesized value of a population mean in a problem we represent it symbolically as µHo.
The term null hypothesis has its origins in pharmaceutical testing, where the null hypothesis is that the drug has no effect, i.e., there is no difference between a sample treated with the drug and untreated samples.
Alternative Hypothesis
If our sample results fail to support the null hypothesis we must conclude that something else is true. Whenever we reject the null hypothesis, the alternative hypothesis is the one we have to accept. It is symbolized by Ha.
There are three possible alternative hypotheses for any Ho, i.e.:
Ha: µ ≠ 500 (the alternative hypothesis is that the mean is not equal to 500)
Ha: µ > 500 (the alternative hypothesis is that the mean is greater than 500)
Ha: µ < 500 (the alternative hypothesis is that the mean is less than 500)
Understanding Level of Significance
The purpose of testing a hypothesis is not to question the computed value of the sample statistic but to make a judgment

about the difference between the sample statistic and the hypothesized population parameter. Therefore the next step, after stating our null and alternative hypotheses, is to decide what criterion we use for deciding whether to accept or reject the null hypothesis.
How do we use sampling to accept or reject a hypothesis?
Again we go back to the normal sampling distribution. We use the result that there is a certain fixed probability associated with intervals from the mean defined in terms of the number of standard deviations from the mean. Therefore our problem of testing a hypothesis reduces to determining the probability that a sample statistic such as the one we have obtained could have arisen from a population with a hypothesized mean µ.
In hypothesis tests we need two numbers to make our decision whether to accept or reject the null hypothesis:
• An observed value of z computed from the sample
• A critical value of z defining the boundary between the acceptance and rejection regions.
Instead of measuring the variables in original units we calculate a standardized z variable for a standard normal distribution with mean µ = 0. The z statistic tells us how many standard deviations above or below the standardized mean (z > 0 or z < 0) our observation falls.
We can convert our observed data into the standardized scale using the transformation
z = (x̄ − µ)/σx̄
The z statistic measures the number of standard deviations away from the hypothesized mean the sample mean lies. From the standard normal tables we can calculate the probability of the sample mean differing from the true population mean by a specified number of standard deviations. For example:
• We can find the probability that the sample mean differs from the population mean by two or more standard deviations.
It is this probability value that will tell us how likely it is that a given sample mean can be obtained from a population with a hypothesized mean µ.
• If the probability is low, for example less than 5%, it can be reasonably concluded that the difference between the sample mean and the hypothesized population mean is too large, and the chance that the population would produce such a random sample is too low.
What probability constitutes too low or an acceptable level is a judgment for decision makers to make. Certain situations demand that decision makers be very sure about the characteristics of the items being tested, and even a 2% probability that the population produces such a sample is too high. In other situations there is greater latitude, and a decision maker may be willing to accept a hypothesis with a 5% probability of chance variation.
In each situation what needs to be determined are the costs resulting from an incorrect decision and the exact level of risk we are willing to assume. Our minimum standard for an acceptable probability, say 5%, is also the risk we run of rejecting a hypothesis that is true.
The Process of Hypothesis Testing
We now look at the process of hypothesis testing. An example will help clarify the issues involved:
Aluminum sheets have to have an average thickness of .04 inches or they are useless. A contractor takes a sample of 100 sheets and determines the mean sample thickness to be .0408 inches. On the basis of past experience he knows that the population standard deviation for these sheets is .004 inches. The issue the contractor faces is whether he should, on the basis of sample evidence, accept or reject a batch of 10,000 aluminum sheets.
In terms of hypothesis testing the issue is:
• If the true mean is .04 inches and the standard deviation is .004 inches, what are the chances of getting a sample mean that differs from the population mean (.04 inches) by .0008 inches or more?
To find this out we need to calculate the probability that a random sample with mean .0408 will be selected from a population with µ = .04 and a standard deviation of .004. If this probability is too low we must conclude that the aluminum company's statement is false and the mean thickness of the consignment supplied is not .04 inches.
Once we have stated our hypothesis we have to decide on a criterion to be used to accept or reject Ho. The level of significance represents the criterion used by the decision maker to accept or reject a hypothesis. For example, suppose the manager wishes to allow for a 5% level of significance. This means that we reject the null hypothesis when the observed difference between the sample mean and the population mean is such that it, or a larger difference, would occur only 5 or fewer times in every 100 samples when the hypothesized value of the population parameter is correct. It therefore indicates the permissible extent of sampling variation we are willing to allow whilst accepting the null hypothesis. In statistical terms 5% is called the level of significance and is denoted by α = .05.
We now write our data systematically:
Ho: µ = .04, Ha: µ ≠ .04, α = .05, σ = .004
Our Sample Data is As Follows
n = 100, x̄ = .0408
To test the hypothesis we need to calculate the standard error of the mean from the population standard deviation:
σx̄ = σ/√n = .004/√100 = .0004
Next we calculate the z statistic to determine how many standard errors away from the true mean our sample mean is. This gives us our observed value of z, which can then be compared with the critical value of z from the normal tables.
z = (x̄ − µ)/σx̄ = (.0408 − .04)/.0004 = 2
This is demonstrated in figure 1 below.
[Figure 1: The observed sample mean lies two standard errors to the right of the hypothesized population mean]
Figure 1
1. We have now determined that our calculated value of z indicates that the sample mean lies two standard errors (SE) to the right of the hypothesized population mean on the standard normal scale.
2. Our level of significance is 5%. We now determine the critical value of z at the 0.05 level of significance. This value is 1.96.
3. A comparison between the observed z and the z permissible at our given level of significance: observed z: 2, critical z: 1.96.
4. Since the observed value of z is greater than the critical value of z, we can infer that the difference between the value of the sample mean and the hypothesized population mean is too large at the 5% level of significance to be attributed to sampling variation. Hence we reject the null hypothesis. The manager would reject the consignment of aluminum sheets as not meeting the required specification level.
Example 2
How many standard errors around the hypothesized value should we use to be 99.44% certain that we accept the hypothesis when it is true?
This problem requires that we leave a probability of 1 − .9944 = .0056 in the tails. Since it is a two tailed test we have to halve this probability to determine z such that there is .0056/2 = .0028 area in each tail.
Area under one half of the normal curve = .5 − .0028 = .4972
Looking up the normal tables we find that, for positive values of z, a probability of .4972 is associated with z = 2.77. This is illustrated in figure 2 below.
[Figure 2: 99.44% of the area lies between z = −2.77 and z = +2.77, with .0028 in each tail]
Figure 2
Activities
1. What do we mean when we reject a hypothesis on the basis of a sample?
Interpreting The Level Of Significance
The level of significance is demonstrated diagrammatically below in figure 3. Here .95 of the area under the curve is where we would accept the null hypothesis. The two coloured parts under the curve, representing a total of 5% of the area under the curve, are the regions where we would reject the null hypothesis.
A word of caution regarding areas of acceptance and rejection: even if our sample statistic does not fall in the shaded rejection region, this does not prove that our Ho is true. The sample results merely do not provide statistical evidence to reject the hypothesis. This is because the only way a hypothesis can be accepted or rejected with certainty is for us to know the true population parameter. Therefore we say that the sample data is such as to cause us to not reject the null hypothesis.
Selecting a Level of Significance
There is no standard or externally given level of significance for testing hypotheses. The level of significance at which we want to test a hypothesis is set by the manager based on his evaluation of the costs and benefits associated with acceptance or rejection of a null hypothesis. We can however note a few points regarding this issue:
1. The higher the level of significance, the greater the probability of rejecting Ho when it is true.
This is illustrated in figure 4 below, which shows three levels of significance: .01, .1 and .50. The location of the sample statistic is also shown in each of the three distributions; it obviously remains the same.
• In figs 4a and 4b we would accept Ho that the sample mean does not differ significantly from the population mean.
• In fig 4c we would reject Ho.
Why? Because a level of significance of .5 is so high that we would rarely accept Ho when it is false, but we would also frequently reject Ho when it is true.
Figure 4a, 4b, 4c
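The decision rule applied to the aluminum-sheet example above can also be written out in a few lines of code. The sketch below is a minimal illustration in Python (not the SPSS output referred to in the objectives); the numbers are the ones used above, and 1.96 is the critical z for the 5% two tailed level:

import math

mu0, sigma, n, x_bar = 0.04, 0.004, 100, 0.0408
se = sigma / math.sqrt(n)          # standard error of the mean = 0.0004
z = (x_bar - mu0) / se             # observed z = 2.0
z_critical = 1.96                  # critical z for a 5% two tailed test
print(z, z_critical, abs(z) > z_critical)   # True, so Ho is rejected

Changing z_critical to 2.33 (the cut-off for a 2% two tailed level) would lead to the opposite conclusion, which is exactly the cost-benefit judgment about the level of significance discussed above.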

Type 1 and Type 2 Errors
Type 1 Error: This is defined as the probability of rejecting Ho when it is true. It is equal to the level of significance and is symbolized by α.
Type 2 Error: This is the probability of accepting Ho when it is false. It is symbolized by β.
There exists a tradeoff between these two errors: to get a low α we have to have a higher β. We can reduce the probability of making a type 1 error if we are willing to increase the probability of making a type 2 error. With reference to the earlier figure, in figure 3 we illustrate a 50% level of significance. Here we have reduced the acceptance region. Thus we will rarely accept Ho when it is not true. The price that we have to pay for this higher level of certainty is that we will frequently reject Ho when it is true.
To deal with this tradeoff, a manager has to weigh the costs and benefits involved with each type of error before setting the level of significance. This is best seen with an example.
Example of a Tradeoff
Suppose a type 1 error involves the time and trouble of reworking a batch of chemicals that should have been accepted (rejecting Ho when it is true).
A type 2 error means that an entire group of users may be poisoned!! (accepting Ho when it is false)
In this situation the company would prefer to minimize the type 2 error and will set a high level of significance. That is, it will set very high levels of significance (possibly α > 50%) to get low βs.
Consider Another Scenario
Type 1 error (rejecting Ho when it is true):
The costs associated with a type 1 error involve disassembling an entire engine at the factory – i.e., it is associated with high production costs. This happens, for example, if a random test sample of products throws up product specifications which are on the borderlines of acceptability.
Type 2 error (accepting Ho when it is false):
This involves accepting products with defective specifications, i.e., some customers may get a somewhat defective product. The costs associated with this may be recalling the defective parts or giving a special warranty to repair the defect. Offering a repair warranty is a relatively less costly option.
In this case the manufacturer will set lower levels of significance to minimize type 1 errors. For example, he may set the level of significance at 5% or 1% (i.e., 95% or 99% confidence).
Activities
1. Explain why there is no single level of probability used to accept or reject a hypothesis.
2. If we reject a hypothesis because it differs from a sample statistic by more than 1.75 SE, what is the probability that we have rejected a hypothesis that is in fact true?
3. Define Type 1 and Type 2 errors.
4. In a criminal trial, the null hypothesis is that an individual is innocent of a certain crime. Would the legal system prefer to commit a type 1 or a type 2 error with this hypothesis?
5. Our goal is to accept a null hypothesis that µ = 36.5 with 96% certainty when it is true. Our sample size is 50. Diagram the acceptance and rejection regions for each of the following alternative hypotheses:
- µ ≠ 36.5
- µ > 36.5
- µ < 36.5
6. Your null hypothesis is that the battery for a heart pacemaker has an average life of 300 days, with the alternative hypothesis being that the battery life is more than 300 days. You are the quality control engineer for the battery manufacturer.
a. Would you rather make a type 1 or a type 2 error?
b. Based on your answer to a, would you choose a high or a low level of significance?
One Tailed Tests
So far we have discussed two tailed tests, where the sample statistic can differ from the hypothesized mean in either direction, i.e., it can be either more than the hypothesized mean or less than the hypothesized mean. Thus a 5% overall level of significance implies that we test with 2.5% probability whether it lies above the hypothesized mean and with 2.5% probability whether it lies below it.
However, there may be many cases where we have some prior information which enables us to test whether the sample statistic is significantly more than, or less than, the true population statistic. In this case we go in for a one-tail test.
The null hypothesis continues to be that of no difference. The alternative hypothesis can be one of two:
(a) Ho: µ = 500
(b) Ha: µ > 500 – upper tailed test
Or
Ha: µ < 500 – lower tailed test
In this case, if we wish to accept or reject a hypothesis at the 5% level, we then determine z critical such that the entire 5% lies on either the right side (upper tailed test) or the left side (lower tailed test). This is illustrated by the coloured regions in figures 5a and 5b.
[Figure 5a: Rejection region entirely in the upper tail]
[Figure 5b: Rejection region entirely in the lower tail]
The procedure for testing the hypothesis remains the same as in the two tailed case. The only difference will be in the value of the critical z, which is determined by placing the entire level of significance on only one side of the normal distribution.
An example will clarify the situation:
A hospital uses large quantities of packaged doses of a particular drug. Excessive doses will pass harmlessly out of the system. Insufficient doses do not produce the desired medical treatment. The hospital has purchased the drug from the same manufacturer for many years. The hospital inspects 50 doses randomly and finds the mean dose to be 99.75 cc. The population standard deviation of doses is 2 cc.
Our problem suggests that the hospital faces a problem if doses are significantly less than 100 cc, as patients' treatment will be affected. However, if the dose is more than 100 cc there appears to be no major problem. Therefore the null hypothesis remains unchanged; we are only interested in testing, as the alternative hypothesis, whether the sample mean strength is significantly below 100 cc. The hypotheses can be stated as follows:
Ho: µ = 100 cc
Ha: µ < 100 cc
This is a left tailed test, and the coloured region corresponds to a .10 level of significance. The acceptance region consists of 40% on the left side of the distribution plus the entire 50% on the right side, for a total area of 90%. This is shown in figures 5a and 5b.
We can calculate the SE of the sample mean dose:
SE = σ/√n = 2/√50 = 0.283
We can now calculate the standardized z statistic:
z = (x̄ − µ)/SE = (99.75 − 100)/0.283 = −0.88
z critical for a .10 level left tailed test = −1.28
Therefore, since −0.88 > −1.28, the sample mean lies within the acceptance region and the hospital can accept the null hypothesis: the observed mean of the sample is not significantly different from the hypothesized mean dose.
Applications of One Tailed Tests
Many managerial situations call for a one tailed test. Typically, if a problem requires you to test whether the sample statistic is:
• More than a given population statistic
• Less than a given population statistic
a one tailed test is appropriate.
If the problem requires us to assess whether the sample statistic is not equal to a population statistic, then we use a two tailed test.
Examples
1. A highway safety engineer decides to test the load bearing capacity of a bridge that is 20 years old. Considerable data are available from similar tests on the same type of bridges. Which type of hypothesis test is appropriate - a one or a two tail test? If the minimum load bearing capacity of this bridge must be 10 tons, what are the null and alternative hypotheses?
The engineer would be interested in whether a bridge of this age could withstand the minimum load bearing capacity necessary for safety purposes. She therefore wants its capacity to be above a certain minimum level; so a one tailed test, i.e., a right tailed test, would be used.
The hypotheses are:
Ho: µ = 10 tons, Ha: µ > 10 tons
2. Hinton Press hypothesizes that the average life of its press is 14500 hours. The standard deviation of press life is 2100 hours. From a sample of 25 presses the company finds the sample mean life to be 13000 hours. At a .01 significance level, should the company conclude that the average press life is less than the hypothesized 14500 hours?
Our problem requires us to assess whether the average press life is significantly less than the hypothesized press life. Therefore it is a one tail test.
Ho: µ = 14500, Ha: µ < 14500, n = 25, σ = 2100, α = .01
SE = 2100/√25 = 420
z = (13000 − 14500)/420 = −3.57
z critical for a one tail test = −2.33
Since −3.57 < −2.33 we should reject Ho. The average life is significantly less than the hypothesized life.
Activities
1. Aaj Ka films is a film distribution company. They know that a hit movie runs for an average of 84 days in a city with a standard deviation of 10 days. The manager of a South Eastern district is interested in comparing the movies
popularity in his region as compared to the all India average.
He randomly selects 75 theatres in his region and finds they
ran the movie for an average of 81.5 days. The manager
wants to know whether mean running time in the South
East is below that of the national average. Test the
appropriate hypothesis at .01 level of significance.
2. Atlas Sporting goods has implemented a special trade
promotion policy for its stoves. They think the promotion
should result in a significant price change for customers.
Before the promotion began the average retail price of a
stove was $44.95 with a standard deviation of $5.75. After
the promotion they sample 25 retailers and finds mean price
to be $42.95. At a .02 level of significance does Atlas have
reason to believe the average retail price to consumers ahs
decreased?
3. Under what conditions is it appropriate to use a one tailed
test? A two tailed test?
4. The statistics department installed energy efficient lights,
heaters, ACs. Now they want to determine whether the
average monthly energy usage has decreased. Should they
perform a one or two tail test? If their previous average
monthly usage was 3124 KW hours, what are the null and
alternative hypotheses?

© Copy Right: Rai University


11.556 119
RESEARCH METHODOLOGY

LESSON 21:
TESTING OF HYPOTHESIS – LARGE SAMPLES

In the last lecture we learnt about the general principles of We can then check the calculated z with z critical for the appro-
hypothesis testing and how to carryout a test a sample statistic priate level of significance to determine whether to accept or
with a hypothesized population value. We now extend the reject Ho.
principles of hypothesis testing to include testing of hypothesis An example shall make the process clearer.
to sample proportions vis a vis a hypothesized population
.A ketchup manufacturer is in the process of deciding whether
proportion. We will also learn to test for the differences between
to produce a new extra spicy ketchup. The company’s marketing
two samples. That is do two separate samples differ signifi-
research department used a national telephone survey of 6000
cantly from each other. Similarly we try to determine whether
households and found that 335 would purchase extra spicy
the two different sample proportions differ significantly from
ketchup. A much more extensive study made two years ago
each other.
found showed that 5% of households would purchase would
Hypothesis Testing of Proportions purchase the brand then. At a two percent level of significance
So far we have talked about the principles of hypothesis testing should the company conclude that there is an increased interest
where we have compared a sample mean with a hypothesized or in the extra spicy flavour?
population mean. n=6000 Ïp =335/6000=.05583 p Ho =.05 a=.02
We can also apply the principles of hypothesis testing already Ho: p=.05 Ha:p >.05
determined to testing for differences between proportions of
occurrence in a sample with a hypothesized level of occurrence. σ−p= (PHo qHo/n)
Theoretically the binomial distribution is the correct distribu-
= (.05 ∗ .95 / 6000 ) = . 05577
tion to use when dealing with proportions. As sample size
increases the binomial distribution approaches a normal z = − P − PHo/ σ − p = .05583 − . 05/ .05577 = 2. 07
distribution in terms of characteristics. Therefore we can use the
normal distribution to approximate the sampling distribution. z critical for a one tailed test is 2.05 Since the observed z > z Ho : µ1
To do this we need to satisfy the following conditions: critical we should reject
Ha : µ1
np>5, nq>5 Ho and current levels of interest are significantly greater than
where p is the proportion of successes interest two years ago.
q: proportion of failures. Activities
Testing of Hypothesis for Proportions 1.Steve Cutter sells lawn mowers . He is interested in comparing
When testing significance of hypotheses for proportions we the reliability of his lawn mowers with an international
begin with a similar procedure to the earlier case: brand. He knows only 15% of the international brands
require repairs. A sample of 120 of Steve’s customers shows
First we define various proportions or percentages of occurrence that 22 of them required repairs. At .02 level of significance
of a event: is there evidence that Steve’s mowers differ from the
pHo: the population or hypothesized population of success international brand.
qHo Hypothesized value of the population proportion of Ans; Observed z value=1.03< 2.33(z critical) Therefore we
failures accept Ho.
p: sample proportion of successes 3. Macroswift estimated last year that 35% of its potential
q: sample proportion of failures software buyer were planning to wait to purchase the new
Again we calculate a z statistic operating system Window Panes, until an upgrade has been
released. After an advertising campaign to reassure the public
p − p H0 , Macroswift surveyed 3000 people and found 950 who were
z= still skeptical. At the 5% level of significance can the company
σp conclude the proportion of skeptical people has decreased?
where s p is the standard error of the proportion which is Hypothesis Tests of Differences Between
calculated using the hypothesized population proportion . Means
So far we have examined the case where we are testing the
results of a sample against a hypothesized value of a popula-
p H0 q H 0
σp = tion statistic.
n

© Copy Right: Rai University


120 11.556
RESEARCH METHODOLOGY
We now turn to case where we wish to compare the parameters
for two different populations and determine whether these differ
from each other. In this case we are not really interested in the
actual value of the two parameters but the relation between the
two parameter, i.e. is there a significant difference between them.
Example of hypothesis of this type are:
- Whether female employees earn less than males for the same
work.
Figure 1c
- A drug manufacturer may need to compare reactions of one
group of animals administered the drug and the control
group. The mean of this distribution is µ − x1 − − x2 = µ1 − µ2
- A company may want to see if the proportion of
The standard deviation of the distribution of the difference
promotable employees in one installation is different from
between sample means is called the standard error of the
another.
difference between two means.
In each case we are not interested in the specific values of the
individual parameters as the relation between the two parameters. −
σ x1 − − x2 = (σ 1
2
/ n1 + σ 2 2 /n 2 )
The core problem reduces to one of determining whether the
The testing procedure for a hypothesis is similar to the earlier cases.
means from two samples taken from two different populations
is significantly different from each other. If they are not, we can Ho : µ1 = µ2
hypothesize that the two samples are not significantly different Ha : µ1 ≠ µ2
from each other.
The z statistic = (?x1 - ?x 2 ) – (µ1 - µ2) Ho / s ?x1 - ?x2
The Oretical Basis
Shown below are three different but related distributions. Since we will usually we will be testing for equality between the
Figure 1a shows the population distribution for two different two population means hence:
populations 1 and 2. They have respectively the following
characteristics: (µ1 - µ2 ) Ho =0 since µ1 = µ2
An example will make the process clearer:
Mean µ1 and µ2 and standard deviations s 1 and s 2.
A manpower statistician is asked to determine whether hourly
= .05 Figure 1b shows the respective sampling distribution of
wages of semi skilled labour are the same in two cities. The
sample. This distribution is defined by the following statistics:
result of the survey is given in the table below. The company
Mean of sampling distribution of sample means : µ1 and µ2 wants to test the hypothesis at .05 level of significance that there
Standard deviation of sampling distribution of mean or the is no significant difference between the hourly wage rate across
standard error of sampling mean: s ¯x1 and s ¯x2. the two cities.
We have two populations 1, 2 with mean s 1 and s 2 with
standard deviation s 1 and s 2. The associated sampling distribu- City Mean Standard Size of
tion for sampling means ---x1 and ---x2. However what we are now hourly deviation of sample
interested in is the difference between the two values of the
sampling means. i.e., the sampling distribution of the wage sample
differencebetweenthesamplingmean ---
sx1 - ---x2. How do we apex $8.95 $.4 200
derive this distribution ?
Eden $9.10 $.6 175
Suppose we take a random sample from the distribution of
Population 1 and another random sample from the distribu-
tion of Population 2. If we then subtract the two sample Ho : µ1 = µ2
means, we get ---x1 - ---x2 . This difference will be positive if ---x1 is Ha : µ1 ≠ µ2 a = .05
larger than ---x2 and vice versa. By constructing a distribution of
all possible sample differences ---x1 - ---x2 we end up with a Since standard deviation of the two populations are not known
distribution of the difference between sample means shown we estimate óˆ1 and óˆ2 by using the sample standard deviation s 1
below in figure 6c. and s 2

s ˆ1= s1 =.4 s ˆ 2= s2=.6


The estimated standard error of the difference between the two
means is

We then calculate the z statistic :


Figure 1a & 1b

© Copy Right: Rai University


11.556 121
1. A sample of 32 money market funds was chosen on Jan 1.
RESEARCH METHODOLOGY

z. =(?x 1 - ?x2 ) – (µ1 - µ2 ) Ho / s ?x1 - ?x2 1996, and the average annual rate of return over the past 30
days was found to be 3.23% and the sample standard
8.95-9.10-0/.053=-2.83
deviation was.51% . A year earlier , a sample of 38 money
We can mark the standardized difference on a sketch of the market funds showed an average rate of return of 4.36% and
sampling distribution and compare with the critical value of the sample standard deviation was .84%. Is it reasonable to
z=±1.96 in figure 7 As we can see the calculated z lies outside conclude (at a =.05) that money market interest rates
the acceptance region. Therefore we reject the null hypothesis. declined during 1995?
2. Despite the Equal pay act it still appeared that in 1993 men
earned more than women in similar jobs. A random sample
of 38 male machine tool operators found a mean hourly
wage of $11.38, and a sample standard deviation was $1.84.
A random sample of 45 female operators found their mean
wage to be $8.42 and the sample standard deviation was
$1.31. On the basis of these samples is it reasonable to
conclude (at á=.01) that the male operators are earning over
$2.00 more per hour than the female operators?
Tests for differences between proportions for large samples
Example 2 : The analysis for testing differences in proportions of two
samples is broadly similar to earlier case. As long as samples are
1.Two independent samples of observations were collected. For
greater than 30 we can use the normal approximation to
the first sample of 60 elements, the mean was 86 and the
binomial.
standard deviation 6. The second sample of 75 elements had
a mean of 82 and a standard deviation of 9. Suppose we have two sample proportions _ p1 and _ p1 which
measure the probability of occurrence of an event or characteris-
(a) Compute the estimated standard error of the difference
tic in two samples. We wish to test whether the probability of
between the two means.
occurrence is significantly different across the two samples.
(b) Using a = 0.01, test whether the two samples can reasonably _
p1: sample proportion of success in sample 1
be con-sidered to have come from populations with the
same mean. p2: sample proportion of success in sample 2
_

n1 : sample size 1
n2 : sample size 1
Standard Error of The Difference
Between Two Proportions
Since we do not know the population proportions, we need to
They limits of the acceptance region are : ±2.58 estimate them from sample statistics: _ p1, _ p2, _ q1, _ q1
We hypothesize that there is no difference between the two
proportions. In which case our best estimate of the overall
population proportion successes is the combined proportion
of successes in both samples. This can be calculated from the
following formula:
Since 3.09>2.58, we reject Ho and it is reasonable to conclude ˆp=_ p1 _ q1+ _ p2 _ q2 / n 1+ n2
that the two samples come from different populations.
Once we have the estimated proportion of successes in two
Activities populations. The estimated standard error of the difference
1. In 1993, the Financial Accounting Standards Board (FASB) between two proportions is as follows:
was consider-ing a proposal to require companies to report ˆs -p1 - -p2=”( ˆp ˆq/n 1+ˆp ˆq /n 2)
the potential effect of em-ployees’ stock options on earnings The standard z statistic in this case is calculated as :
per share (EPS. A random sample of 41 high-technology
z.=( _ p1 - _ p2)- (p1- p2)Ho/ ˆó -p1
firms revealed that the new proposal would reduce EPS by - -p2

an average of 13.8 percent with a standard deviation of 18.9 This can be best illustrated with the help of an example:
%. A random sample of 35 producers of consumer goods A pharmaceutical company tests two new compounds to reduce
showed that the proposal would reduce EPS by 9.hpercent blood pressure. They are administered to two different groups
on average, with a standard deviation of 8:7 percent. On the of lab animals. In group1 71 of 100 animals respond to drug1
basis of these samples, is it reasonable for the FASB to with lower BPS. In group2 58 of 90 animals showed lower BP
conclude (at á=.10) that the FASB proposal will cause a levels. The company wants to test at .05 level of significance
greater reduction in EPS for high technology firms than for whether there is a difference between the two drugs.
producers of consumer goods?

© Copy Right: Rai University


122 11.556
_
p1: .71 (sample proportion of success in sample 1)

RESEARCH METHODOLOGY
_
q1: .29
n1 : 100
_
p2: .644 (sample proportion of success in sample 2)
_
q2: .356
n2 : 90
Ho: p1= p2 Ha: : p1 ≠ p2 a=.05
The overall population or hypothesized percentage of occur-
Figure 4
rence assumes that there is no difference between the two
population proportions. We therefore estimate it as the
combined proportion of successes in both samples.
ˆp=100(.71)+90(.644)/190=.6789
The estimated standard error of the difference between two
sample proportions is
z= (.71-.644)-0/.0678=.973
z critical at .15 level of significance =.1.04
z. critical for .o5 level of significance is 1.96. Since observed z is less
therefore calculated z <critical z we accept the null hypotheses.
than z critical we accept Ho. This is shown in the figure 3 below:
Hint: If a test is concerned with whether one proportion is
significantly different from another, use a two-tailed test. If the
test asks whether one proportion is significantly higher or lower
than the other, a one tailed test is appropriate.
Activities
1. A large hotel chain is trying to decide whether to convert
more of its roe=.: to nonsmoking rooms. In a random
sample of 400 guests last year, 166 have requested
nonsmoking rooms. This year, 205 guests in a sample of
38C preferred the nonsmoking rooms. Would you
Figure 3 recommend that the hotel chain- convert more rooms to
Example 2: nonsmoking? Support your recommendation: -testing the
For tax purposes a city government requires two methods of appropriate hypotheses at a 0.01 level of significance.
listing property. One requires property owner to appear in person Ans:z calculated = -3.48< -z critical=-2.33 Therefore reject Ho.
before a tax lister. The second one allows the form to be mailed. 2. Two different areas of a large Eastern city are being
The manager thinks the personal appearance method leads to considered as sites for day-care centers. Of 200 households
lower fewer mistakes. She authorizes an examination of 50 surveyed in one section, the proportion in which the
personal appearances, and 75 mail forms. The results show that mother’s worked full-time was 0.52. In another section of
10% of personal appearances have errors whereas 13.3 % of the city -40 percent of the 150 households surveyed had
mails forms had errors. The manager wants to test at the .15 level mothers working at full-time jobs. At the 0.04 level of
of significance, the hypothesis that that personal appearance significance is there a significant difference in the proportions
method produces lower errors. The hypothesis is a one tailed of working mothers in the two areas of the city?
test. The procedure for this as the same as for carrying out a one
tailed test for comparing sample means. The data is as follows: Ans:z calculated =2.23>z critical=2.05. Therefore reject Ho.
1. On Friday 11 stocks in a sample of 40 of the 2500 stocks traded
on the BBSE advanced, i.e., their price of their shares increases.
In a sample of 60 BSE stocks on Thursday , 24 had advanced
At a=.10 , can you conclude that a smaller proportion of BSE
stocks advanced on Friday than did on Thursday?
2. A coal fired power plant is considering two different systems
Since it is a one tail test we do not divide the level of significance
for reducing pollution. The first system has reduced the
on both sides of the normal curve. A .15 level of significance
emission of pollutants to acceptable levels68% as
implies we determine z critical for area under one side of the
determined from 200 air samples. The second more
normal curve i.e., .35 (.5-.15). Specifically it is a left tailed test as
expensive system has reduced the emission of pollutants to
the manager wishes to test that method 1 , i.e., personal
acceptable levels 76% of the time as determined on the basis
appearances, result in significantly lower errors. This is shown
of 250 air samples. If the expensive system is significantly
by the marked of region in the figure4
more effective than the inexpensive system, the management

© Copy Right: Rai University


11.556 123
of the power plant will install the inexpensive system. Which Example
RESEARCH METHODOLOGY

system will be installed if the management uses a significance A machine is used to cut Swiss Cheese into blocks of specified
level of .02 in making its decisions? weight. On the basis of experience the weight of a block has a
In this chapter we will wrap up our analysis of hypothesis standard deviation of .3gm. The machine is currently set to cut
testing for large samples. By now you should have a good idea blocks of 12 gm. A sample of 25 blocks is found to have an
how to apply the principles of hypothesis testing to different average weight of 12.25g. Should we conclude the machine
types of managerial problems. However these days any needs to be recalibrated?
management problem that we may wish to analyze generates Since this is a two tailed test we need to determine the probabil-
such a large volume of data that it is virtually impossible to ity of observing a value of >x atleast as far away from 12 as
analyze and test hypotheses manually. Given the widespread 12.25 or 11.75gm if Ho is true.
availability of computers and statistical packages we can easily We therefore need to calculate the probability
run such tests on the computer. Therefore it becomes impor-
P(>xe”12.25 or >xd”11.75) if Ho is true.
tant to understand and interpret how these tests are run on the
computer. Our Hypothesis Can Be Stated As
Obviously the basic theory and principles of statistical analysis
do not change when a test is carried out on computer. However
there are some differences in the way the level of significance is
presented. Computer outputs usually present the prob value or
p-value. We shall look at what prob values mean and compare
them with conventional tests of significance.
We shall also look at what is considered a good hypothesis test. We can then convert _ x to a standard z score
This is done by measuring the power of a test. This concept
will also be presented in detail.
Probability Values From the normal tables we can find the probability that a z
So far we have tested a hypothesis at a given level of signifi- greater than 2.5 is .5-.4938=.0062
cance. In other words before we take the sample we specify how
Since this is a two tailed test the p- value is 2*.0062=.0124 this
unlikely the observed result will have to be in order for us to
information is shown in the figure 1 below;
reject Ho. For example we test the hypothesis that observing a
sample mean this far away from the true population mean is
less than 5%, where %% is our externally given level of
significance.
There is another way to approach the decision whether to accept
or reject Ho which does not require us to prespecify the level of
significance before taking a sample. In this case we take a
sample, compute mean and ask: suppose Ho were true what is
the probability of getting a sample mean this far away from ìHo.
This is called a probability value or p- value of the sample
mean.
The two methods are equivalent and essentially represent two
sides of a coin. Figure 1
• In the earlier case we prespecify a level of probability and
Given the above information the cheese packer can now decide
compare the observed probability of getting a sample whether to recalibrate the machine or not. As we can see the p –
statistic with the prespecified level of probability(a). value is very low and he probably will not go in for recalibration.
• We now ask what is the probability value of getting such a
If he had he carried out a conventional hypotheses test at .05 level
result. This is termed the p- value of a statistic.
of significance is also illustrated in the figure 1.On the basis of the
Once the p- value is determined the decision maker can then z test he would have rejected the Ho 5 % level. However at a
weigh all relevant factors and decide whether to accept/reject Ho significance level of .01 we would have accepted the hypotheses, as
without being bound by a prespecified level of significance. the critical z value would have been 2.58.
The p- value can also be more informative. For example if we The p- value tells us the largest significance level at which we would
reject a Ho at a=.05, we only know that the observed value was have accepted Ho, i.e, .0124 and the associated z value (±2.5). Thus
atleast 1.96 SE away from the mean. A p- value tells us the at any level of significance above .0124 we would reject Ho.
exact probability of the getting a sample mean 1.96SE away
from the mean.
Uses of P-Values
Use of p values saves the tedium of looking up tables. The
The concept will be made clearer with the help of an example. smaller the probability value, the greater the significance of the
finding. The simple rule of thumb is: As long as a>p reject Ho.

© Copy Right: Rai University


124 11.556
For example if we have a p-value=.01 and a =.05 . Then this Using The Computer to Test The

RESEARCH METHODOLOGY
means that the probability of getting our sample result is .01. Hypotheses
We compare this with our standard of accepting/ rejecting Ho These days in actual managerial situations hypotheses tests are
which is .05. since .05>.01 we reject Ho as the probability of rarely done manually. Therefore it is important that students can
getting such a result is much lower than our level of significance. interpret computer output generated for hypotheses tests by
various standard statistical analysis packages. The most popular
Example 2
of these packages are SPSS and Minitab. Broadly all
The Coffee Institute has claimed that more than 40% of
programmes follow the same principles. Instead of a compar-
American adults regularly have a cup of coffee with breakfast. A
ing the calculated z value with a predetermined level of
random sample of 450 individuals showed that 200 of them
significance, most packages display the prob values or p- values.
were regular coffee drinkers at breakfast. What is the prob value
of a test of hypotheses seeking to show that the Coffee To accept or reject an hypotheses we compare the level of
Institute’s claim was correct. significance (á) and the p- value. If á>p- value we reject Ho at
the relevant level of significance and vice versa.
n.=450 ¯p=200/450=.444
An example will help show how computer outputs results for
Ho:p=.4 Ha:p>.4
hypotheses testing.
The prob value is the probability that ¯p>.4444 or Example
While designing a test it was expected that the average grade
would be 75%, i.e., 56.25 out of 75. This hypotheses was tested
against actual test results for a sample of 199 students.
Activities
1. A car retailer Hunks that a 40000 mile claim for tyre life by the
manufacturer is too high. She carefully records the mileage This hypotheses was tested at a =.05
obtained from a sample of 64 such tyres. The mean turns The computer output for this test is shown below using the
out to be 38,500 miles. The standard deviation of the life of Minitab package. The observed t value for this test was -15.45,
all tyres of this type has previously been calculated by the with an associated (two-tailed) prob value of 0.0000. Because
manu-facturer to be 7,600 miles. Assuming that the mileage this prob value is less than our significance level of á = 0.05, we
is normally distrib-uted, determine the largest significance must reject Ho and con-clude that the test did not achieve the
level at which we would accept the manufacturer’s mileage desired level of difficulty.
claim, that is, at which we would not conclude the mileage is This is shown in Table 1
significantly less than 40,000 miles.
T test for a mean
2. The North Carolina Department of Transportation has
claimed that at most, 18 percent of passenger cars exceed 70
mph on Interstate 40 between Raleigh and Durham. A Test for µ=56.25 vs µ =56.25
random sample of 300 cars found 48 cars exceeding 70 mph.
What is the prob value for a test of hypothesis seeking to Varaible N Mean Stdev SEMean T P -value
show the NCDOT’s claim is correct? -
3. Kelly’s machine shop uses a machine-controlled metal saw to Result 199 45.281 10.014 .710 -15.45 0.00
cut sections of tubing used in pressure-measuring devices.
The length of the sections is normally distributed with a Table 1
standard deviation of 0.06". Twenty-five pieces have been cut T test for difference between two sample means
with the machine set to cut sections 5.00" long. When these Here we test hypotheses of equality of two means.
pieces were measured, their mean length was found to be The university had been receiving many complaints about the
4.97". Use prob values to determine whether the machine caliber of teaching being done by the graduate-student teaching
should be recalibrated because the mean length is assistants. As a result, they decided to test whether stu-dents in
significantly different from 5.00". sections taught by the graduate TAs really did worse in the
4. SAT Services advertises that 80 percent of the time, its exam than those stu-dents in sections taught by the faculty.
preparatory course will increase an individual’s score on the
If we let the TAs’ sections be sample 1 and the fac-ulty’s
College Board exams by at least 50 points on the combined
sections be sample 2, then the appropriate hypotheses for
verbal and quantitative total score. Lisle Johns, SAT’s
testing this concern are
market-ing director, wants to see whether this is a reasonable
claim. Lisle has reviewed the records of 125 students who Ho: µ1 = µ2
took the course and found that 94 of them did, in-deed, Ha: µ1< µ2
increase their scores by at least 50 points. Use prob values to
determine whether SAT’s ads should be changed because the The underlying population is assumed to be equal for both samples.
percentage of students whose scores increase by 50 or more The Minitab output for doing this test is given below. The test
points is significantly different from 80 percent. results are reported assuming that the two population variances

© Copy Right: Rai University


11.556 125
are equal. If we can assume that the two variances are equal,
RESEARCH METHODOLOGY

then the test reported by Minitab is the test using a pooled


The value of 1-ß is a measure of how well the test is working
estimate for s 2. This is shown in table 2
and is known as the power of the test. If we plot the values of
Table2 1 - ß for each value of µ for which the alternative hypothesis is
Two sample T - Test true, the resulting curve is known as a power curve. This can be
explained better with the help of an example.
Example
Instrnum N Mean stdev SE mean
We were deciding whether to accept a drug shipment. Our test
indicates that we should reject the null hypothesis if the
1 89 44.93 9.76 1.0 standardized sample mean is less than - 1.28, that is, if sample
mean dosage is less than 100.00 - 1.28 (0.2829), or 99.64 cc.
In Figure 9a, we show a left-tailed test. In Figure 9b, we show the
2 110 45.6 10.2 .98 power curve which is plotted by computing the values of 1 1 - ß.

T test µ1 = µ2 vs µ1< µ2 : T=-.44 P=.33 Df =197

Both use pooled stdev=10.0

What does the data tell us regarding the efficacy of TA ? The


prob value is quite high, i.e., .33. Therefore if we compare this
with a level of significance of .05 (á = 0.05) we would accept the
null hypotheses that there is no difference in results between
TAs and faculty. In this case we would accept the null hypoth- Figure 2a
eses at any level of significance up to .33.
Measuring The Power of a Test
What should a good hypothesis test do ?
Ideally a and ß (the probabilities of Type I and Type 2 errors)
should both be small. A Type I error occurs when we reject a
null hypothesis that is true. a (the significance level of the test)
is the prob-ability of making a Type I error. Once we decide on
the significance level, there is nothing else we can do about a.
A Type 2 error occurs when we accept a null hy-pothesis that is
false; the probability of a Type 2 error is ß . Ideally a manager
would want a hypothesis to reject a null hypothesis when it is Figure 2b
false. Suppose the null hypothesis is false. Then managers Point C on the power curve in Figure 2 b shows population
would like the hypothesis test to reject it all the time. Unfortu- mean dosage is 99.42 cc. Given that the population mean is
nately, hypothesis tests cannot be foolproof; sometimes when 99.42 cc, we must compute the probability that the mean of a
the null hypothesis is false, a test does not reject it, and a Type 2 random sample of 50 doses from this population will be less
error is made. than 99.64 cc (the point below which we decided to reject the
When the null hypothesis is false, µ (the true population mean) null hypothesis i.e., the value of the dose at which we rejected
does not equal µHo (the hypoth-esized population mean); the null hypothesis. This shown in Figure 2c .
instead, µ equals some other value. For each possible value of µ
for which the alternative hypothesis is true, there is a different
probability ß of incorrectly accepting the null hypothesis.
Of course, we would like this ß (the probability of ac-cepting a
null hypothesis when it is false) to be as small as possible or we
would like 1-ß (the probability of rejecting a null hypothesis
when it is false) to be as large as possible.
Therefore a high value of 1 - ß (something near 1.0) means the
test is working quite well ( i.e it is rejecting the null hypothesis
when it is false); a low value of 1 - ß (something near 0.0)
means that the test is working very poorly ( i.e. not rejecting the
null hypotheses when it is false).
Figure 2c

© Copy Right: Rai University


126 11.556
We had computed the standard error of the mean to be 0.2829 What Does The Power Curve In Figure 2b

RESEARCH METHODOLOGY
cc. So 99.64 cc is (99.64- -99.42)/0.2829 = 0.78 Tell Us?
Thus 99.64 is .78 SE above the true population mean when it As the shipment becomes less satisfactory (as the doses in the
takes a value µ= 99.42 cc. shipment become smaller), our test is more powerful (it has a
greater probability of recognizing that the shipment is unsatisfac-
The probability of observing a sample mean less than 99.64 cc
tory). It also shows us, however, that because of sampling error,
and thus rejecting the null hypothesis is 0.7823, when we take
when the dosage is only slightly less than 100.00 cc, the power of
the true population mean to be µ= 99.42 cc. This is given by
the test to recognize this situation is quite low. Thus, if having
the colored area in Figure 9c. Thus, the power of the test 1 - ß any dosage below 100.00 cc is completely unsatisfactory, the test
at µ = 99.42 is 0.7823. This simply means that if µ = 99.42, the we have been discussing would not be appropriate.
probability that this test will reject the null hypothesis when it is
Example
false is 0.7823.
Before the 1973 oil embargo and subsequent increase in oil
Point D in Figure 9b shows that if the population mean dosage price, petrol usage in the US had grown at a seasonally adjusted
is 99.61 cc. We then ask what is the probability that the mean of rate of .57% per month, with a standard deviation of of .10%
a random sample of 50 doses from this population will be less per month. In 15 randomly chosen months between 1975 and
than 99.64cc and thus cause the test to reject the null hypothesis? 1985, petrol usage grew at an average rate of .33% per month.
This is illustrated in Figure 2d. Here we see that 99.64 is (99.64 - At a .01 level of significance can you conclude that the growth in
99.61)/0.2829, or 0.11 standard error above 99.61 cc. The probabil- the use of gasoline had decreased as a result of the
ity of observing a sample mean less than 99.64cc and thus rejecting embargo?Compute the power of the test for ì=.50, .45 and
the null hypothesis is 0.5438, the colored area in Figure 9d. Thus, .4% per month.
the power of the test (1 - ß ) at µ = 99.61 cc is 0.5438.

Figure 2d Points to Ponder -


Using the same procedure at point E, we find the power of the • Hypothesis testing cab be viewed as a six –step procedure
test at ì = 99.80 cc is 0.2843; this is illustrated as the colored area • Establish a Null Hypothesis as well as alternative
in Figure 2e. Hypothesis. It is a one tail test of significance if the
alternative hypothesis states the direction of
differences. If no direction of difference is given, it is
two tailed test.
• Choose the statistical test on the basis of the
assumption about the population distribution and
measurement level. The form of the data can also be a
factor. In light of these considerations, one typically
chooses the test that has the general power efficiency or
ability to reduce decision error.
• Select the desired level of confidence. While α = 0. 05
is the most frequently used level, many others are also
Figure 2e used. The α is the significant level that we desire and
As we can see the values of 1 - ß con-tinue to decrease to the is set in advance of the study.
right of point E. This is because as the population mean gets • Compute the actual test value of the data.
closer and closer to 100.00 cc, the power of the test (1 -ß ) gets • Obtain the critical test value, usually by referring to a
closer and closer to the probability of rejecting the null hypoth- table for the appropriate type of distribution.
esis when the population mean is exactly 100.00 cc. This
• Interpret the result by comparing the actual test value
probability is nothing but the significance level of the test which
with the critical test value.
in this case is 0.10. The curve terminates at point F, which lies
at a height of 0.10 directly over the population mean.

© Copy Right: Rai University


11.556 127
RESEARCH METHODOLOGY

LESSON 22:
TUTORIAL

1. A manufacturer of petite women’s sportswear has of loans made to women has changed significantly in the last
hypothesized that the average weight of the women its five years?
buying1.ts_clothing is 110 pounds. The company takes two
samples of its customers and finds one sample’s estimate of
the population mean is 98 pounds, and the other sample
produces a mean weight of 122,pqunds. In the test of the
company’s hypothesis that the population mean is 110
pounds versus the hypothesis that the mean does equal 110
pounds, is one of these sample values more likely to lead
accept the null hypothesis? Why or why not?
2. On an average day; about 5 percent of the stocks on the New
York Stock set a new high for the year. On Friday Sept 18th ,
1992 the Dow Jones closed at closed at 3,282,on a robust
volume of over 136 million shares traded. A random
sample of 120, stocks showed that 16 had set new annual
highs that day. Using a significance level of 0.01, should we
conclude that more stocks than usual set new highs on that
day?
3. A finance developed a theory that predicted that closed end
equity funds should sell at a premium of about 5% on
average. Assuming that the discount /premium population
is approximately normally distributed does the sample
information support his theory? Test at .05 level of
significance.
4. A company recently criticized for not paying women as much
as men claims that its average salary paid to all employees is
$23500. From a random sample of 29 women, the average
salary was calculated to be $23,000. If the population
standard deviation is known to be $1250 for these jobs
determine whether we could reasonably, within 2 standard
errors expect to find $23000 as the sample mean if, in fact ,
the company’s claim is true.
5. A manufacturer of vitamins for infants inserts a coupon for
a free sample of its production a package that is distributed
at hospitals to new parents. Historically about 185 of the
coupons have been redeemed. Given current trends for
having fewer children and starting families later, the firm
suspects that today’s parents are better educated on average
and as a result more likely to use vitamin supplements for
their infants. A sample of 1500 new parents redeemed 295
coupons. Does this support at a significance level of 2percent
the firm’s beliefs about today’s new parents.
1. From a sample of 10,200 loans made by a state employees
credit union in the most recent five year period, 350 were
sampled to determine what proportion was made to
women. This sample showed that 39% of the loans were
made to women employees. A complete census of loans 5
years ago showed that 41% were women borrowers. At a
significance level of .02 can you conclude that the proportion

© Copy Right: Rai University


128 11.556
RESEARCH METHODOLOGY
LESSON 23:
TESTS OF HYPOTHESES – SMALL SAMPLES

In this and the next lesson we look at tests of statistical


inference for small samples. Broadly the main theoretical issues
underlying tests of statistical inference are similar to the large
samples. Since the previous few lessons have analyzed these
issues at length we shall not spend too much time on the
theory in this chapter. In this lesson we will briefly review the
main theoretical properties of the t distribution and then
determine principles of statistical inference under various
situations.
By the end of this chapter you should be able to
1. Review of the theoretical aspects of t distribution. Degrees of Freedom
2. Carryout hypothesis testing using the t distribution for small What is degree of freedom?
samples This is defined as the number of values we can choose freely.
3. Apply the principles of hypothesis testing of differences The concept is best illustrated with the help of an example:
between means for small sample sizes. Consider the case: a+b/2=18
4. Carryout tests of differences between means for dependent Given that the mean of these two numbers has to equal 18,
samples. how do we determine values for a and b? Basically we can slot in
Theoretical Aspects of The T Distribution any two values such that they add up to 36. Suppose a=10. then
Theoretical work on the t distribution was done by W.S. Gosset b has to equal 26 given the above constraint. Thus in a sample
in the 1900s. of two where the value of the mean is specified ( i.e., a con-
straint) we are only free to specify one variable. Therefore we
The student’s t distribution is used under two circumstances: have only one degree of freedom.
1. Sample size, n , is less than 30. Another example: a+b+c+d+e+f+g/7=16
2. Where population standard deviation is not known. In this Now we have 7 variables. Given the mean we are free to specify
case t tests may be used even if the sample size is greater 6 variables. The value of the 7 th variable is determined
than 30. automatically.
We also assume that the population underlying a t distribution For a sample size of n we can define a t distribution for degree
is normal or approximately normal. of freedom n-1.
Characteristics of The T Distribution Using The T Distribution Tables
Relationship between the t distribution and normal distribution: • The t table differs in construction from the normal table in
1. Both distributions are symmetrical. However as can be seen that it is more compact. It shows areas under the curve and t
in figure1 the t distribution is flatter than the normal values for a limited number of level of significance (usually
distribution and is higher in the tails and has .01, .05, .10). t values are therefore defined for level of
proportionately less area in the around the mean. This significance and degrees of freedom.
implies that we have to go further out from the mean of a t • A second difference is that we must specify the degrees of
distribution to include the same area under the curve. Thus freedom with which we are dealing. Suppose we are making
interval widths are much wider for a t distribution. an estimate for a n=14, at 90% level of confidence. We
2. There is a different t distribution for every possible sample size. would go down vertically to determine the degrees of
3.As sample size increases, the shape of the t distribution loses freedom (i.e. 13) and then read of the appropriate t value for
its flatness and becomes approximately equal to the normal a level of significance of .1.
distribution. In fact for sample sizes greater than 30 the t · The normal tables focus on the chance of that the sample
distribution becomes less dispersed and approximates a statistic lies within a given number of standard deviations
normal distribution and we can use the normal distribution. on either side of the population mean. The t distribution
Figure 1 tables on the other hand measures the chance that the
observed sample statistic will lie outside it our confidence
interval, defined by a given number of standard deviations
on either side of the mean. A t value of 1.771 shows that
if we mark off plus and minus 1.771ó x = on either side of

© Copy Right: Rai University


11.556 129
the mean then we enclose 90% of the area under the curve.
RESEARCH METHODOLOGY

T Values For One Tailed Tests


The area outside these limits, i.e., that of chance error, will be
10%.This is shown in the figure 2 below. Thus if we are The procedure for using t tests for a one tailed test is conceptu-
making an estimate at the 90% confidence limit we would ally the same as for a one tailed normal test. However the t
look in the t tables under the .1 column (1.0-.9=.1). This is tables usually give the area in both tails combined at a specific
actually a or the probability of error. level of significance.
Figure 2 For a one tailed test t test, we need to determine the area located
in only one tail. For example to find the appropriate t value for
a one tailed test at a level of significance of .05 with 12 degrees
of freedom we look in the table under the .10 column opposite
12 degrees of freedom. The t value is 1.782. This is because the
.10 column represents .10 of the area contained under both tails
combined. Therefore it also represents .05 of the area contained
in each tail separately.
Exercise
Reading The T Table
Find one tail value for n=13, á=.05 % → degrees of free-
A sample excerpt from the t table is presented below in table 1.
dom=12
We can use it to read of t values for different levels of signifi-
cance, degrees of freedom. T value for one tail test we need to look up the value under the
.10 column t= ±1.782
Table 1
Find one tail t values for the following:
• n=10, a =.01
• n=15, a =.05

Hypothesis Testing Using The T


Distribution
The procedure for hypothesis testing using the t test is very
similar to that followed for the normal test. Instead of
calculating the z statistic we calculate a t statistic.
The Formula For The T Statistic is

x−µ
t=
σˆ x

where σ̂ x is the estimated standard error of the sample means.


The t test is the appropriate test to use when population
standard deviation is not known and has to be estimated by the
sample standard deviation.
Example σ̂ s where s is the sample standard deviation
For the following sample sizes and significance levels find the
appropriate t values: σˆ
σˆ x =
1. n=28, a =.05 ’! degrees of freedom= 28-1 =27 n
t=±2.048 This represents the basic t test. Variants of this formula are
2. n=10, 99% ! degrees of freedom=9 developed to meet the requirements of different testing
t=±3.250 situations. We shall look at more common types of problems
briefly. As the theoretical basis of hypothesis is the same as the
Exercise normal distribution and has been dealt with in detail in the last
1. Find t values for the following: chapter, we shall focus on applications of the t test to various
2. n=13, 90% situations.
3. n=25, 95% 1. Hypotheses testing of means
4. Given the following sample sizes and t values find the The t test is used when :
corresponding confidence levels: 1. the sample size is <30
• n=27, t=±2.056 or
• n=5, t=±2.132 2. When population standard deviation not
• n=18 t=±2.898 known and has to be estimated by the
sample standard deviation.

© Copy Right: Rai University


130 11.556
3. When a population is finite and the sample accounts Therefore since –2.44< -1.729 we reject the personnel managers

RESEARCH METHODOLOGY
for more than 5% of the population we use the finite hypotheses that the true mean of employees being tested is 90.
population multiplier and the formula for the standard is This is also illustrated diagrammatically in figure 3
modified to; Figure 3
σˆ N −n
σˆ x =
n N −1
Two Tailed Test
The specification of the null and alternative hypotheses is
similar to the normal distribution. .05 of area

Ho: µ= µo
Ha: µ= µo
-2.44 -1.729 1.729
This is tested at a prespecified level of significance á
The t statistic is Exercises
x−µ 1. Given a sample mean 83, Given a sample mean of 94.3, a
t= sample standard deviation of 12.5 and a sample size of G -
σˆ x size of 22, test the hypothesis that the value of the
The calculated t value should be compared with the table t population mean is 70 against the alternative the hypothesis
value. If t calculated< t critical we accept the null hypotheses that that it is more than 100. Use the 0.025 significance level.
there is no significant difference between the sample mean and 2. If a sample of 25 observations reveals a sample mean of 52
the hypothesized population mean. If the calculated t value > t a sample variance of 4.2, test the hypothesis that the
critical we reject the null hypotheses at the given level of population mean is 05 against the alternative hypothesis
significance. that it is some other value. Use the .01 level of significance. .
An example shall make the process clearer: 3. Picosoft, Ltd., a supplier of operating system software for
A personnel specialist is a corporation is recruiting a large personal computers, was planning the initial public offering
number of employees. For an overseas assignment. She of its stock in order to raise sufficient working capital to
believes the aptitude scores are likely to be 90. a management finance the development of a new seventh-generation
review finds the mean scores for 20 test results ot be 84 with a integrated system. With current earnings $1.61 a share,
standard deviation of 11. Management wish to test the Picosoft and its underwriters were contemplating an offering
hypotheses at the .10 level of significance that the average price of $21, or about 13 times earnings. In order to check
aptitude score is 90. the appropriateness of this price, they randomly chose seven
publicly traded software firms and found that their average
Our data is as follows
price/ earnings ratio was 11.6, and the sample standard
Ho : µ = 90 Ha : µ = 90 deviation was 1.3. At á = .02 can Picosoft conclude that the
a=.10 n=20 stocks of publicly traded software firms have an average P /
As we can see this represents a two-tailed test. E ratio that is significantly different from 13?
Degrees of freedom=19 4. The data-processing department at a large life insurance
To find t critical we look under the t table under the .10 column, company has in-stalled new color video display terminals to
which gives the t value for .05 under both sides of the t curve. t. replace the monochrome units it previously used. The 95
=1.729 operators trained to use the new machines aver-aged 7.2
hours before achieving a satisfactory level of performance.
As population standard deviation is not known we estimate it :
Their sample variance was 16.2 squared hours. Long
σˆ s = 11 where s is the sample standard deviation experience with operators on the old monochrome terminals
Standard error of sampling mean showed that they averaged 8.1 hours on the machines before
their performances were satisfactory. At the 0.01 sig-nificance
level, should the supervisor of the department conclude that
σˆ 11 the new terminals are easier to learn to operate?
σˆ x = = = 2 .46
n 20 Tests For Differences Between Means –
Small Samples
Again broadly the procedure for testing whether the sample
x − µ 84 − 90
t= = = −2.44 means from two different samples are not significantly different
σˆ x 2.46 from each other is the same as for the large sample case. The
differences are in the calculation of the standard error formula
and secondly in the calculation of the degrees of freedom.

© Copy Right: Rai University


11.556 131
RESEARCH METHODOLOGY

Degrees of Freedom Ho: µ1= µ1 Ha: µ1 > µ1


In the earlier case where we had tested the sample against a The next step is to calculate estimate of the population variance
hypothesized population value, we had used a t distribution :
with n-1 degrees of freedom. In this case we have n 1 –1
( n1 − 1) s12 + (n2 − 1)s 22 (12 − 1)(15) 2 + (15 − 1)(19) 2
degrees of freedom for sample 1 and n 2 –1 for sample 2. When s 2p = = = 17.35
we combine the sample to estimate the pooled variance we have n1 + n 2 − 2 12 + 15 − 2
n1 + n 2 –2 degrees of freedom . Thus for example if n 1 =10
and n 2 = 12 the combined degrees of freedom = 20 1 1 1 1
σˆ x1 − x2 = s p + = 17.35 +
Estimation of Sample Standard Error of n1 n2 12 15 = 6.72
The Difference Between Two Means.
We then calculate the t statistic for the difference between two
In large samples had assumed the unknown population
means:
variances were equal and we estimated σˆ by s 12 and s 22 .
This is not appropriate for small samples. We assume the x1 − x 2 92 − 84
underlying population variances are equal: ó 12= ó 22 we estimate t= = = 1.19
σˆ x1 −x2 6.72
population variance as a weighted average of s 12 and s 22 where
the weights are numbers of degrees of freedom in each sample. since it is a one tailed test at the .05 level of significance we look
in the .1 column against 25 degrees of freedom.
( n1 − 1) s + ( n2 − 1) s
2 2
s 2p = 1 2
t. critical at .05 level of significance= 1.708
n1 + n2 − 2
Since calculated t< t critical , we accept the null hypothesis that
One we have our estimate for population variance we can then the first method is significantly superior to the second.
use it to determine standard error of the difference between two Exercises
sample means, i.e we get an equation for the estimate standard
Exercises
1. A consumer research organization routinely selects several car models each year and evaluates their fuel efficiency. In this year's study of two small cars it was found that the average mileage for 12 cars of brand A was 27.2 km per litre with a standard deviation of 3.8 km per litre. Nine brand B cars were tested and they averaged 32.1 km per litre with a standard deviation of 4.3 km per litre. At α = .01, should the survey conclude that brand A cars have lower mileage than brand B cars?
2. Connie Rodrigues, the Dean of Students at Mid State College, is wondering about grade distributions at the school. She has heard grumbling that the GPAs in the Business School are about 0.25 lower than those in the College of Arts and Sciences. A quick random sampling produced the following GPAs.
Business: 2.86 2.77 3.18 2.80 3.14 2.87 3.19 3.24 2.91 3.00
Arts & Sciences: 2.83 3.35 3.32 3.36 3.63 3.41 3.37 3.45 3.43 3.44 3.17 3.26 3.18
Do these data indicate that there is a factual basis for the grumbling? State and test appropriate hypotheses at α = 0.02.
3. A credit-insurance organization has developed a new high-tech method of training new sales personnel. The company sampled 16 employees who were trained the original way and found average daily sales to be $688 with a sample standard deviation of $32.63. They also sampled 11 employees who were trained using the new method and found average daily sales to be $706 with a sample standard deviation of $24. At α = 0.05, can the company conclude that average daily sales have increased under the new plan?
4. To celebrate their first anniversary, Randy Nelson decided to buy diamond earrings for his wife Debbie. He was shown nine pairs with marquise gems weighing approximately 2 carats per pair. Because of differences in the colors and
qualities of the stones, the prices varied from set to set. The
average price was $2,990, and the sample standard deviation
was $370. He also looked at six pairs with pear-shaped
stones of the same 2-carat approximate weight. These
earrings had an average price of $3,065, and standard
deviation was $805. On the basis of this evidence, can Randy
conclude (at a significance level of 0.05) that pear-shaped
diamonds cost more, on average, than marquise diamonds?
References
Levin and Rubin, Statistics for Management

LESSON 24:
NON-PARAMETRIC TESTS

So far we have discussed a variety of tests that make inferences about a population parameter such as the mean or the population proportion. These are termed parametric tests and use parametric statistics from samples that come from the population being tested. To use these tests we make several restrictive assumptions about the populations from which we draw our samples; for example, we assumed that the underlying population is normal. However, underlying populations are not always normal, and in these situations we need to use tests which are not parametric.
Many such tests have been developed which do not make restrictive assumptions about the shape of the population distribution. These are known as non-parametric or distribution-free tests. There are many such tests; we shall learn about some of the more popular ones.

Non Parametric Tests
What are Non Parametric Tests?
Statistical tests that do not require an estimate of the population variance or mean and do not state hypotheses about parameters are considered non-parametric tests.
When do you use non-parametric tests? Non-parametric tests are appropriately used when one or more of the assumptions underlying a particular parametric test has been violated (generally normality or homoscedasticity). Generally, however, the t-test is fairly robust to all but the severest deviations from these assumptions.
How do you know if the data are normally distributed? Several techniques are generally used to find out if a population has an underlying normal distribution. These include: a goodness-of-fit test (low power), graphical assessment (not quantitative), Shapiro-Wilk's W (n < 50), or the D'Agostino-Pearson K2 test (preferred).
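The two formal checks mentioned above are available directly in scipy. The short sketch below is only an illustration with made-up data (not part of the course text):

```python
# Minimal sketch of the normality checks mentioned above:
# Shapiro-Wilk's W (small samples) and the D'Agostino-Pearson K^2 test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=10, size=40)   # illustrative data only

w_stat, p_shapiro = stats.shapiro(sample)        # Shapiro-Wilk W
k2_stat, p_k2 = stats.normaltest(sample)         # D'Agostino-Pearson K^2 (needs n >= 20)

print(f"Shapiro-Wilk:       W  = {w_stat:.3f}, p = {p_shapiro:.3f}")
print(f"D'Agostino-Pearson: K2 = {k2_stat:.3f}, p = {p_k2:.3f}")
# A small p-value (say below .05) suggests the data are not normally
# distributed, which is one reason to consider a non-parametric test.
```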
As noted, non-parametric tests do not make the assumption of normality about the population distribution. The hypotheses of a non-parametric test are concerned with something other than the value of a population parameter.
The main advantage of non-parametric methods is that they do not require the underlying population to have a normal or any other particular shape of distribution. The main disadvantage of these tests is that they ignore a certain amount of information. For example, we can convert numerical data to non-parametric form by replacing numerical values such as 113.45, 189.42, 76.5 and 101.79 by their ascending or descending order ranks, i.e. by 1, 2, 3 and 4. However, if we represent 189.42 by its rank, we lose some information contained in the value 189.42: it is the largest value and so receives the highest rank, but that same rank could equally represent 1189.42, since that would also be the largest value. The use of ranked data therefore leads to some loss of information.
The second disadvantage is that these tests are not as sharp or efficient as parametric tests. The estimate of an interval using a non-parametric test may be twice as large as for the parametric case. When we use non-parametric tests we trade off sharpness in estimation against the ability to make do with less information and to calculate faster.
What happens when we use the wrong test in the wrong situation? Generally, parametric tests are more powerful than non-parametric tests: where a parametric test is appropriate, the non-parametric method will have a greater probability of committing a Type II error (accepting a false null hypothesis).

Exercise
1. What is the difference between the kinds of questions answered by parametric tests and those answered by non-parametric tests?

Important Types of Nonparametric Tests
Since the theory behind these tests is beyond the scope of our course, we shall look at relevant applications and methodologies for carrying out some of the more important non-parametric tests.
Non-parametric tests are frequently used to test hypotheses with dependent samples. Dependent sample tests are:
1. The sign test for paired data, where positive and negative signs are substituted for quantitative values
2. The McNemar test
3. The Cochran Q test
4. The Wilcoxon test
Non-parametric tests for independent samples are:
1. The chi-square test
2. The Kolmogorov-Smirnov one sample test
Of these tests we shall cover the McNemar test, the Mann-Whitney U test, the Kolmogorov-Smirnov test and the Wilcoxon test.

Table of Equivalent Parametric and Non-parametric Tests
PARAMETRIC            NON-PARAMETRIC
Independent t-test    Mann-Whitney U test
Paired t-test         Wilcoxon Signed-Rank Test

When Do We Use Non Parametric Tests?
We use non-parametric tests in at least one of the following five types of situations:

1. The data entering the analysis are enumerative; that is, counted data representing the number of observations in each category or cross-category.
2. The data are measured and/or analyzed using a nominal scale of measurement.
3. The data are measured and/or analyzed using an ordinal scale of measurement.
4. The inference does not concern a parameter in the population distribution; for example, the hypothesis that a time-ordered set of observations exhibits a random pattern.
5. The probability distribution of the statistic upon which the analysis is based is not dependent upon specific information or conditions (i.e., assumptions) about the population(s) from which the sample(s) are drawn, but only upon general assumptions, such as a continuous and/or symmetric population distribution.
According to these criteria, the distinction of non-parametric is accorded because of:
1. The level of measurement used or required for the analysis, as in types 1, 2 and 3; that is, we use counted, nominal or ordinal scale data.
2. The type of inference, as in type 4; we do not make inferences about population parameters such as the mean.
3. The generality of the assumptions made about the population distribution, as in type 5; that is, we do not know or make assumptions about the specific form of the underlying population distribution.

Non-parametric vs. Distribution-free Tests
As we have seen, non-parametric tests are those used when some specific conditions required by the ordinary tests are violated. Distribution-free tests are those for which the procedure is valid for all shapes of the population distribution.
For example, the chi-square test concerning the variance of a given population is parametric, since this test requires that the population distribution be normal. On the other hand, the chi-square test of independence does not assume the normality condition, or even that the data are numerical. The Kolmogorov-Smirnov test is a distribution-free test, which is applicable to comparing two populations with any distribution of a continuous random variable.

Standard Uses of Non Parametric Tests
Mann-Whitney: to be used with two independent groups (analogous to the independent-groups t-test). We may use the Mann-Whitney rank test as a non-parametric alternative to Student's t-test when one does not have normally distributed data.
Wilcoxon: to be used with two related (i.e., matched or repeated) groups (analogous to the dependent-samples t-test).
Kruskal-Wallis: to be used with two or more independent groups (analogous to the single-factor between-subjects ANOVA).
Friedman: to be used with two or more related groups (analogous to the single-factor within-subjects ANOVA).

We now look at a few of the non-parametric tests in more detail, including their applications.

McNemar Test
This test is used for analyzing research designs of the before-and-after format where the data are measured nominally. The samples therefore become dependent or related samples. The use of this test is limited to the case where a 2x2 contingency table is involved. The test is most popularly used to test the response of a group in a pre and post (before/after) situation.
We can illustrate its use with an example.
A survey of 260 consumers was taken to test the effectiveness of mailed coupons, and in particular their effect on individuals who changed their purchase rate for the product. The researcher took a random sample of consumers before the release of the coupons to assess their purchase rate; on the basis of their responses they were divided into groups according to their purchase rate (low, high). After the campaign they were again asked to complete the forms and were again classified by their purchase rate. Table 1 shows the results from our sample.

Table 1
                            After campaign
Before campaign             High purchase rate    Low purchase rate
Low purchase rate           70 (A)                180 (B)
High purchase rate          80 (C)                30 (D)

Cases that showed a change in purchase response between the before and after measurements were placed in cells A and D. This was done as follows:
• An individual is placed in cell A if he or she changed from a low purchase rate to a high purchase rate.
• Similarly, an individual is placed in cell D if he or she went from a high to a low purchase rate.
• If no change is observed in the purchase rate, the individual is placed in cell B or C.
The researcher wishes to determine if the mail coupon campaign was a success. We shall now briefly outline the various steps involved in applying the McNemar test.
Step 1
We state the null hypothesis. This essentially states that there is no perceptible or significant change in the purchase behavior of individuals: for individuals who change their purchase rate, the probability of changing from high to low equals the probability of changing from low to high, each being .5.
Ho: P(A) = P(D)
Ha: P(A) ≠ P(D)
To test the null hypothesis we examine the cases of change, i.e. cells A and D.

Step 2
The level of significance is chosen; for example, α = .05.
Step 3
We now have to decide on the appropriate test to be used. The McNemar test is appropriate because the study is a before-and-after study, the data are nominal, and the study involves two related samples.
The McNemar test involves calculating a chi-square value as given by the formula below:
χ² = (|A − D| − 1)² / (A + D)
That is, we work with the absolute difference between the A and D cells, reduced by 1 as a continuity correction.
Step 4
The decision rule: for α = .05, the critical value of χ² is 3.84 for 1 degree of freedom. Therefore we will reject the null hypothesis if the calculated χ² exceeds this critical value from the tables.
Step 5
We now calculate the test statistic:
χ² = (|A − D| − 1)² / (A + D) = (|70 − 30| − 1)² / 100 = 15.21
Step 6
Draw a statistical conclusion. Since the calculated χ² exceeds the critical value, we reject the null hypothesis and can infer that the mail coupon campaign was successful in increasing the purchase rate of the product under study.
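The same calculation can be checked in a few lines of Python. The sketch below is only an illustration (it is not part of the original example); it uses the changed cells A = 70 and D = 30 from Table 1.

```python
# Minimal sketch of the McNemar calculation carried out in Steps 1-6 above.
from scipy import stats

A, D = 70, 30

# Continuity-corrected McNemar chi-square, as in the formula above
chi2 = (abs(A - D) - 1) ** 2 / (A + D)       # = 15.21
chi2_crit = stats.chi2.ppf(0.95, df=1)       # = 3.84 at the .05 level
p_value = stats.chi2.sf(chi2, df=1)

print(f"chi-square = {chi2:.2f}, critical value = {chi2_crit:.2f}, p = {p_value:.4f}")
if chi2 > chi2_crit:
    print("Reject Ho: the change in purchase rate is significant.")
```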
When an analysis involves more than two related samples we use the Cochran Q test. It is designed for situations involving repeated observations where the dependent variable can take on only two values, 0 or 1.

Tests for Ordinal Data
So far the test we have discussed is applicable only to nominal data. We now look at a test which is specifically designed for ordinal data.

Kolmogorov-Smirnov One Sample Test
This test is similar to the chi-square test of goodness of fit, in that it looks at the degree of agreement between the distribution of observed values and some specified theoretical distribution (the expected frequencies). The Kolmogorov-Smirnov test is used when we want to compare distributions measured on an ordinal scale.
We can look at an example to see how this test works.
A paint manufacturer is interested in testing four different shades of a colour: very light, light, bright and dark. Each respondent is shown prints of the four shades and asked to indicate his preference. If colour shade is unimportant, the four shades should be chosen equally often, except for random differences. If colour shade is important, respondents should prefer one of the extreme shades.
Since shade represents a natural ordering, the Kolmogorov-Smirnov test is applied to test the preference hypothesis. The test involves specifying the cumulative frequency distribution; we then refer to the sampling distribution, which indicates whether a divergence of the observed size between the two distributions is likely to have occurred by chance or whether it reflects a genuine preference.
Suppose, for example, that we take a sample of 200 homeowners and get the following shade preference distribution:
Very light: 80
Light: 60
Bright: 40
Dark: 20
The manufacturer asks whether these results indicate a preference. The data are shown in Table 2 below.

Table 2
Rank of shade chosen                                      Very light   Light   Bright   Dark
F = no. of homeowners choosing that rank                  80           60      40       20
Fo(X) = theoretical cumulative distribution under Ho      .25          .50     .75      1.00
Sn(X) = cumulative distribution of observed choices       .40          .70     .90      1.00
|Fo(X) − Sn(X)|                                           .15          .20     .15      0.00
We would carry out the test as follows:
1. Specification of the null hypothesis. Ho: there is no difference in preference among the shades. Ha: there is a difference in preference among the shades of the new colour.
2. The level of significance: the test would be conducted at the .05 level.
3. Decision regarding the statistical test. Here the Kolmogorov-Smirnov test is appropriate because the data measured are ordinal and we are interested in comparing the observed frequency distribution with a theoretical distribution.
4. The test focuses on the largest of the deviations between the observed and theoretical cumulative proportions:
D = max |Fo(X) − Sn(X)|
where Fo(X) is the specified cumulative frequency distribution under Ho for any value of X, i.e. the proportion of cases expected to have scores equal to or less than X, and Sn(X) is the observed cumulative frequency distribution of a random sample of N observations, where X is any possible score.
5. The decision rule: if the researcher chooses α = .05, the critical value of D for large samples is given by the formula 1.36/√n, where n is the sample size. For our example the critical value is 1.36/√200 = .096. The decision rule therefore states that the null hypothesis will be rejected if the computed D > .096.
6. Calculation of the test statistic: the theoretical cumulative distribution is calculated by taking each class frequency as it would be under the null hypothesis (in our case equal frequencies of 50) and expressing the running total as a fraction of the total sample. Thus row 2 of Table 2 is calculated as 50/200 = .25, 100/200 = .50, 150/200 = .75 and 200/200 = 1.00. The observed cumulative distribution in row 3 is found from 80/200 = .40, 140/200 = .70, 180/200 = .90 and 200/200 = 1.00. The calculated D value is the point of greatest divergence between the cumulative observed and theoretical distributions; in our example this is .20.
7. Drawing a statistical conclusion: since the calculated D value (.20) exceeds the critical value of .096, the null hypothesis of no difference among the shades is rejected.
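The arithmetic in steps 5 and 6 is easy to reproduce in Python. The sketch below is only an illustration (not part of the original example), using the shade-choice counts from Table 2.

```python
# Minimal sketch of the Kolmogorov-Smirnov one-sample calculation above:
# observed cumulative shade choices vs. the equal-preference distribution under Ho.
import numpy as np

observed = np.array([80, 60, 40, 20])          # very light, light, bright, dark
n = observed.sum()                             # 200 homeowners

s_n = np.cumsum(observed) / n                  # observed cumulative proportions
f_o = np.cumsum(np.full(4, n / 4)) / n         # theoretical cumulative proportions under Ho

D = np.max(np.abs(f_o - s_n))                  # largest divergence = 0.20
D_crit = 1.36 / np.sqrt(n)                     # large-sample .05 critical value ~ 0.096

print(f"D = {D:.2f}, critical D = {D_crit:.3f}")
if D > D_crit:
    print("Reject Ho: the shades are not equally preferred.")
```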
Mann-Whitney U Test
This test is used for data which are ordinal and can be ranked. It makes use of the actual ranks of the observations as a means of testing hypotheses about the identity of two population distributions.
An Example Will Illustrate This Test
To illustrate its use we will examine regular and commercial account satisfaction data from Table 3. The table contains the attitude scores obtained from 30 customers (15 regular accounts and 15 commercial accounts). The scores from the combined samples were ranked in terms of the magnitude of the original score (columns 3 and 4 of Table 3): the highest score gets rank 1, the next highest rank 2, and so on.
The Null Hypothesis
Ho: there is no difference in the attitudes of the two groups of accounts towards bank services. Ha: there is a significant difference in the attitudes of the two groups. The level of significance for this test is .05.
Table 3 lists the raw attitude scores for the 15 regular and 15 commercial accounts, together with the rank of each score in the combined sample (rank 1 = highest score).
The Mann-Whitney test is used because the data are ordinal and converted into ranks, and the samples are independent. The formulas for the Mann-Whitney U values are:
U1 = n1 n2 + n1(n1 + 1)/2 − R1
U2 = n1 n2 − U1
where n1 and n2 are the two sample sizes and R1 and R2 are the sums of the ranks for each group. Letting the regular accounts be sample 1 and the commercial accounts be sample 2, we find that n1 = 15, n2 = 15, R1 = 198 and R2 = 267.
Analysis of Differences
The critical value for the statistic U* is found in Appendix I. For α = 0.05, n1 = 15 and n2 = 15, the critical value for the Mann-Whitney statistic is U* = 64 for a two-tailed test. For this test the null hypothesis will be rejected if the computed value, U, is 64 or less; otherwise it will not be rejected. This decision rule is just the opposite of the decision-making procedure we followed for most of the other tests of significance.
Calculation of the Test Statistic
Therefore,
U1 = (15)(15) + 15(15 + 1)/2 − 198 = 225 + 120 − 198 = 147
and
U2 = (15)(15) − 147 = 225 − 147 = 78
The Mann-Whitney U is the smaller of the two U values calculated; in this case the Mann-Whitney U is 78. Inspection of the formulas for calculating U shows that the more similar the two groups are in their attitudes, the closer the two rank sums will be and the larger the smaller of the two U values will be. We are therefore testing the probability of obtaining a U value as small as the smaller of the two if the two groups are indeed similar in their attitudes.
Drawing a Statistical Conclusion
Since the computed U is larger than the critical U*, it does not fall in the critical region and the null hypothesis is not rejected. For this test the computed value must be less than the critical value U* to reject the null hypothesis. Once again, the evidence does not support a difference in the attitudes of the two groups.
The Kruskal-Wallis test is an extension of the Mann-Whitney U test to situations where more than two independent samples are being compared. For example, this test could be used if the customers were divided into three or more groups based on some criterion, such as regular accounts, commercial accounts and charge accounts.
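The U calculation above is straightforward to reproduce. The sketch below is only an illustration (not part of the original example); it works from the rank sums, since the raw scores of Table 3 are not repeated here.

```python
# Minimal sketch of the Mann-Whitney U computation used above,
# starting from the rank sums of the two account groups.
n1, n2 = 15, 15
R1, R2 = 198, 267          # sums of ranks for regular and commercial accounts

U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1     # = 147
U2 = n1 * n2 - U1                         # = 78
U = min(U1, U2)                           # Mann-Whitney U = 78

U_critical = 64                           # two-tailed .05 critical value for n1 = n2 = 15 (from tables)
print(f"U = {U:.0f}, critical U = {U_critical}")
if U <= U_critical:
    print("Reject Ho: the two groups differ in attitude.")
else:
    print("Do not reject Ho: no evidence of a difference in attitudes.")
# With the full raw scores, scipy.stats.mannwhitneyu(regular, commercial)
# would give the same U together with a p-value.
```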
Signed Rank or Wilcoxon Test
This test is the complement of the Mann-Whitney U test and is used when ordinal data on two samples are involved and the two samples are related or dependent. The test is therefore suited to a pre-test and post-test situation. It can also be used for interval or ratio data when the assumptions underlying the parametric z or t test cannot be met.
An Example Illustrates This Rule
To illustrate the procedure, suppose that our bank in the previous examples wanted to test the effectiveness of an ad campaign intended to enhance the awareness of the bank's service features. The bank administered a questionnaire before the ad campaign designed to measure the awareness of services offered. After the ad campaign, the bank administered the same questionnaire to the same group of people. The bank wished to determine whether there was any change in the awareness of services offered due to the ad campaign. Both the before and after ad campaign scores are presented in Table 4.
Table 4 (Consumer Awareness of Bank Services Offered) lists, for each of the ten consumers, the awareness score before the campaign, the score after the campaign, the difference in score, and the rank of the absolute difference.
Solution
Step 1. The Null Hypothesis. The null hypothesis to be tested is that there is no difference in awareness of the services offered after the ad campaign. The alternative hypothesis is that awareness of the services was greater after the ad campaign.
Step 2. The Level of Significance. It was decided that α = 0.05.
Step 3. The Statistical Test. The Wilcoxon test is appropriate because the study is of related samples in which the data measured are ordinal and the differences can be ranked in magnitude. The test statistic calculated is the T value. Since the direction of the difference is predicted, a one-tailed test is appropriate.
Step 4. The Decision Rule. The critical value of the Wilcoxon T is found in Appendix J; for n = 10 at the 0.05 level of significance and a one-tailed test it is 10. A computed T value of less than 10, the critical value, therefore leads to rejection of the null hypothesis. The argument is similar to that of the Mann-Whitney U statistic.
Step 5. Calculate the Test Statistic. The procedure for the test is very simple. The signed difference between each pair of observations is found. These differences are then rank-ordered without regard to their algebraic sign. Finally, the sign of the difference is attached to the rank for that difference. The test statistic, T, is the smaller of the two sums of ranks. For our example, T = 6.5, since the smaller sum is associated with the negative differences.
If the null hypothesis is true, the sums of the positive and negative ranks should be approximately equal. However, the larger the difference between the underlying populations, the smaller the value of T will be, since T is defined as the smaller of the two sums of ranks.
Step 6. We draw a statistical conclusion: since the computed T value of 6.5 is less than the critical T value of 10, the null hypothesis, which states that there is no difference in the awareness of bank services, is rejected.
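The Wilcoxon procedure is easy to code. The sketch below is only an illustration with made-up before/after scores (the full data of Table 4 are not reproduced here), showing how T is obtained.

```python
# Minimal sketch of the Wilcoxon signed-rank procedure described in Steps 1-6.
# The before/after scores are illustrative stand-ins, not the Table 4 data.
import numpy as np
from scipy import stats

before = np.array([82, 85, 89, 74, 68, 70, 77, 64, 72, 80])
after  = np.array([87, 89, 84, 76, 78, 81, 80, 71, 79, 84])

diff = after - before                          # no zero differences in this illustration
ranks = stats.rankdata(np.abs(diff))           # rank the absolute differences (ties get average ranks)
T_plus = ranks[diff > 0].sum()
T_minus = ranks[diff < 0].sum()
T = min(T_plus, T_minus)                       # Wilcoxon T = smaller sum of signed ranks

T_critical = 10                                # one-tailed .05 critical value for n = 10 (from tables)
print(f"T = {T}, critical T = {T_critical}")
if T < T_critical:
    print("Reject Ho: awareness changed after the ad campaign.")
```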
Exercises
Small Grocers Association
The Small Grocers Association is a group of independent grocers who have banded together so that they may compete with the larger supermarket chains. By making large purchases as a group, the association is able to pay less than if each store were to purchase individually. Because of its new-found purchasing power, the association wants to advertise that its prices are really no higher in the neighborhood stores that are members of the association than in the supermarkets. It has chosen one product - a 5-pound ham - with which to make its point in the advertisement.
At its recent monthly meeting, the executive board brought the advertising campaign to the members for approval. One member stated, "We had better make sure of our claim." The board of directors agreed, and asked the association's marketing director to check this out before the copy was submitted to the newspaper. To check on the claim, the marketing director obtained a random sample of prices from six neighborhood association member stores and nine supermarkets.
Discussion Questions
1. What analytical technique would be appropriate for making a formal analysis?
2. What are the null and the alternative hypotheses to be tested?
3. Run an analysis of the data using the 0.05 level of significance.
National Motors, Inc.
National Motors is a manufacturer of motor scooters. As part of their operating policy, the executives wished to determine whether there was a difference between the dealers' customers and the company's dealers in terms of satisfaction with the company's warranty policy. National Motors' marketing research department developed a questionnaire utilizing Likert-type statements that encompassed a full range of service and warranty questions. The researchers believed the data obtained from the questionnaires were ordinal. The questionnaires were mailed to a random sample of customers who had returned the National Motors warranty card, and a second mailing of the questionnaires was sent to a sample of dealers. The scores obtained were as follows:

Customers   Dealers   Customers   Dealers
74          92        43          49
81          42        23          32
35          54        88          52
59          59        55          27
90          83        67          81
33          30        53          68
82          34        85          51
68          54        70          65
56          39        30          25
46          65        75          46

Discussion Questions
1. What analytical technique(s) would be appropriate for making a formal analysis?
2. What is the null hypothesis to be tested?
3. Run an analysis of the data using the 0.05 level of significance. What conclusions can you draw?
References
1. Levin and Rubin, Statistics for Management
2. Luck and Rubin, Marketing Research
3. Dr Ashram's web site

LESSON 25:
CHI-SQUARE TEST

Students, you are all aware of the chi-square test for goodness of fit and for independence of attributes. Symbolically, chi-square is defined (with the usual notation) as
χ² = Σ (O − E)² / E
where O is an observed frequency and E the corresponding expected frequency.
The chi-square distribution is a mathematical distribution that is used directly or indirectly in many tests of significance. The most common use of the chi-square distribution is to test differences between proportions. Although this test is by no means the only test based on the chi-square distribution, it has come to be known as the chi-square test. The chi-square distribution has one parameter, its degrees of freedom (df). It has a positive skew; the skew is less pronounced with more degrees of freedom. The mean of a chi-square distribution is its df, the mode is df − 2, and the median is approximately df − 0.7.

Key Definitions
Chi-square distribution: a distribution obtained by multiplying the ratio of the sample variance to the population variance by the degrees of freedom, when random samples are selected from a normally distributed population.
Contingency table: data arranged in table form for the chi-square independence test.
Expected frequency: the frequencies obtained by calculation.
Goodness-of-fit test: a test to see if a sample comes from a population with the given distribution.
Independence test: a test to see if the row and column variables are independent.
Observed frequency: the frequencies obtained by observation; these are the sample frequencies.

Chi-Square Distribution
The chi-square (χ²) distribution is obtained from the values of the ratio of the sample variance to the population variance, multiplied by the degrees of freedom, when the population is normally distributed with population variance σ².

Properties of the Chi-Square
• Chi-square is non-negative: since it is the ratio of two non-negative values, it must be non-negative itself.
• Chi-square is non-symmetric.
• For each number of degrees of freedom there is a different chi-square distribution.
• The degrees of freedom when working with a single population variance are n − 1.

Chi-Square Probabilities
Since the chi-square distribution isn't symmetric, the method for looking up left-tail values is different from the method for looking up right-tail values.
• Area to the right: just use the area given.
• Area to the left: the table requires the area to the right, so subtract the given area from one and look this area up in the table.
• Area in both tails: divide the area by two; look up this area for the right critical value and one minus this area for the left critical value.

Degrees of Freedom Which Aren't in the Table
When the degrees of freedom aren't listed in the table, there are a couple of choices that you have.
• You can interpolate. This is probably the more accurate way. Interpolation involves estimating the critical value by figuring how far the given degrees of freedom lie between the two df in the table and going that far between the critical values in the table. (For example, most people born in the 70's didn't have to learn interpolation in high school because they had calculators which would do logarithms - we had to use tables in the "good old" days.)
• You can go with the critical value which is less likely to cause you to reject in error (Type I error):
1. For a right-tail test, this is the critical value further to the right (larger).
2. For a left-tail test, it is the value further to the left (smaller).
3. For a two-tail test, it is the value further to the left and the value further to the right. Note that it is not the column with the degrees of freedom further to the right - it is the critical value which is further to the right.
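In practice the table look-ups (and the interpolation problem) can be avoided entirely by computing critical values directly. The sketch below is only an illustration using scipy (it is not part of the course text):

```python
# Minimal sketch of the right-tail, left-tail and two-tail look-ups described above,
# computed directly instead of interpolating in a printed table.
from scipy import stats

df = 9

right_tail = stats.chi2.ppf(1 - 0.05, df)      # area 0.05 to the right  -> about 16.92
left_tail  = stats.chi2.ppf(0.05, df)          # area 0.05 to the left   -> about 3.33
both_tails = (stats.chi2.ppf(0.025, df),       # 0.05 split between the two tails
              stats.chi2.ppf(1 - 0.025, df))

print(f"right-tail critical value : {right_tail:.2f}")
print(f"left-tail critical value  : {left_tail:.2f}")
print(f"two-tailed critical values: {both_tails[0]:.2f}, {both_tails[1]:.2f}")
```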

Uses of the Chi-square Test
1. Goodness-of-fit Test
The idea behind the chi-square goodness-of-fit test is to see whether the sample comes from a population with the claimed distribution. Another way of looking at it is to ask whether the frequency distribution fits a specific pattern.
Two values are involved: an observed value, which is the frequency of a category in the sample, and the expected frequency, which is calculated from the claimed distribution. The idea is that if the observed frequency is really close to the claimed (expected) frequency, then the squared deviations will be small. The square of each deviation is divided by the expected frequency in order to weight the frequencies: a difference of 10 may be very significant if 12 was the expected frequency, but a difference of 10 isn't very significant at all if the expected frequency was 1200.
If the sum of these weighted squared deviations is small, the observed frequencies are close to the expected frequencies and there is no reason to reject the claim that the sample came from that distribution. Only when the sum is large is there reason to question the distribution. Therefore, the chi-square goodness-of-fit test is always a right-tail test.
The test statistic has a chi-square distribution when the following assumptions are met:
• The data are obtained from a random sample.
• The expected frequency of each category is at least 5. This goes back to the requirement that the data be approximately normally distributed: you are simulating a multinomial experiment (a discrete distribution) with the goodness-of-fit test (a continuous distribution), and if each expected frequency is at least five you can use the normal distribution as an approximation (much as with the binomial).
The following are properties of the goodness-of-fit test:
• The data are the observed frequencies; there is only one data value for each category.
• The degrees of freedom are one less than the number of categories, not one less than the sample size.
• It is always a right-tail test.
• It has a chi-square distribution.
• The value of the test statistic doesn't change if the order of the categories is switched.
2. Test for Independence
In the test for independence, the claim is that the row and column variables are independent of each other; this is the null hypothesis.
The multiplication rule says that if two events are independent, the probability of both occurring is the product of the probabilities of each occurring. This is the key to the test for independence: if you end up rejecting the null hypothesis, the assumption of independence must have been wrong and the row and column variables are dependent. Remember, all hypothesis testing is done under the assumption that the null hypothesis is true.
The test statistic used is the same as for the chi-square goodness-of-fit test, and the principle behind the test for independence is the same as the principle behind the goodness-of-fit test. The test for independence is always a right-tail test. In fact, you can think of the test for independence as a goodness-of-fit test in which the data are arranged in table form; this table is called a contingency table.
The test statistic has a chi-square distribution when the following assumptions are met:
• The data are obtained from a random sample.
• The expected frequency of each cell is at least 5.
The following are properties of the test for independence:
• The data are the observed frequencies.
• The data are arranged in a contingency table.
• The degrees of freedom are the degrees of freedom for the row variable times the degrees of freedom for the column variable; it is not one less than the sample size but the product of the two degrees of freedom.
• It is always a right-tail test.
• It has a chi-square distribution.
• The expected value of a cell is computed by taking the row total times the column total and dividing by the grand total.
• The value of the test statistic doesn't change if the order of the rows or columns is switched.
• The value of the test statistic doesn't change if the rows and columns are interchanged (the table is transposed).
Applications of the Chi-square Test
1. Goodness of Fit
Habitat Use. If you are looking for habitat selection or avoidance in a species (e.g., black bear), you can use a goodness-of-fit test to see if the animals are using habitat in proportion to its availability. If you have collected 100 radio-locations on a bear or bears, you would expect that, if the species were not selecting or

Contingency Tables and Tests for Independence of Factors:

avoiding habitat, you would find the radio-locations spread in


each habitat type depending on its general availability (If 90% Survival. Is the probability of being male or female indepen-
of the area was lowland conifer, you would expect 90% of the dent of being alive or dead? Let’s use data on ruffed grouse.
locations to occur in that habitat type). Imagine you generate the You collect mortality data from 100 birds you radio-collared and
following data from your spring bear study: test the following hypothesis:
Ho : The bears are using habitat in proportion to its availability. Ho: The 2 sets of attributes (death and sex of bird) are
unrelated (independent).

1. Fill in the expected number of radio-locations in each cover type.


2. Calculate the chi-square value for these bears.
3. Given that the degrees of freedom are (r-1)*(c-1)= 4, the
critical value for chi-square (at alpha = 0.05) is anything
greater than 9.49. Would you accept or reject the null Expected values for each cell can be can be calculated by multi-
hypothesis? What does that mean in layman’s terms? plying the row total by the column total and dividing by the
Survival. Suppose you are investigating the population grand total.
dynamics of the deer herd at Sandhill and hypothesize that a EX: Expected Value = 70 * 67 / 100 = 46.9
constant death rate of 50% per year has existed over the past
several years with a stable herd. You sample the population at 1. Calculate the chi-square value.
random and classify deer into age groups as indicated: 2. Knowing the critical value for 1 degree of freedom (alpha =
0.05) is anything greater than 3.84146, what can you conclude
about the independence of these 2 factors? rates?
II. Chi Square Test for Independence (2-Way chi-
square)
(SPSS Output)
A large-scale national randomized experiment was conducted in
the 1980s to see if daily aspirin consumption (as compared to
The expected value, (X), of the first age group is obtained from an identical, but inert placebo) would reduce the rate of heart
the formula: attacks. This study (The Physicians Health Study) was described
X + (d + d2 + ... dn-1 )X = T in one of the episodes of the statistics video series A Against
X + (.5 + .52 + ... .55 ) X = 253 All Odds.@ Here are the actual results from the study using
X + (.96875) X = 253 22,071 doctors who were followed for 5 years:
1.96875 X = 253 Heart Attack
X = 129 Absent Present
Subsequent expected values are computed by applying the Aspirin 10,933 104 —>11,037
expected 50% death rate (d) for each succeeding year.
Aspirin
1) State the null hypothesis
Placebo 10,845 189 —>11,034
2) Calculate the Chi-square value.
21,778 293 (22,071 is grand total)
3) Knowing the critical value for 5 degrees of freedom (* =
0.05) is 11.0705, what do you conclude about the “fit” Data > Weight Cases > Weight Cases by > click over Count
between the observed and hypothesized death rates? Are Analyze > Descriptive Statistics > Cross tabs
there significant differences?? Row: aspirin
Model Selection: Many programs develop predictive equations Column: heartatt
for data sets. A common test for the “fit” of a model to the Statistics > Chi-square
data is chi-square goodness-of-fit test. For example, program
DISTANCE, which develops curves to estimate probabilities Cells > row percentages
of detection, uses discrete distance categories (e.g., 0-5m, 5-10m, chi-square = 25.01 critical chi-square = 3.84
etc.) to see how well the model predicts the number of objects Statistical decision: Reject H0
that should be seen in each distance category (Expected) versus
what the data actually show (Observed). II)
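The aspirin study above is a convenient 2x2 illustration of the independence test. The sketch below is only an illustration (it is not part of the original text, and it is not SPSS output); it reproduces the uncorrected chi-square of about 25.01 quoted for that study using scipy.

```python
# Minimal sketch of the chi-square test of independence for the aspirin / heart-attack
# 2x2 table discussed above (Physicians Health Study counts).
import numpy as np
from scipy import stats

table = np.array([[10933, 104],    # aspirin:  heart attack absent, present
                  [10845, 189]])   # placebo:  heart attack absent, present

# correction=False gives the uncorrected chi-square (about 25.01); the default
# Yates continuity correction would give a slightly smaller value.
chi2, p_value, df, expected = stats.chi2_contingency(table, correction=False)

print(f"chi-square = {chi2:.2f}, df = {df}, p = {p_value:.6f}")
print("expected frequencies under independence:")
print(np.round(expected, 1))
```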

Conclusion Column probabilities Let pA be the probability that a defect will

A chi-square analysis indicated that there was a significant be of type A. Likewise, define pB, pC, and pD as the probabilities
relationship between aspirin condition and incidence of heart of observing the other three types of defects. These probabili-
attacks, chi-square (1, N=22,071)= 25.01, p<.001. A greater ties, which are called the column probabilities, will satisfy the
percentage of heart attacks occurred for participants taking the requirement
placebo (M=1.7%) compared to those taking aspirin (M=0. pA + pB + pC + pD = 1
Part II Row probabilities By the same token, let pi (i=1, 2, or 3) be the
row probability that a defect will have occurred during shift i,
Application of chi-square test
where
Source:http://www.itl.nist.gov/div898/handbook/prc/
p1 + p2 + p3 = 1
section4/prc45.htm
Multiplicative Law of Probability
Product and Process Comparisons
Then if the two classifications are independent of each other, a
Comparisons based on data from more than two processes
cell probability will equal the product of its respective row and
How can we compare the results of classifying according to column probabilities in accordance with the Multiplicative Law
several categories of Probability
Contingency Table approach Example of obtaining column and row probabilities
When items are classified according to two or more criteria, it is For example, the probability that a particular defect will occur in
often of interest to decide whether these criteria act indepen- shift 1 and is of type A is (p1) (pA). While the numerical values of
dently of one another. the cell probabilities are unspecified, the null hypothesis states
For example, suppose we wish to classify defects found in that each cell probability will equal the product of its respective
wafers produced in a manufacturing plant, first according to the row and column probabilities. This condition implies indepen-
type of defect and, second, according to the production shift dence of the two classifications. The alternative hypothesis is that
during which the wafers were produced. If the proportions of this equality does not hold for at least one cell.
the various types of defects are constant from shift to shift, In other words, we state the null hypothesis as H0: the two
then classification by defects is independent of the classification classifications are independent, while the alternative hypothesis
by production shift. On the other hand, if the proportions of is Ha: the classifications are dependent.
the various defects vary from shift to shift, then the classifica-
To obtain the observed column probability, divide the column
tion by defects depends upon or is contingent upon the shift
total by the grand total, n. Denoting the total of column j as c j,
classification and the classifications are dependent.
we get
In the process of investigating whether one method of
classification is contingent upon another, it is customary to
display the data by using a cross classification in an array
consisting of r rows and c columns called a contingency table.
A contingency table consists of r x c cells representing the r x c
possible outcomes in the classification process.
Similarly, the row probabilities p1, p2, and p3 are estimated by
Let Us Construct An Industrial Case dividing the row totals r1, r2, and r3 by the grand total n,
Industrial example A total of 309 wafer defects were recorded and respectively
the defects were classified as being one of four types, A, B, C, or
D. At the same time each wafer was identified according to the
production shift in which it was manufactured, 1, 2, or 3.
Contingency table classifying defects in wafers according to type Denote the observed frequency of the cell in row i and column
and production shift jof the contingency table by n ij. Then we have
These counts are presented in the following table
Type of Defects
Shift A B C D Total

Expected Cell Frequencies


1 15(22.51) 21(20.99) 45(38.94) 13(11.56) 94
Denote the observed frequency of the cell in row i and column
2 26(22.9) 31(21.44) 34(39.77) 5(11.81) 96
jof the contingency table by n ij. Then we have
3 33(28.50) 17(26.57) 49(49.29) 20(14.6 3) 119

Total 74 69 128 38 309

(Note: the numbers in parentheses are the


Estimated expected cell frequency when H0 is true.
expected cell frequencies).

In other words, when the row and column classifications are going back to the birth records at the hospital to locate the sex

independent, the estimated expected value of the observed cell of the next child born at
frequency n ij in an r x c contingency table is equal to its respective that hospital, and using these children as controls.
row and column totals divided by the total frequency
Sex of case (cot death) child
Sex of Control Child Male Female
Male 68 37
Female 59 53
The estimated cell frequencies are shown in parentheses in the
a. Calculate the Odds Ratio (OR) of Cot Death for girls versus
contingency table above.
boys.
Exercises b. What is the Odds Ratio (OR) of Cot Death for boys versus
1. Consider the following data: girls?
Glasses Worn? c. What additional information is needed about the controls to
Marital Status None Contacts ensure that these data are correct?
Spectacles 5 A Market researcher interested in the business publication
Never Married 157 67 74 reading habits of the purchasing agents has assembled the
following data
Currently Married 135 22 132
Business publication frequency of first choice
Previously Married 102 81 85
A 35
Use a c2 procedure to determine if there is any association
between marital status and B 30
what eyewear is used. Can you think of another factor which C 45
may explain this table? D 55
2 Consider the following data regarding Sudden Infant Death
Syndrome (SIDS) • Test the null hypothesis (á = 0.05 ) that there is no difference
State of Residence Tasmania Queens land among frequencies of first choice of tested publications
Status at Age 2 37 88 • If the choice of A and C and that of B and D are aggregated,
Cot Death 6 21 test the null hypothesis at á = 0.05 that there is no
differences.
Still Alive 214 601
6. When should the correction for continuity be used?
(a) Calculate the probability (risk) of Cot Death for each state
7. A die is suspected of being biased. It is rolled 24 times with
(b) Calculate the Relative Risk (RR) of Cot Death for Tasma-
the following result.
nian infants vs QLD infants.
3. Consider the following data for the risk of having a stroke, Outcome Frequency
which was obtained by
1 8
selecting male patients with stroke and determining their
smoking status, and finding a 2 4
group of male controls with the same age distribution, and 3 1
determining the smoking
4 8
status of those controls.
Smoking Status 5 3
Stroke Smokers NonSmokers 6 0
YES (cases) 44 30
NO (controls) 29 45 Conduct a significance test to see if the die is biased.
a Calculate the Odds Ratio (OR) of stroke for smokers vs 8. When can you use either a Chi Square or a z test and reach
nonsmokers. the same conlcusion?
b What can be said about the absolute risk (probability) of 9. Ten subjects are each given two flavors of ice-cream to taste
stroke for the two groups and say whether they like them. One of the 10 liked the first
(smokers and non-smokers)? flavor and seven out of 10 liked the second flavor. Can the
4. Consider the following data for the risk of Cot Death / Chi Qquare test be used to test whether this difference in
Sudden Infant Death Syndrome propotions (.10 versus .70) is significant? Why or why not?
(SIDS). The data are obtained by selecting all cases of Cot Death 10.A recent experiment investigated the relationship between
in one year, and then smoking and urinary incontinence. Of 322 subjects in the

study who were incontinet, 113 were smokers, 51 were 2. Chi-Square Test-Goodness of Fit

former smokers, and 158 had never smoked. Of 284 control A number of marketing problems involve decision situations
subjects who were not incontinent, 68 were smokers, 23 were in which it is important for a marketing manager to know
former smokers, and 193 had never smoked. Do a whether the pattern of frequencies that are observed fit well
significance test to see if there is a relationship between with the expected ones. The appropriate test is the c2 test of
smoking and incontinence goodness of fit. The illustration given below will clarify the role
of c2 in which only one categorical variable is involved.
Appendix
Source: http://davidmlane.com/hyperstat/chi_square.html Problem: In consumer marketing, a common problem that
any marketing manager faces is the selection of appropriate
Glimpses into Application of Chi-Square Tests in Marketing
colors for package design. Assume that a marketing manager
By P.K. Viswanathan wishes to compare five different colors of package design. He is
Adjunct Professor and Management Consultant interested in knowing which of the five is the most preferred
Chennai-India one so that it can be introduced in the market. A random
sample of 400 consumers reveals the following:
Preamble
In this article, an attempt is made to bring into sharp focus the
Package Color Preference by Consumers
use of χ2 in marketing function. By no means, the coverage is
exhaustive. The aim is to make the reader appreciate the
conceptual framework of Chi-Square analysis through problem Red 70
illustrations in marketing. The ideas presented in this article
certainly can be extended to many decision situations in Blue 106
marketing that can fruitfully employ chi-square tests.
Green 80
1. Chi-Square ( ) Analysis- Introduction
χ2
Pink 70
Consider the following decision situations:
1) Are all package designs equally preferred? 2) Are all brands Orange 74
equally preferred? 3) Is their any association between income
level and brand preference? 4) Is their any association between Total 400
family size and size of washing machine bought? 5) Are the
attributes educational background and type of job chosen Do the consumer preferences for package colors show any
independent? The answer to these questions require the help of significant difference?
Chi-Square (χ2) analysis. The first two questions can be un- Solution: If you look at the data, you may be tempted to infer
folded using Chi-Square test of goodness of fit for a single that Blue is the most preferred color. Statistically, you have to
variable while solution to questions 3, 4, and 5 need the help of find out whether this preference could have arisen due to
Chi-Square test of independence in a contingency table. Please chance. The appropriate test statistic is the c2 test of goodness
note that the variables involved in Chi-Square analysis are of fit.
nominally scaled. Nominal data are also known by two names- Null Hypothesis: All colors are equally preferred.
categorical data and attribute data. Alternative Hypothesis: They are not equally preferred
The symbol χ2 used here is to denote the chi-square distribution Observed Expected
Package
whose value depends upon the number of degrees of freedom Frequencies Frequencies
(d.f.). As we know, chi-square distribution is a skewed distribu- Color
(O) (E)
tion particularly with smaller d.f. As the sample size and
therefore the d.f. increases and becomes large, the χ2 distribution Red 70 80 100 1.250

approaches normality. Blue 106 80 676 8.450

χ2 tests are nonparametric or distribution-free in nature. This Green 80 80 0 0.000


means that no assumption needs to be made about the form Pink 70 80 100 1.250
of the original population distribution from which the samples
are drawn. Please note that all parametric tests make the Orange 74 80 36 0.450
assumption that the samples are drawn from a specified or Total 400 400 11.400
assumed distribution such as the normal distribution.
For a meaningful appreciation of the conditions/assumptions Please note that under the null hypothesis of equal preference
involved in using chi-square analysis, please go through the for all colors being true, the expected frequencies for all the
contents of hyperstat on chi-square test meticulously. colors will be equal to 80. Applying the formula
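The package-colour problem worked in this section can also be checked in a few lines of Python. The sketch below is only an illustration (it is not part of the article); it reproduces the chi-square value of 11.400 obtained from the observed and expected frequencies given above.

```python
# Minimal sketch of the chi-square goodness-of-fit test for the package-colour
# preference data (400 consumers, five colours, equal preference under Ho).
from scipy import stats

observed = [70, 106, 80, 70, 74]             # red, blue, green, pink, orange
expected = [80, 80, 80, 80, 80]              # 400 / 5 under the null hypothesis

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
chi2_crit = stats.chi2.ppf(0.95, df=len(observed) - 1)   # 9.488 for 4 d.f.

print(f"chi-square = {chi2:.3f}, critical value = {chi2_crit:.3f}, p = {p_value:.4f}")
if chi2 > chi2_crit:
    print("Reject Ho: the colours are not equally preferred.")
```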

Analyze the cross-tabulation data above using chi-square test of

independence and draw your conclusions.


Solution
Null Hypothesis: There is no association between the brand
we get the computed value of chi-square ( χ2 ) = 11.400 preference and income level (These two attributes are independent).
Alternative Hypothesis: There is association between brand
The critical value of χ2 at 5% level of significance for 4 degrees preference and income level (These two attributes are dependent).
of freedom is 9.488. So, the null hypothesis is rejected. The Let us take a level of significance of 5%.
inference is that all colors are not equally preferred by the
consumers. In particular, Blue is the most preferred one. The In order to calculate the χ2 value, you need to work out the
marketing manager can introduce blue color package in the expected frequency in each cell in the contingency table. In our
market. example, there are 4 rows and 4 columns amounting to 16
3. Chi-Square Test of Independence elements. There will be 16 expected frequencies. For calculating
The goodness-of-fit test discussed above is appropriate for expected frequencies, please go through hyperstat. Relevant data
situations that involve one categorical variable. If there are two tables are given below:
categorical variables, and our interest is to examine whether Observed Frequencies (These are actual
these two variables are associated with each other, the chi-square frequencies observed in the survey)
(χ2) test of independence is the correct tool to use. This test is
very popular in analyzing cross-tabulations in which an Brands
investigator is keen to find out whether the two attributes of
interest have any relationship with each other. Brand1 Brand2 Brand3 Brand4 Total
The cross-tabulation is popularly called by the term “contin- Income
gency table”. It contains frequency data that correspond to the
categorical variables in the row and column. The marginal totals Lower 25 15 55 65 160
of the rows and columns are used to calculate the expected
frequencies that will be part of the computation of the χ2 Middle 30 25 35 30 120
statistic. For calculations on expected frequencies, refer hyperstat
Upper
on χ2 test. 50 55 20 22 147
Problem: A marketing firm producing detergents is interested Middle
in studying the consumer behavior in the context of purchase Upper 60 80 15 18 173
decision of detergents in a specific market. This company is a
major player in the detergent market that is characterized by Total 165 175 125 135 600
intense competition. It would like to know in particular
whether the income level of the consumers influence their Expected Frequencies (These are calculated on the assumption
choice of the brand. Currently there are four brands in the of the null hypothesis being true: That is, income level and
market. Brand 1 and Brand 2 are the premium brands while brand preference are independent)
Brand 3 and Brand 4 are the economy brands.
Brands
A representative stratified random sampling procedure was
Brand1 Brand2 Brand3 Brand4 Total
adopted covering the entire market using income as the basis of
selection. The categories that were used in classifying income Income
level are: Lower, Middle, Upper Middle and High. A sample of
Lower 44.000 46.667 33.333 36.000 160.000
600 consumers participated in this study. The following data
emerged from the study. Middle 33.000 35.000 25.000 27.000 120.000
Cross Tabulation of Income versus Brand chosen (Figures in Upper Middle 40.425 42.875 30.625 33.075 147.000
the cells represent number of consumers)
Upper 47.575 50.458 36.042 38.925 173.000
Brands
Total 165.000 175.000 125.000 135.000 600.000
Brand1 Brand2 Brand3 Brand4 Total
Income Note: The fractional expected frequencies are retained for the
purpose of accuracy. Do not round them.
Lower 25 15 55 65 160
Calculation
Middle 30 25 35 30 120
Compute
Upper Middle 50 55 20 22 147
Upper 60 80 15 18 173 There are 16 observed frequencies (O) and 16 expected frequen-
Total 165 175 125 135 600 cies (E). As in the case of the goodness of fit, calculate this c2

value. In our case, the computed c2 =131.76 as shown below:

12 18.549 21.026 23.337 26.217 32.910
Each cell in the table below shows (O-E)2/(E)
13 19.812 22.362 24.736 27.688 34.528
Brand1 Brand2 Brand3 Brand4
14 21.064 23.685 26.119 29.141 36.123
Income
15 22.307 24.996 27.488 30.578 37.697
Lower 8.20 21.49 14.08 23.36 16 23.542 26.296 28.845 32.000 39.252
Middle 0.27 2.86 4.00 0.33 17 24.769 27.587 30.191 33.409 40.790

Upper Middle 2.27 3.43 3.69 3.71 18 25.989 28.869 31.526 34.805 42.312

Upper 3.24 17.30 12.28 11.25 19 27.204 30.144 32.852 36.191 43.820
20 28.412 31.410 34.170 37.566 45.315
and there are 16 such cells. Adding all these 16 values, we get χ 2
21 29.615 32.671 35.479 38.932 46.797
=131.76
22 30.813 33.924 36.781 40.289 48.268
The critical value of χ2 depends on the degrees of freedom. The 23 32.007 35.172 38.076 41.638 49.728
degrees of freedom = (the number of rows-1) multiplied by 24 33.196 36.415 39.364 42.980 51.179
(the number of colums-1) in any contingency table. In our case,
there are 4 rows and 4 columns. So the degrees of freedom =(4- 25 34.382 37.652 40.646 44.314 52.620

1). (4-1) =9. At 5% level of significance, critical χ2 for 9 d.f = 26 35.563 38.885 41.923 45.642 54.052
16.92. Therefore reject the null hypothesis and accept the 27 36.741 40.113 43.195 46.963 55.476
alternative hypothesis.
28 37.916 41.337 44.461 48.278 56.892
The inference is that brand preference is highly associated with
income level. Thus, the choice of the brand depends on the income 29 39.087 42.557 45.722 49.588 58.301
strata. Consumers in different income strata prefer different brands. 30 40.256 43.773 46.979 50.892 59.703
Specifically, consumers in upper middle and upper income group
prefer premium brands while consumers in lower income and 31 41.422 44.985 48.232 52.191 61.098
middle-income category prefer economy brands. The company 32 42.585 46.194 49.480 53.486 62.487
should develop suitable strategies to position its detergent
products. In the marketplace, it should position economy brands 33 43.745 47.400 50.725 54.776 63.870
to lower and middle-income category and premium brands to 34 44.903 48.602 51.966 56.061 65.247
upper middle and upper income category.
35 46.059 49.802 53.203 57.342 66.619
Chi Square Table
Upper critical values of the chi-square distribution, by degrees of freedom (d.f.)
Probability of exceeding the critical value:
d.f.    0.10      0.05      0.025     0.01      0.001
1       2.706     3.841     5.024     6.635    10.828
2       4.605     5.991     7.378     9.210    13.816
3       6.251     7.815     9.348    11.345    16.266
4       7.779     9.488    11.143    13.277    18.467
5       9.236    11.070    12.833    15.086    20.515
6      10.645    12.592    14.449    16.812    22.458
7      12.017    14.067    16.013    18.475    24.322
8      13.362    15.507    17.535    20.090    26.125
9      14.684    16.919    19.023    21.666    27.877
10     15.987    18.307    20.483    23.209    29.588
11     17.275    19.675    21.920    24.725    31.264
12     18.549    21.026    23.337    26.217    32.910
13     19.812    22.362    24.736    27.688    34.528
14     21.064    23.685    26.119    29.141    36.123
15     22.307    24.996    27.488    30.578    37.697
16     23.542    26.296    28.845    32.000    39.252
17     24.769    27.587    30.191    33.409    40.790
18     25.989    28.869    31.526    34.805    42.312
19     27.204    30.144    32.852    36.191    43.820
20     28.412    31.410    34.170    37.566    45.315
21     29.615    32.671    35.479    38.932    46.797
22     30.813    33.924    36.781    40.289    48.268
23     32.007    35.172    38.076    41.638    49.728
24     33.196    36.415    39.364    42.980    51.179
25     34.382    37.652    40.646    44.314    52.620
26     35.563    38.885    41.923    45.642    54.052
27     36.741    40.113    43.195    46.963    55.476
28     37.916    41.337    44.461    48.278    56.892
29     39.087    42.557    45.722    49.588    58.301
30     40.256    43.773    46.979    50.892    59.703
31     41.422    44.985    48.232    52.191    61.098
32     42.585    46.194    49.480    53.486    62.487
33     43.745    47.400    50.725    54.776    63.870
34     44.903    48.602    51.966    56.061    65.247
35     46.059    49.802    53.203    57.342    66.619
36     47.212    50.998    54.437    58.619    67.985
37     48.363    52.192    55.668    59.893    69.347
38     49.513    53.384    56.896    61.162    70.703
39     50.660    54.572    58.120    62.428    72.055
40     51.805    55.758    59.342    63.691    73.402
41     52.949    56.942    60.561    64.950    74.745
42     54.090    58.124    61.777    66.206    76.084
43     55.230    59.304    62.990    67.459    77.419
44     56.369    60.481    64.201    68.710    78.750
45     57.505    61.656    65.410    69.957    80.077
46     58.641    62.830    66.617    71.201    81.400
47     59.774    64.001    67.821    72.443    82.720
48     60.907    65.171    69.023    73.683    84.037

49 62.038 66.339 70.222 74.919 85.351 88 105.372 110.898 115.841 121.767 134.746
50 63.167 67.505 71.420 76.154 86.661 89 106.469 112.022 116.989 122.942 135.978
51 64.295 68.669 72.616 77.386 87.968 90 107.565 113.145 118.136 124.116 137.208
52 65.422 69.832 73.810 78.616 89.272 91 108.661 114.268 119.282 125.289 138.438
53 66.548 70.993 75.002 79.843 90.573 92 109.756 115.390 120.427 126.462 139.666
54 67.673 72.153 76.192 81.069 91.872 93 110.850 116.511 121.571 127.633 140.893
55 68.796 73.311 77.380 82.292 93.168 94 111.944 117.632 122.715 128.803 142.119
56 69.919 74.468 78.567 83.513 94.461 95 113.038 118.752 123.858 129.973 143.344
57 71.040 75.624 79.752 84.733 95.751 96 114.131 119.871 125.000 131.141 144.567
58 72.160 76.778 80.936 85.950 97.039 97 115.223 120.990 126.141 132.309 145.789
59 73.279 77.931 82.117 87.166 98.324 98 116.315 122.108 127.282 133.476 147.010
60 74.397 79.082 83.298 88.379 99.607 99 117.407 123.225 128.422 134.642 148.230
61 75.514 80.232 84.476 89.591 100.888 100 118.498 124.342 129.561 135.807 149.449
62 76.630 81.381 85.654 90.802 102.166 Lower critical values of chi-square distribution with
63 77.745 82.529 86.830 92.010 103.442 degrees of freedom

64 78.860 83.675 88.004 93.217 104.716


Probability of exceeding the critical value
65 79.973 84.821 89.177 94.422 105.988
0.90 0.95 0.975 0.99 0.999
66 81.085 85.965 90.349 95.626 107.258
1. .016 .004 .001 .000 .000
67 82.197 87.108 91.519 96.828 108.526
2. .211 .103 .051 .020 .002
68 83.308 88.250 92.689 98.028 109.791
3. .584 .352 .216 .115 .024
69 84.418 89.391 93.856 99.228 111.055
4. 1.064 .711 .484 .297 .091
70 85.527 90.531 95.023 100.425 112.317
5. 1.610 1.145 .831 .554 .210
71 86.635 91.670 96.189 101.621 113.577
6. 2.204 1.635 1.237 .872 .381
72 87.743 92.808 97.353 102.816 114.835
7. 2.833 2.167 1.690 1.239 .598
73 88.850 93.945 98.516 104.010 116.092
8. 3.490 2.733 2.180 1.646 .857
74 89.956 95.081 99.678 105.202 117.346
9. 4.168 3.325 2.700 2.088 1.152
75 91.061 96.217 100.839 106.393 118.599
10. 4.865 3.940 3.247 2.558 1.479
76 92.166 97.351 101.999 107.583 119.850
11. 5.578 4.575 3.816 3.053 1.834
77 93.270 98.484 103.158 108.771 121.100
12. 6.304 5.226 4.404 3.571 2.214
78 94.374 99.617 104.316 109.958 122.348
13. 7.042 5.892 5.009 4.107 2.617
79 95.476 100.749 105.473 111.144 123.594
14. 7.790 6.571 5.629 4.660 3.041
80 96.578 101.879 106.629 112.329 124.839
15. 8.547 7.261 6.262 5.229 3.483
81 97.680 103.010 107.783 113.512 126.083
16. 9.312 7.962 6.908 5.812 3.942
82 98.780 104.139 108.937 114.695 127.324
17. 10.085 8.672 7.564 6.408 4.416
83 99.880 105.267 110.090 115.876 128.565
18. 10.865 9.390 8.231 7.015 4.905
84 100.980 106.395 111.242 117.057 129.804
19. 11.651 10.117 8.907 7.633 5.407
85 102.079 107.522 112.393 118.236 131.041
20. 12.443 10.851 9.591 8.260 5.921
86 103.177 108.648 113.544 119.414 132.277
21. 13.240 11.591 10.283 8.897 6.447
87 104.275 109.773 114.693 120.591 133.512

22. 14.041 12.338 10.982 9.542 6.983 61. 47.342 44.038 41.303 38.273 32.459
23. 14.848 13.091 11.689 10.196 7.529 62. 48.226 44.889 42.126 39.063 33.181
24. 15.659 13.848 12.401 10.856 8.085 63. 49.111 45.741 42.950 39.855 33.906
25. 16.473 14.611 13.120 11.524 8.649 64. 49.996 46.595 43.776 40.649 34.633
26. 17.292 15.379 13.844 12.198 9.222 65. 50.883 47.450 44.603 41.444 35.362
27. 18.114 16.151 14.573 12.879 9.803 66. 51.770 48.305 45.431 42.240 36.093
28. 18.939 16.928 15.308 13.565 10.391 67. 52.659 49.162 46.261 43.038 36.826
29. 19.768 17.708 16.047 14.256 10.986 68. 53.548 50.020 47.092 43.838 37.561
30. 20.599 18.493 16.791 14.953 11.588 69. 54.438 50.879 47.924 44.639 38.298
31. 21.434 19.281 17.539 15.655 12.196 70. 55.329 51.739 48.758 45.442 39.036
32. 22.271 20.072 18.291 16.362 12.811 71. 56.221 52.600 49.592 46.246 39.777
33. 23.110 20.867 19.047 17.074 13.431 72. 57.113 53.462 50.428 47.051 40.519
34. 23.952 21.664 19.806 17.789 14.057 73. 58.006 54.325 51.265 47.858 41.264
35. 24.797 22.465 20.569 18.509 14.688 74. 58.900 55.189 52.103 48.666 42.010
36. 25.643 23.269 21.336 19.233 15.324 75. 59.795 56.054 52.942 49.475 42.757
37. 26.492 24.075 22.106 19.960 15.965 76. 60.690 56.920 53.782 50.286 43.507
38. 27.343 24.884 22.878 20.691 16.611 77. 61.586 57.786 54.623 51.097 44.258
39. 28.196 25.695 23.654 21.426 17.262 78. 62.483 58.654 55.466 51.910 45.010
40. 29.051 26.509 24.433 22.164 17.916 79. 63.380 59.522 56.309 52.725 45.764
41. 29.907 27.326 25.215 22.906 18.575 80. 64.278 60.391 57.153 53.540 46.520
42. 30.765 28.144 25.999 23.650 19.239 81. 65.176 61.261 57.998 54.357 47.277
43. 31.625 28.965 26.785 24.398 19.906 82. 66.076 62.132 58.845 55.174 48.036
44. 32.487 29.787 27.575 25.148 20.576 83. 66.976 63.004 59.692 55.993 48.796
45. 33.350 30.612 28.366 25.901 21.251 84. 67.876 63.876 60.540 56.813 49.557
46. 34.215 31.439 29.160 26.657 21.929 85. 68.777 64.749 61.389 57.634 50.320
47. 35.081 32.268 29.956 27.416 22.610 86. 69.679 65.623 62.239 58.456 51.085
48. 35.949 33.098 30.755 28.177 23.295 87. 70.581 66.498 63.089 59.279 51.850
49. 36.818 33.930 31.555 28.941 23.983 88. 71.484 67.373 63.941 60.103 52.617
50. 37.689 34.764 32.357 29.707 24.674 0.90 0.95 0.975 0.99 0.999
51. 38.560 35.600 33.162 30.475 25.368
52. 39.433 36.437 33.968 31.246 26.065 89. 72.387 68.249 64.793 60.928 53.386
53. 40.308 37.276 34.776 32.018 26.765 90. 73.291 69.126 65.647 61.754 54.155
54. 41.183 38.116 35.586 32.793 27.468 91. 74.196 70.003 66.501 62.581 54.926
55. 42.060 38.958 36.398 33.570 28.173 92. 75.100 70.882 67.356 63.409 55.698
56. 42.937 39.801 37.212 34.350 28.881 93. 76.006 71.760 68.211 64.238 56.472
57. 43.816 40.646 38.027 35.131 29.592 94. 76.912 72.640 69.068 65.068 57.246
58. 44.696 41.492 38.844 35.913 30.305 95. 77.818 73.520 69.925 65.898 58.022
59. 45.577 42.339 39.662 36.698 31.020 96. 78.725 74.401 70.783 66.730 58.799
60. 46.459 43.188 40.482 37.485 31.738 97. 79.633 75.282 71.642 67.562 59.577

widowed 100.0 1.6

98. 80.541 76.164 72.501 68.396 60.356


6.3
99. 81.449 77.046 73.361 69.230 61.137
100. 82.358 77.929 74.222 70.065 61.918 4 2 1 1 4 8
divorced 25.0 12.5 12.5 50.0 3.2
Appendix 3.2 1.6 1.6 6.3
Source : Internet
Marketing Research Analysis - Crosstabulation Column 63 63 62 63 251
Marketing Research (MGT 461-1) Total 25.1 25.1 24.7 25.1 100.0
Professor Novak Chi-Square Value DF Significance
Crosstabulation —————— ———— —— —————
How To: Pearson 19.94944 9 .01823
From the SPSS menus (see pages 221-225) choose: Likelihood Ratio 18.34734 9 .03135
• Statistics —> Summarize —> Crosstabs Mantel-Haenszel test for .36234 1 .54721
1. Select the following options: linear association
• Row(s): V76 (marital status) Minimum Expected Frequency - .988
• Column(s): V3 (Country Club) Cells with Expected Frequency < 5 - 12 OF 16 ( 75.0%)
• From the Statistics submenu, click on Chi-square and Approximate
Phi and Cramer’s V Statistic Value ASE1 Val/ASE0 Significance
• From the Cells submenu, click on Observed, Row, ———— ———— ———— ———— ———
Column
Phi .28192 .01823 *1
• Select OK to run.
Cramer’s V .16277 .01823 *1
2. As you will see, we combine categories 2, 3, and 4
*1 Pearson chi-square probability
of V76 to form a new variable, “MARSTAT”
(married/non-married) which has categories 1 and 2. Number of Missing Observations: 1
We then rerun the original analysis using MARSTAT. Note:
3. Rerun your analysis with the following options: • Chi-square is statistically significant (p=.01823 < .05).
• Row(s): MARSTAT (recoded marital status) • However, the number of cells with expected frequency
• Column(s): V3 (Country Club) less than five is 75%, well above the 20% cut-off value we use.
Thus, the significance test is not valid.
• From the Statistics submenu, click on Chi-square and
Phi and Cramer’s V • We must combine categories and re-run the crosstabulation.

• From the Cells submenu, click on Observed, Second Crosstab


Expected, Unstandardized Residuals, Standardized MARSTAT recoded marital status by V3 club membership
Residuals V3 Page 1 of 1
• Select OK to run. Count
First Crosstab Row Pct Alden Chalet Chestnut Lancaste
V76 marital status by V3 club membership Col Pct Ridge r Row
V3 Page 1 of 1 1 2 3 4 Total
Count MARSTAT
Row Pct Alden Chalet Chestnut Lancaste 1 55 57 60 58 230
Col Pct Ridge r Row married 23.9 24.8 26.1 25.2 91.6
1 2 3 4 Total 87.3 90.5 96.8 92.1
V76 1 55 57 60 58 230 2 8 6 2 5 21
married 23.9 24.8 26.1 25.2 91.6 not married 38.1 28.6 9.5 23.8 8.4
87.3 90.5 96.8 92.1 12.7 9.5 3.2 7.9
2 2 5 1 1 9 Column 63 63 62 63 251
single 22.2 55.6 11.1 11.1 3.6 Total 25.1 25.1 24.7 25.1 100.0
3.2 7.9 1.6 1.6
3 4 4 Chi-Square Value DF Significance

—————— ———— —— —————— 2.7 .7 -3.2 -.3

Pearson 3.80446 3 .28337 1.2 .3 -1.4 -.1
Likelihood Ratio 4.20774 3 .23989
Column 63 63 62 63 251
Mantel-Haenszel Total 25.1% 25.1% 24.7% 25.1%
test for 1.72155 1 .18949 100.0%
linear association
Chi-Square Value DF
Minimum Expected Frequency - 5.187
Significance
Approximate —————————— —————- ————
Statistic Value ASE1 Val/ASE0 Significance -
———— ———— -——— ——— ————
Phi .12311 .28337 Pearson 3.80446 3
*1 .28337
Cramer’s V .12311 .28337 Likelihood Ratio 4.20774 3
*1 .23989
Mantel-Haenszel
test for 1.72155 1
Note:
.18949
• Chi-square is not statistically significant (p=.28337 > .05). linear association
• Effect size. The phi-coefficient (.12311 here) is used to
measure the magnitude (size) of the association between Minimum Expected Frequency - 5.187
row and column variables. Phi is independent of sample Approximate
size, and is calculated as the square root of chi-square divided Statistic Value ASE1 Val/
by the sample size —> sqrt(3.804/251). The phi-coefficient ASE0 Significance
is interpreted as follows: -—————— ——— ———— ———
• .1 = small —
Phi .12311
• .3 = moderate
.28337 *1
• .5 = large Cramer’s V .12311
• If the Chi-square statistic were significant, we would .28337 *1
conclude that there was a significant association between *1 Pearson chi-square probability
country club membership and marital status. We would then Number of Missing Observations: 1
use:
This Table Prints
• Row probabilities to compare the marital status groups
• Observed counts, O[ij]
(i.e., rows).
• Expected counts, E[ij], estimated under the null hypothesis
• Column probabilities to compare the country clubs (i.e.
that there is no association between row and column
columns).
variables.
E[ij] = N[i+]N[+j]/N[++], where
Third Crosstab • N[i+] = sum of observed counts in row “i”
• N[+j] = sum of observed counts in column “j”
MARSTAT recoded marital status by V3 club
membership • N[++] = sum of all observed counts
Example: O[11] = (230)(63)/(251) = 57.7
V3 Page 1 of 1 • Residual = R[ij] = O[ij] - E[ij] = the difference between the
Count observed counts, and the counts expected under the null
Exp Val Alden Chalet Chestnut Lancaste hypothesis
Residual Ridge r Row
• Standardized Residuals = R[ij]/sqrt(E[ij]). The sum of
Std Res 1 2 3 4 Total
squared standardized residuals equals the chi-square statistic.
MARSTAT
Thus, standardized residuals are components of chi-square.
1 55 57 60 58 230
It is often useful to inspect the pattern of standardized
married 57.7 57.7 56.8 57.7 91.6%
residuals to determine the nature of significant association
-2.7 -.7 3.2 .3
between two variables.
-.4 -.1 .4 .0

2 8 6 2 5 21
not married 5.3 5.3 5.2 5.3 8.4%
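The quantities printed in these crosstabulations can be reproduced directly; here is a brief Python sketch (numpy and scipy assumed to be available) for the recoded 2 x 4 marital status by club membership table:

import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [55, 57, 60, 58],   # married
    [ 8,  6,  2,  5],   # not married
])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
std_resid = (observed - expected) / np.sqrt(expected)

n = observed.sum()
phi = np.sqrt(chi2 / n)                                       # effect size
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

print(np.round(expected, 1))        # e.g. E[1,1] = 230*63/251 = 57.7
print(np.round(std_resid, 2))       # components of the chi-square statistic
print(round(chi2, 2), dof, round(p, 4), round(phi, 4), round(cramers_v, 4))
# chi-square is about 3.80 with 3 d.f. (p about .28), and phi and Cramer's V
# are both about .12, matching the SPSS listing above.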


LESSON 26:
ANALYSIS OF VARIANCE (ANOVA)

Students, the tests we have learned up to this point allow us to test hypotheses that examine the difference between only two means.
• Analysis of Variance, or ANOVA, allows us to test the difference between two or more means.
• ANOVA does this by examining the ratio of the variability between conditions and the variability within each condition.
A t-test would compare the likelihood of observing the difference in the mean number of words recalled for each group. An ANOVA test, on the other hand, compares the variability that we observe between the two conditions to the variability observed within each condition. Recall that we measure variability as the sum of the squared differences of each score from the mean. When we actually calculate an ANOVA we will use a short-cut formula.
Thus, when the variability that we predict (between the groups) is much greater than the variability we do not predict (within each group), we conclude that our treatments produce different results.
Notation:
Xij = the ith observation in the jth group
nj = the number of observations in group j
n = the total number of observations in all groups combined
c = the number of groups or levels
X̄ = grand mean = (Σj Σi Xij) / n
X̄j = mean for group j
The sums of squares are defined as follows:
SST = Σj Σi (Xij − X̄)²
SSA = Σj nj (X̄j − X̄)²
SSW = Σj Σi (Xij − X̄j)²
Hypothesis test format:
1. H0: µ1 = µ2 = … = µc
   H1: Not all µj are equal
2. F = MSA / MSW
3. α = the chosen level of significance
4. If Fcalc < F(c−1, n−c, α), do not reject H0. If Fcalc > F(c−1, n−c, α), reject H0.
5. ANOVA Table:
Source of Variation   Degrees of Freedom   Sums of Squares   Mean Squares          F
Among Groups          c − 1                SSA               MSA = SSA/(c − 1)     F = MSA/MSW
Within Groups         n − c                SSW               MSW = SSW/(n − c)
Total                 n − 1                SST
6. Accept the null hypothesis or reject the null hypothesis.
7. Conclude either that there are no significant differences among the c means, or that there is at least one inequality among the c means.
An Illustrative Numerical Example for ANOVA
Let us introduce ANOVA in its simplest form by a numerical illustration.
Example: Consider the following (small, integer-valued, chosen for illustration while saving space) random samples from three different populations.
Hypothesis:
Null Hypothesis, H0: µ1 = µ2 = µ3
Alternative Hypothesis, HA: at least two of the means are not equal (not all µj are equal).
Significance level α = 0.05; the critical value from the F-table is F(0.05, 2, 12) = 3.89.
        Sample 1   Sample 2   Sample 3
           2          3          5
           3          4          5
           1          3          5
           3          5          3
           1          0          2
SUM       10         15         20
Mean       2          3          4
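The critical value quoted above can be reproduced from the F distribution in scipy (a small sketch; scipy is an assumption here and is not part of the original lesson):

from scipy import stats

# Upper 5% point of the F distribution with 2 and 12 degrees of freedom
f_crit = stats.f.ppf(0.95, dfn=2, dfd=12)
print(round(f_crit, 2))   # 3.89, matching the value read from the F-table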

Computation of sample SST
With the grand mean = 3, first take the difference between each observation and the grand mean, and then square it for each data point.
        Sample 1   Sample 2   Sample 3
           1          0          4
           0          1          4
           4          0          4
           0          4          0
           4          9          1
SUM        9         14         13
Therefore SST = 36, with d.f. = 15 − 1 = 14.
Computation of sample SSB
Second, let all the data in each sample have the same value as the mean in that sample. This removes any variation WITHIN the samples. Compute the SS differences from the grand mean.
        Sample 1   Sample 2   Sample 3
           1          0          1
           1          0          1
           1          0          1
           1          0          1
           1          0          1
SUM        5          0          5
Therefore SSB = 10, with d.f. = 3 − 1 = 2.
Computation of sample SSW
Third, compute the SS differences within each sample using their own sample means. This provides the SS deviation WITHIN all samples.
        Sample 1   Sample 2   Sample 3
           0          0          1
           1          1          1
           1          0          1
           1          4          1
           1          9          4
SUM        4         14          8
Therefore SSW = 26, with d.f. = 3(5 − 1) = 12.
The results are: SST = SSB + SSW, and d.f.(SST) = d.f.(SSB) + d.f.(SSW), as expected.
Now, construct the ANOVA table for this numerical example by plugging the results of your computation into the ANOVA table, and demonstrate that SST = SSB + SSW.
The ANOVA Table
Sources of Variation   Sum of Squares   Degrees of Freedom   Mean Squares   F-Statistic
Between Samples             10                  2                5.00          2.30
Within Samples              26                 12                2.17
Total                       36                 14
Conclusion: Since F = 2.30 is smaller than the critical value 3.89, there is not enough evidence to reject the null hypothesis H0.
Logic Behind ANOVA
First, let us try to explain the logic and then illustrate it with a simple example. In performing an ANOVA test, we are trying to determine whether a certain number of population means are equal. To do that, we measure the difference between the sample means and compare it to the variability within the sample observations. That is why the test statistic is the ratio of the between-sample variation (MST) to the within-sample variation (MSE). If this ratio is close to 1, there is evidence that the population means are equal.
Here is a Hypothetical Example
"Many people believe that men get paid more in the business world than women, simply because they are male. To justify or reject such a claim, you could look at the variation within each group (one group being women's salaries and the other being men's salaries) and compare that to the variation between the means of randomly selected samples of each population. If the variation in the women's salaries is much larger than the variation between the men's and women's mean salaries, one could say that, because the variation is so large within the women's group, this may not be a gender-related problem."
Now, getting back to our numerical example, we notice that, given the test conclusion and the ANOVA test's conditions, we may conclude that these three populations are in fact the same population. Therefore, the ANOVA technique could be used as a measuring tool and statistical routine for quality control, as described below using our numerical example.
Construction of the Control Chart for the Sample Means
Under the null hypothesis the ANOVA concludes that µ1 = µ2 = µ3; that is, we have a "hypothetical parent population." The question is, what is its variance? The estimated variance is 36/14 = 2.57. Thus, the estimated standard deviation is 1.60 and the estimated standard deviation for the means is 1.6/√5 = 0.71.
Under the conditions of ANOVA, we can construct a control chart for the sample means with warning limits = 3 ± 2(0.71) and action limits = 3 ± 3(0.71). The following figure depicts the control chart.
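A compact way to check these sums of squares and the F ratio is sketched below in Python (numpy and scipy are assumed to be installed; this is an illustration, not part of the original lesson):

import numpy as np
from scipy.stats import f_oneway

samples = [np.array([2, 3, 1, 3, 1]),
           np.array([3, 4, 3, 5, 0]),
           np.array([5, 5, 5, 3, 2])]

grand_mean = np.mean(np.concatenate(samples))                        # 3.0
sst = sum(((s - grand_mean) ** 2).sum() for s in samples)            # 36
ssb = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in samples)    # 10
ssw = sum(((s - s.mean()) ** 2).sum() for s in samples)              # 26

f_stat, p_value = f_oneway(*samples)
print(sst, ssb, ssw)                          # 36.0 10.0 26.0
print(round(f_stat, 2), round(p_value, 2))    # F is about 2.3, p about 0.14

Because F is below the critical value of 3.89, the null hypothesis is not rejected, just as concluded in the ANOVA table above.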

Manova Y By Gp(1,5)/Print=Homogeneity(bartlett)/

Npar Tests K-w Y By Gp(1,5)/


Finish
ANOVA like two population t-test can go wrong when the
equality of variances condition is not met.
General Rule of 2, Homogeneity of Variance: Checking
the equality of variances
For 3 or more populations, there is a practical rule known as the
“Rule of 2”. According to this rule, one divides the highest
variance of a sample by the lowest variance of the other sample.
Given that the sample sizes are almost the same, and the value
of this division is less than 2, then, the variations of the
populations are almost the same.
Example: Consider the following three random samples from
three populations, P1, P2, P2

P1 P2 P3

25 17 8

25 21 10

20 17 14

18 25 16

13 19 12

6 21 14

5 15 6

22 16 16

25 24 13

10 23 6
The summary statistics and the ANOVA table are computed to
be:
Variable N Mean St.Dev SE Mean
P1 10 16.90 7.87 2.49
P2 10 19.80 3.52 1.11
P3 10 11.50 3.81 1.20

Analysis of Variance
SPSS program for ANOVA: More Than Two Independent
Means: Source DF SS MS F p-value
$Spss/Output=4-1.out1 Factor 2 79.40 39.70 4.38 0.023
Title ‘Analysis Of Variance - 1st Iteration’ Error 27 244.90 9.07
Data List Free File=’a.in’/Gp Y Total 29 324.30
Oneway Y By Gp(1,5)/Ranges=Duncan With an F = 4.38 and a p-value of .023, we reject the null at a = 0.05.
/Statistics Descriptives Homogeneity This is not good news, since ANOVA, like two sample t-test, can
go wrong when the equality of variances condition is not met
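Applying the Rule of 2 to these samples can be sketched in a few lines of Python (numpy assumed; the values are the P1, P2 and P3 data listed above):

import numpy as np

p1 = np.array([25, 25, 20, 18, 13, 6, 5, 22, 25, 10])
p2 = np.array([17, 21, 17, 25, 19, 21, 15, 16, 24, 23])
p3 = np.array([8, 10, 14, 16, 12, 14, 6, 16, 13, 6])

variances = [p.var(ddof=1) for p in (p1, p2, p3)]     # sample variances
ratio = max(variances) / min(variances)
print([round(v, 1) for v in variances], round(ratio, 1))
# The largest-to-smallest variance ratio is about 5, well above 2, so by the
# Rule of 2 the equal-variance condition is doubtful for these samples.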
Statistics 1

Self Assessment
In order to compare the effectiveness of three tax-preparation methods, ten tax preparers were randomly assigned to one of the methods. They were all given a hypothetical return contrived for the purpose of the experiment. The number of minutes each person required for completion of the return was recorded. Is there a significant difference in the average time to prepare the returns among the three methods? Use α = .05.
[Normal probability plot of the residuals (RESI1): Average = 0, StDev = 2.44949; Kolmogorov-Smirnov normality test: D+ = 0.208, D− = 0.193, D = 0.208, N = 10, approximate p-value > 0.15.]
One-way Analysis of Variance
Method
Analysis of Variance for time
I II III
Source DF SS MS F P
15 10 18
method 2 98.40 49.20 6.38 0.026
20 15 19
Error 7 54.00 7.71
19 11 23
Total 9 152.40
14
Individual 95% CIs For Mean
Solution of the self assessment exercise by computer
Based on Pooled StDev
MINITAB
Level N Mean StDev ——+————+————
Interpreting the Output +————+-
1. Examine the p-value for the Bartlett’s test 1 4 17.000 2.944 (———*———)
p-value < α => reject Ho => variances unequal => STOP 2 3 12.000 2.646 (———*———)
3 3 20.000 2.646 (———*———)
p-value ≥ α => DNR Ho => variances equal => continue
2. Examine the p-value for the Kolmogorov-Smirnov test +-

p-value < α => reject Ho => not normal=> STOP Pooled StDev = 2.777 10.0 15.0 20.0 25.0

Tukey’s Pairwise Comparisons


p-value ≥ α => DNR Ho => normally dist => continue
Family error rate = 0.0500
3. Examine the p-value for the ANOVA test Individual error rate = 0.0214
p-value < α => reject Ho => at least one significant difference Critical value = 4.17
among means => continue Intervals for (column level mean) - (row level mean)
1 2
p-value ≥ α => DNR Ho => no significant differences among
2 -1.255
means => STOP
11.255
4. Examine the confidence intervals for each combination of
3 -9.255 -14.687
means
3.255 -1.313
interval contains 0 (signs unlike) => no significant difference
Example of How An Anova Should Be Written Up
interval doesn’t contain 0 (signs same) => significant difference
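The same analysis can be reproduced outside MINITAB; a rough Python sketch (scipy and statsmodels are assumed to be installed) for the tax-preparation data is:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

method1 = [15, 20, 19, 14]
method2 = [10, 15, 11]
method3 = [18, 19, 23]

f_stat, p_value = f_oneway(method1, method2, method3)
print(round(f_stat, 2), round(p_value, 3))        # about F = 6.38, p = 0.026

times = np.array(method1 + method2 + method3, dtype=float)
groups = np.array([1] * 4 + [2] * 3 + [3] * 3)
print(pairwise_tukeyhsd(times, groups, alpha=0.05))
# Only the method 2 versus method 3 comparison is significant, matching the
# MINITAB Tukey intervals shown in the output.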
Check of Assumption Of Equal Variances
Homogeneity of Variance
H0: The variances are equal.
Bartlett’s Test (normal distribution)
H1: The variances are not equal
Test Statistic: 0.033
Homogeneity of Variance
P-Value : 0.984
Bartlett’s Test (normal distribution)
Test Statistic: 0.033
P-Value : 0.984

Since the p-value = 0.984 > 0.05, we DNR Ho. Family error rate = 0.0500

Therefore, there are no significant differences among the Individual error rate = 0.0214
variances. The assumption of equal variances is met. Critical value = 4.17
Check the Assumption of Normality Intervals for (column level mean) - (row level mean)
. H0: The residuals fit a normal distribution. 1 2
H1: The residuals do not fit a normal distribution. 2 -1.255
Normal Probability Plot 11.255
3 -9.255 -14.687
.999 3.255 -1.313
.99
.95
Methods 2 and 3 are significantly different from each other.
Method 1 is not significantly different from either method 2 or
Probability

.80
.50 3. By examining the means displayed with the ANOVA
.20
analysis, I would recommend Method 2 as the method tax
.05
.01 prepares should use since the group using this method requires
.001 significantly less time to prepare returns on average than the
-3 -2 -1 0 1 2 3
group using method 3.
Average: 0
RESI1
Kolm ogorov-Sm irnov Normal ity Test Self Assessment
StDev: 2.44949 D+: 0.208 D-: 0.193 D : 0.208
N: 10 Approx im ate P-Valu e > 0.15 Alison, a fellow psychology major, decides to replicate Gregory’s
honors thesis study (see example for independent-samples t-
test) of fragrance and memory, but adds an additional
Since the p-value > 0.15 > 0.05, we DNR Ho. condition. There are three groups in Alison’s study: (a) read
passage on scented paper and test recall using paper scented
Therefore, the normality assumption is met. with same fragrance, (b) read passage and test recall using
unscented paper, and (c) read passage on scented paper but test
Check For Differences Among The Means
recall using unscented paper. She records the same dependent
1. H0: µ 1 = µ 2 = µ 3 variable (number of facts correctly recalled) and statistically
H1: at least one µj not equal compares the three groups of scores. Here are the data that were
2. ANOVA recorded:
Scented paper Unscented paper Scented/
3. α= 0.05
both times both times unscented paper
4. If the p-value < α reject H0.
32 23 22
If the p-value ≥ α, do not reject H0.
29 20 17
5. One-way Analysis of Variance
26 14 15
Analysis of Variance for time
mean = 29.00 mean = 19.00 mean = 18.00
Source DF SS MS F P
s= 3.00 s = 4.58 s = 3.61
code 2 98.40 49.20 6.38 0.026
Enter the data using two columns. Name the first column
Error 7 54.00 7.71 “group” and use a 1, 2, or 3 (1= both scented, 2=both
Total 9 152.40 unscented, 3=scented/unscented) to designate the group for
Individual 95% CIs For Mean the score. Name the second column “recall,” and type in the
recall score. Your data entry will look like this:
Based on Pooled StDev
Group Recall
Level N Mean StDev ——+———+——+———+-
1 32
1 4 17.000 2.944 (———*———)
1 29
2 3 12.000 2.646 (———*———)
1 26
3 3 20.000 2.646 (———*———)
2 23
——+————+————+————+- 2 20
Pooled StDev = 2.777 10.0 15.0 20.0 25.0 2 14
6. Since the p-value = 0.026 < 0.05, we reject Ho. 3 22
7. Therefore, there is at least one significant difference in the 3 17
average time it takes to complete the tax return among the three
3 15
methods.
Tukey’s pairwise comparisons

Null hypothesis H0: µ1 = µ2 = µ3

Alternative Hypothesis, HA: at least two of the means are not
equal.
(Not all µj are equal)
Analyze > Compare Means > One-Way ANOVA
Dependent list: recall
Factor: group
Post-hoc > Tukey
Options > Descriptive
F= 7.74 critical F= 5.14 Statistical
decision: Reject H0
Source SS df MS F p
Between 222.00 2 111.00 7.74 .022
Within 86.00 6 14.33
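For comparison, the same one-way ANOVA can be run in Python with statsmodels' formula interface (a hedged sketch; pandas and statsmodels are assumed to be installed, and the column names are chosen here only for illustration):

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "recall": [32, 29, 26, 23, 20, 14, 22, 17, 15],
})

model = ols("recall ~ C(group)", data=df).fit()
print(anova_lm(model))
# The C(group) row reproduces SS = 222 with df = 2 and F of about 7.74,
# and the residual row gives SS = 86 with df = 6.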
Summary of Tukey results (see SPSS output)
Group 1 compared to group 2: statistically significant
Group1 compared to group 3: statistically significant
Group 2 compared to group 3: not statistically significant
Means from descriptives section of output: Group 1: 29.00
Group 2: 19.00
Group 3: 18.00
Conclusion
A one-way ANOVA indicated that there were significant
differences in recall across the three fragrance conditions, F
(2,6)= 7.74, p<.022. Post-hoc Tukey comparisons indicated
that recall was significantly better using scented paper during
both reading and recall (M=29.00) than for either scented paper
during reading only (M=18.00), or no scented paper at all
(M=19.00).
If the results had NOT been statistically significant, the
Tukey tests would not have been performed, and the
conclusion would read:
A one-way ANOVA indicated that recall did not differ signifi-
cantly across the three fragrance conditions.


LESSON 27:
APPLICATIONS OF ANOVA

To understand the advanced theory and its application, the Group 1 Group 2
article below is going to prove very useful. This article has been Observation 1 2 6
taken from the website www.Quikmba.com. Observation 2 3 7
In the class you are expected to discuss on use of this advanced Observation 3 1 5
with special focus on Latin Square Design. In the class you will Mean 2 6
be doing more exercises on ANOVA Sums of Squares (SS) 2 2
This article includes a general introduction to ANOVA and a Overall Mean 4
discussion of the general topics in the analysis of variance Total Sums of Squares 28
techniques, including repeated measures designs, , unbalanced
and incomplete designs, contrast effects, post-hoc comparisons, The means for the two groups are quite different (2 and 6,
assumptions, etc. respectively).
Basic Ideas The sums of squares within each group are equal to 2. Adding
The Purpose of Analysis of Variance them together, we get 4.
In general, the purpose of analysis of variance (ANOVA) is to If we now repeat these computations, ignoring group member-
test for significant differences between means. ship, that is, if we compute the total SS based on the overall
Elementary Concepts provides a brief introduction into the mean, we get the number 28. In other words, computing the
basics of statistical significance testing. If we are only comparing variance (sums of squares) based on the within-group variabil-
two means, then ANOVA will give the same results as the t test ity yields a much smaller estimate of variance than computing it
for independent samples (if we are comparing two different based on the total variability (the overall mean). The reason for
groups of cases or observations), or the t test for dependent this in the above example is of course that there is a large
samples (if we are comparing two variables in one set of cases difference between means, and it is this difference that accounts
or observations). for the difference in the SS.
If you are not familiar with those tests you may at this point In fact, if we were to perform an ANOVA on the above data,
want to “brush up” on your knowledge about those tests by we would get the following result:
reading Basic Statistics and Tables. MAIN EFFECT
Why the Name Analysis of Variance? SS df MS F p
It may seem odd to you that a procedure that compares means Effect 24.0 1 24.0 24.0 .008
is called analysis of variance. However, this name is derived Error 4.0 4 1.0
from the fact that in order to test for statistical significance
As you can see, in the above table the total SS (28) was
between means, we are actually comparing (i.e., analyzing)
partitioned into the SS due to within-group variability (2+2=4)
variances.
and variability due to differences between means (28-(2+2)=24).
• The Partitioning of Sums of Squares
SS Error and SS Effect. The within-group variability (SS) is
• Multi-Factor ANOVA usually referred to as Error variance. This term denotes the fact
• Interaction Effects that we cannot readily explain or account for it in the current
The Partioning of Sums of Squares design. However, the SS Effect we can explain.
At the heart of ANOVA is the fact that variances can be divided Namely, it is due to the differences in means between the
up, that is, partitioned. groups. Put another way, group membership explains this
variability because we know that it is due to the differences in
Remember that the variance is computed as the sum of squared
means.
deviations from the overall mean, divided by n-1 (sample size
minus one). Significance Testing
Thus, given a certain n, the variance is a function of the sums The basic idea of statistical significance testing is discussed in
of (deviation) squares, or SS for short. Partitioning of variance Elementary Concepts. Elementary Concepts also explains why very
works as follows. many statistical test represent ratios of explained to unexplained
variability. ANOVA is a good example of this.
Consider the following data set:
Here, we base this test on a comparison of the variance due to
the between- groups variability (called Mean Square Effect, or
MS effect) with the within- group variability (called Mean Square
Error, or Mserror; this term was first used by Edgeworth, 1885).

Under the null hypothesis (that there are no mean differences Gender. Imagine that in each group we have 3 males and 3

between groups in the population), we would still expect some females. We could summarize this design in a 2 by 2 table
minor random fluctuation in the means for the two groups
when taking small samples (as in our example). Experimental Experimental
Therefore, under the null hypothesis, the variance estimated Group 1 Group 2
based on within-group variability should be about the same as Males 2 6
the variance due to between-groups variability. We can compare 3 7
those two estimates of variance via the F test (see also F 1 5
Distribution), which tests whether the ratio of the two variance Mean 2 6
estimates is significantly greater than 1. Females 4 8
In our example above, that test is highly significant, and we 5 9
would in fact conclude that the means for the two groups are 3 7
significantly different from each other. Mean 4 8
Summary of the Basic Logic of ANOVA
To summarize the discussion up to this point, the purpose of Before performing any computations, it appears that we can
analysis of variance is to test differences in means (for groups or partition the total variance into at least 3 sources: (1) error
variables) for statistical significance. This is accomplished by (within-group) variability, (2) variability due to experimental
analyzing the variance, that is, by partitioning the total variance group membership, and (3) variability due to gender.
into the component that is due to true random error (i.e., (Note that there is an additional source — interaction — that we
within- group SS) and the components that are due to will discuss shortly.)
differences between means. These latter variance components What would have happened had we not included gender as a
are then tested for statistical significance, and, if significant, we factor in the study but rather computed a simple t test? If you
reject the null hypothesis of no differences between means, and compute the SS ignoring the gender factor (use the within-group
accept the alternative hypothesis that the means (in the popula- means ignoring or collapsing across gender; the result is
tion) are different from each other. SS=10+10=20), you will see that the resulting within-group SS
Dependent and independent variables. The variables that are is larger than it is when we include gender (use the within-
measured (e.g., a test score) are called dependent variables. The group, within-gender means to compute those SS; they will be
variables that are manipulated or controlled (e.g., a teaching equal to 2 in each group, thus the combined SS-within is equal
method or some other criterion used to divide observations to 2+2+2+2=8).
into groups that are compared) are called factors or independent This difference is due to the fact that the means for males are
variables systematically lower than those for females, and this difference in
Multi-Factor ANOVA means adds variability if we ignore this factor. Controlling for
In the simple example above, it may have occurred to you that error variance increases the sensitivity (power) of a test.
we could have simply computed a t test for independent This example demonstrates another principal of ANOVA that
samples to arrive at the same conclusion. And, indeed, we makes it preferable over simple two-group t test studies: In
would get the identical result if we were to compare the two ANOVA we can test each factor while controlling for all others;
groups using this test. this is actually the reason why ANOVA is more statistically
However, ANOVA is a much more flexible and powerful powerful (i.e., we need fewer observations to find a significant
technique that can be applied to much more complex research effect) than the simple t test.
issues. Interaction Effects
Multiple Factors There is another advantage of ANOVA over simple t-tests:
The world is complex and multivariate in nature, and instances ANOVA allows us to detect interaction effects between variables,
when a single variable completely explains a phenomenon are rare. and, therefore, to test more complex hypotheses about reality.
For example, when trying to explore how to grow a bigger Let us consider another example to illustrate this point. (The
tomato, we would need to consider factors that have to do with term interaction was first used by Fisher, 1926.)
the plants’ genetic makeup, soil conditions, lighting, tempera- Main effects, two-way interaction. Imagine that we have a
ture, etc. sample of highly achievement-oriented students and another
Thus, in a typical experiment, many factors are taken into of achievement “avoiders.” We now create two random halves
account. One important reason for using ANOVA methods in each sample, and give one half of each sample a challenging
rather than multiple two-group studies analyzed via t tests is test, the other an easy test.
that the former method is more efficient, and with fewer We measure how hard the students work on the test. The
observations we can gain more information. means of this (fictitious) study are as follows
Let us expand on this statement. Achievement- Achievement-
Controlling for factors. Suppose that in the above two-group oriented avoiders
Challenging Test 10 5
example we introduce another grouping factor, for example,
Easy Test 5 10

How can we summarize these results? Is it appropriate to A General Way To Express Interactions

conclude that (1) challenging tests make students work harder, A general way to express all interactions is to say that an effect is
(2) achievement-oriented students work harder than achieve- modified (qualified) by another effect. Let us try this with the
ment- avoiders? None of these statements captures the essence two-way interaction above. The main effect for test difficulty is
of this clearly systematic pattern of means. modified by achievement orientation.
The appropriate way to summarize the result would be to say For the three-way interaction in the previous paragraph, we may
that challenging tests make only achievement-oriented students summarize that the two-way interaction between test difficulty
work harder, while easy tests make only achievement- avoiders and achievement orientation is modified (qualified) by gender.
work harder. If we have a four-way interaction, we may say that the three-way
In other words, the type of achievement orientation and test interaction is modified by the fourth variable, that is, that there
difficulty interact in their effect on effort; specifically, this is an are different types of interactions in the different levels of the
example of a two-way interaction between achievement orientation fourth variable.
and test difficulty. As it turns out, in many areas of research five- or higher- way
Note that statements 1 and 2 above describe so-called main effects. interactions are not that uncommon.
Higher order interactions Appendix
While the previous two-way interaction can be put into words Repeated-Measures Analysis of Variance
relatively easily, higher order interactions are increasingly difficult
SPSS -outputA
to verbalize. Imagine that we had included factor Gender in the
Does background music affect thinking, specifically semantic
achievement study above, and we had obtained the following
processing? To investigate this,10 participants solved anagrams
pattern of means:
while listening to three different types of background music. A
Females Achievement- Achievement- within-subjects design was used, so all participants were tested
oriented avoiders in every condition. Anagrams were chosen of equal difficulty,
Challenging Test 10 5 and randomly paired with the different types of music. The
Easy Test 5 10 order of conditions was counterbalanced for the participants.
Males Achievement- Achievement- To eliminate verbal interference, instrumental Amuzak@
oriented avoiders medleys of each music type were used, which were all 10
Challenging Test 1 6 minutes in length. The music was chosen to represent Classical,
Easy Test 6 1 Easy listening, and Country styles. Chosen for use were
Beethoven, the BeeGees, and Garth Brooks. The number of
anagrams solved in each condition was recorded for analysis.
How could we now summarize the results of our study? Here are the data recorded for the participants:
Graphs of means for all effects greatly facilitate the interpreta-
Beethoven BeeGees Brooks
tion of complex effects. The pattern shown in the table above
(and in the graph below) represents a three-way interaction 14 14 16
between factors. 16 10 12
11 8 9
17 15 15
13 10 12
14 8 11
15 10 8
12 12 12
10 13 14
13 11 10
Use 3 columns to enter the data just as they appear above. Call
them “bthoven” “beegees” and “brooks”
SPSS does not perform post-hoc comparisons for repeated-
measures analyses.You will have to use the formula for the
Thus we may summarize this pattern by saying that for females Tukey test and calculate the critical difference by hand for this
there is a two-way interaction between achievement-orientation problem. Remember, the general formula is:
type and test difficulty: Achievement-oriented females work Tukey This gives you a critical difference (CD). Any two means
harder on challenging tests than on easy tests, achievement- which differ by this amount (the CD) or more are significantly
avoiding females work harder on easy tests than on difficult different from each other.
tests. For males, this interaction is reversed. As you can see, the Note: MSwg = MS within groups (from ANOVA table
description of the interaction has become much more involved. output)

n = number of scores per condition $Spss/Output=A.out

q = studentized range statistic (Table in back of textbook) Title ‘Problem 4.2 Chi Square; Table 4.18’
H0: (mu #1 = mu #2 = mu #3) Data List Free File=’a.in’/Freq Sample Nom
H1: (1 or more mu values are unequal) Weight By Freq
Analyze > General Linear Model > Repeated Measures Variable Labels
Within-Subjects factor name: music Sample ‘Sample 1 To 4’
Number of levels: 3 Nom ‘Less Or More Than 8’
click Add > Define Value Labels
Click over variables corresponding to level 1, 2, 3 Sample 1 ‘Sample1’ 2 ‘Sample2’ 3 ‘Sample3’ 4 ‘Sample4’/
Options > Descriptive Statistics Nom 1 ‘Less Than 8’ 2 ‘Gt/Eq To 8’/
(optional, if you want) Plots > Horizontal axis: music Crosstabs Tables=Nom By Sample/
F= 4.42 critical F= 3.55 Statistical decision: Reject Statistic 1
H0 Finish
Source SS df MS F p
Between 29.87 2 14.93 4.42 .027
Subjects 91.50 9 10.17
Within 60.80 18 3.38
Tukey = 2.098 = 2.10; Any two groups differing by 2.10 or
more are signficantly different with the Tukey test at alpha = .05.
Summary of Tukey Results
Means from descrptives section of output: Group 1: 13.50
Group 2: 11.10
Group 3: 11.90
1 compared to 2: difference of 2.40 (greater than our CD, so this
is statistically significant)
1 compared to 3: difference of 1.60 (less than our CD, so this is
not statistically significant)
2 compared to 3: difference of .80 (less than our CD so this is
not statistically significant
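These figures can be checked with a short Python sketch of the repeated-measures partition and the Tukey critical difference (numpy and scipy assumed; q = 3.61 is the studentized-range value for 3 groups and 18 error d.f. used in the hand calculation above):

import numpy as np
from scipy.stats import f as f_dist

# rows = participants, columns = Beethoven, BeeGees, Brooks
scores = np.array([
    [14, 14, 16], [16, 10, 12], [11,  8,  9], [17, 15, 15], [13, 10, 12],
    [14,  8, 11], [15, 10,  8], [12, 12, 12], [10, 13, 14], [13, 11, 10],
])

grand = scores.mean()
ss_total    = ((scores - grand) ** 2).sum()
ss_subjects = (3 * (scores.mean(axis=1) - grand) ** 2).sum()
ss_music    = (10 * (scores.mean(axis=0) - grand) ** 2).sum()
ss_error    = ss_total - ss_subjects - ss_music

f_stat = (ss_music / 2) / (ss_error / 18)
p_value = f_dist.sf(f_stat, 2, 18)
cd = 3.61 * np.sqrt((ss_error / 18) / 10)       # Tukey critical difference
print(round(ss_music, 2), round(ss_subjects, 2), round(ss_error, 2))
print(round(f_stat, 2), round(p_value, 3), round(cd, 2))
# Roughly 29.87, 91.50 and 60.80 for the sums of squares, F about 4.42 with
# p about .027, and a critical difference near 2.10.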
Conclusion
A one-way repeated-measures ANOVA indicated that there were
significant differences in the number of anagrams solved across
the three background instrumental (muzak) conditions, F
(2,18)= 4.42, p<.027. Post-hoc Tukey comparisons indicated
that more anagrams were solved while listening to Beethoven
(M=13.5) than while listening to the BeeGees (M=11.1).
ANOVA: More Than Two Independent Means: SPSS
program
$Spss/Output=A.out
Title ‘Analysis Of Variance - 1st Iteration’
Data List Free File=’a.in’/Gp Y
Oneway Y By Gp(1,5)/Ranges=Duncan
Statistics 1
Manova Y By Gp(1,5)/Print=Homogeneity(bartlett)/
Npar Tests K-w Y By Gp(1,5)/

Finish
Chi Square Test: Dependency

UNIT IV: MULTIVARIATE ANALYSIS
LESSON 28:
APPLICATION OF CORRELATION TECHNIQUE IN RESEARCH METHODOLOGY

Students, we all know that research methods provide a sound Diagram: Scatter plots showing data sets with different correlations

back ground for decision-making. Managers in day-to-day life Note:


make decisions –personal and professional based on predic-
1. We can also perform a hypothesis test to test the significance
tions of the future events.
of the correlation coefficient.
Now, the question arises: How will you predict for future?
• The null hypothesis in this case would be that ,”there is no
What forms the base for forecasts?
correlation between the two variables (i.e. the population
Decision-makers basically rely on relationships, which can either correlation coefficient, r, is zero).
be intuitive or calculative. If you know how the known is
2. It is harder to spot a correlation close to zero, but these are
related to the future events, it will prove very useful in decision-
the ones we come across most often
making. Today we will be emphasizing on determination of
relationship between variables. 3. You should also remember that
Regression and Correlation analysis helps us in determining the • The correlation coefficient is unaffected by units of
nature and strength of the relationship between two variables. measurement
Basically correlation technique helps us in determining the • Correlations of less than 0·7 should be interpreted
degree to which the variables are related to each other. cautiously
You should be aware – how the correlation co-efficient is • Correlation does not imply causation
interpreted? What are the basic problems associated with the • Overall, I do not find correlation to be a very useful
use of correlation coefficient? When you should be using the technique
correlation coefficients? 4. Correlation should not be used when:
When you start using correlation technique you ask the • There is a non-linear relationship between variables
question
• There are outliers
“Is there a linear relationship between them?” • There are distinct sub-groups
If the answer is yes, we use the Pearson’s correlation coefficient. For example
The (Pearson) correlation coefficient, r, is a measure of linear a Healthy controls with diseased cases
association between two continuous variables. It is a measure b If the values of one of the variables is determined in
of how well the data fit a straight line. advance.
The range of correlation coefficient is : 1 ³ r ³ -1 E.g Picking the doses of a drug in an experiment measuring
• If r > 0 we have a Positive correlation its effect
• If r < 0 we have a Negative correlation Two examples of when not to use a correlation coefficient
• If r = 0 we have No correlation a When there is a non-linear relationship;
It is the custom that we on X-axis we measure independent b when distinct subgroups are present
variable and on y-axis we measure dependent variable.

In both of these examples the correlation coefficient quoted is spurious.


Spurious correlations crop up all the time:
Correlation basically measures the degree of concurrence of two
variables. It is nothing but the concurrence of two set of data
sometimes you may get spurious correlations
• The price of petrol shows a positive correlation with the
divorce rate over time

• Number of deaths from heart attacks in a population rises

with incidence of long-sightedness over time
• Maximum daily air temperature and number of deaths of Don’t worry about it, we won’t be finding it this way.
cattle were positively correlated during December 2003 .
This formula can be simplified through some simple algebra
• If we repeatedly measure two variables on the same and then some substitutions using the SS notation discussed
individual over a period of time e.g. a child’s height and earlier.
ability to read, then we will tend to see a correlation
Why are variables correlated?
If two variables, A and B, are correlated then there are four
possibilities:
If you divide the numerator and denominator by n, then you
The result occurred by chance i.e The relationship may be
get something which is starting to hopefully look familiar.
coincidental
Each of these values have been seen before in the Sum of
• A influences (‘causes’) B (or, B influences A. Not the same
Squares notation section. So, the linear correlation coefficient can
thing!)
be written in terms of sum of squares.
• i.e There is a direct cause and effect relationship or There is a
This is the formula that we would be using for calculating the
reverse cause and effect relationship
linear correlation coefficient if we were doing it by hand.
• A and B are influenced by some other variable(s) i.e
• The relationship may be caused by a third variable
This can happen in two ways:
• C may ‘cause’ both A and B
Since we are using the SPSS we will not be calculating correlation
I.e The relationship may be caused by complex interactions by hand.
of several variables e.g. increased consumption of sugar
Although you all should know that what do you mean by
increases the number of caries a person has and increases
correlation coefficient and how is it calculated
their weight. Does more weight cause more caries?
The correlation coefficient “r” is defined as
• A may lead to an increase in C which ‘causes’ B
the sum of squares (XY) divided by the square root of [the sum of squares (X) multi-
e.g. low income may increase chance of smoking which
plied by the sum of squares (Y)].
increases chance of death from lung cancer. Does low income
I hope students that the calculation and concepts relating to
cause lung-cancer?
correlation would be clear .We can now proceed on to the use of
Calculation of Pearson’s Correlation correlation in decision-making
Co-efficient 1. Coefficient of Determination
You all have studied the notation SS -the sum of squares. This • This calculates the proportion of the variation in the actual
notation will make the calculations very easy. values which can be predicted by changes in the values of the
Therefore with usual notations we have, independent variable
• Denoted by , the square of the coefficient of correlation

• ranges from 0 to 1 (r ranges from -1 to +1)


• Expressed as a percentage, it represents the proportion that
SS(x) could be written as can be predicted by the regression line
• The value 1 - is therefore the proportion contributed by
other factors
Also note that Solved Example
An Example

Pearson’s Correlation Coefficient


Pearson’s correlation is calculated from the following formulae:
The correlation coefficient is denoted by r


      Accounting (X)   Statistics (Y)       X²         Y²         XY
 1         74.00            81.00        5476.00    6561.00    5994.00
 2         93.00            86.00        8649.00    7396.00    7998.00
 3         55.00            67.00        3025.00    4489.00    3685.00
 4         41.00            35.00        1681.00    1225.00    1435.00
 5         23.00            30.00         529.00     900.00     690.00
 6         92.00           100.00        8464.00   10000.00    9200.00
 7         64.00            55.00        4096.00    3025.00    3520.00
 8         40.00            52.00        1600.00    2704.00    2080.00
 9         71.00            76.00        5041.00    5776.00    5396.00
10         33.00            24.00        1089.00     576.00     792.00
11         30.00            48.00         900.00    2304.00    1440.00
12         71.00            87.00        5041.00    7569.00    6177.00
Sum       687.00           741.00       45591.00   52525.00   48407.00
Mean       57.25            61.75        3799.25    4377.08    4033.92
Interpretation/Conclusion
There is a linear relation between the results of Accounting and Statistics, as shown by the scatter diagram in Figure 1. A linear regression analysis was done using the least-squares method. The resultant regression line is Ŷ = 7.02 + 0.956X, in which X represents the results of Accounting and Y those of Statistics. Figure 2 shows the regression line. In this example, the choice of dependent and independent variables is arbitrary; it can equally be said that the results of Statistics are correlated with those of Accounting, or vice versa.
The Coefficient of Correlation (r) has a value of 0.9194. This indicates that the two variables are positively correlated (Y increases as X increases).
The Coefficient of Determination (r²) is 0.8453. This shows that about 85% of the variation in Y is explained by the regression line.
increases as X increases).
Points to Ponder
• Correlation is useful tool to determine relationship between
two variables
• Methods to determine correlation –
• Scatter diagrams
• Karl Pearson’s coefficient to Correlation

r = [nΣxy − (Σx)(Σy)] / √{[nΣx² − (Σx)²] · [nΣy² − (Σy)²]}

• Spearmen’s coefficient of rank correlation


R = 1 − [6Σd²] / [n(n² − 1)]
where d = the difference between ranks
• The value of r always lies between −1 and +1 (−1 ≤ r ≤ 1)

Exercise
Note: This exercise is rather lengthy!
The relationship between cigarette smoking and lung function
was investigated by gathering 16 college boys who were regular
smokers for the past 2 years, and were of similar height and age.
For each boy two measurements were taken. One was the
average number of cigarettes
Smoked per day. The other was the forced expiratory volume
(FEV, in litres/minute), a
Measure of lung function.
The data were as follows:
FEV    CIGS      FEV    CIGS
2.0     25       2.9      5
1.7     15       3.1      5
3.4     10       3.1     10
2.2     15       2.9     25
3.9      5       2.6     30
2.5     25       2.1     30
2.5     20       3.1     15
2.8     15       2.5     20
For these data, the following were calculated:
N = 16
ΣCIGS = 270         (i.e. ΣX)
ΣFEV = 43.30        (i.e. ΣY)
ΣCIGS² = 5650       (i.e. ΣX²)
ΣFEV² = 121.91      (i.e. ΣY²)
ΣFEV×CIGS = 687.5   (i.e. ΣXY)
1 Calculate the correlation between FEV and CIGS.
2 Determine the proportion of variation in FEV explained by
CIGS
3 Calculate the regression line to predict FEV from CIGS
4 For a boy who smokes 40 cigarettes a day, what is the
predicted value of FEV?
5 What caution(s) might you add to the estimate in question 4?
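If you want to check your hand calculations for this exercise, a short Python sketch along the following lines will do it (scipy assumed to be installed; it is offered only as a checking aid, not as the worked solution):

import numpy as np
from scipy.stats import linregress

cigs = np.array([25, 15, 10, 15, 5, 25, 20, 15, 5, 5, 10, 25, 30, 30, 15, 20], dtype=float)
fev  = np.array([2.0, 1.7, 3.4, 2.2, 3.9, 2.5, 2.5, 2.8,
                 2.9, 3.1, 3.1, 2.9, 2.6, 2.1, 3.1, 2.5])

res = linregress(cigs, fev)
print(round(res.rvalue, 3), round(res.rvalue ** 2, 3))    # correlation and r-squared
print(round(res.intercept, 3), round(res.slope, 4))       # regression of FEV on CIGS
print(round(res.intercept + res.slope * 40, 2))           # predicted FEV at 40 cigarettes/day
# Expect a negative correlation: lung function tends to fall as smoking rises.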


LESSON 29:
MULTICOLLINEARITY IN MULTIPLE REGRESSION

Definition and Effect of Multicollinearity bearer to receive two Pizza Shack pizzas while paying for only
In multiple-regression analysis, the regression coefficients often the more expensive of the two. The manager has collected the
become less reliable as the degree of correlation between the data in Table 4 and would like to use it to predict pizza sales.
independent variables increases. Table 4 Pizza Shack Sales and Advertising Data
If there is a high level of correlation between some of the
Month X1 number X 2 Cost of Y Total
independent variables we have a problem that statisticians call
of Ads Ads Pizza sales
multicollinearity. appearing Appearing (000s of
dollars)
In multiple regression coefficients become unreliable if there is a May 12 13.9 43.6
high level of correlation between the independent variables.
One of the key assumptions we make when carrying out a June 11 12.0 38.0
regression analysis is that the variables are independent and July 9 9.3 30.1
uncorrelated.
Aug. 7 9.7 35.3
Multicollinearity is the problem when two or more of the
independent variables are correlated. Sept. 12 12.3 46.4
Why would Multicollinearity occur? Oct. 8 11.4 34.2
For example if we wish to estimate a firm’s sales revenue and Nov. 6 9.3 30.2
we use both the number of salesmen employed and their total salary
bill. These two would naturally be highly correlated with each Dec. 13 14.3 40.7
other. In an actual case we could have effectively done with only Jan. 8 10.2 38.5
one variable rather than using two. Adding a second variable
which is highly correlated with the first distorts the values of Feb. 6 8.4 22.6
the regression coefficients. Nevertheless, we can often predict y March. 8 11.2 37.6
well, even when multicolinearity is present.
April 10 11.1 35.2
What effect does Multicollinearity have
on a Regression?
Essentially, when we run a regression where the independent In Tables 5 & 6 we have given Minitab outputs for the
variables are highly correlated, we find that the overall predictive regressions of total sales on number of ads and cost of ads,
power of the regression may not be affected i.e., the egression respectively.
may continue to have a high R2. But the value of the individual Table 5 Minitab regression of sales on number of ads
b coefficients may not be significant , i.e, they may have large Regression Analysis
standard errors and small t values. The prob values at which
these t ratios are significant may also be higher. The regression equation is
Predictor Coef Stdev t=ratio p
How does Multicollinearity Affect Us?
Constant 16.937 4.982 3.40 0.007
1. If the explanatory variables are relevant and explain a
significant proportion of the variation in y we can still make ADS 2.0832 0.5271 3.95 0.003
reasonably accurate predictions. S = 4.206 R –Sq = 61.0%
2. What we can’t do is tell with much precision how the Analysis of Variance
dependent variable changes in response to changes in any
SOURCE DF SS MS F P
one of the correlated variables. This is because the slope
coefficients will become distorted and will be associated Regression 1 276.31 276.31 15.62 0.003
with high standard errors. Error 10 176.88 17.69
An example will make the issue clearer: Total 11 453.19
Let’s look at an example in which multicollinearity is present to Table 6 Minitab regression of sales on the cost of ads
see how it affects the regression. For the
past 12 months, the manager of Pizza Shack has been running Regression Analysis
a series of advertisements in the local newspaper. the ads are The regression equation is
scheduled and paid for in the month before they appear. Each SALES = 4.17 + 2.87 Cost
of the ads contains a two-for one coupon, which entitles the

Predictor Coef Stdev t=ratio p What has Happened Here?

Constant 4.173 7.109 0.59 0.570 In the simple regression, each variable is highly significant, and
in the multiple regression, they are collectively very significant,
ADS 2.8725 0.63330 4.54 0.000
but individually not significant.
S = 3.849 R –Sq =67.3%
Correlation Between two Explanatory
Analysis of Variance Variables:-
SOURCE DF SS MS F P This contradiction is explained once we notice that the number
Regression 1 305.04 305.04 20.59 0.000 of ads is highly correlated with the cost of ads. In fact the
correlation between these two variables is r = 0.8949, so we have
Error 10 148.15 14.81
a problem with multicollinearity in our data.
Total 11 453.19
You might wonder why the two variables are not perfectly
For the regression on number of ads, we see that the observed correlated. This is because the cost of an ad varies slightly,
t value is 3.95 with 10 degree of freedom and a significance level depending on where it appears in the newspaper. For instance,
of a = 0.01 the critical t value is found to be 3.169. because t o > t c in the Sunday paper, ads in the TV section cost more than ads
we conclude that the number rof ads is a higher significant in the news section and the manager of Pizza shack has placed
explanatory variable for total sales. Note also that r2 = 61.0 Sunday ads in each of these section on different occasions.
percent, so that the number of ads explains about 61 percent of
the variation in pizza sales. Nevertheless Both Variables Explain The Same Thing
Because X1 and X2 are closely related to each other, in effect they
For the regression on the cost of ads, the observed t value is each explain the same part of the variation Y. that’s why we get
4.54, so the cost of ads is even core significant as an explanatory r2 = 61.0 percent in the first simple regression r2 = 67.3 percent in
variable for total sales than was the number of ads (for which the second simple regression, but an R2 of only 68.4 percent in
the observed t value was only 3.95). in this regression, r2 = 67.3 the multiple regression . adding the number of ads as a second
percent so about 67 percent of the variation in pizza sales is explanatory variable to the cost of ads explains only about 1
explained by the cost of ads. percent more of the variation in total sales.
Using Both Explanatory Variables in a Individual Contribution Can’t Be Separated Out
Multiple Regression At this point, it is fair to ask, “which variable is really explaining
Because both explanatory variables are highly significant by
the variation in total sales in the multiple regression?” the
themselves, we try to use both of them in a multiple regres-
answer is that both are, but we cannot separate out their
sion. The output is in Table 8
individual contributions because they are so highly correlated
The multiple regression is highly significant as a whole, because with each other. As a result of this, their coefficients in the
the ANOVA p is 0,006.the multiple coefficient of determina- multiple regression have high standard errors, relatively small
tion is r2 = 68.4 percent so the two variables to gather explain computed t values, and relatively large prob>½t½values.
about 68 percent of the variation in total sales.
How Does this Multicollinearity Affect
Table 7 Minitab regression of sales on the number and cost of ads Us?
Regression Analysis We are still able to make relatively precise predictions when it is
The regression equation is present : Note that for the multiple regression (output intable
SALES = 6.58 + 0.62 ADS + 2.14 COST 6), the standard error of estimate, which determines the widths
of confidence intervals for predictions, is 3.989 while for the
Predictor Coef Stdev t=ratio p
simple regression with the cost of ads as the explanatory
Constant 6.584 8.542 0.77 0.461 variable (output in Table 5 ), we have Se = 3.849.
ADS 0.625 1.120 0.56 0.591 What Can’t We Do?
COST 2.139 1.470 1.45 0.180 We can’t tell with much precision how sales will change if we
S = 3.989 R –Sq =68.4% increase the number of ads by one . The multiple regression
says b 1 = 0.625 ( that is, each ad increases total pizza sales by
Analysis of Variance about $625), but the standard error of this coefficient is 1.12
SOURCE DF SS MS F P (that is, about $ 1,120).
Regression 2 309.99 154.99 9.74 0.006 Multicollinearity is a problem you have to deal with in multiple
Error 9 143.20 15.91 regressions, an developing a common sense understanding of
Total 11 453.19 it is necessary. Remember that you can still make fairly precise
predictions when it’s present. but remember that when it’s
Loss of Individual Significance present you cant tell with much precision how much the
However if we look at the p values for the individual variables dependent variable will change if you “jiggle” one of the
in the multiple regression, we see that even at a = 0.1 neither independence variables. So our aim should be to minimize
variables is a significant explanatory variable. multicollinearity. Hint : the best multiple regression is one that
explains the relationship among the data by accounting for the

© Copy Right: Rai University


11.556 167
largest proportion of the variation in the dependent variable, a. Use the following Minitab output to determine the best
RESEARCH METHODOLOGY

with the fewest number of independent variables. Warning: fitting regression equation for the airline:
Thowing in too many independent variables just because you The regression equation is
have a computer is not a great idea.
SALES = 172 + 25.9 PROMOT – 13.2 COMP - 3.04 FREE
Exercises
Predictor Coef Stdev T – P
Q1 Edith Pratt is a busy executive in a nation wide trucking
company. Edith is late for a meeting because she has been ratio
unable to locate the multiple regression output that an Constant 172.34 51.38 3.35 0.006
associate produced for he. Of the total regression was
significant at the 0.05 level, then she wanted to use the PROMOT 25.950 4.877 5.32 0.000
computer output as evidence to support some of her ideas
at the meeting. The subordinate however, is sick today and COMP -13.238 3.686 -3.59 0.004
Edith has been unable to locate has work. As a mater of
FREE -3.041 2.342 -1.30 0.221
fact, all the information she possesses concerning the
multiple regression is piece of scrap paper with the
following on it: b Do the passengers who fly free cause sales to decrease
Regression for E.Pratt significantly?
SSR 872.4,With df c. Does an increase in promotions by $ 1,000 change sales by
$28,000, or is the change significantly different from $28000?
SSE ,with 17 df
State and test appropriate hypotheses. use a = 0.10
SST 1023.6 with 24 df
d. Give a 90 percent confidence interval for the slope coefficient
Because the scrap paper doesn’t even have a complete set of of COMP.
numbers on it, Edith has concluded that it must the useless.
You however, should know better. Should go directly to the
meeting or continue looking for the computer output?
Q2 A New England based commuter airline has taken a survey
of its 15 terminals and has obtained the following data for
the month of February, where
SALES = total revenue based on number of tickets sold
PROMOT = amount spent on promoting the airline in the area
(in thousand if dollars)
COMP = number of competing airlines at that terminal
FREE = the percentage of passengers who flew free (for
various reasons)
Sales($ ) Promot ($) Comp Free
79.3 2.5 10 3
200.1 5.5 8 6
163.2 6.0 12 9
200.1 7.9 7 16
146.0 5.2 8 15
177.7 7.6 12 9
30.9 2.0 12 8
291.9 9.0 5 10
160.0 4.0 8 4
339.4 9.6 5 16
159.6 5.5 11 7
86.3 3.0 12 6
237.5 6.0 6 10
107.2 5.0 10 4
155.0 3.5 10 4

© Copy Right: Rai University


168 11.556
LESSON 30:
MULTIPLE REGRESSION

So far we have talked of regression analysis using only one independent explanatory variable. At that level the regression can also be estimated manually. However, it is very rare that we have just one explanatory variable, and the explanatory power of the estimated equation can usually be improved substantially by the addition of more independent variables. For example, in our earlier example of household consumption we can probably improve the explanatory power of the equation by adding more variables such as household size, age distribution of the household, etc. However, when we use two or more independent variables the process of regression becomes much more complex and it is not feasible to solve for the parameters of the equation manually. Multiple regression analysis is therefore almost always carried out by computers, which enable us to carry out complex calculations on large volumes of data easily. Our stress when discussing multiple regression will consequently be on understanding and interpreting computer output.

Multiple Regression Equation
The general form of the multiple regression equation is as follows:
ŷ = a + b1x1 + b2x2 + ............ + bkxk
The three-variable case, for example, is:
ŷ = a + b1x1 + b2x2 + b3x3 + e
For the two-variable case we can find the multiple regression equation as follows:
ŷ = a + b1x1 + b2x2 + e
The normal equations for this are:
Σy = na + b1Σx1 + b2Σx2
Σx1y = aΣx1 + b1Σx1² + b2Σx1x2
Σx2y = aΣx2 + b1Σx1x2 + b2Σx2²
These can be solved to obtain the values of the parameters a, b1 and b2.
So far we have referred to a as the y-intercept and to b1 and b2 as the slopes of the multiple regression; they are the estimated regression coefficients. The constant a is the value of ŷ when both x1 and x2 are zero. The coefficients b1 and b2 describe how changes in x1 and x2 affect the value of ŷ: b1 measures the effect of changes in x1 on ŷ holding x2 constant, and similarly b2 measures the effect on ŷ of changes in x2 holding x1 constant.
Thus simple linear regression estimates a regression line between two variables. In multiple regression there is a regression plane among y, x1 and x2. This regression plane is determined in the same way as the regression line, by minimizing the sum of squared deviations of the data points from the plane. Each independent variable accounts for some of the variation in the dependent variable. This is shown in Figure 1 below.

The Computer and Multiple Regression
A manager in any managerial situation deals with complex problems requiring large samples and several independent variables. The generalized multiple regression model is specified for k variables with n data points; that is, for each of the k independent variables we have n data points. The regression equation that we estimate is:
ŷ = a + b1x1 + b2x2 + b3x3 + ............ + bkxk
This equation is estimated by the computer. We now look at how a statistical package such as SPSS or Minitab handles the data.
An example will help make the process clearer. Suppose the IRS in the US wishes to model the discovery of unpaid taxes. The variables included are:
1. No. of hours of field audit ($00s)
2. No. of computer hours ($00s)
3. Reward to informants ($000s)
4. Actual unpaid taxes discovered ($100,000s)
The data are shown in Table 1.

Table 1
Month    Field audit   Comp hours   Reward to informants   Actual unpaid taxes
Jan          45            16               71                     29
Feb          42            14               70                     24
March        44            15               72                     27
Apr          43            13               71                     25
May          46            13               75                     26
Jun          44            14               74                     28
July         45            16               76                     30
Aug          44            16               69                     28
Sept         43            15               74                     28
Oct          42            15               73                     27
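Before turning to the Minitab output discussed next, here is a minimal sketch, not part of the original lesson, of how the same least-squares estimation could be reproduced in Python with NumPy. The variable names are illustrative; only the figures of Table 1 are taken from the text.

import numpy as np

field_audit = np.array([45, 42, 44, 43, 46, 44, 45, 44, 43, 42], dtype=float)
comp_hours  = np.array([16, 14, 15, 13, 13, 14, 16, 16, 15, 15], dtype=float)
rewards     = np.array([71, 70, 72, 71, 75, 74, 76, 69, 74, 73], dtype=float)
unpaid      = np.array([29, 24, 27, 25, 26, 28, 30, 28, 28, 27], dtype=float)

# Design matrix with an intercept column: y-hat = a + b1*x1 + b2*x2 + b3*x3
X = np.column_stack([np.ones_like(unpaid), field_audit, comp_hours, rewards])
coef, *_ = np.linalg.lstsq(X, unpaid, rcond=None)   # [a, b1, b2, b3]

fitted = X @ coef
residuals = unpaid - fitted
n, k = len(unpaid), 3
se = np.sqrt((residuals ** 2).sum() / (n - k - 1))   # standard error of estimate
r2 = 1 - (residuals ** 2).sum() / ((unpaid - unpaid.mean()) ** 2).sum()

print("a, b1, b2, b3 =", np.round(coef, 3))
print("standard error of estimate =", round(se, 3))
print("R-squared =", round(r2, 3))

If Table 1 has been transcribed correctly, the printed coefficients should come out close to the estimating equation quoted below (about -45.8, 0.597, 1.18 and 0.405), with R² near 98.3%.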
Now a regression is run on Minitab and the sample output, which we now have to interpret, is given in Table 2.

Now How Do We Interpret This Output?
The regression equation is of the form ŷ = a + b1x1 + b2x2 + b3x3. From the numbers given in the coefficient column we can read the estimating equation:
ŷ = -45.8 + 0.597 Audit + 1.18 Comp + 0.405 Rewards
How do we interpret this equation? The interpretation is similar to that of the one-variable simple linear regression case.
• If we hold the number of field audit labour hours and the number of computer hours constant and change rewards to informants by one unit, then ŷ will change by an additional $405,000 for each additional $1,000 paid to informants.
• Similarly, holding x1 and x3 constant, an additional 100 hours of computer time will increase recoveries by $1,177,000.
• Similarly, holding x2 and x3 constant, an additional 100 hours of field audit time increases recoveries by $597,000.
We can also use this equation to solve problems such as: suppose in November the IRS plans to leave field hours and computer hours at their October level but increase rewards to informants to $75,000. How much in recoveries can they expect in November? We can get a forecasted value by substituting in the equation:
ŷ = -45.8 + 0.597(43) + 1.18(15) + 0.405(75)
  = 27.905, or approximately $28 million.

Standard Error of the Regression
Now that we have our equation we need some measure of the dispersion of actual observations around the estimated regression plane. We can expect the estimation to be more accurate the smaller the degree of dispersion around this plane. We measure this dispersion, or variation, by the standard error of the estimate se:
se = √[ Σ(y - ŷ)² / (n - k - 1) ]
where
y = sample values of the dependent variable
ŷ = corresponding estimated values from the regression equation
n = number of data points in the sample
k = number of independent variables (3 in our example)
The denominator of this equation shows that in a regression with k independent variables the standard error has n - k - 1 degrees of freedom. This is because a further degree of freedom is lost in estimating the intercept term a, so we have k + 1 parameters to estimate from the sample data.
The standard error of the regression is also called the root mean square error; its square is the mean square error (MSE). In our sample output it is indicated by s. The standard error of the regression in our problem is 0.286.
We can also use the standard error of estimate and the t distribution to form an approximate confidence interval around our estimated value. The t value at the 95% level of confidence, given our n - k - 1 = 6 degrees of freedom, is 2.447. For example, in our problem, for a value of
x1 = 4,300 hours
x2 = 1,500 hours
x3 = $75,000
our estimate for ŷ is $27,905,000 and our se is $286,000. If we want to construct a 95% confidence interval around this estimate of $27,905,000 we can do it as follows:
$27,905,000 ± t·se = $27,905,000 + 2.447(286,000) = $28,604,800 (upper limit)
                   = $27,905,000 - 2.447(286,000) = $27,205,200 (lower limit)
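The November forecast and the confidence interval above can be checked with a few lines of arithmetic. The sketch below is illustrative only; it works in the coded units of the printed output and uses the rounded coefficients from the estimating equation.

a, b1, b2, b3 = -45.8, 0.597, 1.18, 0.405    # rounded coefficients from the text
se = 0.286                                   # standard error of estimate (s in the output)
t_crit = 2.447                               # t at the 95% level with n - k - 1 = 6 df

x1, x2, x3 = 43, 15, 75                      # field audit hours, computer hours, rewards (coded units)
y_hat = a + b1 * x1 + b2 * x2 + b3 * x3      # comes out near 27.9; the text reports 27.905
                                             # from the unrounded coefficients
lower, upper = y_hat - t_crit * se, y_hat + t_crit * se
print(round(y_hat, 3), round(lower, 3), round(upper, 3))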
The standard error of the estimate measures the dispersion of data points around the regression plane. Smaller values of se indicate a better regression; if the addition of another variable reduces se, we say that the inclusion of that variable improves the fit of the regression.

The Coefficient of Multiple Determination
In a multiple regression we measure the strength of the relationship between the independent variables and the dependent variable by the coefficient of multiple determination, R². It is defined as the proportion of the total variation in y that is explained by the regression plane.
In our example we have R² = 98.3%. This tells us that 98.3% of the variation in unpaid taxes is explained by the three independent variables. As we add more variables to a regression, the explanatory power of the equation improves if R² increases.

Example: Pam Schneider owns and operates an accounting firm in Ithaca, New York. Pam feels that it would be useful to be able to predict in advance the number of rush income-tax returns during the busy March 1 to April 15 period so that she can better plan her personnel needs during this time. She has hypothesized that several factors may be useful in her prediction. Data for these factors and the number of rush returns for past years are as follows:

X1                X2                        X3                  Y
Economic index    Population within 1 mile  Average income      Number of rush returns,
                  of office                 in Ithaca           March 1 to April 15
 99               10188                     21465               2306
106                8566                     22228               1266
100               10557                     27665               1422
129               10219                     25200               1721
179                9662                     26300               2544
a. Use the following Minitab output to determine the best-fitting regression equation for these data:
The regression equation is
Y = -1275 + 17.1 X1 + 0.514 X2 - 0.174 X3
Predictor   Coef      Stdev     T-ratio   P
Constant    -1275     2699      -0.47     0.719
X1          17.059    6.098      2.47     0.245
X2          0.5456    0.3144     1.72     0.335
X3         -0.1743    0.1005    -1.73     0.333
S = 396.1   R-sq = 87.2%
b. What percentage of the total variation in the number of rush returns is explained by this equation?
c. For this year, the economic index is 169, the population within 1 mile of the office is 10,212, and the average income in Ithaca is $26,925. How many rush returns should Pam expect to process between March 1 and April 15?
Results:
ŷ = -1275 + 17.059 X1 + 0.5406 X2 - 0.1743 X3
R² = 87.2%; 87.2% of the total variation in Y is explained by the model.
ŷ = -1275 + 17.059(169) + 0.5406(10,212) - 0.1743(26,925) = 2436 rush returns.

Exercises
Q1 Given the following set of data, use whatever computer package is available to find the best-fitting regression equation and answer the following:
a. What is the regression equation?
b. What is the standard error of estimate?
c. What is R² for this regression?
d. What is the predicted value for Y when X1 = 5.8, X2 = 4.2, X3 = 5.1?
Y      X1    X2    X3
64.7   3.5   5.3   8.5
80.9   7.4   1.6   2.6
24.6   2.5   6.3   4.5
43.9   3.7   9.4   8.8
77.7   5.5   1.4   3.6
20.6   8.3   9.2   2.5
66.9   6.7   2.5   2.7
34.3   1.2   2.2   1.3
Q2 Given the following set of data, use whatever computer package is available to find the best-fitting regression equation and answer the following:
a. What is the regression equation?
b. What is the standard error of estimate?
c. What is R² for this regression?
e. Give an approximate 95 percent confidence interval for the value of Y when the values of X1, X2, X3, and X4 are 52.4, 41.6, 35.8, and 3, respectively.
X1     X2     X3     X4    Y
21.4   62.9   21.9   -2    22.8
51.7   40.7   42.9    5    93.7
41.8   81.8   69.8    2    64.9
11.8   41.0   90.9   -4    19.2
71.6   22.6   12.9    8    55.8
91.9   61.5   30.9    1    23.1
Q3 We are trying to predict the annual demand for widgets (DEMAND) using the following independent variables:
PRICE = price of widgets (in $)
INCOME = consumer income (in $)
SUB = price of a substitute commodity (in $)
(NOTE: A substitute commodity is one that can be substituted for another commodity. For example, margarine is a substitute commodity for butter.)
Year   Demand   Price ($)   Income ($)   Sub ($)
1982    40       9           400         10
1983    45       8           500         14
1984    50       9           600         12
1985    55       8           700         13
1986    60       7           800         11
1987    70       6           900         15
1988    65       6          1000         26
1989    65       8          1100         27
1990    75       5          1200         22
1991    75       5          1300         19
1992    80       5          1400         20
1993   100       3          1500         23
1994    90       4          1600         18
1995    95       3          1700         24
1996    85       4          1800         21
a. Using whatever computer package is available, determine the best-fitting regression equation for these data.
b. Are the signs (+ or -) of the regression coefficients of the independent variables as one would expect? Explain briefly.
c. State and interpret the coefficient of multiple determination for this problem.
d. State and interpret the standard error of estimate for this problem.
e. Using the equation, what would you predict for DEMAND if the price of widgets was $6, consumer income was $1,200, and the price of the substitute commodity was $17?
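The exercises above assume access to a statistical package such as SPSS or Minitab. As an illustration only, and without working out the answers, the sketch below shows how the data of Q1 could be set up and fitted with plain NumPy least squares; none of the names used here come from the lesson.

import numpy as np

data = np.array([
    # Y     X1   X2   X3
    [64.7, 3.5, 5.3, 8.5],
    [80.9, 7.4, 1.6, 2.6],
    [24.6, 2.5, 6.3, 4.5],
    [43.9, 3.7, 9.4, 8.8],
    [77.7, 5.5, 1.4, 3.6],
    [20.6, 8.3, 9.2, 2.5],
    [66.9, 6.7, 2.5, 2.7],
    [34.3, 1.2, 2.2, 1.3],
])
y, X = data[:, 0], data[:, 1:]
X = np.column_stack([np.ones(len(y)), X])            # add the intercept column

coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # a, b1, b2, b3
fitted = X @ coef
sse = ((y - fitted) ** 2).sum()
se = np.sqrt(sse / (len(y) - 3 - 1))                  # standard error of estimate
r2 = 1 - sse / ((y - y.mean()) ** 2).sum()

prediction = coef @ [1, 5.8, 4.2, 5.1]                # for part (d)
print(np.round(coef, 3), round(se, 3), round(r2, 3), round(prediction, 1))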
LESSON 31:
MAKING INFERENCES ABOUT POPULATION PARAMETERS

Essentially, when we run a regression we are estimating the parameters on the basis of a sample of observations. Therefore ŷ = a + bx, for example, is a sample regression line, in much the same way that x̄ is a sample estimate of the population parameter μ. In the same way, our population regression line, or the true relationship in the data, is Y = A + Bx. This equation, however, is unknown and we have to use sample data to estimate it. The true form of the unknown equation for the k-variable case is:
Y = A + B1x1 + B2x2 + B3x3 + ............ + Bkxk
Even in the case of the population regression plane, not all data points will lie on it. Why? Consider our IRS problem:
• Not all payments to informants will be equally effective.
• Some of the computer hours may be used for organizing data rather than analyzing accounts.
• For these and other reasons, some of the data points will lie above the regression plane and some below it.
Therefore, instead of satisfying the above equation exactly, the individual data points will satisfy:
Y = A + B1x1 + B2x2 + B3x3 + ............ + Bkxk + e
This is the population regression plane plus a random disturbance term. The term e is a random disturbance which equals zero on average; the standard deviation of this term is σe. The standard error of the regression se, which we talked about in the earlier section, is an estimate of σe.

Our Sample Regression Equation
Our sample regression equation estimates the unknown population regression plane. As we can see, the estimation of a regression plane can also be thought of as a problem of statistical inference, in which we make inferences regarding an unknown population relationship on the basis of a relationship estimated from sample data. Much in the same way as for hypothesis testing for a mean, we can set up confidence intervals for the parameters of the estimated equation. We can also make inferences about the slope parameters of the true regression equation (B1, B2, …, Bk) on the basis of the slope coefficients of the estimated equation (b1, b2, b3, …, bk).

Tests of Inference for an Individual Slope Parameter Bi
As explained earlier, we can use the value of the individual bi, which is the estimated slope for the ith variable, to test a hypothesis about the value of Bi, the true population slope for the ith variable. The process of hypothesis testing is the same as that delineated for testing the mean (see Figure 2).
If p > α, xi is not a significant explanatory variable.
If p < α, xi is a significant explanatory variable.
This test of significance of an explanatory variable is always a two-tailed test: the independent variable xi is a significant explanatory variable if bi is significantly different from zero, which requires that our t ratio be a large positive or negative number.
In our IRS example, for each of the three explanatory variables p is less than .01. Therefore we conclude that each one is a significant explanatory variable.
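A minimal sketch of this decision rule, with purely illustrative figures for a printed coefficient and its standard error (the IRS output itself is not reproduced here):

coef, stdev = 0.405, 0.043      # illustrative b_i and its standard error from a printout
n, k = 10, 3
df = n - k - 1                  # 6 degrees of freedom

t_ratio = coef / stdev          # the t-ratio column of a Minitab/SPSS output
t_critical = 2.447              # two-tailed critical t at alpha = 0.05 with 6 df

# x_i is judged a significant explanatory variable when |t_ratio| exceeds the
# critical value; equivalently, when the printed p-value is below alpha.
print(abs(t_ratio) > t_critical)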
Test of Significance of the Regression as a Whole
It is quite possible that we may get a high value of R² by pure chance. After all, if we threw darts at a board to generate a scatter plot, we could fit a regression which might conceivably have a high R². Therefore we need to ask whether a high value of R² necessarily means that the independent variables explain a large proportion of the variation in y, or whether it could be a freak chance.
In statistical terms we ask the following question: is the regression as a whole significant? In the last section we looked at whether the individual xi were significant; now we ask whether collectively all the xi (i = 1 … k) together significantly explain the variability in y.
Our hypotheses are:
H0: B1 = B2 = … = Bk = 0 (the null hypothesis that y does not depend on the xi)
Ha: at least one Bi ≠ 0 (the alternative hypothesis that at least one Bi is not zero)
To explain this concept we go back to our initial diagram, which shows the two-variable case.
The total variation in y is: Σ(y - ȳ)²
The variation explained by the regression is: Σ(ŷ - ȳ)²
The unexplained variation is: Σ(y - ŷ)²
This is shown in Figure 3 for the one-variable case for simplicity; for the multiple-variable case the same applies conceptually.
Thus when we look at the variation in y we look at three different terms, each of which is a sum of squares. These are denoted as follows:
SST = total sum of squares: Σ(y - ȳ)²
SSR = regression sum of squares: Σ(ŷ - ȳ)²
SSE = error sum of squares: Σ(y - ŷ)²
The total variation in y can be broken into two parts, the explained and the unexplained:
SST = SSR + SSE
Each of these has an associated number of degrees of freedom. SST has n - 1 degrees of freedom. SSR has k degrees of freedom because there are k independent variables. SSE has n - k - 1 degrees of freedom because we used n observations to estimate the k + 1 parameters a, b1, b2, …, bk.
If the null hypothesis is true we get the following F ratio:
F = (SSR / k) / (SSE / (n - k - 1))
which has an F distribution with k numerator degrees of freedom and n - k - 1 denominator degrees of freedom.
If the null hypothesis is false, i.e. the explanatory variables have a significant effect on y, then the F ratio tends to be higher than if the null hypothesis is true. So if the F ratio is large we reject the null hypothesis that the explanatory variables have no effect on the variation of y; we reject H0 and conclude that the regression is significant.
Going back to our IRS example, we now look at the computer output. A typical regression output also includes the computed F ratio for the regression. This is at times called the ANOVA for the regression, because we break up the analysis of the variation in y into explained variance, or variance explained by the regression (between-column variance), and unexplained variance (within-column variance). This is shown in Table 3.

Table 3
Analysis of Variance
Source        DF    SS        MS       F        P
Regression     3    29.1088   9.7029   118.52   0.00
Error          6     0.4912   0.0819
Total          9    29.600

The sample output for the IRS problem is given above.
SSR = 29.109, with k = 3 degrees of freedom
SSE = 0.491, with n - k - 1 = 6 degrees of freedom
F = (29.11 / 3) / (0.491 / 6) ≈ 118.5
The MS column is the sum of squares divided by the number of degrees of freedom. The output also gives us the p-value, which is 0.00. Because p < α = 0.01, we conclude that the regression as a whole is highly significant.
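A minimal sketch of the overall F test, using the SSR and SSE figures reported for the IRS example in Table 3; the arithmetic simply mirrors the formula above.

ssr, sse = 29.1088, 0.4912
k, n = 3, 10                      # 3 explanatory variables, 10 monthly observations

msr = ssr / k                     # mean square for the regression
mse = sse / (n - k - 1)           # mean square error
f_ratio = msr / mse
print(round(f_ratio, 2))          # roughly 118.5, as in the Minitab ANOVA table

# With k = 3 and n - k - 1 = 6 degrees of freedom, the tabulated F value at
# the 1% level is approximately 9.78, so the regression as a whole is
# highly significant (the printed p-value is 0.00).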
Exercises
Q1 Bill Buxton, a statistics professor in a leading business school, has a keen interest in factors affecting students' performance on exams. The midterm exam for the past semester had a wide distribution of grades, but Bill feels certain that several factors explain the distribution: he allowed his students to study from as many different books as they liked, their IQs vary, they are of different ages, and they study varying amounts of time for exams. To develop a predicting formula for exam grades, Bill asked each student to answer, at the end of the exam, questions regarding study time and number of books used. Bill's teaching record already contained the IQs and ages of the students, so he compiled the data for the class and ran a multiple regression with Minitab. The output from Bill's computer run was as follows:
Predictor   Coef        Stdev      T-ratio   P
Constant    -49.948     41.55      -1.20     0.268
Hours         1.06931    0.98163    1.09     0.312
IQ            1.36460    0.37627    3.63     0.008
Books         2.03982    1.50799    1.35     0.218
Age          -1.78990    0.67332   -2.67     0.319
S = 11.657   R-sq = 76.7%
a. What is the best-fitting regression equation for these data?
b. What percentage of the variation in grades is explained by this equation?
c. What grade would you expect for a 21-year-old student with an IQ of 113 who studied 5 hours and used three different books?
LESSON 32:
MULTICOLLINEARITY IN MULTIPLE REGRESSION

Definition and Effect of Multicollinearity
In multiple regression analysis the regression coefficients often become less reliable as the degree of correlation between the independent variables increases. If there is a high level of correlation between some of the independent variables, we have a problem that statisticians call multicollinearity. One of the key assumptions we make when carrying out a regression analysis is that the explanatory variables are independent and uncorrelated; multicollinearity is the problem that arises when two or more of the independent variables are correlated.

Why Would Multicollinearity Occur?
Suppose, for example, that we wish to estimate a firm's sales revenue and we use both the number of salesmen employed and their total salary bill as explanatory variables. These two would naturally be highly correlated with each other, and in practice we could have done with only one variable rather than two. Adding a second variable which is highly correlated with the first distorts the values of the regression coefficients. Nevertheless, we can often still predict y well even when multicollinearity is present.

What Effect Does Multicollinearity Have on a Regression?
Essentially, when we run a regression where the independent variables are highly correlated, we find that the overall predictive power of the regression may not be affected, i.e. the regression may continue to have a high R². But the individual b coefficients may not be significant, i.e. they may have large standard errors and small t values. The prob values at which these t ratios are significant may also be higher.

How Does Multicollinearity Affect Us?
1. If the explanatory variables are relevant and explain a significant proportion of the variation in y, we can still make reasonably accurate predictions.
2. What we can't do is tell with much precision how the dependent variable changes in response to changes in any one of the correlated variables. This is because the slope coefficients become distorted and are associated with high standard errors.

An Example Will Make the Issue Clearer
Let's look at an example in which multicollinearity is present to see how it affects the regression. For the past 12 months, the manager of Pizza Shack has been running a series of advertisements in the local newspaper. The ads are scheduled and paid for in the month before they appear. Each of the ads contains a two-for-one coupon, which entitles the bearer to receive two Pizza Shack pizzas while paying for only the more expensive of the two. The manager has collected the data in Table 4 and would like to use them to predict pizza sales.

Table 4 Pizza Shack sales and advertising data
Month    X1 Number of ads appearing   X2 Cost of ads appearing   Y Total pizza sales (000s of dollars)
May          12                           13.9                       43.6
June         11                           12.0                       38.0
July          9                            9.3                       30.1
Aug.          7                            9.7                       35.3
Sept.        12                           12.3                       46.4
Oct.          8                           11.4                       34.2
Nov.          6                            9.3                       30.2
Dec.         13                           14.3                       40.7
Jan.          8                           10.2                       38.5
Feb.          6                            8.4                       22.6
March         8                           11.2                       37.6
April        10                           11.1                       35.2

In Tables 5 and 6 we have given the Minitab outputs for the regressions of total sales on the number of ads and on the cost of ads, respectively.

Table 5 Minitab regression of sales on number of ads
Regression Analysis
The regression equation is
SALES = 16.9 + 2.08 ADS
Predictor   Coef      Stdev     t-ratio   p
Constant    16.937    4.982     3.40      0.007
ADS          2.0832   0.5271    3.95      0.003
S = 4.206   R-Sq = 61.0%
Analysis of Variance
SOURCE       DF   SS       MS       F       P
Regression    1   276.31   276.31   15.62   0.003
Error        10   176.88    17.69
Total        11   453.19

Table 6 Minitab regression of sales on the cost of ads
Regression Analysis
The regression equation is
SALES = 4.17 + 2.87 COST
Predictor   Coef      Stdev     t-ratio   p
Constant    4.173     7.109     0.59      0.570
COST        2.8725    0.63330   4.54      0.000
S = 3.849   R-Sq = 67.3%
Analysis of Variance
SOURCE       DF   SS       MS       F       P
Regression    1   305.04   305.04   20.59   0.000
Error        10   148.15    14.81
Total        11   453.19

For the regression on the number of ads, we see that the observed t value is 3.95. With 10 degrees of freedom and a significance level of α = 0.01, the critical t value is found to be 3.169. Because t_o > t_c, we conclude that the number of ads is a highly significant explanatory variable for total sales. Note also that r² = 61.0 percent, so the number of ads explains about 61 percent of the variation in pizza sales.
For the regression on the cost of ads, the observed t value is 4.54, so the cost of ads is an even more significant explanatory variable for total sales than was the number of ads (for which the observed t value was only 3.95). In this regression r² = 67.3 percent, so about 67 percent of the variation in pizza sales is explained by the cost of ads.

Using Both Explanatory Variables in a Multiple Regression
Because both explanatory variables are highly significant by themselves, we try to use both of them in a multiple regression. The output is in Table 7.
The multiple regression is highly significant as a whole, because the ANOVA p is 0.006. The multiple coefficient of determination is R² = 68.4 percent, so the two variables together explain about 68 percent of the variation in total sales.

Table 7 Minitab regression of sales on the number and cost of ads
Regression Analysis
The regression equation is
SALES = 6.58 + 0.62 ADS + 2.14 COST
Predictor   Coef     Stdev    t-ratio   p
Constant    6.584    8.542    0.77      0.461
ADS         0.625    1.120    0.56      0.591
COST        2.139    1.470    1.45      0.180
S = 3.989   R-Sq = 68.4%
Analysis of Variance
SOURCE       DF   SS       MS       F      P
Regression    2   309.99   154.99   9.74   0.006
Error         9   143.20    15.91
Total        11   453.19

Loss of Individual Significance
However, if we look at the p values for the individual variables in the multiple regression, we see that even at α = 0.1 neither variable is a significant explanatory variable.

What Has Happened Here?
In the simple regressions, each variable is highly significant, while in the multiple regression they are collectively very significant but individually not significant.

Correlation Between Two Explanatory Variables
This contradiction is explained once we notice that the number of ads is highly correlated with the cost of ads. In fact the correlation between these two variables is r = 0.8949, so we have a problem with multicollinearity in our data.
You might wonder why the two variables are not perfectly correlated. This is because the cost of an ad varies slightly depending on where it appears in the newspaper. For instance, in the Sunday paper, ads in the TV section cost more than ads in the news section, and the manager of Pizza Shack has placed Sunday ads in each of these sections on different occasions.

Nevertheless, Both Variables Explain the Same Thing
Because X1 and X2 are closely related to each other, in effect they each explain the same part of the variation in Y. That is why we get r² = 61.0 percent in the first simple regression and r² = 67.3 percent in the second simple regression, but an R² of only 68.4 percent in the multiple regression. Adding the number of ads as a second explanatory variable to the cost of ads explains only about 1 percent more of the variation in total sales.

Individual Contributions Can't Be Separated Out
At this point, it is fair to ask, "Which variable is really explaining the variation in total sales in the multiple regression?" The answer is that both are, but we cannot separate out their individual contributions because they are so highly correlated with each other. As a result, their coefficients in the multiple regression have high standard errors, relatively small computed t values, and relatively large prob > |t| values.

How Does This Multicollinearity Affect Us?
We are still able to make relatively precise predictions when it is present. Note that for the multiple regression (output in Table 7) the standard error of estimate, which determines the widths of confidence intervals for predictions, is 3.989, while for the simple regression with the cost of ads as the explanatory variable (output in Table 6) we have se = 3.849.
What Can't We Do?
We can't tell with much precision how sales will change if we increase the number of ads by one. The multiple regression says b1 = 0.625 (that is, each ad increases total pizza sales by about $625), but the standard error of this coefficient is 1.12 (that is, about $1,120).
Multicollinearity is a problem you have to deal with in multiple regressions, and developing a common-sense understanding of it is necessary. Remember that you can still make fairly precise predictions when it is present, but also remember that when it is present you cannot tell with much precision how much the dependent variable will change if you "jiggle" one of the independent variables. So our aim should be to minimize multicollinearity. Hint: the best multiple regression is one that explains the relationship among the data by accounting for the largest proportion of the variation in the dependent variable with the fewest number of independent variables. Warning: throwing in too many independent variables just because you have a computer is not a great idea.
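As a rough illustration of the point made in this lesson, the sketch below re-runs the Pizza Shack regressions of Table 4 with plain NumPy; this is not part of the original lesson, and if the table has been transcribed correctly the printed figures should come out close to the r = 0.8949 and the R² values of 61.0%, 67.3% and 68.4% quoted above.

import numpy as np

ads   = np.array([12, 11, 9, 7, 12, 8, 6, 13, 8, 6, 8, 10], dtype=float)
cost  = np.array([13.9, 12.0, 9.3, 9.7, 12.3, 11.4, 9.3, 14.3, 10.2, 8.4, 11.2, 11.1])
sales = np.array([43.6, 38.0, 30.1, 35.3, 46.4, 34.2, 30.2, 40.7, 38.5, 22.6, 37.6, 35.2])

print("correlation(ads, cost) =", round(np.corrcoef(ads, cost)[0, 1], 4))

def r_squared(x_cols, y):
    # Fit y on an intercept plus the given columns and return R-squared.
    X = np.column_stack([np.ones_like(y)] + x_cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

print("R-sq, ads only     :", round(r_squared([ads], sales), 3))
print("R-sq, cost only    :", round(r_squared([cost], sales), 3))
print("R-sq, ads and cost :", round(r_squared([ads, cost], sales), 3))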
Exercises
Q1 Edith Pratt is a busy executive in a nationwide trucking company. Edith is late for a meeting because she has been unable to locate the multiple regression output that an associate produced for her. If the total regression was significant at the 0.05 level, then she wanted to use the computer output as evidence to support some of her ideas at the meeting. The associate, however, is sick today and Edith has been unable to locate his work. As a matter of fact, all the information she possesses concerning the multiple regression is a piece of scrap paper with the following on it:
Regression for E. Pratt
SSR    872.4, with      df
SSE          , with 17 df
SST   1023.6, with 24 df
Because the scrap paper doesn't even have a complete set of numbers on it, Edith has concluded that it must be useless. You, however, should know better. Should she go directly to the meeting or continue looking for the computer output?
Q2 A New England-based commuter airline has taken a survey of its 15 terminals and has obtained the following data for the month of February, where
SALES = total revenue based on number of tickets sold
PROMOT = amount spent on promoting the airline in the area (in thousands of dollars)
COMP = number of competing airlines at that terminal
FREE = the percentage of passengers who flew free (for various reasons)
Sales ($)   Promot ($)   Comp   Free
 79.3        2.5          10     3
200.1        5.5           8     6
163.2        6.0          12     9
200.1        7.9           7    16
146.0        5.2           8    15
177.7        7.6          12     9
 30.9        2.0          12     8
291.9        9.0           5    10
160.0        4.0           8     4
339.4        9.6           5    16
159.6        5.5          11     7
 86.3        3.0          12     6
237.5        6.0           6    10
107.2        5.0          10     4
155.0        3.5          10     4
a. Use the following Minitab output to determine the best-fitting regression equation for the airline:
The regression equation is
SALES = 172 + 25.9 PROMOT - 13.2 COMP - 3.04 FREE
Predictor   Coef      Stdev    T-ratio   P
Constant    172.34    51.38     3.35     0.006
PROMOT       25.950    4.877    5.32     0.000
COMP        -13.238    3.686   -3.59     0.004
FREE         -3.041    2.342   -1.30     0.221
b. Do the passengers who fly free cause sales to decrease significantly?
c. Does an increase in promotions by $1,000 change sales by $28,000, or is the change significantly different from $28,000? State and test appropriate hypotheses. Use α = 0.10.
d. Give a 90 percent confidence interval for the slope coefficient of COMP.
LESSON 33:
APPLICATIONS OF REGRESSION ANALYSIS IN RESEARCH

Mark Lowtown publishes the Mosquito Junction Enquirer and is having difficulty predicting the amount of newsprint needed each day. He has randomly selected 27 days over the past year and recorded the following information:
POUNDS = pounds of newsprint for that day's newspaper
CLASSIFIED = number of classified advertisements
DISPLAY = number of display advertisements
FULLPAGE = number of full-page advertisements
Using Minitab to regress POUNDS on the other three variables, Mark got the output that follows.
Predictor     Coef      Stdev    T-ratio   P
Constant      1072.95   872.43   1.23      0.232
CLASSIFIED       0.251    0.126   1.99     0.060
DISPLAY          1.250    0.884   1.41     0.172
FULLPAGE       250.66    67.92    3.69     0.001
a. Mark had always felt that each display advertisement used at least 3 pounds of newsprint. Does this regression give him significant reason to doubt this belief at the 5 percent level?
b. Similarly, Mark had always felt that each classified advertisement used roughly half a pound of newsprint. Does he now have significant reason to doubt this belief at the 5 percent level?
c. Mark sells full-page advertising space to the local merchants for $30 per page. Should he consider adjusting his rates if newsprint costs him $0.09 per pound? Assume other costs are negligible. State explicit hypotheses and an explicit conclusion. (Hint: holding all else constant, each additional full-page ad uses 250.66 pounds of paper × $0.09 per pound = $22.56 of cost. Breakeven is at 333.333 pounds. Why? Thus if the slope coefficient for FULLPAGE is significantly above 333.333, Mark is not making a profit and his rates should be changed.)
Q3 The following additional output was provided by Minitab when Bill ran the multiple regression:
Analysis of Variance
SOURCE       DF    SS        MS       F   P
Regression    4    3134.42   783.60
Error         7     951.25   135.89
Total        11    4085.67
a. What is the observed value of F?
b. At a significance level of 0.05, what is the appropriate critical value of F to use in determining whether the regression as a whole is significant?
c. Based on your answers to (a) and (b), is the regression significant as a whole?

Modelling Techniques
Given a dependent variable we usually have a group of potential explanatory variables, and hence different possible combinations of explanatory variables, or different regression equations. Each regression equation is called a model. Modelling techniques are the process by which we include different explanatory variables and check the appropriateness of the regression model. There are many different techniques; two commonly used ones are:

Dummy Variable Technique for Qualitative Data
So far we have used only numerical data, but frequently we have a variable which is categorical or qualitative; examples would be countries, regions, or gender (male vs female).
The most important thing for any regression is to look at the residuals. If the regression includes all relevant variables, these residuals should be random. If the residuals show any non-random pattern, this indicates that there is a systematic influence on the dependent variable which should actually have been included in the regression equation. This is shown in the residual listing below: we see that the first 5 residuals are positive, i.e. y - ŷ > 0.
Three of the last four residuals, however, are negative, i.e. the regression line falls above those points. This suggests that there is a gender factor at work in our example. How do we incorporate gender into our regression model? We do this by a device called a dummy variable. For the five points representing male salesmen this variable has the value 0; for the four female salespersons it equals 1. Thus we now fit a regression equation of the form:
ŷ = a + b1x1 + b2x2
where the variable x1 = 0 for men and 1 for women, and x2 is the number of months employed. This effectively amounts to estimating two different equations, one for men and one for women, i.e.
Salesmen:     ŷ = a + b1(0) + b2x2 = a + b2x2
Saleswomen:   ŷ = a + b1(1) + b2x2 = (a + b1) + b2x2
For salesmen and saleswomen with the same length of employment we predict a base salary difference of b1 dollars.
We now test for the significance of the b1 coefficient, i.e. is there actually a lower salary being paid to women, even with the same length of service? Again, our hypothesized population relationship is:
Y = A + B1X1 + B2X2
If there is discrimination, B1 should be negative. We carry out the following test of significance:
H0: B1 = 0
Ha: B1 < 0
The results for our hypothesis are given in the output below. The coefficient for gender, b1, has a t ratio of -3.31, significant at the .016 level (two-tailed); for our one-tailed test this corresponds to a p of about .008, which is less than α = .01, so we reject the hypothesis that there is no discrimination at the 1% level of significance. Further, we note that the inclusion of the gender variable makes months employed even more significant as an explanatory variable. We can also look at the fresh output of the residuals, and we see that they now have an essentially random pattern.
To see how we arrived at this point, consider the scatter diagram of base salary against months employed. It clearly shows that base salary increases with length of service, but if you try to "eyeball" the regression line, you will note that the points for the salesmen tend to be above it and the points for the saleswomen tend to be below it.
The output below gives the regression of base salary on months employed. From that output, we see that months employed is a very highly significant explanatory variable for base salary. Also, r² = 92.6 percent, indicating that months employed explains about 93 percent of the variation in base salary. The output also contains something we have not seen before: a table of residuals. For each data point, the residual is just Y - Ŷ, which we recognize as the error in the fit of the regression line at that point. In the residual table, FITS1 are the fitted values and RESI1 are the residuals.
Perhaps the most important part of analyzing a regression output is looking at the residuals. If the regression includes all the relevant explanatory factors, these residuals ought to be random. Looking at this another way, if the residuals show any non-random patterns, this indicates that there is something systematic going on that we have failed to take into account. So we look for patterns in the residuals; or, to put it somewhat more picturesquely, we "squeeze the residuals until they talk."
As we look at the residuals, we note that the first five are positive. So for the salesmen we have Y - Ŷ > 0, or Y > Ŷ; that is, the regression line falls below these five data points. Three of the last four residuals are negative, and thus for the saleswomen we have Y - Ŷ < 0, or Y < Ŷ, so the regression line lies above three of the four data points. This confirms the observation we made when we looked at the scatter diagram.

Regression Analysis
The regression equation is
SALARY = 5.81 + 0.233 MONTHS
Predictor   Coef      Stdev     t-ratio   P
Constant    5.8093    0.4038    14.39     0.000
MONTHS      0.23320   0.02492    9.36     0.000
s = 0.5494   R-sq = 92.6%
Analysis of Variance
SOURCE       DF   SS       MS
Regression    1   26.443   26.443
Error         7    2.113    0.302
Total         8   28.556

ROW   SALARY   FITS1     RESI1
1      7.5      7.2085    0.291499
2      8.6      8.1413    0.458684
3      9.1      8.6077    0.492276
4     10.3     10.0069    0.293054
5     13.0     12.8054    0.194607
6      6.2      6.9753   -0.775297
7      8.7      8.8409   -0.140928
8      9.4      9.3073    0.092664
9      9.8     10.7066   -0.906558

Now let us review how we handled the qualitative variable in this problem. We set up a dummy variable, which we gave the value 0 for the men and the value 1 for the women. Then the coefficient of the dummy variable can be interpreted as the difference between a woman's base salary and the base salary for a man. Suppose we had set the dummy variable to 0 for women and 1 for men. Then its coefficient would be the difference between a man's base salary and the base salary for a woman. Can you guess what the regression would have been in this case? It shouldn't surprise you to learn that it would have been
Y = 5.4595 + 0.22707 X1 + 0.7890 X2
The choice of which category is given the value 0 and which the value 1 is totally arbitrary and affects only the sign, not the numerical value, of the coefficient of the dummy variable.
Our example had only one qualitative variable (gender), and that variable had only two possible categories (male and female). Although we won't pursue the details here, dummy-variable techniques can also be used in problems with several qualitative variables, and those variables can have more than two possible categories.
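A minimal sketch of the dummy-variable device just described, using wholly invented salary and months-of-service figures (the lesson's own data set is not reproduced in the text above); gender is coded 0 for men and 1 for women.

import numpy as np

months = np.array([20, 25, 27, 35, 47, 25, 30, 33, 40], dtype=float)   # invented
gender = np.array([ 0,  0,  0,  0,  0,  1,  1,  1,  1], dtype=float)   # 5 men, 4 women
salary = np.array([7.5, 8.6, 9.1, 10.3, 13.0, 7.8, 8.7, 9.2, 10.4])    # invented, in $000s

# Regression plane y-hat = a + b1*gender + b2*months fitted by least squares
X = np.column_stack([np.ones_like(salary), gender, months])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)
a, b1, b2 = coef

# b1 estimates the base-salary difference between a woman and a man with the
# same length of service; the lesson tests H0: B1 = 0 against Ha: B1 < 0.
print(round(a, 3), round(b1, 3), round(b2, 3))

Reversing the coding (1 for men, 0 for women) would change only the sign of b1 and shift the intercept, exactly as the lesson notes.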
Exercises
SC 13-4 Edith Pratt is a busy executive in a nationwide trucking company. Edith is late for a meeting because she has been unable to locate the multiple regression output that an associate produced for her. If the total regression was significant at the 0.05 level, then she wanted to use the computer output as evidence to support some of her ideas at the meeting. The associate, however, is sick today and Edith has been unable to locate his work. As a matter of fact, all the information she possesses concerning the multiple regression is a piece of scrap paper with the following on it:
Regression for E. Pratt
SSR    872.4, with      df
SSE          , with 17 df
SST   1023.6, with 24 df
Because the scrap paper doesn't even have a complete set of numbers on it, Edith has concluded that it must be useless. You, however, should know better. Should she go directly to the meeting or continue looking for the computer output?
SC 13-5 A New England-based commuter airline has taken a survey of its 15 terminals and has obtained the following data for the month of February, where
SALES = total revenue based on number of tickets sold
PROMOT = amount spent on promoting the airline in the area (in thousands of dollars)
COMP = number of competing airlines at that terminal
FREE = the percentage of passengers who flew free (for various reasons)
Sales ($)   Promot ($)   Comp   Free
 79.3        2.5          10     3
200.1        5.5           8     6
163.2        6.0          12     9
200.1        7.9           7    16
146.0        5.2           8    15
177.7        7.6          12     9
 30.9        2.0          12     8
291.9        9.0           5    10
160.0        4.0           8     4
339.4        9.6           5    16
159.6        5.5          11     7
 86.3        3.0          12     6
237.5        6.0           6    10
107.2        5.0          10     4
155.0        3.5          10     4
a. Use the following Minitab output to determine the best-fitting regression equation for the airline:
The regression equation is
SALES = 172 + 25.9 PROMOT - 13.2 COMP - 3.04 FREE
Predictor   Coef      Stdev    T-ratio   P
Constant    172.34    51.38     3.35     0.006
PROMOT       25.950    4.877    5.32     0.000
COMP        -13.238    3.686   -3.59     0.004
FREE         -3.041    2.342   -1.30     0.221
b. Do the passengers who fly free cause sales to decrease significantly?
c. Does an increase in promotions by $1,000 change sales by $28,000, or is the change significantly different from $28,000? State and test appropriate hypotheses. Use α = 0.10.
d. Give a 90 percent confidence interval for the slope coefficient of COMP.
LESSON 34:
REGRESSION ANALYSIS USING SPSS PACKAGE

13-22 Mark Lowtown publishes the Mosquito Junction Enquirer and is having difficulty predicting the amount of newsprint needed each day. He has randomly selected 27 days over the past year and recorded the following information:
POUNDS = pounds of newsprint for that day's newspaper
CLASSIFIED = number of classified advertisements
DISPLAY = number of display advertisements
FULLPAGE = number of full-page advertisements
Using Minitab to regress POUNDS on the other three variables, Mark got the output that follows.
Predictor     Coef      Stdev    T-ratio   P
Constant      1072.95   872.43   1.23      0.232
CLASSIFIED       0.251    0.126   1.99     0.060
DISPLAY          1.250    0.884   1.41     0.172
FULLPAGE       250.66    67.92    3.69     0.001
a. Mark had always felt that each display advertisement used at least 3 pounds of newsprint. Does this regression give him significant reason to doubt this belief at the 5 percent level?
b. Similarly, Mark had always felt that each classified advertisement used roughly half a pound of newsprint. Does he now have significant reason to doubt this belief at the 5 percent level?
c. Mark sells full-page advertising space to the local merchants for $30 per page. Should he consider adjusting his rates if newsprint costs him $0.09 per pound? Assume other costs are negligible. State explicit hypotheses and an explicit conclusion. (Hint: holding all else constant, each additional full-page ad uses 250.66 pounds of paper × $0.09 per pound = $22.56 of cost. Breakeven is at 333.333 pounds. Why? Thus if the slope coefficient for FULLPAGE is significantly above 333.333, Mark is not making a profit and his rates should be changed.)
13-23 Refer to Exercise 13-18. The following additional output was provided by Minitab when Bill ran the multiple regression:
Analysis of Variance
SOURCE       DF    SS        MS       F   P
Regression    4    3134.42   783.60
Error         7     951.25   135.89
Total        11    4085.67
a. What is the observed value of F?
b. At a significance level of 0.05, what is the appropriate critical value of F to use in determining whether the regression as a whole is significant?
c. Based on your answers to (a) and (b), is the regression significant as a whole?
13-23 Refer to Exercise 13-19. At a significance level of 0.01, is DISTANCE a significant explanatory variable for SALES?
13-24 Refer to Exercise 13-19. The following additional output was provided by Minitab when the multiple regression was run:
Analysis of Variance
SOURCE       DF    SS        MS       F        P
Regression    4    2861495   715374   102.39   0.000
Error        18     125761     6896.7
Total        22    2987256
At the 0.05 level of significance, is the regression significant as a whole?
13-27 Henry Lander is director of production for the Alecos Corporation of Caracas, Venezuela. Henry has asked you to help him determine a formula for predicting absenteeism in a meat-packing facility. He hypothesizes that percentage absenteeism can be explained by average daily temperature. Data are gathered for several months, you run the simple regression, and you find that temperature explains 66 percent of the variation in absenteeism. But Henry is not convinced that this is a satisfactory predictor. He suggests that daily rainfall may also have something to do with absenteeism. So you gather data, run a regression of absenteeism on rainfall, and …

We Also Include Some Material from an Internet Web Site on Regression Models
Introductory Statistics: Concepts, Models, and Applications
David W. Stockburger

Regression Models
Regression models are used to predict one variable from one or more other variables. Regression models provide the scientist with a powerful tool, allowing predictions about past, present, or future events to be made with information about past or present events. The scientist employs these models either because it is less expensive in terms of time and/or money to collect the information needed to make the predictions than to collect the information about the event itself, or, more likely, because the event to be predicted will occur at some future time. Before describing the details of the modeling process, however, some examples of the use of regression models will be presented.

Example Uses of Regression Models

Selecting Colleges
A high school student discusses plans to attend college with a guidance counselor. The student has a 2.04 grade point average out of a 4.00 maximum and mediocre to poor scores on the ACT. He asks about attending Harvard. The counselor tells him he would probably not do well at that institution, predicting he would have a grade point average of 0.64 at the end of four years at Harvard.
The student inquires about the necessary grade point average to graduate and, when told that it is 2.25, decides that maybe another institution might be more appropriate in case he becomes involved in some "heavy duty partying."
When asked about the large state university, the counselor predicts that he might succeed, but chances for success are not great, with a predicted grade point average of 1.23. A regional institution is then proposed, with a predicted grade point average of 1.54. Deciding that is still not high enough to graduate, the student decides to attend a local community college, graduates with an associate's degree, and makes a fortune selling real estate.
If the counselor was using a regression model to make the predictions, he or she would know that this particular student would not necessarily make a grade point average of exactly 0.64 at Harvard, 1.23 at the state university, and 1.54 at the regional university. These values are just "best guesses." It may be that this particular student was completely bored in high school, didn't take the standardized tests seriously, would become challenged in college, and would succeed at Harvard. The selection committee at Harvard, however, when faced with a choice between a student with a predicted grade point of 3.24 and one with 0.64, would most likely make the rational decision and select the more promising student.

Pregnancy
A woman in the first trimester of pregnancy has a great deal of concern about the environmental factors surrounding her pregnancy and asks her doctor what impact they might have on her unborn child. The doctor makes a "point estimate" based on a regression model that the child will have an IQ of 75. It is highly unlikely that her child will have an IQ of exactly 75, as there is always error in the regression procedure. Error may be incorporated into the information given the woman in the form of an "interval estimate." For example, it would make a great deal of difference if the doctor were to say that the child had a ninety-five percent chance of having an IQ between 70 and 80, in contrast to a ninety-five percent chance of an IQ between 50 and 100. The concept of error in prediction will become an important part of the discussion of regression models.
It is also worth pointing out that regression models do not make decisions for people. Regression models are a source of information about the world. In order to use them wisely, it is important to understand how they work.

Selection and Placement During the World Wars
Technology helped the United States and her allies to win the first and second world wars. One usually thinks of the atomic bomb, radar, bombsights, better-designed aircraft, etc. when this statement is made. Less well known were the contributions of psychologists and associated scientists to the development of the tests and prediction models used for the selection and placement of men and women in the armed forces. The problem was one of both selection, who is drafted and who is rejected, and placement, of those selected, who will cook and who will fight. The army that takes its best and brightest men and women and places them in the front lines digging trenches is less likely to win the war than the army that places these men and women in positions of leadership.
It costs a great deal of money and time to train a person to fly an airplane. Every time one crashes, the air force has lost a plane, the time and effort to train the pilot, and, not to mention, the life of a person. For this reason it was, and still is, vital that the best possible selection and prediction tools be used for personnel decisions.

Manufacturing Widgets
A new plant to manufacture widgets is being located in a nearby community. The plant personnel officer advertises the employment opportunity and the next morning has 10,000 people waiting to apply for the 1,000 available jobs. It is important to select the 1,000 people who will make the best employees, because training takes time and money and firing is difficult and bad for community relations. In order to provide information to help make the correct decisions, the personnel officer employs a regression model. None of what follows will make much sense if the procedure for constructing a regression model is not understood, so the procedure will now be discussed.

Procedure for Construction of a Regression Model
In order to construct a regression model, both the information which is going to be used to make the prediction and the information which is to be predicted must be obtained from a sample of objects or individuals. The relationship between the two pieces of information is then modeled with a linear transformation. Then, in the future, only the first piece of information is necessary, and the regression model is used to transform it into the prediction. In other words, it is necessary to have information on both variables before the model can be constructed.
For example, the personnel officer of the widget manufacturing company might give all applicants a test and predict the number of widgets made per hour on the basis of the test score. In order to create a regression model, the personnel officer would first have to give the test to a sample of applicants and hire all of them. Later, when the number of widgets made per hour had stabilized, the personnel officer could create a prediction model to predict the widget production of future applicants. All future applicants would be given the test, and hiring decisions would be based on test performance.
bomb, radar, bombsights, better designed aircraft, etc when this A notational scheme is now necessary to describe the procedure:
statement is made. Less well known were the contributions of Xi is the variable used to predict, and is sometimes called the
psychologists and associated scientists to the development of independent variable. In the case of the widget manufacturing
test and prediction models used for selection and placement of example, it would be the test score.
men and women in the armed forces. Y i is the observed value of the predicted variable, and is
During these wars, the United States had thousands of men sometimes called the dependent variable. In the example, it
and women enlisting or being drafted into the military. These would be the number of widgets produced per hour by that
individuals differed in their ability to perform physical and individual.

© Copy Right: Rai University


Y'i is the predicted value of the dependent variable. In the example, it would be the predicted number of widgets per hour for that individual.
The goal in the regression procedure is to create a model where the predicted and observed values of the variable to be predicted are as similar as possible. For example, in the widget manufacturing situation, it is desired that the predicted number of widgets made per hour be as similar to the observed values as possible. The more similar these two values, the better the model. The next section presents a method of measuring the similarity of the predicted and observed values of the predicted variable.
The Least-squares Criterion For Goodness-of-fit
In order to develop a measure of how well a model predicts the data, it is valuable to present an analogy of how to evaluate predictions. Suppose there were two interviewers, Mr. A and Ms. B, who separately interviewed each applicant for the widget manufacturing job for ten minutes. At the end of that time, the interviewer had to make a prediction about how many widgets that applicant would produce two months later. All of the applicants interviewed were hired, regardless of the predictions, and at the end of the two months' trial period one interviewer, the best one, was to be retained and promoted; the other was to be fired. The purpose of the following is to develop a measure of goodness-of-fit, or how well each interviewer predicted.
The notational scheme for the table is as follows:
Yi is the observed or actual number of widgets made per hour
Y'i is the predicted number of widgets
Suppose the data for the five applicants were as follows:

             Interviewer
Observed     Mr. A     Ms. B
  Yi          Y'i       Y'i
  23           38        21
  18           34        15
  35           16        32
  10           10         8
  27           14        23

Obviously neither interviewer was impressed with the fourth applicant, for good reason. A casual comparison of the two columns of predictions with the observed values leads one to believe that interviewer B made the better predictions. A procedure is desired which will provide a measure, or single number, of how well each interviewer performed.
The first step is to find how much each interviewer missed the predicted value for each applicant. This is done by finding the difference between the predicted and observed values for each applicant for each interviewer. These differences are called residuals. If the column of differences between the observed and predicted values is summed,

Sum of residuals = Σ(Yi - Y'i)

then it would appear that interviewer A is the better at prediction, because he had a smaller sum of deviations, 1, than interviewer B, with a sum of 14. This goes against common sense. In this case large positive deviations cancel out large negative deviations, leaving what appears to be an almost perfect prediction for interviewer A, but that is not the case.

             Interviewer
Observed     Mr. A     Ms. B     Mr. A       Ms. B
  Yi          Y'i       Y'i      Yi - Y'i    Yi - Y'i
  23           38        21        -15          2
  18           34        15        -16          3
  35           16        32         19          3
  10           10         8          0          2
  27           14        23         13          4
  Sum                                 1         14

In order to avoid the preceding problem, it would be possible to ignore the signs of the differences and then sum, that is, take the sum of the absolute values. This would work, but for mathematical reasons the sign is eliminated by squaring the differences. In the example, this procedure would yield:

             Interviewer
Observed     Mr. A     Ms. B     Mr. A       Ms. B      Mr. A         Ms. B
  Yi          Y'i       Y'i      Yi - Y'i    Yi - Y'i   (Yi - Y'i)²   (Yi - Y'i)²
  23           38        21        -15          2          225            4
  18           34        15        -16          3          256            9
  35           16        32         19          3          361            9
  10           10         8          0          2            0            4
  27           14        23         13          4          169           16
  Sum                                 1         14         1011           42

Summing the squared differences yields the desired measure of goodness-of-fit. In this case the smaller the number, the closer the predicted values are to the observed values. This is expressed in the following mathematical equation:

Sum of squared residuals = Σ(Yi - Y'i)²

The prediction which minimizes this sum is said to meet the least-squares criterion. Interviewer B in the above example meets this criterion in a comparison between the two interviewers, with values of 42 and 1011, respectively, and would be promoted. Interviewer A would receive a pink slip.
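As a small illustration (not part of the original text), the goodness-of-fit comparison just described can be reproduced with a few lines of Python; the observed and predicted values are those from the interviewer tables above.

# A minimal sketch: the sum of squared residuals for each interviewer.
observed    = [23, 18, 35, 10, 27]   # Yi, widgets per hour
predicted_a = [38, 34, 16, 10, 14]   # Mr. A's predictions (Y'i)
predicted_b = [21, 15, 32,  8, 23]   # Ms. B's predictions (Y'i)

def sum_sq_residuals(y, y_hat):
    """Sum of (Yi - Y'i)**2, the least-squares measure of fit."""
    return sum((o - p) ** 2 for o, p in zip(y, y_hat))

print(sum_sq_residuals(observed, predicted_a))   # 1011
print(sum_sq_residuals(observed, predicted_b))   # 42

The smaller value for Ms. B is exactly the least-squares criterion at work.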
The Regression Model
The situation using the regression model is analogous to that of the interviewers, except that instead of using interviewers, predictions are made by performing a linear transformation of the predictor variable. Rather than an interviewer's judgment, the predicted value is obtained by a linear transformation of the score. The prediction takes the form

Y'i = a + b*Xi

where a and b are parameters in the regression model.
In the above example, suppose that, rather than being interviewed, each applicant took a form-board test. A form-board is a board with holes cut out in various shapes: square, round, triangular, etc. The goal is to put the right pegs in the right holes as fast as possible. The saying "square peg in a round hole" came from this test, as the test has been around for a long time. The score for the test is the number of seconds it takes to complete putting all the pegs in the right holes. The data was collected as follows:

Form-Board Test    Widgets/hr
      Xi               Yi
      13               23
      20               18
      10               35
      33               10
      15               27

Because the two parameters of the regression model, a and b, can take on any real value, there are an infinite number of possible models, analogous to having an infinite number of possible interviewers. The goal of regression is to select the parameters of the model so that the least-squares criterion is met, or, in other words, to minimize the sum of the squared deviations. The procedure discussed in the last chapter, that of transforming the scale of X to the scale of Y such that both have the same mean and standard deviation, will not work in this case, because of the prediction goal.
A number of possible models will now be examined, where:
Xi is the number of seconds to complete the form-board task
Yi is the number of widgets made per hour two months later
Y'i is the predicted number of widgets
For the first model, let a=10 and b=1, attempting to predict the first score perfectly. In this case the regression model becomes

Y'i = 10 + 1*Xi

The first score (X1=13) would be transformed into a predicted score of Y'1 = 10 + (1*13) = 23. The second predicted score, where X2=20, would be Y'2 = 10 + (1*20) = 30. The same procedure is then applied to the last three scores, resulting in predictions of 20, 43, and 25, respectively.

Form-Board    Widgets/hr    Predicted      Residuals    Squared Residuals
 Xi (obs)      Yi (obs)     Y'i=a+bXi      (Yi-Y'i)     (Yi-Y'i)²
    13            23            23             0               0
    20            18            30           -12             144
    10            35            20            15             225
    33            10            43           -33            1089
    15            27            25             2               4
                                        Σ(Yi-Y'i)²           1462

It can be seen that the model does a good job of prediction for the first and last applicants, but the middle applicants are poorly predicted. Because it is desired that the model work for all applicants, some other values for the parameters must be tried. The selection of the parameters for the second model is based on the observation that the longer it takes to put the form-board together, the fewer the number of widgets made. When the tendency is for one variable to increase while the other decreases, the relationship between the variables is said to be inverse. The mathematician knows that in order to model an inverse relationship, a negative value of b must be used in the regression model. In this case the parameters a=36 and b=-1 will be used.

   Xi    Yi    Y'i=a+bXi    (Yi-Y'i)    (Yi-Y'i)²
   13    23        23           0            0
   20    18        16           2            4
   10    35        26           9           81
   33    10         3           7           49
   15    27        21           6           36
                          Σ(Yi-Y'i)²        170

This model fits the data much better than did the first model. Fairly large deviations are noted for the third applicant, which might be reduced by increasing the value of the additive component of the transformation, a. Thus a model with a=41 and b=-1 will now be tried.

   Xi    Yi    Y'i=a+bXi    (Yi-Y'i)    (Yi-Y'i)²
   13    23        28          -5           25
   20    18        21          -3            9
   10    35        31           4           16
   33    10         8           2            4
   15    27        26           1            1
                          Σ(Yi-Y'i)²         55

This makes the predicted values closer to the observed values on the whole, as measured by the sum of squared deviations
(residuals). Perhaps a decrease in the value of b would make the predictions better. Hence a model where a=32 and b=-0.5 will be tried.

   Xi    Yi    Y'i=a+bXi    (Yi-Y'i)    (Yi-Y'i)²
   13    23      25.5         -2.5         6.25
   20    18      22           -4          16
   10    35      27            8          64
   33    10      17.5         -7.5        56.25
   15    27      24.5          3.5        12.25
                          Σ(Yi-Y'i)²      142.5

Since the attempt increased the sum of the squared deviations, it obviously was not a good idea.
The point is soon reached when the question "When do we know when to stop?" must be asked. Using this procedure, the answer must necessarily be "never," because it is always possible to change the values of the two parameters slightly and obtain a better estimate, one which makes the sum of squared deviations smaller. The following table summarizes what is known about the problem thus far.

     a        b       Σ(Yi-Y'i)²
    10        1          1462
    36       -1           170
    41       -1            55
    32      -0.5          142.5

With four attempts at selecting parameters for a model, it appears that a=41 and b=-1 give the best fit (the smallest sum of squared deviations) found to this point in time. If the same search procedure were going to be continued, perhaps the value of a could be adjusted when b=-2 and b=-1.5, and so forth. (In the interactive version of this example, scroll bars allow the student to adjust the values of "a" and "b" and view the resulting table of squared residuals.) Unless the sum of squared deviations is equal to zero, which is seldom possible in the real world, we will never know if it is the best possible model.
Rather than throwing their hands up in despair, applied statisticians approached the mathematician with the problem and asked if a mathematical solution could be found. This is the topic of the next section. If the student is simply willing to "believe," it may be skimmed without any great loss of the ability to "do" a linear regression problem.
Solving for Parameter Values which Satisfy the Least-squares Criterion
The problem is presented to the mathematician as follows: "The values of a and b in the linear model Y'i = a + bXi are to be found which minimize the algebraic expression

Σ(Yi - Y'i)² = Σ(Yi - a - bXi)²."

The mathematician begins by expanding and simplifying this expression. Now comes the hard part that requires knowledge of calculus. At this point even the mathematically sophisticated student will be asked to "believe." What the mathematician does is take the first-order partial derivative of the last form of the preceding expression with respect to b, set it equal to zero, and solve for the value of b. This is the method that mathematicians use to solve for minimum and maximum values. Completing this task, the result becomes:

b = (N*ΣXiYi - ΣXi*ΣYi) / (N*ΣXi² - (ΣXi)²)

Solving for the a parameter is somewhat easier. Using a similar procedure to find the value of a yields:

a = (ΣYi - b*ΣXi) / N

The "optimal" values for a and b can be found by doing the appropriate summations, plugging them into the equations, and solving for the results. The appropriate summations are presented below:

      Xi     Yi      Xi²     XiYi
      13     23      169      299
      20     18      400      360
      10     35      100      350
      33     10     1089      330
      15     27      225      405
SUM   91    113     1983     1744

The result of these calculations is a regression model of the form:

Y'i = 40.01 - 0.957*Xi
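As an illustrative aside (not part of the original text), the following Python sketch applies these standard least-squares formulas to the form-board data; it reproduces the summations, the slope and the intercept shown above.

# A minimal sketch of the least-squares solution for a and b.
x = [13, 20, 10, 33, 15]   # seconds on the form-board test
y = [23, 18, 35, 10, 27]   # widgets made per hour

n = len(x)
sum_x, sum_y = sum(x), sum(y)                    # 91, 113
sum_x2 = sum(xi * xi for xi in x)                # 1983
sum_xy = sum(xi * yi for xi, yi in zip(x, y))    # 1744

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)   # about -0.957
a = (sum_y - b * sum_x) / n                                     # about 40.01

sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))     # about 54, smaller than 55
print(a, b, sse)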
This procedure results in an "optimal" model. That is, no other values of a and b will yield a smaller sum of squared deviations. The mathematician is willing to bet the family farm on this result. A demonstration of this fact will be done for this problem shortly.
In any case, both the number of pairs of numbers (five) and the integer nature of the numbers made this problem "easy." Even so, this "easy" problem resulted in considerable computational effort. Imagine what a "difficult" problem with hundreds of pairs of decimal numbers would be like. That is why a bivariate statistics mode is available on many calculators.
Using Statistical Calculators to Solve for Regression Parameters
Most statistical calculators require a number of steps to solve regression problems. The specific keystrokes required for the steps vary for the different makes and models of calculators. Please consult the calculator manual for details.
Step 1: Put the calculator in "bivariate statistics mode." This step is not necessary on some calculators.
Step 2: Clear the statistical registers.
Step 3: Enter the pairs of numbers. Some calculators verify the number of numbers entered at any point in time on the display.
Step 4: Find the values of various statistics, including:
• The mean and standard deviation of both X and Y
• The correlation coefficient (r)
• The parameter estimates of the regression model: the slope (b) and the intercept (a)
The results of these calculations for the example problem are used in the demonstration below. The discussion of the correlation coefficient is left for the next chapter. All that is important at the present time is the ability to calculate its value in the process of performing a regression analysis. The value of the correlation coefficient will be used in a later formula in this chapter.
Demonstration of "Optimal" Parameter Estimates
Using either the algebraic expressions developed by the mathematician or the calculator results, the "optimal" regression model which results is:

Y'i = 40.01 - 0.957*Xi

Applying procedures identical to those used on the earlier "non-optimal" regression models, the residuals (deviations of observed and predicted values) are found, squared, and summed to find the sum of squared deviations.

   Xi    Yi    Y'i=a+bXi    (Yi-Y'i)    (Yi-Y'i)²
   13    23      27.58        -4.57       20.88
   20    18      20.88        -2.88        8.28
   10    35      30.44         4.55       20.76
   33    10       8.44         1.56        2.42
   15    27      25.66         1.34        1.80
                          Σ(Yi-Y'i)²      54.14

Note that the sum of squared deviations (Σ(Yi-Y'i)² = 54.14) is smaller than the previous low of 55.0, but not by much. The mathematician is willing to guarantee that this is the smallest sum of squared deviations that can be obtained using any possible values of a and b.
The bottom line is that the equation

Y'i = 40.01 - 0.957*Xi

will be used to predict the number of widgets per hour that a potential employee will make, given the score that he or she has made on the form-board test. The prediction will not be perfect, but it will be the best available, given the data and the form of the model.
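As a small, purely illustrative sketch (not part of the original text), this is how the fitted equation might be applied to future applicants; the test scores below are hypothetical.

def predicted_widgets(seconds):
    """Point estimate of widgets per hour from a form-board time in seconds."""
    return 40.01 - 0.957 * seconds

for score in (12, 18, 30):                       # hypothetical applicant scores
    print(score, round(predicted_widgets(score), 2))   # e.g. 18 -> 22.78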
Scatter Plots and the Regression Line
The preceding has been an algebraic presentation of the logic underlying the regression procedure. Since there is a one-to-one correspondence between algebra and geometry, and since some students have an easier time understanding a visual presentation of an algebraic procedure, a visual presentation will now be attempted. The data will be represented as points on a scatter plot, while the regression equation will be represented by a straight line, called the regression line.
A scatter plot or scattergram is a visual representation of the relationship between the X and Y variables. First, the X and Y axes are drawn with equally spaced markings to include all values of that variable that occur in the sample. In the example problem, X, the seconds to put the form-board together, would have to range between 10 and 33, the lowest and highest values that occur in the sample. The corresponding range for the Y variable, the number of widgets made per hour, is from 10 to 35. If the axes do not start at zero, as in the present case where they both start at 10, a small space is left before the line markings to indicate this fact.
The paired or bivariate (two variable, X, Y) data are represented as points on this graph. Each point is plotted by finding the intersection of the X and Y scores for that pair of values. For example, the first point would be located at the intersection of X=13 and Y=23; the remaining four points are plotted in the same way.
The regression line is drawn by plotting the X and Y' values, that is, the five X and Y' pairs from the regression table of observed and predicted values. The first point would be plotted as (13, 27.58), the second point as (20, 20.88), and so on. Note that all these points fall on a straight line. If every possible Y' were plotted for every possible X, then a straight line would be formed. The equation Y' = a + bX defines a straight line in a two-dimensional space. The easiest way to draw the line is to plot the two extreme points, that is, the points corresponding to the smallest and largest X, and connect these points with a straightedge. Any two points would actually work, but the two extreme points give a line with the least drawing error. The a value is sometimes called the intercept and defines where the line crosses the Y-axis. This crossing does not appear very often in actual drawings, because the axes do not begin at zero, that is, there is a break in the line.
Most often the scatter plot and regression line are combined in a single figure.
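As an illustrative aside (not part of the original text), a combined scatter plot and regression line of this kind can be produced with matplotlib:

# A minimal plotting sketch for the example data.
import matplotlib.pyplot as plt

x = [13, 20, 10, 33, 15]   # form-board time (seconds)
y = [23, 18, 35, 10, 27]   # widgets per hour

plt.scatter(x, y)                                     # the five data points
xs = [min(x), max(x)]                                 # the two extreme X values
plt.plot(xs, [40.01 - 0.957 * v for v in xs])         # the regression line
plt.xlabel("Form-board time (seconds)")
plt.ylabel("Widgets per hour")
plt.show()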
The Standard Error of Estimate
The standard error of estimate is a measure of error in prediction. It is symbolized as sY.X, read as "s sub Y dot X." The notation is used to mean the standard deviation of Y given that the value of X is known. The standard error of estimate is defined by the formula

sY.X = sqrt( Σ(Yi - Y'i)² / (N - 2) )

As such it may be thought of as the average deviation of the predicted from the observed values of Y, except that the denominator is not N, but N-2, the degrees of freedom for the regression procedure. One degree of freedom is lost for each of the parameters estimated, a and b. Note that the numerator is the same as in the least-squares criterion.
The standard error of estimate is a standard-deviation type of measure. Note the similarity of the definitional formula of the standard deviation of Y,

sY = sqrt( Σ(Yi - Ȳ)² / (N - 1) ),

to the definitional formula for the standard error of estimate. Two differences appear. First, the standard error of estimate divides the sum of squared deviations by N-2 rather than N-1. Second, the standard error of estimate finds the sum of squared differences around a predicted value of Y rather than around the mean.
The similarity of the two measures may be resolved if the standard deviation of Y is conceptualized as the error around a predicted Y of Y'i = a. When the least-squares criterion is applied to this model, the optimal value of a is the mean of Y. In this case only one degree of freedom is lost, because only one parameter is estimated for the regression model.
The standard error of estimate may be calculated from the definitional formula given above. The computation is ordinarily difficult, because the entire table of differences and squared differences must be calculated; but because the numerator has already been found, the calculation for the example data is relatively easy:

sY.X = sqrt( 54.14 / 3 ) = 4.25

The calculation of the standard error of estimate is simplified by the following formula, called the computational formula for the standard error of estimate. The computation is easier because the statistical calculator computes the correlation coefficient when finding a regression line. The computational formula will always give the same result, within rounding error, as the definitional formula. It may look more complicated, but it does not require the computation of the entire table of differences between observed and predicted Y scores. The computational formula is as follows:

sY.X = sY * sqrt( (1 - r²) * (N - 1) / (N - 2) )

The computational formula is most easily and accurately applied by temporarily storing the values for sY² and r² in the calculator's memory and recalling them when needed. Using this formula to calculate the standard error of estimate with the example data produces the same result as the application of the definitional formula, within rounding error.
The standard error of estimate is a measure of error in prediction. The larger its value, the less well the regression model fits the data, and the worse the prediction.
Conditional Distributions
A conditional distribution is a distribution of a variable given a particular value of another variable. For example, a conditional distribution of the number of widgets made exists for each possible value of the number of seconds taken to put the form-board together. Conceptually, suppose that an infinite number of applicants had made the same score of 18 on the form-board test. If everyone was hired, not everyone would make the same number of widgets three months later. The distribution of scores which results would be called the conditional distribution of Y (widgets) given X (form-board). The relationship between X and Y in this case is often symbolized by Y|X. The conditional distribution of Y given that X was 18 would be symbolized as Y|X=18.
It is possible to model the conditional distribution with the normal curve. In order to create a normal curve model, it is necessary to estimate the values of the parameters of the model, μY|X and σY|X. The best estimate of μY|X is the predicted value of Y, Y', given that X equals a particular value. This is found by entering the appropriate value of X in the regression equation, Y' = a + bX. In the example, the estimate of μY|X for the conditional distribution of the number of widgets made given X=18 would be Y' = 40.01 - 0.957*18 = 22.78. This value is also called a point estimate, because it is the best guess of Y when X is a given value.
The standard error of estimate is often used as an estimate of σY|X for all the conditional distributions. This assumes that all conditional distributions have the same value for this parameter. One interpretation of the standard error of estimate, then, is as an estimate of the value of σY|X for all possible conditional distributions, or values of X.
It is somewhat difficult to visualize all the possible conditional distributions in only two dimensions, but if a hill can be visualized with its ridge running along the regression line, the vision is essentially correct. The conditional distribution is a model of the distribution of points around the regression line for a given value of X. The conditional distribution is important in this text mainly for the role it plays in computing an interval estimate.
Interval Estimates
The error in prediction may be incorporated into the information given to the client by using interval estimates rather than point estimates. A point estimate is the predicted value of Y, Y'. While the point estimate gives the best possible prediction, as defined by the least-squares criterion, the prediction is not perfect. The interval estimate presents two values, low and high, between which some percentage of the observed scores is likely to fall. For example, if a person applying for a position manufacturing widgets made a score of X=18 on the form-board test, a point estimate of 22.78 would result from the application of the regression model, and an interval estimate might be from 14.45 to 31.11. The interval computed is a 95 percent confidence interval. It could be said that 95 times out of 100 the number of widgets made per hour by an applicant making a score of 18 on the form-board test would be between 14.45 and 31.11.
The model of the conditional distribution is critical to understanding the assumptions made when calculating an interval estimate. If the conditional distribution for a value of X is known, then finding an interval estimate is reduced to a problem that has already been solved in an earlier chapter: what two scores on a normal distribution with parameters μ and σ cut off some middle percentage of the distribution? While any percentage could be found, the standard value is a 95% confidence interval.
For example, the parameter estimates of the conditional distribution for X=18 are μY|X = 22.78 and σY|X = 4.25. The two scores which cut off the middle 95% of that distribution are 14.45 and 31.11; the middle area may be found with a normal-curve area program or table. In this case, subscripts indicating a conditional distribution may be employed. Because μY|X is estimated by Y', σY|X by sY.X, and the value of z is 1.96 for a 95% confidence interval, the computational formula for the confidence interval becomes

Y' - 1.96*sY.X   to   Y' + 1.96*sY.X

which is the computational form for computing the 95% confidence interval. For example, for X=18, Y'=22.78, and sY.X=4.25, computation of the confidence interval becomes

22.78 - (1.96)(4.25) = 14.45   to   22.78 + (1.96)(4.25) = 31.11

Other sizes of confidence intervals could be computed by changing the value of z.
Interpretation of the confidence interval for a given score of X necessitates several assumptions. First, the conditional distribution for that X is a normal distribution. Second, μY|X is correctly estimated by Y', that is, the relationship between X and Y can be adequately modeled by a straight line. Third, σY|X is correctly estimated by sY.X, which means assuming that all conditional distributions have the same estimate for σY|X.
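The following short Python sketch (again, not part of the original text) retraces these calculations: the standard error of estimate from the residuals, the point estimate for X=18, and the 95 percent interval estimate.

# A minimal sketch of the point and interval estimates for X = 18.
import math

x = [13, 20, 10, 33, 15]
y = [23, 18, 35, 10, 27]
a, b = 40.01, -0.957                                  # fitted parameters from above

residual_ss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
s_yx = math.sqrt(residual_ss / (len(x) - 2))          # standard error of estimate, about 4.25

point = a + b * 18                                    # point estimate, about 22.78
low, high = point - 1.96 * s_yx, point + 1.96 * s_yx  # about 14.45 to 31.11
print(round(s_yx, 2), round(point, 2), round(low, 2), round(high, 2))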
Regression Analysis Using SPSS
The REGRESSION command is called in SPSS through its menus and dialog boxes. Selecting the appropriate options will command the program to do a simple linear regression and to create two new variables in the data editor: one with the predicted values of Y and the other with the residuals.
The output from the preceding includes the correlation coefficient and the standard error of estimate. The regression coefficients are also given in the output. The optional save command generates the two new variables in the data file.
Point to Ponder
• Regression models are powerful tools for predicting a score based on some other score.
• They involve a linear transformation of the predictor variable into the predicted variable.
• The parameters of the linear transformation are selected such that the least-squares criterion is met, resulting in an "optimal" model. The model can then be used in the future to predict either exact scores, called point estimates, or intervals of scores, called interval estimates.
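The lesson itself uses SPSS; purely as an illustrative aside (not part of the original text), the same simple regression and the two saved variables can be reproduced with numpy. The function names below are numpy's, not SPSS commands.

import numpy as np

x = np.array([13, 20, 10, 33, 15])
y = np.array([23, 18, 35, 10, 27])

b, a = np.polyfit(x, y, 1)          # slope and intercept, about -0.957 and 40.01
predicted = a + b * x               # the "predicted values" variable
residuals = y - predicted           # the "residuals" variable
r = np.corrcoef(x, y)[0, 1]         # correlation coefficient
print(a, b, r)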
LESSON 35:
FACTOR ANALYSIS

So far we have looked at techniques of multiple regression, where we essentially examine the association between a dependent variable and several independent variables. We now look at a technique which also measures association but looks at relations of interdependence. That is, we now investigate relations of interdependence where no one variable is dependent on another; in this situation all variables are treated as independent variables.
In this and the next lesson we shall be introduced to factor analysis. Factor analysis is a popular multivariate technique which measures association between variables. The technique is highly complex and makes use of sophisticated statistical methods which are beyond the scope of our course. Therefore our presentation of this technique will focus on its intuitive rationale and applications. Since most multivariate techniques are run easily by most statistical packages, our emphasis will be on providing the student with exposure to relevant applications, on interpreting computer output, and on running a factor analysis on the computer.
By the end of this lesson you should be able to
• Understand the analytical and intuitive concepts of factor analysis
• Determine the types of applications for which we can use factor analysis
• Analyze and interpret computer output generated for a factor analysis
What is Factor Analysis?
The main objective of factor analysis is to summarize a large number of variables into a smaller number of factors which represent the basic dimensions underlying the data.
Factor analysis is used to uncover the latent structure (dimensions) of a set of variables. It reduces attribute space from a larger number of variables to a smaller number of factors and as such is a "non-dependent" procedure (that is, it does not assume a dependent variable is specified).
We can best explain factor analysis with a non-technical analogy. A mother sees various bumps and shapes under a blanket at the bottom of a bed. When one shape moves toward the top of the bed, all the other bumps and shapes move toward the top also, so the mother concludes that what is under the blanket is a single thing, most likely her child. Similarly, factor analysis takes as input a number of measures and tests which are analogous to the bumps and shapes. Those that move together are considered a single thing and are labeled a factor. That is, in factor analysis the researcher is assuming that there is a "child" out there in the form of an underlying factor, and he or she takes simultaneous movement (correlation) as evidence of its existence. If correlation is spurious for some reason, this inference will be mistaken, of course, so it is important when conducting factor analysis that possible variables which might introduce spuriousness, such as anteceding causes, be included in the analysis.
Typical Problem Studied Using Factor Analysis
Factor analysis is used to study a complex product or service to identify the major characteristics considered important by consumers.
The two major uses of factor analysis are:
1. To simplify a set of data by reducing a large number of measures (which in some way may be interrelated and causing multicollinearity) for a set of respondents to a smaller, more manageable set which are not interrelated and still retain most of the original information.
2. To identify the underlying structure of the data, in which a very large number of variables may really be measuring a small number of basic characteristics or constructs of our sample.
For example, a survey may throw up between 15 and 20 attributes which a consumer considers when buying a product; however, there is a need to find out which of these are the key drivers. Factor analysis identifies latent or underlying factors from an array of seemingly important variables.
Uses of Factor Analysis
• To reduce a large number of variables to a smaller number of factors for modeling purposes, where the large number of variables precludes modeling all the measures individually. As such, factor analysis is integrated in structural equation modeling (SEM), helping create the latent variables modeled by SEM. However, factor analysis can be and is often used on a stand-alone basis for similar purposes.
• To select a subset of variables from a larger set, based on which original variables have the highest correlations with the principal component factors.
• To create a set of factors to be treated as uncorrelated variables as one approach to handling multicollinearity in such procedures as multiple regression.
• To validate a scale or index by demonstrating that its constituent items load on the same factor, and to drop proposed scale items which cross-load on more than one factor.
• To establish that multiple tests measure the same factor, thereby giving justification for administering fewer tests.
• To identify clusters of cases and/or outliers.
• To determine network groups by determining which sets of people cluster together (using Q-mode factor analysis, discussed below).
Applications
The main applications of factor analysis are in marketing research. Some of the applications are as follows:
1. Developing perceptual maps: Factor analysis is often used to determine the dimensions or criteria by which consumers evaluate brands and how each brand is seen on each dimension.
2. Determining the underlying dimensions of the data: A factor analysis of data on TV viewing indicates that there are seven different types of programmes that are independent of the network offering as perceived by the viewers: movies, adult entertainment, westerns, family entertainment, adventure plots, unrealistic events, and sin.
3. Identifying market segments and positioning of products: An example of this is a factor analysis of data on desires sought on the last vacation, taken from 1750 respondents, which revealed six benefit segments for vacationers:
• those who vacation to visit friends and relatives, and not to sightsee;
• visiting friends and relatives plus sightseeing;
• sightseeing;
• outdoor vacationing;
• resort vacationing;
• foreign vacationing.
4. Condensing or simplifying data: In a study of consumer involvement across a number of product categories, 19 items were reduced to four factors: (1) perceived product importance and perceived importance of the negative consequences of a mispurchase, (2) subjective probability of a mispurchase, (3) pleasure of owning/using the product, and (4) the value of the product as a cue to the type of person who owns it. Each of these factors was independent and there was no multicollinearity.
5. Testing of hypotheses about the structure of a data set: Confirmatory factor analysis can be used to test whether the variables in a data set come from a specified number of factors.
Basic Principles of Factor Analysis
Factor analysis is part of the multiple general linear hypothesis (MGLH) family of procedures and makes many of the same assumptions as multiple regression:
• Linear relationships,
• Interval or near-interval data,
• Proper specification (relevant variables included, extraneous ones excluded),
• Lack of high multicollinearity, and
• Multivariate normality, for purposes of significance testing.
There are several different types of factor analysis, with the most common being principal components analysis (PCA). However, principal axis factoring (PAF), also called common factor analysis, is preferred for purposes of confirmatory factor analysis.
Factor Analysis - the Theory
Factor analysis is a statistical technique which works on the basis of consumer responses to identify similarities or associations across variables. It analyzes the correlations between variables and reduces their number by grouping them into fewer factors.
How it Works
Factor analysis applies an advanced form of correlation analysis to a number of statements or attributes. If several of the statements are highly correlated, it is thought that these statements measure some factor common to all of them. A typical study will throw up many such factors, and for each one the researchers have to use their judgment to determine what that particular factor represents. Factor analysis can only be applied to continuous or intervally scaled variables.
Factor Analysis - the Process
We now take the case of a marketing research study, where factor analysis is most popularly used. We begin by administering a questionnaire to all consumers. What factor analysis does is identify two or more questions that result in responses that are highly correlated; thus it looks at interdependencies or interrelationships among the data. The analysis begins by observing the correlations and determining whether there are significant correlations between responses.
Factor analysis is best illustrated with the help of an example.
Example
A two-wheeler manufacturer is interested in determining which variables his customers think of as being important when they consider his product. The respondents were asked to indicate on a 7-point scale (1: completely agree, 7: completely disagree) their agreement with a set of 10 statements relating to their perceptions of, and some attributes about, two-wheelers. Factor analysis would then aim to reduce these 10 variables to a few core factors.
The statements are:
1. I use a two-wheeler because it's affordable.
2. It gives me a sense of freedom to own a two-wheeler.
3. Low maintenance costs make it very economical in the long run.
4. A two-wheeler is essentially a man's vehicle.
5. I feel very powerful when I am on my two-wheeler.
6. Some of my friends who don't have one are jealous of me.
7. I feel good whenever I see ads for my two-wheeler on TV or in magazines.
8. My vehicle gives me a comfortable ride.
9. I think two-wheelers are a safe way to travel.
10. Three people should be allowed to travel on a two-wheeler.
The answers given by 20 respondents are input into the computer. What the factor analysis does statistically is to group together those variables whose responses are highly correlated. Then, from each group of statements, we choose an overall factor which appears to represent what all the statements in the group appear to mean.
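The course runs factor analysis in SPSS; the sketch below is only an illustrative parallel in Python, using randomly generated responses as a stand-in for the 20 respondents' actual answers. The scikit-learn FactorAnalysis class used here is an assumption for illustration and is not the procedure referred to in the text.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(20, 10)).astype(float)   # hypothetical 7-point answers

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
loadings = fa.components_.T        # one row per statement, one column per factor
print(np.round(loadings, 2))       # statements that load together suggest a common factor

With real survey data in place of the random matrix, statements with large loadings on the same column would be grouped and named as one factor, exactly as described above.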
Interpretation of Computer Output
Factor analysis identifies factors, or groups of attributes, which are strongly correlated with each other and uncorrelated with the others. The process of identification is complex and is broadly as follows. Factor analysis selects one factor at a time, each of which explains more of the variance in the standardized scores than any other factor combination. Each additional factor selected is likely to explain less of the variance than the first factor. The process continues till additional factors no longer reduce the unexplained variance in the standardized scores.
Before we turn to an actual computer output we need to understand some terms which appear on the output and represent critical stages in the analysis. Generally the analytical procedure follows a series of steps to arrive at a solution. The starting point is generating a correlation matrix of the original data set, in which the responses to each variable or statement are correlated with the others. We then construct new variables on the basis of attributes or variables which are highly correlated with each other.
There are many different ways of extracting factors. Principal components analysis is the most frequently used approach. By this method a set of variables is transformed into a new set of factors that are uncorrelated with each other. These factors are constructed by finding the best linear combination of variables that accounts for the maximum possible variation in the data; each subsequent factor is defined as the best linear combination of variables in terms of explaining the variance not accounted for by the preceding factors. Additional factors may be selected till all the variance is accounted for. Usually the factor extraction process is stopped once the unexplained variance is below a specified level.
Meaning of Key Terms Used in Factor Analysis
To understand and interpret a computer output of a factor analysis we need to understand the meaning of certain terms.
1. Variance: A factor analysis is like a regression analysis and tries to best fit factors to a scatter diagram of responses in such a way that the factors explain the variance associated with responses to each statement. We aim to obtain factors that explain as much of the variance associated with each statement in the study as possible.
2. Standardized scores of an individual's response: Standardized scores are used because responses to different questions can use different scales (e.g. 5-point, 7-point, etc.). To allow for comparability, all responses are standardized. This is done by calculating an individual's standard score on a statement or attribute:
Standardized score = ([actual response to the statement] - [mean response of all respondents to the statement]) / [standard deviation of all responses to the statement]
Thus each person's score is actually a measure of how many standard deviations his response lies from the mean response calculated across all respondents.
3. Correlation coefficients: We calculate the correlation coefficients associated with the standardized scores of responses to each pair of statements. A simple example for five statements is given below, together with their correlation matrix. For simplicity we assume the correlation between two statements is either one (perfect) or zero.

Statement     1    2    3    4    5
    1         1    1    1    0    0
    2              1    1    0    0
    3                   1    0    0
    4                        1    1
    5                             1

As can be seen, statements 1, 2 and 3 are correlated with each other and unrelated to statements 4 and 5, which are correlated with each other. This suggests all the underlying variables can be grouped into two core factors which are unrelated to each other.
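As a small illustration (not part of the original text), the two calculations just described, standardizing responses and correlating statements, can be written as follows, with a tiny made-up response matrix.

import numpy as np

responses = np.array([          # hypothetical: 6 respondents x 3 statements
    [1, 2, 7],
    [2, 1, 6],
    [1, 1, 7],
    [6, 7, 2],
    [7, 6, 1],
    [6, 6, 2],
], dtype=float)

# Standardized scores: (response - mean of statement) / standard deviation of statement.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0, ddof=1)

# Statement-by-statement correlation matrix of the standardized responses.
print(np.round(np.corrcoef(z, rowvar=False), 2))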
Computer Output
Table 1 presents a sample output of a factor analysis. Across the columns we have the different factors; down the rows we have the different variables. We have taken five variables, and the data is reduced to three possible factors.

Table 1
Statement/variable     F1      F2      F3     Communality
        1             .86     .12     .04        .76
        2             .84     .18     .10        .75
        3             .68     .24     .15        .54
        4             .10     .92     .06        .86
        5             .06     .94     .07        .89
Eigen values          1.9     1.85    .83

We now explain some of the terms in the output and what they mean.
1. What is a factor? Each factor is a linear combination of its component variables; that is, the factor analysis aims to express each variable as a linear combination of a set of factors. Thus, if x1 to x5 are our original variables and we have three factors, then factor analysis expresses each variable as a linear combination of the three factors:

x1 = l11*F1 + l12*F2 + l13*F3 + e1
x2 = l21*F1 + l22*F2 + l23*F3 + e2

and so on for the other variables. The parameters l11, l12, etc. are the factor loadings and e1 to e5 are the error terms. The factor model is similar to the regression model: there are a few independent variables, termed factors, which help explain the variation in the dependent variable x. The factor loadings are therefore the correlations between the factors and the variable. The error term consists of the variation in the variable which is not explained by the factors.
2. Factor loadings: These are the correlations between a factor and a statement's (or variable's) standardized response. For example, the loading between F1 and x1 is .86. Factor 1 is highly correlated with statements 1 and 2 and least with statement 4. The loadings are derived using the principle of least squares. The factor loadings are then placed in a correlation matrix between the variables and the factors, as shown in Table 1. A factor is identified by those items that have a relatively high factor loading on that factor and a relatively low factor loading on the other factors.
3. Naming factors: The data show that F1 is a good fit for the data from statements 1, 2 and 3 but a poor fit for statements 4 and 5. A researcher would therefore look at the basic characteristic being measured by these statements and club them together as representing an overall factor.
4. Communalities: How well do the factors fit the data from all respondents for any given statement? Communalities measure the percentage of total variation in any variable or statement which is explained by all the factors. The communality can be found by squaring the factor loadings of a variable across all factors and then summing. For each statement, the communality indicates the proportion of variance in responses to that statement which is explained by the three factors. For example, .89 of the variance in responses to statement 5, but only .54 of the variance in responses to statement 3, is explained by the three factors. Communalities provide information on how well the factors fit the data. Since in this case the three factors account for most of the variance associated with each of the statements, we can say the three factors fit the data quite well. The communality can also be thought of as a measure of the uniqueness of a variable: a low communality indicates that the variable is statistically independent and cannot be combined with other variables.
5. Eigen values: These indicate how well any given factor fits the data from all the respondents on all the statements. There is an Eigen value for every factor, and the higher the Eigen value for a factor, the higher the amount of variance explained by the factor. The most common approach to determining the number of factors to retain is to examine the Eigen values. The Eigen value is defined as the sum of the squared factor loadings for that factor. Thus, for example, for factor 1 the Eigen value is found by

Eigen value for factor 1 = (.86)² + (.84)² + (.68)² + (.10)² + (.06)² = 1.91

All computer programmes provide the Eigen values or the percentage of variance explained. Before extraction it is assumed that each of the original variables has an Eigen value of 1. We would therefore expect any factor which is a linear combination of some of the original variables to have an Eigen value greater than one, so we usually retain only factors which have an Eigen value greater than 1.
An alternative to the Eigen value is to look at the percentage of variation in the original variables accounted for by the jth factor:
Percentage of variation in the original variables accounted for by the jth factor = Eigen value of factor j / number of original variables
6. Cumulative percentage of variation / proportion of variance explained: Most outputs give both the proportion of total variation in the data accounted for by each factor and the cumulative percentage of variation accounted for by the factors so far. Since factor analysis is designed to reduce the number of original variables, a key question is how many factors should be retained. It is possible to keep generating factors till they equal the number of original variables, in which case the factors would be useless. We usually rely on rules of thumb. First, all factors included prior to rotation must explain at least as much variation as an average variable: if there are five variables, an average variable accounts for 20% of the variation in the data, so we should look at the percentage of variation explained by each factor. If, say, factor 1 explains 55% of the variation in the data and factor 2 explains 35.5%, whereas the remaining factors explain less than 10% each, we would probably drop the remaining factors and retain the first two. The cumulative term in the output simply accumulates the variance explained by the factors, so the two figures convey essentially the same information.
When interpreting output we look at the cumulative percentage of variation. If the cumulative percentage is, say, 80.3 for three factors, we are able to economize the information contained in 10 original variables into 3 factors, losing only about 20% of the original information. Most computer programmes give the percentage of variance explained as well as the Eigen values. Another rule of thumb requires that we retain sufficient factors to explain a satisfactory percentage of total variance (usually over 70%).
After deciding upon the extracted factors in stage 1, the researcher has to interpret and name the factors. This is done by identifying which factor is associated with which of the original variables, i.e. by looking at the factor loadings. If factor 1 has high loadings on variables 1, 2 and 3, then it is assumed to be a linear combination of these variables and is given a suitable name representing them. The original factor matrix is used for this purpose.
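As a quick illustrative check (not part of the original text), the communalities and the Eigen values of Table 1 can be recovered directly from the factor loadings.

# Squared loadings summed by row give communalities; summed by column give Eigen values.
loadings = [
    [0.86, 0.12, 0.04],   # statement 1
    [0.84, 0.18, 0.10],   # statement 2
    [0.68, 0.24, 0.15],   # statement 3
    [0.10, 0.92, 0.06],   # statement 4
    [0.06, 0.94, 0.07],   # statement 5
]

communalities = [sum(l ** 2 for l in row) for row in loadings]
eigenvalues = [sum(row[j] ** 2 for row in loadings) for j in range(3)]
print([round(c, 2) for c in communalities])   # roughly [0.76, 0.75, 0.54, 0.86, 0.89]
print([round(e, 2) for e in eigenvalues])     # roughly [1.92, 1.83, 0.04]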
How Many Factors?
1. Drop in variance explained: A related rule of thumb is to look for a large drop in the variance explained between two factors in the PCA solution. For example, if the variances explained by five factors (before rotation) are 40%, 30%, 20%, 6% and 4%, there is a large drop at the fourth factor, which might signal a relatively unimportant factor.
2. Eigenvalue criterion: An Eigen value represents the amount of variance in the original variables associated with a factor. Only factors with Eigen values greater than 1 are retained. The sum of the squared factor loadings of the variables on a factor represents its Eigen value, or the total variance explained by that factor. A factor with an Eigen value of less than 1 is worse than a single variable and should be dropped.
3. Scree plot criterion: This is a plot of the Eigen values against the number of factors, in order of extraction. The plot usually has a distinct break between the steep slope of the factors with large Eigen values and the gradual trailing off associated with the rest of the factors; this trailing off is referred to as the scree. Experience has shown that the point at which the scree begins denotes the true number of factors. This is shown in figure 21.3: from the plot we would choose three factors, but as the third has a very low Eigen value we would drop it.
4. Percentage of variance criterion: The number of factors extracted is determined so that the cumulative percentage of variance extracted reaches a satisfactory level, usually at least 70%.
5. Significance test criterion: We can also determine the statistical significance of the separate Eigen values and retain only those factors which are statistically significant. The problem with this criterion is that in large samples many factors may be statistically significant even though they account for only a small proportion of the total variation.
Factor Interpretation
How is a factor interpreted? Interpretations are based on the factor loadings, which are the correlations between the factors and the original variables. The factor loadings for the bank study described below are shown in table 21.1; there, for example, the correlation between factor 1 and x1 is .29. The loading therefore provides an indicator of the extent to which each original variable is correlated with each factor, and this correlation is used as the basis for identifying and labeling the factors. Thus, for example, variables 3, 4 and 5 combine to define the first factor, possibly a "personal" factor, because these variables stress the personal aspects of bank transactions. The second factor is highly correlated with the first two variables and might be termed a "small bank" factor, because both of these variables appear to be linked to small banks.
Factor Rotation
Factor analysis can generate several solutions for any data set. Each solution is termed a rotation. Each time the factors are rotated, the factor loadings change, and so does the interpretation of the factors. Rotation involves moving the components, or the axes, to improve the fit of the data. This will not change the total variation explained by the retained factors, but it will shift the relative percentage explained by each factor. Most computer packages automatically provide the Varimax scheme. Factor rotation is continued till the factors stabilize and there is relatively little change.
There are many such rotation programmes, such as Varimax, Promax, etc. In Varimax (orthogonal) rotation the factors remain uncorrelated with one another.
Point to Ponder
• Factor analysis is used to identify underlying dimensions in the data by reducing the number of variables.
• The input to factor analysis is a set of variables for each object in the sample.
• Output: the most important outputs are the factor loadings, the factor scores, and the variance-explained percentages and Eigen values. Factor loadings are used to interpret the factors; sometimes an analyst picks the one or two variables which load most heavily on a factor to represent the factor as a whole. The percentage of variance explained and the Eigen values help determine which factors to retain.
• A key assumption of this analysis is that there are factors underlying the variables and that the variables are completely represented by those factors. This means the list of variables should be complete.
Limitations
• It tends to be a highly subjective process. The determination of the number of factors, their interpretation, and the rotation all involve considerable skill and judgement on the part of the analyst.
• Also, factor analysis does not lend itself easily to statistical testing; therefore it is difficult to know whether the results are merely accidental or actually reflect something meaningful.
We can now examine how a factor analysis is conducted using an example. We assume that a pilot study was conducted using 15 respondents, who were asked their agreement with statements such as:
• I want to be known personally at my bank and to be treated with special courtesy.
• If a financial institution treated me in an impersonal or uncaring way, I would never patronize that organization again.
Table 2 (not reproduced here) shows the pilot study data and the correlations among the variables. A factor analysis program usually starts by calculating a variable-by-variable correlation matrix; in fact, it is quite possible to input the correlation matrix itself rather than the raw data.
LESSON 36:
PRINCIPAL COMPONENT ANALYSIS

A (hypothetical) study was conducted by a bank to determine if special marketing programs should be developed for several key segments. One of the study's research questions concerned attitudes toward banking. The respondents were asked their opinion, on a 0-to-9 agree-disagree scale, on the following questions:
1. Small banks charge less than large banks.
2. Large banks are more likely to make mistakes than small banks.
3. Tellers do not need to be extremely courteous and friendly; it's enough for them simply to be civil.
4. I want to be known personally at my bank and to be treated with special courtesy.
5. If a financial institution treated me in an impersonal or uncaring way, I would never patronize that organization again.
Rotation
This is a second, optional stage. Factor analysis can generate several solutions; each one is termed a rotation. Each time there is a rotation, the factor loadings change, as does the interpretation of the factors. There are many rotation programs, e.g. Varimax (orthogonal rotation).
Outputs
The most important items are the factor loadings, the correlations between factors and variables, which are used to interpret each factor. The percentage-of-variance-explained criterion helps determine the number of factors to include.
Some practical application exercises from the internet are also included below.
The following exercises are to be done over the two practical sessions. You should be familiar with some of the early procedures. For the EFA procedures, please refer to the notes earlier in this handout.
• Saving a copy of the data file
Before you go any further, you should save a copy of the file 'driving01.sav' into your file space. You can find 'driving01.sav' by:
1. Open SPSS in the usual way, select Open existing file and More files.
2. In the Open file window, go to Psycho\courses\psy2005\spss\ , select the driving01.sav file and click on OK.
3. Once the file is open, click on Save as and put it in 'my documents' in 'PC files on Singer'.
Whenever you need the file again, you now have a copy from which to work.
• Exploring the data set
1. Cross-tabulation
Before you start any kind of analysis of a new data set, you should explore the data so that you know what the variables are and what each number actually means. If you move the cursor to the grey cell at the top of a column, a label will appear, telling you what the variable is.
The variables in the file are as follows: gender, age, the area the respondent lives in, the length of time the respondent has held a driving licence (in years and months), annual mileage, preferred speed on a variety of different roads by day and by night (motorways, dual carriageways, A-roads, country lanes, residential roads and busy high streets) and, finally, a series of scores relating to items on a personality trait inventory.
You can use the descriptives and frequencies commands to investigate the data, but they cannot tell you everything. If we wanted to find out how many women there are in the dataset who live in rural areas, we must use a Crosstabs (cross-tabulation) command:
4. Click on Analyze in the top menu, then select Descriptive Statistics, and click on Crosstabs.
5. Select the two variables that you want to compare (in this case gender and area), put one in the Row box, and one in the Column box.
6. Click on Statistics, and check the Chi-Square box. Click on Continue.
7. Click on OK.
The output tells us how many men and women in the data set come from each type of area, and the chi-square option tells us whether there are significantly different numbers in each cell.
8. Click on Analyze in the top menu, then select Descriptive Statistics, and click on Crosstabs (so long as you haven't done anything else since the first Crosstabs analysis above, the gender and area variables should still be in the correct boxes; if not, move them into the Row and Column boxes, click on Statistics, and check the Chi-Square box. Click on Continue).
9. Click on Cells and check the 'Expected' counts box (also, try selecting the 'Row', 'Column' and 'Total' percentage boxes). Click on Continue.
10. Click on OK.
Comparing the expected count with the observed count will tell you whether or not there is a higher observed frequency than expected in that particular cell. This will then tell you where the significant differences lie.
Using the Crosstabs procedure, how many female respondents live in a rural area, and what percentage of the total sample do they make up? (10, 4.6%). How many male respondents are between the ages of 36 and 40? What percentage of the total sample do they constitute? (35, 16.1%).
2. Creating a Scale
You might want to sum people's scores on several items to create a kind of index of their attitude. For example, we know that some of the personality inventory items in the data set relate to the Thrill-Sedate Driver Scale (Meadows, 1994). These items are numbered 7 to 13 in the questionnaire (Appendix A) and var7 to var13 in the data set. How do we create a single scale score?
First of all, some of the items may have been counterbalanced, so we have to reverse the scoring on these variables before we add them together to give a single scale score. Currently, a high score may indicate a strong positive tendency on some of the items, whereas the opposite is true of other items. We need to ensure that scores of '5' represent the same tendencies throughout the scale (in this case, a high 'Thrill' driving style), so that the item scores may be added together to create a meaningful overall score.
Missing Values
Make sure, before computing a new variable like this, that you have already defined missing values, otherwise these will be included in the scale score. For example, you would not want the values '99' for 'no response' included in your scales, so defining these as missing will mean that that particular respondent will not be included in the analysis.
11. Double-click on the grey cell at the top of the relevant column of data
12. Click on Missing Values
13. Select Discrete Missing Values and type 99 into one of the boxes
14. Click on Continue and OK
Recoding Variable Scores
It is usually fairly clear which items need to be recoded. If strong agreement with the item statement indicates a positive tendency, then that item is okay to include in the scale without recoding. However, if disagreement with the statement indicates a positive tendency, that item's scores must be recoded. Looking at the actual questions in the questionnaire, it is clear that items var07, var08, var09 and var10 should all be recoded (var11-var13 are okay, because strong agreement implies a high 'Thrill' driving style).
Follow these steps to recode each item and then compute a scale composed of all item variables:
15. Go to Transform… …Recode… …Into different variables and select the first item variable (var07) that requires recoding
16. Give a name for the new recoded variable, such as var07r, and label it as 'reversed var07'
17. Set the new values by clicking Old and New Values and entering the old and new values in the appropriate boxes (adding each transformation as you go along), so you finish up with 1 > 5; 2 > 4; 3 > 3; 4 > 2; and 5 > 1
18. Click Continue and then Change, and check that the transformation has worked by getting a frequency table for the old and new variables – var07 and var07r. Have the values reversed properly? If not, then you may need to do it again!
Follow the same procedure for the other items in the scale that need to be reversed.
Scale Calculation
Once you have successfully reversed the counterbalanced item variables, you can compute your scale.
19. Click on Transform… …Compute, type a name for the scale (e.g. "Thrill") in the Target variable box and type the following in the 'numeric expression' box:
var07r + var08r + var09r + var10r + var11 + var12 + var13
20. Click on OK
Now take a look at your new variable (it will have appeared in a column on the far right of your data sheet) – get a 'descriptives' analysis on it. You should find that the maximum and minimum values make sense in terms of the original values. The seven 'Thrill-Sedate' items are scored between 1 and 5, so there should be no scores lower than 7 (ie: 1 x 7) and none higher than 35 (ie: 5 x 7). If there are scores outside these limits, perhaps you forgot to exclude missing values.
3. Checking The Scale's Internal Reliability
Checking the internal reliability of a scale is vital. It assesses how much each item score is correlated with the overall scale score (a simplified version of the correlation matrix that I talked about in the lecture).
To check scale reliability:
21. Click on Analyze… …Scale… …Reliability Analysis
22. Select the items that you want to include in the scale (in this case, all the items between var07 and var13 that didn't require recoding in the earlier step, plus all the recoded ones – in other words, those listed in the previous 'scale calculation' step), and move them into the Items box.
23. Click on Statistics
24. Select 'Scale if item deleted' and Inter-item 'Correlations'
25. Click on Continue… …OK
26. In the output, you can first see the correlations between items in the proposed scale. This is just like the correlation matrix referred to in the lectures. Secondly, you will see a list of the items in the scale with a certain amount of information about the items and the overall scale. The statistic that SPSS uses to check reliability is Cronbach's Alpha, which takes values between zero and 1. The closer to 1 the value, the better, with acceptable reliability if Alpha exceeds about 0.7. The column on the far right will tell us if there are any items currently in the scale that don't correlate with the rest. If any of the values in that column exceed the value for 'Alpha' at the bottom of the table, then the scale would be better without that item – it should be removed from the scale and the Reliability Analysis run again. For this example, you should get a value for Alpha of 0.7793, with none of the seven items requiring removal from the scale.
• Factor analysis of Driving01.sav
1. Orthogonal (Varimax) Rotation (Uncorrelated Factors)
An orthogonal (varimax) analysis will identify factors that are entirely independent of each other. Using the data in Driving01.sav we will run a factor analysis on the personality trait items (var01 to var20).
Use the following procedure to carry out the analysis:
27. Analyze… …Data reduction… …Factor
28. Select all the items from var01 to var20 and move them into the Variables box
29. Click on Extraction
30. Click on the arrow button next to the Method box and select Principal Axis Factoring from the drop-down list
31. Make sure there is a tick in the Scree Plot option
32. Click on Continue
33. Click on Rotation, select Varimax (make sure the circle is checked)
34. Click on Options, select Sort by size and Suppress absolute values less than 0.1, and then change the value to 0.3 (instead of 0.1)
35. Click on Continue… …OK.
Output
First, you have the Communalities. These are all okay, as there are none lower than about 0.2 (anything less than 0.1 should prompt you to drop that particular variable, as it clearly does not have enough in common with the factors in the solution to be useful. If you drop a variable, you should run the analysis again, but without the problem variable).
The next table displays the Eigenvalues for each potential factor. You will have as many factors as there were variables to begin with, but this does not result in any kind of data reduction – not very useful. The first four factors have eigenvalues greater than 1, so SPSS will extract these factors by default (SPSS automatically extracts all factors with eigenvalues greater than 1, unless you tell it to do otherwise). In column 2 you have the amount of variance 'explained' by each factor, and in the next column, the cumulative variance explained by each successive factor. In this example, the cumulative variance explained by the first four factors is 52.4%. You can ignore the remaining columns.
The Scree plot is displayed next. You can see that in this example, although four factors have been extracted (using the SPSS default criteria – see later), the scree plot shows that a 3 factor solution might be better – the big difference in the slope of the line comes after three factors have been extracted. You can see this more clearly if you place a ruler along the slope in the scree plot. The discontinuity between the first three factors and the remaining set is clear – they have a far 'steeper' slope than the later factors. Perhaps three factors may be better than four? See section [iii] p.9 for further discussion of this issue.
Next comes the factor matrix, showing the loadings for each of the variables on each of the four factors. Remember that this is for unrotated factors, so move on to look at the rotated factor matrix below it, which will be easier to interpret. Each factor has a number of variables which have higher loadings, and the rest have lower ones. Remember that we have asked SPSS to 'suppress' or ignore any values below 0.1, so these will be represented by blank spaces. You should concentrate on those values greater than 0.3, as any lower than this can also be ignored. To make things easier, you could go back and ask SPSS to suppress values less than 0.3 – that will clean up the rotated factor matrix and make it easier to interpret.
Finally comes the factor rotation matrix, which can also be ignored (it simply specifies the rotation that has been applied to the factors).
2. Correlated factors – oblique (oblimin) rotation
You may have noticed that some of the questions in the questionnaire seem to measure similar things (for example, 'the law' is mentioned in variable items that do not appear to load heavily on the same factor). Two or more of the factors identified in the last exercise may well correlate with one another, as personality variables have a habit of doing. An orthogonal analysis may not be the most logical procedure to carry out. Using the data in Driving01.sav we will run an oblique factor analysis on the personality trait items (var01 to var20), which will identify factors that may be correlated to some degree.
Use the procedure described above, but when you click on the Rotation button, instead of checking the Varimax option, check the Direct Oblimin option instead. Compare the output from this analysis with the output from the varimax analysis. The first few sections will look the same, because both analyses use the same process to extract the factors. The difference comes once the initial solution has been identified and SPSS rotates it, in order to clarify the solution (by redistributing the variance across the factors). Instead of a rotated factor matrix, you will have 'pattern' and 'structure' matrices.
Look at the factors and the loadings in the pattern matrix (concentrate on loadings greater than +/- 0.3). Do they look the same as the varimax solution? One thing that has changed is that, although the factors look similar, the loadings will have changed a bit, and not all load in the same way as before. For example, var02 ("These days a person doesn't really know quite
who he can count on") no longer loads on factor 3, but only on factor 4. How does this change the interpretation of factors 3 and 4?
Finally there is the factor correlation matrix. In a varimax solution, you can ignore the plus or minus signs in front of the factor loadings, because the factors are entirely independent of one another (in other words, uncorrelated). However, in oblique (oblimin) analyses, we have to take these into account because the factors correlate with one another to some extent, and therefore we need to know if there is a positive or a negative relationship (correlation) between the factors. The relationship between correlated factors must inherently take into account the sign of the loadings. In this example, the negative correlations are so small as to be unimportant (correlations less than 0.1 are usually non-significant), and so this is not an issue. However, you should be aware that this may not always be the case. It may seem confusing at first, but working out the logic behind the relationships between factors makes sense when you look at the variable items that represent the factors (the relevant questionnaire statements).
3. Extracting a specific number of factors
Up to now, you have been letting SPSS decide how many factors to extract, and it has been using the default criterion (called the 'Kaiser' criterion) of extracting factors with eigenvalues greater than 1. Look at the second table in your output: four factors have eigenvalues greater than 1, so SPSS extracts and rotates four factors. However, this criterion doesn't always guarantee the optimal solution. We may have an idea of how many factors we should extract – the scree plot can give some heavy hints (as mentioned earlier). The scree plot is not exact – there is a degree of judgement in drawing these lines and judging where the major change in slope comes, but with larger samples it is usually pretty reliable.
I reckon that three factors would lead to a more accurate solution than four, so try running the analysis again, but this time specify that you want a 3 factor solution by setting the Number of factors to extract to 3 in the Extraction options window. The solution fits quite well, with all variable items loading quite high on only one factor, thus revealing a good simple structure.
• Where to go from here? Further Exercises (not included on the original handout)
The following exercises don't introduce any new ideas or concepts, but should enable you to practice some of the techniques that are covered earlier in this series of exercises. They should also help you to see how the techniques from the three sections of the PSY2005 Multivariate Statistics module fit together with one another.
1. Exploring the Thrill-Sedate scale scores
Some of you were asking what to do with the Thrill-Sedate Driver scale once you had calculated it. You could try comparing men vs women in terms of the scale score. We might expect that men would record higher scores than women, and this is the case. However, an independent t-test shows us that this difference is not significant - why?
36. The first step in answering this question is to produce a crosstabs table for gender vs age-group. Compare the observed values ('count') and expected values for each cell. You'll see that there are more older men (ie: fewer younger men) and more younger women (ie: fewer older women) than would be expected in the sample.
37. Now run an ANOVA using the Thrill scale score as DV and Age-group as IV. Previous research has found that younger people record higher Thrill scores than older people. Does this pattern appear in this sample?
38. If you run a two-way ANOVA with Thrill score as DV and with Age and Gender as IVs, you may find an interaction between them. What does the interaction mean?
Can you now see why, for this particular sample, there is no significant difference between men and women in terms of Thrill scores? The male sample is made up of older men, while the female sample is made up of younger women, so the scores will be similar. This obviously emphasises how important it is to ensure that your sample is representative.
2. Further factor analysis
Once the EFA procedures carried out during the factor analysis of the data have identified the variables that load on each factor (exercise 3), you could construct scales for the other two factors from the items in the questionnaire that load on each factor (looking at the scree plot, we can see that there is quite a neat 3-factor solution, with each variable loading on only one of the factors).
39. First try to interpret the factors, based on the questionnaire items (variables) that the factors load on.
40. Use the scale-building and reliability procedures described earlier in these exercises to produce internally-reliable scales which we may then use to describe differences between people.
41. You could put all three personality trait scores into a Regression analysis and see how well they predict preferred speed on different types of road.
42. You could also factor analyse the preferred speed data, to see if there are any patterns in the way that people respond to the different items. How could you interpret the resulting factors?
Remember, this is 'Real World' data, so the possibilities are endless - you could come up with your own hypotheses, based on your own ideas about how people drive.
Cris Burgess (2001)
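For readers who prefer scripting to the SPSS menus, the scale-building and reliability steps above can be sketched in Python. The data below are randomly generated stand-ins for the driving01.sav items (var07 to var13 on a 1-5 scale, with 99 as a 'no response' code), so the numbers will not match the handout; only the sequence of operations is the point:

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = [f"var{i:02d}" for i in range(7, 14)]
df = pd.DataFrame(rng.integers(1, 6, size=(200, len(items))), columns=items)
df.iloc[0, 0] = 99                      # one 'no response' value, as in the handout

# Treat 99 as missing and drop incomplete respondents (listwise deletion).
df = df.replace(99, np.nan).dropna()

# Reverse-score the counterbalanced items (1<->5, 2<->4, 3 stays 3).
for col in ["var07", "var08", "var09", "var10"]:
    df[col + "r"] = 6 - df[col]

scale_items = ["var07r", "var08r", "var09r", "var10r", "var11", "var12", "var13"]
df["Thrill"] = df[scale_items].sum(axis=1)      # should lie between 7 and 35

# Cronbach's alpha for the seven-item scale.
k = len(scale_items)
alpha = (k / (k - 1)) * (1 - df[scale_items].var(ddof=1).sum()
                         / df[scale_items].sum(axis=1).var(ddof=1))
print(df["Thrill"].describe())
print("Cronbach's alpha:", round(alpha, 3))

With the real data you would expect an alpha close to the 0.78 reported above; with these random numbers alpha will be near zero, which is exactly what an unreliable 'scale' looks like.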

LESSON 37:
MULTIDIMENSIONAL SCALING

We now turn to a very popular technique, which is used to position a product in perceptual space. In a competitive market, much of marketing is concerned with the question of positioning, i.e., how do we fare vis-a-vis competitors in the consumer's mind space? We have discussed measuring consumer attitudes using measurement scales such as the Likert scale. These measure perceptions or preferences in terms of a single dimension. However, it is likely that a consumer's perceptions of a product are multidimensional. Multidimensional scaling is the technique that attempts to represent such perceptions and preferences as points in a geometric space.
The actual statistical aspects of this technique are highly complex and beyond the scope of our course. We shall focus on understanding the technique intuitively and also on its applications, particularly on interpretation of computer output.
What is Multidimensional Scaling?
Multidimensional scaling (MDS) is a set of techniques that attempts to represent consumer perceptions regarding a product in a geometric space. As we noted, consumer perceptions regarding a brand are likely to be multidimensional. For example, a soft drink may be perceived along a "colaness" dimension as well as a "dietness" dimension. A car can be seen to be both luxurious and sporty.
MDS again looks at interdependence in consumer responses. All variables are treated as independent variables. The basic procedure is as follows:
• We obtain each respondent's opinion of where each product or brand stands in product space.
• We then try to locate each consumer's ideal point in the product space for each product.
MDS basically involves two problems:
1. The dimensions on which consumers evaluate products need to be identified. Usually, as noted, a product is perceived on several dimensions. If there are only two, they can be represented graphically.
2. The products or brands need to be positioned with respect to these dimensions. The output of MDS is the location of the products/brands on the various dimensions, and this is called a perceptual map.
Techniques
There are several scaling techniques which can be used to carry out MDS. They differ with regard to the assumptions they make and even the input data they use. These are as follows:
1. Use of attributes. For example, if we are interested in the object beer, the attributes studied could be body, alcohol content, flavour, cost, etc. The MDS technique would then combine these attributes into dimensions such as price-quality and body.
2. A second approach considers similarity or preference between the objects, not taking into account individual attributes as factors.
Irrespective of the technique used, the aim of MDS is to show market clusters or segments and their sizes. Such analysis is useful for determining marketing strategies.
Attribute Based Approaches
An important assumption of attribute-based approaches is that we can identify attributes on which individuals' perceptions of objects are based. Let us start with a simple example:
Suppose that our goal is to develop a perceptual map for the nonalcoholic beverage market. Exploratory research has identified 14 beverages that seem relevant. Nine attributes that people use to describe and evaluate these beverages have also been identified. We now ask a group of respondents to rate each of the beverages on these nine attributes, on a 7-point scale. An average rating of the respondent group on each of the nine attributes can be computed. However, it would be more useful if these nine attributes could be combined into two or three dimensions or factors. Two approaches, factor analysis and discriminant analysis, are usually used to reduce the attributes to a small number of dimensions.
We shall now briefly look at how this is done using factor analysis:
Factor Analysis
Since each respondent rates 14 beverages on nine attributes, he or she would have 14 factor scores on each of the emerging factors, one for each beverage. The position of each beverage in the perceptual space will be the average factor score for that beverage. The perceptual map is shown in Figure 1 below. As we can see, three factors account for 77 percent of the variance in the 9 attributes. Each beverage is then positioned on the attributes. Since three factors or dimensions are involved, two maps are required to portray the solution. The first involves the first two factors, while the second includes the first and third. For convenience, the original attributes also are shown on the maps as lines or vectors. The vectors are obtained based on the amount of correlation the original attributes possess with the factor scores. The direction of a vector indicates the factor with which the attribute is associated, and the length of the vector indicates the strength of association. Thus, on the left map the "filling" attribute has little association with any factor, whereas on the right map the "filling" attribute is strongly associated with the "refreshing" factor.
Discriminant Analysis
So far we have covered the use of factor analysis in generating a perceptual map. We can also use discriminant analysis.
The goal of factor analysis is to generate dimensions that maximize interpretability and explain variance. The goal of discriminant analysis is to generate dimensions (termed discriminant functions or factors) that will discriminate or separate the objects as much as possible.
Similar to factor analysis, each dimension is based on a combination of the underlying attributes. However, in discriminant analysis, the extent to which an attribute will tend to be an important contributor towards a dimension depends on the extent to which there is a perceived difference among the objects on that attribute.
Comparing Factor and Discriminant Analysis
Each of the approaches has advantages and disadvantages.
Discriminant analysis
1. Discriminant analysis identifies clusters of attributes on which objects differ. If all objects are perceived to be similar with respect to an attribute (such as an airline's safety), that factor should not affect preference, such as the choice of an airline. The objective of discriminant analysis is therefore to select attributes that discriminate between objects.
2. A second useful characteristic of discriminant analysis is that it provides a test of statistical significance. The null hypothesis is that two objects are perceived identically. The test determines the probability that the between-object distance is due simply to a statistical accident.
3. A third quality of discriminant analysis is that it identifies a perceptual dimension even if it is represented by a single attribute.
Factor Analysis
1. Factor analysis groups attributes that are similar. If relatively few attributes represent a dimension, then that dimension will tend not to emerge in the factor analysis solution.
2. Factor analysis is based on both perceived differences between objects and differences between people's perceptions of objects. Thus, it tends to provide a richer solution, as it uses more of the underlying attributes and results in more dimensions.
3. All perceptual dimensions are included, whether they discriminate between objects or not. A study conducted by Hauser and Koppelman of shopping centers compared several approaches to multidimensional scaling. They found that the factor analysis dimensions provided more interpretive value than those of discriminant analysis.
Correspondence Analysis
In both factor analysis and discriminant analysis, the variables are assumed to be intervally scaled, continuous variables. A 7-point Likert scale (agree-disagree) is usually used. However, it is often convenient to collect binary, or zero-one, data. Ways of doing this are:
1. For example, respondents might be asked to identify from an attribute list which attributes describe a brand. For each respondent we will get a row of zeros and ones.
2. Another possibility is that the respondent could be asked to pick the three (or k) attributes that are most associated with a brand.
3. A third way is to ask the respondent to pick the two (or k) use occasions that are most suitable for a brand.
The result in all these cases would be a row of zeros and ones for each respondent and for each brand.
The technique in which we use data reflecting the association of an attribute (or other variable) with a brand (or other object) is termed correspondence analysis. The data consist of rows of zeros and ones. Correspondence analysis is a technique of MDS, and it generates as output a perceptual map in which the attributes and the brands are both positioned.
When Do We Use Binary Data?
Binary judgments are used in several contexts.
1. If the number of attributes and objects is large, the task of scaling each object on each attribute may be excessive and unrealistic. Simply checking which attributes (or use occasions) apply to a given object may be easier and more efficient.
2. Binary data may be useful if we want to generate a very comprehensive list of attributes or use occasions. In this case we would ask respondents to list all the attributes they can think of for a certain brand, or to list all the objects or brands that would apply to a certain use occasion. For example, what snacks would you consider for a party given to watch the SuperBowl? In this case we would prefer to generate binary data and use correspondence analysis.
Basic Concepts of MDS
We now try to understand how exactly an MDS is carried out. Basically, MDS uses proximities among different objects as input.
What is a Proximity?
A proximity is a value that denotes how similar or different two objects are, or are perceived to be.
MDS then uses these proximity data to produce a geometric configuration of points, which represent the objects, in a (preferably) two-dimensional space as output. Attribute-based data, such as an objects-by-attributes (profile) matrix, or non-attribute-based data, such as similarity and preference data, can be used to obtain the proximities. The derived Euclidean distances between objects in the two-dimensional space are then computed and compared with the proximity data.
A key concept of MDS is that the derived distances (output) between the objects should correspond to the proximities (input).
• If we make the rank order of derived distances between objects/brands correspond to the rank order of the proximities data, the process is known as nonmetric MDS. Nonmetric MDS assumes that the proximities are ordinal.
• If the derived distances are required to be multiples or linear functions of the proximities, then the process is known as metric MDS. Metric MDS assumes that the proximities are metric.
• In both cases the output (derived distances) is metric.
The statistical calculations are highly complex. For our purposes we need to understand intuitively the basic idea of what is done in an MDS analysis.
Evaluating the MDS Solution
What is the MDS solution? The fit between the derived distances and the proximities in each dimension is evaluated through a measure called stress.
In MDS, the objects can be projected onto two, three, four or even higher dimensions. Since visual inspection is possible only with two, or possibly three, dimensions, we always prefer lower dimensions. Usually, the stress value increases when we decrease the number of dimensions. The appropriate number of dimensions required to locate the objects in space can be obtained by plotting the stress values against the number of dimensions. This is similar to the factor analysis scree plot.
We choose the appropriate number of dimensions depending on where the sudden jump in stress starts to occur. Sometimes we directly seek a two-dimensional representation because it is easier to interpret.
Determining the Number of Dimensions
Figure 2 plots the stress values against the number of dimensions. As you can see, higher dimensions are associated with lower stress values and vice versa. The plot indicates that two dimensions are probably acceptable, since there is a large increase in the stress values from two dimensions to one.
Labeling the Dimensions
To label the dimensions, one can correlate the objects' attribute ratings with each dimension to determine which dimensions correlate highly with which attributes. The dimensions can then be named accordingly.
Figure 2.21 is an example of naming the dimensions for a perceptual map of shopping locations. The first dimension, for example, is labeled "variety" because attributes such as variety of merchandise, specials, store availability, etc. are associated with it. Similarly, the second dimension may have a label such as quality vs price.
Interpreting the Dimensions
We now look at the perceptual map, which gives the location of various shopping areas in Chicago. As we can see, the Chicago Loop location is the only one that offers both good value (quality/price) and variety. Korvette City, on the other hand, offers low value and less variety. This location should therefore try to reposition itself by moving towards the origin, increasing both value and variety. Thus we can see that different shopping locations are perceived differently from one another. Once the locations of the brands and objects are known, they can be evaluated and a clear strategy implemented to reposition a brand or to maintain its position.
Attribute Based MDS
MDS can also be used on attribute data to produce perceptual maps. When it is used on attribute data it is termed attribute-based MDS. Attribute-based MDS has the advantage that attributes can have diagnostic value: the dimensions can be interpreted in terms of their correlations with the attributes. Studies have generally found that attribute-based data are easier for respondents to provide, and that dimensions based on attribute data predict preference better.
Attribute-based data have certain disadvantages:
1. If the list of attributes is not complete, the study will suffer.
2. It may be difficult to generate a comprehensive attribute list, given that people's perceptions differ.
3. An object may be perceived and evaluated as a whole by respondents, rather than being broken up into attributes.
4. Attribute data also require more dimensions to represent them.
Because of the above problems, non-attribute (similarity or preference) data may frequently be preferred.
Application of MDS with Nonattribute Data
Similarity Data
Similarity data capture the perceived similarity of two objects in the eyes of the respondent. The respondent is not told what criteria to use to judge similarity, and is therefore not given an attribute list. In our example, students have judged Harvard and Stanford to be quite similar.
The number of pairs to be judged for degree of similarity can be as many as n(n-1)/2, where n is the total number of objects. With 10 brands there would be 45 pairs of brands to judge (although fewer could be used).
Although at least seven or eight objects should be judged, the approach is easier to illustrate if only four objects are considered. First, the results of the pairwise similarity judgments are summarized in a matrix, as shown in Figure 4. The numbers in the matrix represent the average similarity judgments of a sample of 50 respondents. Instead of similarity ratings, the respondents may be asked simply to rank the pairs from most to least similar. An average order position would then replace the average similarity rating matrix. It should be noted, however, that rank ordering can be difficult if 10 or more objects are involved.
A perceptual map could be obtained from the average similarity ratings; however, it is also possible to use only the ordinal or "nonmetric" portion of the data. The knowledge that objects A and C in Figure 4 have an average similarity of 1.7 is then replaced by the fact that objects A and C are the most similar pair. Figure 4 shows the conversion to rank-order information.
Ordinal or nonmetric information is preferred for several reasons:
1. It actually contains about the same amount of information: the solution usually is not affected by replacing intervally scaled or "metric" data with ordinal or nonmetric data.
2. The nonmetric data are often thought to be more reliable.
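Before looking at how that conversion is done, here is a minimal sketch of what such a program produces, using scikit-learn's MDS routine on a small, purely hypothetical dissimilarity matrix (it is not the Figure 4 data; larger numbers mean less similar):

import numpy as np
from sklearn.manifold import MDS

objects = ["A", "B", "C", "D"]
dissim = np.array([[0.0, 2.0, 1.0, 4.0],
                   [2.0, 0.0, 2.5, 3.0],
                   [1.0, 2.5, 0.0, 3.5],
                   [4.0, 3.0, 3.5, 0.0]])

# metric=False requests nonmetric MDS: only the rank order of the
# dissimilarities is used, as discussed above.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0, n_init=10)
coords = mds.fit_transform(dissim)
for name, (x, y) in zip(objects, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
print("stress:", round(mds.stress_, 3))

The printed coordinates are the perceptual map; objects with small dissimilarities end up close together.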

Next, a computer program is employed to convert the rankings of similarities into distances in a map with a small number of dimensions, so that similar objects are close together and vice versa. The computer is programmed to locate the four objects in a space of two, three, or more dimensions, so that the shortest distance is between pair (A, C), the next shortest between pair (A, B), and the longest between pair (A, D). One possible solution that satisfies these constraints in two dimensions is the following:
You might be able to relocate the points differently and still satisfy the constraints, so that the rankings of the distances in the map correspond to the rankings of the pairwise similarity judgments. This is because there are only a few points to move in the space and only six constraints to satisfy. With 10 objects and 45 constraints, the task of locating the points in a two-dimensional space is more difficult and requires a computer. Once a solution is found (the points are located in the space), it is unlikely that there will be a significantly different solution that still satisfies the constraints of the similarities matrix. Thus, we can argue that the intervally scaled nature of the distances between points really was hidden in the rank-order input data all the time.
The power of this technique lies in its ability to find the smallest number of dimensions for which there is a reasonably good fit between the input similarity rankings and the rankings of distances between objects in the resulting space. This means starting with two dimensions and, if this is not satisfactory, continuing to add dimensions until an acceptable fit is achieved. The determination of "acceptable" is a matter of judgment, although most analysts will trade off some degree of fit to stay with a two- or three-dimensional map, because of the advantages of visual interpretation.
There are situations where more dimensions are necessary. This happened in a study of nine different types of sauces (mustard, catsup, relish, steak sauce, dressing, and so on). Most respondents perceived too many differences to be captured with two or three dimensions, in terms of either the types of foods with which the sauces would be used or the physical characteristics of each sauce.
A sample of 64 undergraduates provided similarity judgments for all 45 pairs of 10 drinks, including Coke, Diet Coke, 7-Up, Calistoga Natural Orange, and Slice. They were asked to rate the similarity of each pair, such as Slice-Diet Coke, on a nine-point scale. The two-dimensional solution is shown in Figure 5. We can see that Slice is considered closer to Diet 7-Up than to 7-Up, and that Schweppes and Calistoga are separated even though they are very similar.
Interpreting the resulting dimensions takes place "outside" the technique. Additional information must be introduced to decide why objects are located in their relative positions. Sometimes, the location of the objects themselves suggests dimensional interpretations. For example, in Figure 1 the location of objects suggests dimension interpretations even without the attribute information. Thus, the fruit punch versus hot coffee object locations on the horizontal axis suggest a maturity dimension. In Figure 6, the objects on the horizontal axis indicate a cola-noncola dimension. The vertical axis seems to represent a nondiet dimension, because in both the cola group and the noncola group the nondiet drinks tend to be higher than the diet drinks.
The concept of an ideal object in the space is an important one in MDS because it allows the analyst to relate object positioning to customer likes and dislikes. It also provides a means for segmenting customers according to their preferences for product attributes.
Preference Data
An ideal object is one that the customer would prefer over all others, including objects that can be conceptualized in the space but do not exist. It is a combination of all the customer's preferred attribute levels. While the assumption that people have similar perceptions may be reasonable, preferences are nearly always heterogeneous; thus individuals' ideal objects will differ. However, one reason to locate ideal objects is to identify segments of customers who have similar ideal objects.
There are Two Types of Ideal Objects
1. The first lies within the perceptual map. For example, if a new cookie were rated on attribute scales such as
Very sweet . . . . . Not at all sweet
Large, substantial . . . . . Small, dainty
a respondent might well prefer a middle position on the scale.
2. The second type is illustrated by a different example. Suppose attributes of a proposed new car included
Inexpensive to buy . . . . . Expensive to buy
Inexpensive to operate . . . . . Expensive to operate
Good handling . . . . . Bad handling
then respondents would very likely prefer an end point on the scale. For instance, the car should be as inexpensive as possible to buy and operate. In that case, the ideal object would be represented by an ideal vector, or direction, rather than an ideal point in the space. The direction would depend on the relative desirability of the various attributes.
There are two approaches to obtaining ideal-object locations. The first is simply to ask respondents to consider an ideal object as one of the objects to be rated or compared. The problem with this approach is that conceptualizing an ideal object may not be natural for a respondent, and the result may therefore be ambiguous and unreliable.
A second approach is indirect. For each individual, a rank-order preference among the objects is sought. Then, given a perceptual map, a program will locate the individual's ideal object such that the distances to the objects have the same rank order (or as close to it as possible) as the rank-order preference. The most preferred object should be closest to the ideal. The second most preferred object should be farther from the ideal than the
most preferred, but closer than the third most preferred, and so on. Often, it is not possible to determine a location that will satisfy this requirement perfectly and still obtain the small number of dimensions with which an analyst would like to work. In that case, compromises are made and the computer program does as well as possible by maximizing some measure of "goodness-of-fit."
Issues in MDS
Perceptual maps are good vehicles through which to summarize the position of brands and people in attribute space and, more generally, to portray the relationship among any variables or constructs. They are particularly useful for portraying the positioning of existing or new brands and the relationship of those positions to the relevant segments.
However, there are several problems in working with MDS:
1. When more than two or three dimensions are needed, the usefulness is reduced.
2. Perceptual mapping has not been shown to be reliable across different methods. Users rarely take the trouble to apply multiple approaches to a context to ensure that a map is not method-specific.
3. Perceptual maps are static snapshots at a point in time. It is difficult, from the model alone, to know how they might be affected by market events.
4. The interpretation of dimensions can be difficult.
Summary of MDS
Applications: used to identify the dimensions
• by which objects are perceived
• to position objects with respect to those dimensions
• to make positioning decisions for new and existing products.
Inputs
• Attribute based data
• Similarity based data
• Preference data
Outputs
• Provides the location of each object on a limited number of dimensions. The number of dimensions is selected on the basis of a goodness-of-fit measure. In attribute-based MDS, attribute vectors may be included to help interpret the dimensions.
Assumptions
1. The underlying data represent valid measures.
2. Respondents use an appropriate context.
3. The attribute list should be complete.
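To make the attribute-based route concrete, here is a small sketch (with hypothetical ratings, not the beverage study) of how average attribute ratings can be turned into a two-dimensional perceptual map. Principal components are used here simply as a stand-in for the factor or discriminant step described earlier:

import numpy as np
from sklearn.decomposition import PCA

brands = ["Brand1", "Brand2", "Brand3", "Brand4", "Brand5"]
attributes = ["sweet", "filling", "refreshing", "healthy"]

# Hypothetical average ratings (brands x attributes) on a 7-point scale.
ratings = np.array([[6.1, 5.2, 3.0, 2.5],
                    [2.2, 1.8, 6.3, 5.9],
                    [4.5, 4.9, 4.1, 3.8],
                    [1.9, 2.4, 5.8, 6.2],
                    [5.8, 5.5, 2.7, 2.1]])

# Standardise each attribute, then extract two components as the map axes.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
pca = PCA(n_components=2)
positions = pca.fit_transform(z)   # brand coordinates on the map
vectors = pca.components_.T        # attribute directions, useful for labeling the axes

for b, (x, y) in zip(brands, positions):
    print(f"{b}: ({x:+.2f}, {y:+.2f})")
print("attribute vectors:")
print(np.round(vectors, 2))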


LESSON 38:
FURTHER APPLICATIONS AND THEORY OF MULTIDIMENSIONAL
SCALING USING STATISTICAL SOFTWARE

General Purpose
Multidimensional scaling (MDS) can be considered to be an alternative to factor analysis (see Factor Analysis). In general, the goal of the analysis is to detect meaningful underlying dimensions that allow the researcher to explain observed similarities or dissimilarities (distances) between the investigated objects. In factor analysis, the similarities between objects (e.g., variables) are expressed in the correlation matrix. With MDS one may analyze any kind of similarity or dissimilarity matrix, in addition to correlation matrices.
Logic of MDS
The following simple example may demonstrate the logic of an MDS analysis. Suppose we take a matrix of distances between major US cities from a map. We then analyze this matrix, specifying that we want to reproduce the distances based on two dimensions. As a result of the MDS analysis, we would most likely obtain a two-dimensional representation of the locations of the cities, that is, we would basically obtain a two-dimensional map.
In general then, MDS attempts to arrange "objects" (major cities in this example) in a space with a particular number of dimensions (two-dimensional in this example) so as to reproduce the observed distances. As a result, we can "explain" the distances in terms of underlying dimensions; in our example, we could explain the distances in terms of the two geographical dimensions: north/south and east/west.
Orientation of axes. As in factor analysis, the actual orientation of axes in the final solution is arbitrary. To return to our example, we could rotate the map in any way we want; the distances between cities remain the same. Thus, the final orientation of axes in the plane or space is mostly the result of a subjective decision by the researcher, who will choose an orientation that can be most easily explained. To return to our example, we could have chosen an orientation of axes other than north/south and east/west; however, that orientation is most convenient because it "makes the most sense" (i.e., it is easily interpretable).
Computational Approach
MDS is not so much an exact procedure as rather a way to "rearrange" objects in an efficient manner, so as to arrive at a configuration that best approximates the observed distances. It actually moves objects around in the space defined by the requested number of dimensions, and checks how well the distances between objects can be reproduced by the new configuration. In more technical terms, it uses a function minimization algorithm that evaluates different configurations with the goal of maximizing the goodness-of-fit (or minimizing "lack of fit").
Measures of Goodness-of-fit: Stress
The most common measure that is used to evaluate how well (or poorly) a particular configuration reproduces the observed distance matrix is the stress measure. The raw stress value Phi of a configuration is defined by:
Phi = Σ_ij [d_ij - f(δ_ij)]^2
In this formula, d_ij stands for the reproduced distances, given the respective number of dimensions, and δ_ij (delta_ij) stands for the input data (i.e., observed distances). The expression f(δ_ij) indicates a nonmetric, monotone transformation of the observed input data (distances). Thus, the procedure will attempt to reproduce the general rank-ordering of distances between the objects in the analysis.
There are several similar related measures that are commonly used; however, most of them amount to the computation of the sum of squared deviations of observed distances (or some monotone transformation of those distances) from the reproduced distances. Thus, the smaller the stress value, the better is the fit of the reproduced distance matrix to the observed distance matrix.
Shepard diagram. One can plot the reproduced distances for a particular number of dimensions against the observed input data (distances). This scatterplot is referred to as a Shepard diagram. This plot shows the reproduced distances plotted on the vertical (Y) axis versus the original similarities plotted on the horizontal (X) axis (hence, the generally negative slope). This plot also shows a step-function. This line represents the so-called D-hat values, that is, the result of the monotone transformation f(δ) of the input data. If all reproduced distances fall onto the step-line, then the rank-ordering of distances (or similarities) would be perfectly reproduced by the respective solution (dimensional model). Deviations from the step-line indicate lack of fit.
How Many Dimensions to Specify?
If you are familiar with factor analysis, you will be quite aware of this issue. If you are not familiar with factor analysis, you may want to read the Factor Analysis section in the manual; however, this is not necessary in order to understand the following discussion. In general, the more dimensions we use in order to reproduce the distance matrix, the better is the fit of the reproduced matrix to the observed matrix (i.e., the smaller is the stress). In fact, if we use as many dimensions as there are variables, then we can perfectly reproduce the observed distance matrix. Of course, our goal is to reduce the observed complexity of nature, that is, to explain the distance matrix in terms of fewer underlying dimensions. To return to the example of distances between cities, once we have a two-dimensional map it is much easier to visualize the location of and navigate between cities, as compared to relying on the distance matrix only.
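As a rough numeric illustration of the raw stress formula (not the exact algorithm any particular package uses), the stress of a trial configuration can be computed directly. For simplicity, the monotone transformation f(δ) is replaced here by the observed dissimilarities themselves, i.e. a metric shortcut; a full nonmetric program would substitute monotonically regressed D-hat values:

import numpy as np

def raw_stress(config, delta):
    # config: (n_objects, n_dims) coordinates; delta: observed dissimilarity matrix.
    diff = config[:, None, :] - config[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))     # reproduced distances d_ij
    i, j = np.triu_indices(len(delta), k=1)   # count each pair once
    return ((d[i, j] - delta[i, j]) ** 2).sum()

# Hypothetical 3-object example: observed dissimilarities and a trial 2-D map.
delta = np.array([[0.0, 2.0, 1.0],
                  [2.0, 0.0, 2.2],
                  [1.0, 2.2, 0.0]])
config = np.array([[0.0, 0.0],
                   [2.0, 0.0],
                   [0.9, 0.4]])
print("raw stress:", round(raw_stress(config, delta), 4))

The smaller this value, the better the configuration reproduces the input distances, exactly as described above.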

Sources of misfit. Let us consider for a moment why fewer factors may produce a worse representation of a distance matrix than would more factors. Imagine the three cities A, B, and C, and the three cities D, E, and F; shown below are their distances from each other.

      A    B    C              D    E    F
A     0                  D     0
B    90    0             E    90    0
C    90   90    0        F   180   90    0

In the first matrix, all cities are exactly 90 miles apart from each other; in the second matrix, cities D and F are 180 miles apart. Now, can we arrange the three cities (objects) on one dimension (line)? Indeed, we can arrange cities D, E, and F on one dimension:
D --90 miles-- E --90 miles-- F
D is 90 miles away from E, and E is 90 miles away from F; thus, D is 90+90=180 miles away from F. If you try to do the same thing with cities A, B, and C you will see that there is no way to arrange the three cities on one line so that the distances can be reproduced. However, we can arrange those cities in two dimensions, in the shape of a triangle:

          A
  90 miles   90 miles
B      90 miles      C

Arranging the three cities in this manner, we can perfectly reproduce the distances between them. Without going into much detail, this small example illustrates how a particular distance matrix implies a particular number of dimensions. Of course, "real" data are never this "clean," and contain a lot of noise, that is, random variability that contributes to the differences between the reproduced and observed matrix.
Scree test. A common way to decide how many dimensions to use is to plot the stress value against different numbers of dimensions. This test was first proposed by Cattell (1966) in the context of the number-of-factors problem in factor analysis (see Factor Analysis); Kruskal and Wish (1978, pp. 53-60) discuss the application of this plot to MDS.
Cattell suggests finding the place where the smooth decrease of stress values (eigenvalues in factor analysis) appears to level off to the right of the plot. To the right of this point one finds, presumably, only "factorial scree": "scree" is the geological term referring to the debris which collects on the lower part of a rocky slope.
Interpretability of configuration. A second criterion for deciding how many dimensions to interpret is the clarity of the final configuration. Sometimes, as in our example of distances between cities, the resultant dimensions are easily interpreted. At other times, the points in the plot form a sort of "random cloud," and there is no straightforward and easy way to interpret the dimensions. In the latter case one should try to include more or fewer dimensions and examine the resultant final configurations. Often, more interpretable solutions emerge. However, if the data points in the plot do not follow any pattern, and if the stress plot does not show any clear "elbow," then the data are most likely random "noise."
Interpreting the Dimensions
The interpretation of dimensions usually represents the final step of the analysis. As mentioned earlier, the actual orientations of the axes from the MDS analysis are arbitrary, and can be rotated in any direction. A first step is to produce scatterplots of the objects in the different two-dimensional planes.
Three-dimensional solutions can also be illustrated graphically; however, their interpretation is somewhat more complex.
In addition to "meaningful dimensions," one should also look for clusters of points or particular patterns and configurations (such as circles, manifolds, etc.). For a detailed discussion of how to interpret final configurations, see Borg and Lingoes (1987), Borg and Shye (in press), or Guttman (1968).
Use of multiple regression techniques. An analytical way of interpreting dimensions (described in Kruskal & Wish, 1978) is to use multiple regression techniques to regress some meaningful variables on the coordinates for the different dimensions. Note that this can easily be done via Multiple Regression.
Point to Ponder
• Multidimensional scaling (MDS) is often used in conjunction with cluster analysis or conjoint analysis.
• It allows a respondent's perception about a product, service, or other object of attitude to be described in a spatial manner.
• MDS helps the business researcher to understand difficult-to-measure constructs such as product quality or desirability,
which are perceived and cognitively mapped in different ways by different individuals.
• Items judged to be similar will fall close together in multidimensional space and are revealed numerically and geometrically by the spatial map.
Notes-
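The A-B-C versus D-E-F example from the "Sources of misfit" discussion can also be checked numerically. The sketch below uses scikit-learn's metric MDS purely as an illustration; the expectation is near-zero stress for D-E-F in one dimension and for both matrices in two dimensions, but clearly non-zero stress for A-B-C in one dimension:

import numpy as np
from sklearn.manifold import MDS

abc = np.array([[0, 90, 90],
                [90, 0, 90],
                [90, 90, 0]], dtype=float)
def_matrix = np.array([[0, 90, 180],
                       [90, 0, 90],
                       [180, 90, 0]], dtype=float)

for name, dist in [("A-B-C", abc), ("D-E-F", def_matrix)]:
    for dims in (1, 2):
        mds = MDS(n_components=dims, dissimilarity="precomputed",
                  random_state=0, n_init=10)
        mds.fit(dist)
        print(f"{name} in {dims}D, stress: {mds.stress_:,.1f}")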

LESSON 39:
CONJOINT ANALYSIS

We will now learn of an important technique, which helps us Example 1


determine the best or optimal combination of product features Suppose we have to design a public transport system. We wish
or attributes. Conjoint measure-ment is the statistical technique to test the relative desirability of three attributes:
typically used to identify the most desirable com-bination of The company aims to provide a service. They wish to test three
attributes or features for the particular product or service under levels of frequency, and three levels of prices. Further they want
in-vestigation. This is a highly advanced technique and we shall to test the weightage given by consumer to add on features such
only be able to touch on it intuitively. A more detailed explana- as AC and music. The conjoint problem can be presented as
tion is beyond the scope of our course. However it is an follows:
important technique, which helps a marketing person, decides
Fare (three levels Rs10, Rs15, Rs 20)
on what is the optimal combination of product features to be
offered. Frequency of service (10 minutes, 15 minutes, 20 minutes)
By the end of this lesson you should have an intuitive under- AC vs non AC vs. music (Ac & music, AC, music, nothing)
standing of what is conjoint analysis and its applications in A sample of 500 respondents are selected and asked to rank
marketing and other types of research problems. their preferences for all possible combinations and for each level.
What is Conjoint Analysis? These are shown below along with one respondent’s sample
Conjoint analysis is a technique used to identify the most rankings. We can present our trade off information in the form
desirable combination of features to be offered in a new of a table:
product. It addresses the problem of how the customer will Table 1
value the various tangible and intangible features offered by a Frequency Ac AC&music Music Nothing
particular firm’s product.
10 1 2 3 4
Conjoint measurement also tells us the extent to which
respondents are willing to give up (trade off) some features and 15 5 6 7 8
attributes to retain others. 20 9 10 11 12
Thus conjoint analysis is done to determine what utility a
consumer attaches to attributes such as: Basically the respondent’s preference ranking help reveal how
• Price (high, low,) desirable a particular feature is to a respondent. Features
respondents are unwilling to give up from one preference
• After sales service (frequent, monthly, yearly, guarantee)
ranking to the next are given a higher utility. Thus in the above
• Product features example the respondent gives a high weightage to service
Conjoint analysis – how it works followed by AC. the offer of music is clearly not very important
as he ranks it below AC. However he is not willing to trade off
A consumer is asked to compare different products attribute
frequency of service with either AC or music.
combinations and rank them. Respondents are to indicate the
combination they most prefer, the second most preferred, etc. Conjoint analysis uses preference rankings to calculate a set of
Conjoint analysis is applied to categorical variables, which reflect utilities for each respondent where one utility is calculated for
different features or characteristics of products. For example for each respondent for each attribute or feature. The calculation of
a new product the features may be: utilities is such that the sum of utilities for a particular combi-
nation shows a good correspondence with that combination’s
• Colour (different shades)
position in the individual’s original preference rankings. The
• Size (largest vs. medium vs. small) utilities basically show the importance of each level of each
• Shape (square vs. cylindrical) importance to respondents.
• Price (different price levels) We can also identify the more important attributes by looking at
It differs from factor analysis because it is only applied to the range of utilities for each of the different levels.
categorical variables. It is similar to factor analysis in that it tries For Example
to identify interdependencies between a number of variables
• Frequency of service has a range from 1.6 to .04. The range is
where the variables are the different features.
therefore equal to =1.2.A high range implies that the
We can best understand Conjoint analysis with the help of an respondent is more sensitive to changes in the level of this
example: attribute.
• These utilities are calculated across all respondents for all
attributes and for different levels of each attribute.
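For readers who want to see the mechanics, the sketch below estimates one respondent's part-worth utilities from the rankings in Table 1 using dummy-coded least squares. This is only an illustrative approximation of what a conjoint package does: the conversion of ranks to preference scores (13 minus the rank) and the choice of baseline levels are assumptions made for the example, not part of the lesson's procedure.

# Illustrative sketch: part-worths from one respondent's rankings (Table 1).
import itertools
import numpy as np

freq_levels = [10, 15, 20]
feature_levels = ["AC", "AC & music", "Music", "Nothing"]
# Ranks 1..12 in the order shown in Table 1 (rows = frequency, columns = features).
ranks = {(f, a): r for r, (f, a) in
         enumerate(itertools.product(freq_levels, feature_levels), start=1)}

profiles = list(ranks)
preference = np.array([13 - ranks[p] for p in profiles], dtype=float)  # higher = preferred

def dummies(profile):
    # Drop the last level of each attribute as the baseline.
    f, a = profile
    return [1.0] + [float(f == x) for x in freq_levels[:-1]] \
                 + [float(a == x) for x in feature_levels[:-1]]

X = np.array([dummies(p) for p in profiles])
utilities, *_ = np.linalg.lstsq(X, preference, rcond=None)
names = ["constant", "freq 10", "freq 15", "AC", "AC & music", "Music"]
for name, u in zip(names, utilities):
    print(f"{name:12s} {u: .2f}")
# The spread (range) of utilities within an attribute indicates how important
# that attribute is to this respondent.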

At the end of the analysis, 3-4 of the most popular combinations would be identified, for which the relative costs and benefits can be worked out.
Uses of Conjoint Analysis
• It is used in industrial marketing, where a product can have many combinations and features and not all features would be important to all consumers. In industrial marketing the analysis can be done at the individual level, as each individual is important.
• In the case of consumer goods the analysis should be done segment-wise. To avoid unnecessarily long questionnaires, a preliminary factor analysis should be run to select only testable attributes. Also, the number of attributes should be restricted.
Problems
It is important that the attributes be selected carefully. The analysis assumes the attributes are important to consumers.
We now present some applications of conjoint analysis from the internet.
Conjoint helps understand why consumers prefer certain products:
U[product] = U[attribute1(i)] + U[attribute2(j)] + ... + U[attributek(m)]
where:
U[product] = overall utility, or "worth", of the product
U[attributex(y)] = "part worth" of the yth level of the xth attribute
k = number of attributes
Conjoint analysis is a sophisticated technique for measuring consumer attitudes and preferences. Like the multi-attribute model, it helps understand why consumers prefer certain products. Also like the multi-attribute model, it decomposes overall preference into a series of additive terms.
However, there is an important difference between conjoint analysis and the multi-attribute model:
• The multi-attribute model is compositional: it builds up an inferred overall attitude as the sum of measured sub-components.
• The conjoint model is decompositional: it measures overall preference and decomposes this into inferred sub-components.
2. Example of Conjoint Analysis Technique
Packaged soups. Four attributes with the following levels:
a. Flavor: onion, chicken noodle, country vegetable
b. Calories: 80, 100, 140
c. Salt-free: yes, no
d. Price: $1.19, $1.49
Altogether there are 3 x 3 x 2 x 2 = 36 possible combinations. A consumer could, in theory, rate each of the 36 combinations on a 9-point preference scale.
3. Conjoint Example: Packaged Soups, Results for One Subject
Flavor      Cal.   Salt Free   Price   Rating
Onion       80     yes         $1.19   9
Onion       80     yes         $1.49   8
Onion       80     no          $1.19   6
Onion       80     no          $1.49   6
Onion       100    yes         $1.19   7
Onion       100    yes         $1.49   6
Onion       100    no          $1.19   5
Onion       100    no          $1.49   5
Onion       140    yes         $1.19   7
Onion       140    yes         $1.49   6
Onion       140    no          $1.19   5
Onion       140    no          $1.49   5
Chicken     80     yes         $1.19   7
Chicken     80     yes         $1.49   6
Chicken     80     no          $1.19   2
Chicken     80     no          $1.49   2
Chicken     100    yes         $1.19   3
Chicken     100    yes         $1.49   3
Chicken     100    no          $1.19   2
Chicken     100    no          $1.49   1
Chicken     140    yes         $1.19   2
Chicken     140    yes         $1.49   2
Chicken     140    no          $1.19   2
Chicken     140    no          $1.49   1
Vegetable   80     yes         $1.19   9
Vegetable   80     yes         $1.49   8
Vegetable   80     no          $1.19   7
Vegetable   80     no          $1.49   6
Vegetable   100    yes         $1.19   8
Vegetable   100    yes         $1.49   7
Vegetable   100    no          $1.19   6
Vegetable   100    no          $1.49   5
Vegetable   140    yes         $1.19   6
Vegetable   140    yes         $1.49   5
Vegetable   140    no          $1.19   5
Vegetable   140    no          $1.49   4
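The part-worths reported in the next section can be reproduced from the ratings above by simple averaging: the part-worth of a level is taken as the mean rating of all profiles containing that level, and an attribute's relative importance as its share of the total range of level means. The sketch below does exactly that; the code itself is illustrative and is not from the original source (the published table also reports normalized means, which are omitted here).

# Level means and range-based importances from the single subject's 36 ratings.
import numpy as np

flavors = ["Onion", "Chicken", "Vegetable"]
calories = [80, 100, 140]
salt_free = ["yes", "no"]
prices = ["$1.19", "$1.49"]

# Ratings in the same order as the table: flavor x calories x salt-free x price.
ratings = np.array([
    9, 8, 6, 6, 7, 6, 5, 5, 7, 6, 5, 5,      # Onion
    7, 6, 2, 2, 3, 3, 2, 1, 2, 2, 2, 1,      # Chicken
    9, 8, 7, 6, 8, 7, 6, 5, 6, 5, 5, 4,      # Vegetable
], dtype=float).reshape(3, 3, 2, 2)

def level_means(axis, labels):
    means = ratings.mean(axis=tuple(i for i in range(4) if i != axis))
    return dict(zip(labels, means))

attributes = {"Flavor": (0, flavors), "Calories": (1, calories),
              "Salt-free": (2, salt_free), "Price": (3, prices)}
ranges = {}
for name, (axis, labels) in attributes.items():
    means = level_means(axis, labels)
    ranges[name] = max(means.values()) - min(means.values())
    print(name, {k: round(v, 2) for k, v in means.items()})

total = sum(ranges.values())
print({k: f"{v / total:.0%}" for k, v in ranges.items()})  # relative importances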

4. Part-worths calculated for one subject:
Attribute   Level       Mean   Normalized mean   Relative importance
Flavor      vegetable   6.33   1.00              43%
            onion       6.25    .98
            chicken     2.75    .00
Calories    80          6.33   1.00              26%
            100         4.83    .58
            140         4.17    .40
Salt-Free   yes         6.06    .92              23%
            no          4.17    .40
Price       $1.19       5.44    .75               8%
            $1.49       4.78    .56
5. Orthogonal Arrays
a. In actual applications, it becomes impossible to present all possible combinations of attributes to a consumer. Consider carpet cleaners:
   3 package designs
   3 brand names
   3 price points
   Good Housekeeping Seal: yes/no
   Money-back guarantee: yes/no
b. There are 3 x 3 x 3 x 2 x 2 = 108 possible combinations.
c. Q) What is the fewest number of combinations we can get by with?
   A) The sum of the degrees of freedom for the main effects of each attribute:
   (3-1) + (3-1) + (3-1) + (2-1) + (2-1) = 8
d. We need to find an "orthogonal array" of 8 profiles, which allows us to estimate all additive main effects in the conjoint model. Typically, we at least double the minimum number for greater stability.
6. Issues in Designing a Conjoint Study
a. Attribute selection
b. Collecting preference data
c. Typical sample sizes
d. Profile vs. two-factor evaluation
e. Computerized (adaptive) approaches
7. Desirable problem situations for conjoint analysis
a. Product is realistically decomposable
b. Product is a reasoned, high-stake decision
c. All combinations that are presented to respondents are reasonable
d. Product/service alternatives can be realistically described
e. New product alternatives can be synthesized from basic attributes
8. Airline Example: Stages
a. Develop relevant set of attributes and select appropriate levels
b. Use a fractional factorial design to create an orthogonal array of stimuli
c. Rate or rank the stimuli
d. Estimate part-worths for attribute levels
e. Estimate relative importance of attributes
f. Interpretation
9. Airline Example: Discussion
a. Interpretation of results
b. Selecting attributes
c. Selecting attribute levels
d. Applications to market segmentation
e. Applications to product development
10. Example: Conjoint analysis for a faculty chair candidate; conjoint as an aid to decision making. Attributes used were:
a. Area of specialization (quantitative, consumer behavior, strategy, management, international)
b. Research orientation (star, active, inactive, minor)
c. Teaching orientation (star, good, average, below average)
d. Current position (chair, full, associate)
e. Role in department (work with junior faculty, work with faculty at other schools, work with business community)
• Discuss part-worths and relative importances for eight faculty members.
• Discuss use of non-metric data and monotone transformation of faculty rank orders, which optimized the fit of the conjoint model.

Sawtooth Software
Research Paper Series
Understanding Conjoint Analysis in 15 Minutes
Joseph Curry, Sawtooth Technologies, Inc., 1996
© Copyright 1996-2001, Sawtooth Software, Inc.
530 W. Fir St., Sequim, WA 98382
(360) 681-2300
www.sawtoothsoftware.com

Understanding Conjoint Analysis in 15 Minutes
Joseph Curry
(Originally published in Quirk's Marketing Research Review)
Copyright 1996, Sawtooth Software

Conjoint analysis is a popular marketing research technique that marketers use to determine what features a new product should have and how it should be priced. Conjoint analysis became popular because it was a far less expensive and more flexible way to address these issues than concept testing.
The basics of conjoint analysis are not hard to understand. I'll attempt to acquaint you with these basics in the next 15 minutes so that you can appreciate what conjoint analysis has to offer. A simple example is all that's required.
Suppose we want to market a new golf ball. We know from experience and from talking with golfers that there are three important product features:
• Average Driving Distance
• Average Ball Life
• Price
We further know that there is a range of feasible alternatives for each of these features, for instance:
Average Driving Distance   Average Ball Life   Price
275 yards                  54 holes            $1.25
250 yards                  36 holes            $1.50
225 yards                  18 holes            $1.75
Obviously, the market's "ideal" ball would be:
Average Driving Distance   Average Ball Life   Price
275 yards                  54 holes            $1.25
and the "ideal" ball from a cost-of-manufacturing perspective would be:
Average Driving Distance   Average Ball Life   Price
225 yards                  18 holes            $1.75
assuming that it costs less to produce a ball that travels a shorter distance and has a shorter life.
Here's the basic marketing issue: We'd lose our shirts selling the first ball and the market wouldn't buy the second. The most viable product is somewhere in between, but where? Conjoint analysis lets us find out where.
A traditional research project might start by considering the rankings for distance and ball life in Figure 1.
Figure 1
Rank   Average Driving Distance      Rank   Average Ball Life
1      275 yards                     1      54 holes
2      250 yards                     2      36 holes
3      225 yards                     3      18 holes
This type of information doesn't tell us anything that we didn't already know about which ball to produce.
Now consider the same two features taken conjointly. Figures 2a and 2b show the rankings of the 9 possible products for two buyers, assuming price is the same for all combinations.
Figure 2a
Buyer 1                               Average Ball Life
                           54 holes   36 holes   18 holes
Average      275 yards     1          2          4
Driving      250 yards     3          5          7
Distance     225 yards     6          8          9
Figure 2b
Buyer 2                               Average Ball Life
                           54 holes   36 holes   18 holes
Average      275 yards     1          3          6
Driving      250 yards     2          5          8
Distance     225 yards     4          7          9
Both buyers agree on the most and least preferred ball. But as we can see from their other choices, Buyer 1 tends to trade off ball life for distance, whereas Buyer 2 makes the opposite trade-off.
The knowledge we gain in going from Figure 1 to Figures 2a and 2b is the essence of conjoint analysis. If you understand this, you understand the power behind this technique.
Next, let's figure out a set of values for driving distance and a second set for ball life for Buyer 1 so that when we add these values together for each ball they reproduce Buyer 1's rank orders. Figure 3 shows one possible scheme.

Figure 3
Buyer 1                                          Average Ball Life
                                 54 holes (50)   36 holes (25)   18 holes (0)
Average      275 yards (100)     (1) 150         (2) 125         (4) 100
Driving      250 yards (60)      (3) 110         (5) 85          (6) 60
Distance     225 yards (0)       (7) 50          (8) 25          (9) 0
Notice that we could have picked many other sets of numbers that would have worked, so there is some arbitrariness in the magnitudes of these numbers even though their relationships to each other are fixed.
Next suppose that Figure 4a represents the trade-offs Buyer 1 is willing to make between ball life and price. Starting with the values we just derived for ball life, Figure 4b shows a set of values for price that, when added to those for ball life, reproduce the rankings for Buyer 1 in Figure 4a.
Figure 4a
Buyer 1                      Average Ball Life
                54 holes   36 holes   18 holes
       $1.25    1          4          7
Price  $1.50    2          5          8
       $1.75    3          6          9
Figure 4b
Buyer 1                                  Average Ball Life
                     54 holes (50)   36 holes (25)   18 holes (0)
       $1.25 (20)    (1) 70          (4) 45          (7) 20
Price  $1.50 (5)     (2) 55          (5) 30          (8) 5
       $1.75 (0)     (3) 50          (6) 25          (9) 0
We now have in Figure 5 a complete set of values (referred to as "utilities" or "part-worths") that capture Buyer 1's trade-offs.
Figure 5
Average Driving Distance     Average Ball Life     Price
275 yards   100              54 holes   50         $1.25   20
250 yards    60              36 holes   25         $1.50    5
225 yards     0              18 holes    0         $1.75    0
Let's see how we would use this information to determine which ball to produce. Suppose we were considering one of the two golf balls shown in Figure 6.
Figure 6
            Distance Ball   Long-Life Ball
Distance    275             250
Life        18              54
Price       $1.50           $1.75
The values for Buyer 1 in Figure 5, when added together, give us an estimate of his preferences. Applying these to the two golf balls we're considering, we get the results in Figure 7.
Figure 7
Buyer 1
                   Distance Ball       Long-Life Ball
Distance           275     100         250      60
Life               18        0         54       50
Price              $1.50     5         $1.75     0
Total Utility              105                  110
We'd expect Buyer 1 to prefer the long-life ball over the distance ball since it has the larger total value. It's easy to see how this can be generalized to several different balls and to a representative sample of buyers.
These three steps (collecting trade-offs, estimating buyer value systems, and making choice predictions) form the basics of

conjoint analysis. Although trade-off matrices are useful for explaining conjoint analysis as in this example, not many researchers use them nowadays. It's easier to collect conjoint
data by having respondents rank or rate concept statements or
by using PC-based interviewing software that decides what
questions to ask each respondent, based on his previous
answers. As you may expect there is more to applying conjoint
analysis than is presented here. But if you understand this
example, you understand what conjoint analysis is and what it
can do for you as a marketer.
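The Figure 7 calculation can be verified in a few lines. The part-worths and product specifications below are copied from Figures 5 and 6; the code itself is only an illustrative sketch, not part of the original article.

# Total-utility check for Buyer 1's two candidate golf balls (Figures 5-7).
part_worths = {
    "distance": {275: 100, 250: 60, 225: 0},
    "life":     {54: 50, 36: 25, 18: 0},
    "price":    {1.25: 20, 1.50: 5, 1.75: 0},
}

balls = {
    "Distance Ball":  {"distance": 275, "life": 18, "price": 1.50},
    "Long-Life Ball": {"distance": 250, "life": 54, "price": 1.75},
}

for name, spec in balls.items():
    total = sum(part_worths[attr][level] for attr, level in spec.items())
    print(f"{name}: total utility = {total}")
# Distance Ball: 100 + 0 + 5 = 105; Long-Life Ball: 60 + 50 + 0 = 110,
# so Buyer 1 is predicted to prefer the long-life ball, as in Figure 7.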
Point to Ponder
• Conjoint analysis is a technique that typically handles non-metric independent variables.
• Conjoint analysis allows the researcher to determine the importance of product or service attributes and the levels of features that are most desirable.
• Respondents provide preference data by ranking or rating cards that describe products.
• These data become utility weights of product characteristics by means of optimal scaling and log-linear algorithms.

LESSON 40:
DISCRIMINANT ANALYSIS

In the last few lessons we learnt to do multiple regression analysis. There are other multivariate techniques which measure the association between variables, and discriminant analysis is one such technique. Frequently, as researchers, we are asked to classify people or objects into one or more groups. The problem faced by the researcher is to find some rule with which he can predict, based on certain characteristics or features, the category into which an individual will fall. Statistically, the analysis is very similar to multiple regression. The result of discriminant analysis is an equation which allows us to predict which group a new observation will belong to. Again, our exposition focuses on the intuitive foundations and applications of this technique, as the statistics are beyond the scope of our course.
Example: We may have data on various economic and social variables for the different states of our country. Is it possible to categorise the states as developed or less developed from the common set of observed variables? Or, if the states have already been categorised on a hypothetical basis, how far do our data themselves serve to discriminate between these hypothesised categories?
What is Discriminant Function Analysis?
Discriminant analysis is used to analyse relationships between a non-metric dependent variable and metric or dichotomous independent variables. Discriminant analysis attempts to use the independent variables to distinguish among the groups or categories of the dependent variable. The usefulness of a discriminant model is based upon its accuracy rate, or ability to predict the known group memberships in the categories of the dependent variable.
Discriminant scores are standardized, so that if the score falls on one side of the boundary (standard score less than zero) the case is predicted to be a member of one group, and if the score falls on the other side of the boundary (positive standard score) it is predicted to be a member of the other group.
If the dependent variable defines two groups, one statistically significant discriminant function is required to distinguish the groups; if the dependent variable defines three groups, two statistically significant discriminant functions are required to distinguish among the three groups; and so on.
Discriminant function analysis is a technique that has received lots of theoretical attention from statisticians as well as somewhat less attention from users. It is similar to multiple regression and somewhat to MANOVA, and as we've seen, many of the other techniques use discriminant functions as part of the technique. Conceptually, we can think of the discriminant function or equation as defining the boundary between groups.
Discriminant analysis works by creating a new variable called the discriminant function score, which is used to predict to which group a case belongs. Discriminant function scores are computed similarly to factor scores, i.e. using eigenvalues. The computations find the coefficients for the independent variables that maximize the measure of distance between the groups defined by the dependent variable. The discriminant function is similar to a regression equation in which the independent variables are multiplied by coefficients and summed to produce a score.
The data structure for DFA is a single grouping variable that is predicted by a series of other variables. The grouping variable must be nominal, which might also be a reclassification of a continuous variable into groups. The function is presented thus:
Y' = X1W1 + X2W2 + X3W3 + ... + XnWn + Constant
This is essentially identical to a multiple regression, but in reality the two techniques are quite different. Regression is built on a linear combination of variables that maximizes the regression relationship, i.e., the least squares regression, between a continuous dependent variable and the regression variate. In DFA, the dependent variable consists of discrete groups, and what you want to do with the function is to maximize the distance between those groups, i.e., to come up with a function that has strong discriminatory power among the groups. Although logit regression does somewhat the same thing when you have a binary (two-group) variable, and the book makes a big thing of the similarities, the reality is that the way in which they compute the functions is quite different.
What Does it Do?
DFA can be used in both an exploratory and a predictive mode. In particular, it can:
• Determine whether there are differences among the average scores of a series of variables for two or more groups (note the similarity to MANOVA)
• Determine which variables account for these differences
• Be used as a statistically-based classification procedure
• Help understand the dimensions of discrimination and how groups are discriminated
Note that these proceed from a hypothesis-testing mode to an exploration mode. Let's look at each one.
Differences Among Average Scores
At first glance, this appears to be what MANOVA does, but it is really quite different. In MANOVA, we have multiple dependent variables, combined into a variate, and we examine whether those variates differ among discrete classes of one or more independent variables. Although we can examine

interaction among the independent variables, we are looking at differences among the variates, or combinations of continuous dependent variables. In DFA, the focus is on the groups as the outcome. Here, we want to develop a function that will maximize the distance among the groups.
Which Variables Account for These Differences?
In this case, we are looking at the loadings of the variables on the discriminant function. Just like all the other techniques that deal with coefficients, we will have the coefficients in a raw and a standardized form. Can anyone tell me how you could get the standardized form if the program didn't do it for you?
Statistically Based Classification
The form of the DF looks so similar to a regression that it is hard to understand how it differs. Certainly, the method and what it tries to accomplish differ - but the outcome of applying the function is a continuous, rather than discontinuous, value. So now what? The classification is statistical because it gives you a likelihood of group membership. The outcome is compared against the group distributions, and each case can then be placed into the most likely group, second most likely, etc. One of the things you can do with DFA, as a further test of its significance, is to apply the classification phase and use chi-square analysis to see how well the function separates the groups.
Understand the Dimensions
DFA will give you multiple functions (one less than the number of groups), and you can use these functions to look at different aspects of the data, i.e., in a sense an ordination.
An Example
We look at the classification problem faced by an ecologist attempting to classify different types and distribution of a family of land snails. In CO, the family has 3 genera, with 2 species each. All the early work on their descriptions was based on shell morphology, but later it was shown that this really was not a good basis for the differences, and that the reproductive anatomy was the key. What used to be more species were collapsed down to the 6 once that was discovered. Well, there were hundreds of collections of these things in museums, mostly just with the shells. More recent collections were stored in alcohol, with the entire body present. My job was to do a thorough description of these species and to write a monograph on them. That meant dissecting 100's of specimens, and as I did that, I also took 9 measurements of the shell. As you do work like that, there's loads of time for the mind to wander, so I began imagining that I could really see differences in the shell morphology. I started testing myself, and found I was a little better than random, suggesting that there were indeed differences.
So what was the DFA? I used the 9 shell measures to develop a function that I could apply to the species. My question was - are these 6 species differentiated on the basis of shell morphology, how well, and what variables do that differentiation? I actually can't find a copy of the thesis, so I don't recall the details of the outcome, but basically I found that there were some differences, and I used the functions to go back and put a maximum likelihood onto the other specimens in the CU museum. As an aside, I also used MANOVA to test for differences among the 6 species based on these characteristics. So that shows you the relationship between the 2 techniques - one is almost the inverse of the other, in terms of whether a variable is considered dependent or independent.
Interpreting Output
DFA gives you a lot of output, depending on which options you choose. You can save the discriminant scores as new variables, and you can save the group membership probabilities as new variables. Like regression, you can use a stepwise procedure to come up with a "least number of variables" analysis. If you enter all variables, those with a very low tolerance (extreme multicollinearity) will be excluded.
The first output that you get is an assessment of the statistical significance of the functions. This is an expression of whether the differences among the group means are significant. Each successive function will explain less variance than the previous, and be less significant. Before the functions are computed, you are given a Wilk's Lambda that expresses the difference among the group means. Then the successive functions are computed, and for each there is a WL computed. There is also a canonical correlation, which is the correlation between the function and the original variables. A high canonical correlation indicates that a large amount of the variance in those variables is expressed by the function.
Following the significance tests are the standardized coefficients, and once again those can be interpreted as weightings. If you have lots of coefficients over 1.0, there may be a problem with multicollinearity.
The next set of output is the correlations between the discriminating variables and the functions, i.e., the cross-loadings in a sense. SPSS arranges these by function, so that you can see the variables that load highly on the first function, the second, etc.
Next you get the function values for the group means, called the centroids. Those are primarily helpful if you need to plot the relative positions of the groups, but SPSS and most other programs will do that step for you.
Finally, you get into the classification phase, and your output will depend upon what you requested. I've found that the most straightforward thing to do is to request a classification summary only, and then to save the probabilities of individual group memberships as new variables, so that you may refer to those if necessary. The classification summary will tell you the number of cases in each group as predicted by the analysis and from your data. A "good" function should then be able to do a good job in this phase.
Assumptions
The two primary assumptions of DFA are multivariate normality and equality of covariances. There are some suggestions that logistic regression is less sensitive to departures from normality, and that if you have only 2 groups, that is the preferred technique.

You can, however, find just as many papers saying that DFA is really quite a robust technique... So it's really up to you. If you have more than 2 groups, DFA is the only choice.
When to Use It, and When to Use Logistic Regression
I just want to talk very briefly about logistic regression. Logit analysis is regression of a series of independent variables on a binary dependent variable. It is called "logistic regression" because it is based on a logistic curve. The slope of this curve is low at the upper and lower extremes of the independent variable(s) and highest in the middle, i.e., it is most sensitive there. The outcome is not a prediction of Y but a probability of belonging to a particular state of Y, with 1 (never reached) being a perfect probability. The probability of an event (i.e., the probability of "membership") is expressed by a logistic equation.
Logit regression has not had that much usage in ecological/biological work because of the paucity of situations involving binary variables, but recently a new technique has come up that uses it - regression tree analysis. This is still fairly rare, because few people have the tools and skills to work with it. There are forms that use logit regression and forms that use discriminant functions, but basically what it does is to help understand structure in complex data sets.
Logit regression is a hierarchical dichotomous technique based on intensive successive computations. Essentially, it will test all possible dichotomous divisions of a dataset, looking for a binary variable (or a binary division of a continuous variable) that produces the "best possible" dichotomous division in the data, i.e., the most homogeneous subsets. From logit regression, that would be a variable for which the individual cases load toward the ends and away from the middle, i.e., where most cases were classified as 1 or 0 (or approaching those). For discriminant analysis, it would be the variable that best separates the group means. It then takes those subsets and redivides them, and so on. It is computationally very intensive, because it can not only look at each variable but at all possible splits of that variable (presuming it's continuous).
One of the beauties of logistic regression is that it potentially solves the problem of secondary variation that I presented a few days ago, i.e., that in most data sets, the secondary variation is different along different parts of the primary gradient. A tree analysis will treat each of those separately, so that you don't lose interesting secondary structure in one part of the gradient just because it was subsumed elsewhere.
Some of you may have read about classification trees, and regression trees are quite similar. Most of the applications of these seem to be in data sets where you have a large number of potentially predictive variables, and you're trying to make sense of them. But a key limitation is that you are predicting only one variable. That makes it not very useful for community classification, for example. There are lots of uses in remote sensing and landscape ecology.
A large part of the material above has been taken from Dr. Marilyn D. Walker's web site on multivariate techniques. Further theory and applications, from the internet, follow.
© Copyright StatSoft, Inc., 1984-2003
Discriminant Function Analysis
• General Purpose
• Computational Approach
• Stepwise Discriminant Analysis
• Interpreting a Two-Group Discriminant Function
• Discriminant Functions for Multiple Groups
• Assumptions
• Classification
General Purpose
Discriminant function analysis is used to determine which variables discriminate between two or more naturally occurring groups. For example, an educational researcher may want to investigate which variables discriminate between high school graduates who decide (1) to go to college, (2) to attend a trade or professional school, or (3) to seek no further training or education. For that purpose the researcher could collect data on numerous variables prior to students' graduation. After graduation, most students will naturally fall into one of the three categories. Discriminant Analysis could then be used to determine which variable(s) are the best predictors of students' subsequent educational choice.
A medical researcher may record different variables relating to patients' backgrounds in order to learn which variables best predict whether a patient is likely to recover completely (group 1), partially (group 2), or not at all (group 3). A biologist could record different characteristics of similar types (groups) of flowers, and then perform a discriminant function analysis to determine the set of characteristics that allows for the best discrimination between the types.
Computational Approach
Computationally, discriminant function analysis is very similar to analysis of variance (ANOVA). Let us consider a simple example. Suppose we measure height in a random sample of 50 males and 50 females. Females are, on the average, not as tall as males, and this difference will be reflected in the difference in means (for the variable Height). Therefore, variable height allows us to discriminate between males and females with a better than chance probability: if a person is tall, then he is likely to be a male; if a person is short, then she is likely to be a female.

We can generalize this reasoning to groups and variables that are less "trivial." For example, suppose we have two groups of high school graduates: those who choose to attend college after graduation and those who do not. We could have measured students' stated intention to continue on to college one year prior to graduation. If the means for the two groups (those who actually went to college and those who did not) are different, then we can say that intention to attend college as stated one year prior to graduation allows us to discriminate between those who are and are not college bound (and this information may be used by career counselors to provide the appropriate guidance to the respective students).
To summarize the discussion so far, the basic idea underlying discriminant function analysis is to determine whether groups differ with regard to the mean of a variable, and then to use that variable to predict group membership (e.g., of new cases).
Analysis of Variance. Stated in this manner, the discriminant function problem can be rephrased as a one-way analysis of variance (ANOVA) problem. Specifically, one can ask whether or not two or more groups are significantly different from each other with respect to the mean of a particular variable. To learn more about how one can test for the statistical significance of differences between means in different groups you may want to read the Overview section to ANOVA/MANOVA. However, it should be clear that, if the means for a variable are significantly different in different groups, then we can say that this variable discriminates between the groups.
In the case of a single variable, the final significance test of whether or not a variable discriminates between groups is the F test. As described in Elementary Concepts and ANOVA/MANOVA, F is essentially computed as the ratio of the between-groups variance in the data over the pooled (average) within-group variance. If the between-group variance is significantly larger, then there must be significant differences between means.
Multiple Variables. Usually, one includes several variables in a study in order to see which one(s) contribute to the discrimination between groups. In that case, we have a matrix of total variances and covariances; likewise, we have a matrix of pooled within-group variances and covariances. We can compare those two matrices via multivariate F tests in order to determine whether or not there are any significant differences (with regard to all variables) between groups. This procedure is identical to multivariate analysis of variance or MANOVA. As in MANOVA, one could first perform the multivariate test and, if statistically significant, proceed to see which of the variables have significantly different means across the groups. Thus, even though the computations with multiple variables are more complex, the principal reasoning still applies, namely, that we are looking for variables that discriminate between groups, as evident in observed mean differences.
(A plot of the cases on the first two discriminant functions, Root 1 and Root 2, for three groups appeared here in the original.) In this example, Root (function) 1 seems to discriminate mostly between group Setosa, and Virginic and Versicol combined. In the vertical direction (Root 2), a slight trend of Versicol points to fall below the center line (0) is apparent.
Factor structure matrix. Another way to determine which variables "mark" or define a particular discriminant function is to look at the factor structure. The factor structure coefficients are the correlations between the variables in the model and the discriminant functions; if you are familiar with factor analysis (see Factor Analysis) you may think of these correlations as factor loadings of the variables on each discriminant function.
Some authors have argued that these structure coefficients should be used when interpreting the substantive "meaning" of discriminant functions. The reasons given by those authors are that (1) supposedly the structure coefficients are more stable, and (2) they allow for the interpretation of factors (discriminant functions) in a manner that is analogous to factor analysis. However, subsequent Monte Carlo research (Barcikowski & Stevens, 1975; Huberty, 1975) has shown that the discriminant function coefficients and the structure coefficients are about equally unstable, unless the n is fairly large (e.g., if there are 20 times more cases than there are variables). The most important thing to remember is that the discriminant function coefficients denote the unique (partial) contribution of each variable to the discriminant function(s), while the structure coefficients denote the simple correlations between the variables and the function(s). If one wants to assign substantive "meaningful" labels to the discriminant functions (akin to the interpretation of factors in factor analysis), then the structure coefficients should be used (interpreted); if one wants to learn what each variable's unique contribution to the discriminant function is, use the discriminant function coefficients (weights).
Significance of discriminant functions. One can test the number of roots that add significantly to the discrimination between groups. Only those found to be statistically significant should be used for interpretation; non-significant functions (roots) should be ignored.
Summary. To summarize, when interpreting multiple discriminant functions, which arise from analyses with more than two groups and more than one variable, one would first test the different functions for statistical significance, and only consider the significant functions for further examination. Next, we

would look at the standardized b coefficients for each variable for each significant function. The larger the standardized b
coefficient, the larger is the respective variable’s unique contribu-
tion to the discrimination specified by the respective
discriminant function. In order to derive substantive “meaning-
ful” labels for the discriminant functions, one can also examine
the factor structure matrix with the correlations between the
variables and the discriminant functions. Finally, we would look
at the means for the significant discriminant functions in order
to determine between which groups the respective functions
seem to discriminate.
Point to Ponder
• Discriminant analysis is used to classify people or objects into groups based on several predictor variables.
• The groups are defined by a categorical variable with two or more values, whereas the predictors are metric.
• The effectiveness of the discriminant equation is based not only on its statistical significance but also on its success in correctly classifying cases to groups.
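As a hands-on illustration of the ideas in this lesson, the sketch below fits a two-group discriminant function to synthetic data and reports its classification hit rate. The data and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions made for the example; the lesson itself discusses SPSS output.

# Two-group discriminant analysis on made-up data (illustration only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
# Two groups measured on three metric predictors, with different group means.
group0 = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(50, 3))
group1 = rng.normal(loc=[1.5, 1.0, 0.5], scale=1.0, size=(50, 3))
X = np.vstack([group0, group1])
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

print("discriminant coefficients:", lda.coef_)       # weights of the function
print("classification accuracy:", lda.score(X, y))   # hit rate on the same data
print("predicted group for a new case:", lda.predict([[1.2, 0.8, 0.3]]))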

LESSON 41:
CLUSTER ANALYSIS

So far we have learnt how to classify objects into different groups using discriminant analysis. Discriminant analysis is essentially used to divide objects into two or more groups; the result of discriminant analysis is an equation which can predict which class or group a new object will fall into. We now turn to another technique which groups objects: cluster analysis. However, the basis on which it does so is different from discriminant analysis. Cluster analysis is a technique for grouping individuals or objects into unknown groups. It is different from discriminant analysis because the number and characteristics of the groups in a cluster analysis are not known prior to the analysis.
Again, since this is a technique using complex, high-level statistics, our exposition will focus on obtaining an intuitive understanding of the technique and its applications.
Cluster Analysis: What It Is and What It's Not
Frequently marketers are interested in putting people or objects into groups on the basis of similarities among the objects, based on a common set of measures. This concept has extensive applicability in marketing, where we may want to group potential customers into homogeneous groups for which a specific marketing mix could be developed to reach a particular group. The basis of grouping could involve a variety of socio-economic and psychographic characteristics.
One goal of a marketing manager is to identify consumer segments so that the marketing programme can be developed and tailored to each segment. This can be done by clustering consumers. For example, we might cluster them on the basis of product benefits they seek from a college, or by lifestyles. The result for students might be one group which likes outdoor activities, another that enjoys entertainment, etc. Each segment would have different product needs and may respond differently to advertising.
Alternatively, we might want to cluster brands to determine which brands are regarded as similar. If a test marketing programme is planned, we could cluster different cities so that different marketing programmes can be tried out in different cities.
Cluster analysis is neither a single technique nor a statistical technique. It is a mathematical procedure for dividing data into classes, without a preconceived notion of what those classes are, based on relationships within the data. There are many different ways to do this, and some of them use statistical probabilities or statistical quantities such as sums of squares at various points. But overall, the techniques themselves are not really statistical, as they give you no means of assessing likelihood.
Once you have come up with some classes, what are two techniques that we have learned that you could use to explore the validity of your classification?
Hierarchical vs. Nonhierarchical Approaches
Nonhierarchical approaches divide up your data into a set of classes, and each case is assigned to a particular class. Generally, the hierarchical methods are more widely used because they give greater insight into the overall structure. However, they have their own challenges.
There are two ways of doing a hierarchical analysis: agglomeratively or divisively. Divisive methods are similar to what we talked about with regression tree analysis; they start with a data set, subdivide it, and continually subdivide the subsets until some predetermined threshold is reached. That threshold might be determined by group size or relationships among group members. Agglomerative methods begin with each sample representing a cluster, joining the two most similar, and then repeatedly joining new clusters together.
Results of hierarchical methods are usually presented as a dendrogram.
(The original reproduces an SPSS "Rescaled Distance Cluster Combine" dendrogram for 30 cases, showing the distances at which cases and clusters are successively joined.)

The further to the right, the more dissimilar the clusters are, and eventually everything is joined together. Both agglomerative and divisive techniques can be used to produce a dendrogram. A key thing to understand about hierarchical techniques is that each sample has "membership" in multiple groups, because the groups are themselves clustered hierarchically. When you actually work with the data, you often want to determine a point at which you will work with the clusters, somewhere between each sample being a cluster and all samples being one giant cluster.
Distance Measures and Agglomeration Methods
In order to do the agglomeration or division, there has to be a means of measuring multivariate distance. I should also say that we will work with cluster analysis on "raw" data, but often what happens is that first you reduce the data using an ordination technique, and then you do a clustering on the reduced data. So that is another approach that can be used, one that is particularly useful if you're working with something like species data, where you may have literally hundreds of variables. That's too many variables! Species data are really a somewhat special case, but a case that many of us might wish to work with.
The common distance measures are the Euclidean distance, the squared Euclidean distance, the "City-Block" distance (a simple distance that doesn't adjust for geometry, not usually recommended), and plenty of others. Generally, the Euclidean distance in multivariate space will be the preferred method. SPSS gives you 8 choices if your data are interval or greater, 2 choices for count data, and 7 choices for binary data. Again, you can usually get around this decision quite easily by just using the Euclidean distance.
With species data, there is another common "distance" measure that is often used: the percentage similarity. Percentage similarity is calculated between pairs of samples, and it is the sum of the minimum value for each species (i.e., whichever sample has the least of a species, even if that number is zero), multiplied by 2, divided by the sum of the species abundances for the two samples. This is a nice number, because it varies between 0 and 1. One minus the percentage similarity is the percentage dissimilarity, which is a measure of distance. Most clustering of vegetation samples will start with a matrix of dissimilarities and work directly from that, which often gives a more stable solution than working with the original data.
Another decision point comes in how the agglomeration is done. This is not so important at the very first step, where you are simply joining two samples, but it gets very important later on. The key decisions are whether to look at the individual cases and their similarity, choosing the most similar cases, at an average of the cluster, or at some other means of joining groups of points. There are basically 3 common techniques:
Single linkage or nearest neighbor joins the two clusters that have the 2 most similar individual points. This can cause a lot of problems: often what you get is a series of clusters that each gain one new member at a time. This is also called chaining. It isn't a terrifically useful structure, although in certain circumstances you might want such a structure.
Complete linkage or furthest neighbor joins the two clusters whose most distant points are the most similar. This method tends to create "globular-shaped" clusters that have unequal variances and sample sizes, which is often what we expect in ecology. I've found that it is the most practical and useful method in terms of being able to straightforwardly interpret the output. The clusters tend to be very clean and well-defined, and you don't get a lot of chaining or reversals.
Ward's Method and Between-groups linkage. These are two different techniques that both tend to favor "spherical" clusters with equal variance and sample size. Ward's method is based on minimizing the within-cluster sum of squares. The between-groups method also uses sums of squares.
There are other methods - within-groups, which is closely related to Ward's, and the centroid method, which joins groups with the nearest centroids - but they are not very useful for most practical purposes. Two common problems in cluster analysis are chaining and reversals. Chaining is where single samples join a larger cluster each time, so instead of a cluster analysis you really have an ordination, i.e., there is no true hierarchical structure. Reversals are caused when an entity joins another cluster at a higher level of similarity than was there before, making the hierarchical structure go "backwards" in some cases. Generally, these problems can all be avoided by using a complete linkage method.
Assumptions and Scaling Problems
Cluster analysis has no "assumptions" per se, because you aren't testing any hypotheses, but there are some considerations about scaling. There are also some "problems" that can occur if your variables have certain structure and you use techniques that don't consider that. For example, if you are using a technique that bases distance on linear correlation, then you are assuming that linear correlation is a reasonable approximation of distance within your data.
If the variables that you are clustering have different scales, then obviously those with a stronger "structure" and greater overall variance will control the clusters. If you wish to avoid this, you should rescale everything to a common system. SPSS offers several ways to do this: z-scores, scaling from 0 to 1, scaling from -1 to +1, etc. Even when your data are all on the same "scale", you may actually have some problems with this. We'll talk a lot more about scaling when we get to ordination, but for now we'll just think about it in terms of scaling different variables to a common system.
Interpretation and Validation
The interpretation and validation stages of cluster analysis are big problems, because there is no way to really effectively assess the outcome. Basically - is it useful? Answer that question, and you

will have gone a long way. You can statistically evaluate how well differentiated your clusters are using MANOVA, and you can use DFA to look at which variables are contributing most strongly to the clustering. Using these other techniques to iteratively improve your clustering is recommended.
A lot of the contents and material for this chapter have been taken from Dr. Marilyn D. Walker's (1998) web site on multivariate techniques and from various other web sites. Further applications and theory, from various web sites, follow.
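The two distance measures described under "Distance Measures and Agglomeration Methods" are easy to compute directly. The sketch below uses made-up abundance values for two samples; only the definitions of Euclidean distance and percentage similarity come from the text.

# Euclidean distance and percentage similarity/dissimilarity for two samples.
import numpy as np

sample_a = np.array([12.0, 0.0, 3.0, 7.0])   # abundances of four species in sample A (made up)
sample_b = np.array([10.0, 2.0, 0.0, 9.0])   # abundances in sample B (made up)

euclidean = np.sqrt(np.sum((sample_a - sample_b) ** 2))

# Percentage similarity: 2 * sum of the species-wise minima, divided by the
# total abundance in both samples; dissimilarity = 1 - similarity.
pct_similarity = 2 * np.minimum(sample_a, sample_b).sum() / (sample_a.sum() + sample_b.sum())
pct_dissimilarity = 1 - pct_similarity

print(f"Euclidean distance:       {euclidean:.2f}")
print(f"Percentage similarity:    {pct_similarity:.2f}")
print(f"Percentage dissimilarity: {pct_dissimilarity:.2f}")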

Points to Ponder
Five steps are basic to the application of most cluster studies:
• Selection of the sample to be clustered (e.g. buyers, medical patients, inventory, products, employees).
• Definition of the variables on which to measure the objects, events, or people (e.g. financial status, political affiliation, market segment characteristics, symptom classes, product competition definitions, productivity attributes).
• Computation of similarities among the entries through correlation, Euclidean distances, and other techniques (a small sketch follows this list).
• Selection of mutually exclusive clusters (maximization of within-cluster similarity and between-cluster differences) or hierarchically arranged clusters.
• Cluster comparison and validation.
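A minimal sketch of the third and fourth steps, assuming SciPy is available: Euclidean distances are computed, a complete-linkage agglomerative clustering is run, and the tree is cut into a chosen number of mutually exclusive clusters. The data here are random stand-ins, not from the lesson.

# Agglomerative (complete-linkage) clustering of 30 objects on 4 variables.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))                        # 30 objects, 4 measures

distances = pdist(X, metric="euclidean")            # condensed distance matrix
tree = linkage(distances, method="complete")        # complete (furthest-neighbour) linkage
labels = fcluster(tree, t=3, criterion="maxclust")  # cut into 3 mutually exclusive clusters

print(labels)  # cluster membership of each object
# The resulting groups can then be compared and validated, e.g. with MANOVA
# or discriminant analysis as suggested in the lesson.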

LESSON 42:
INTERPOLATION AND EXTRAPOLATION

Objective
Dear friends, in many situations during your research work some of the values are missing or you are unable to get some of the data, yet these data may be crucial for the results of your study. What do you do in such a situation, when unfortunately you do not know the relationship between the input data and the output data?
Thanks to interpolation and extrapolation, using these techniques you can find such missing values or predict some of the future values. So, friends, after completion of this lesson you will be able to:
• Find missing values from a given data series with a regular interval between input data values
• Find missing values from a given data series with an irregular interval between input data values
• Predict the data values for future points
Introduction
Dear friends, many times in practical work we come across situations where we have to estimate a value which is not available in the given data series, or we want to predict a future value. Let us take an example: the census of population in India takes place every 10 years, i.e., we have the census figures for 1931, 1941, 1951, 1961, 1971, 1981, and 1991. Now suppose we require population figures for the year 1998 or 1990.
What Should We Do?
To talk of a census for 1998 or for 1999 is impracticable. One way out is to make pure guesswork, and that may be highly deceptive. What is desirable is to obtain the required estimates by analysing the available data.
The techniques of interpolation and extrapolation are extremely helpful in estimating the missing values and projecting the future values. "Interpolation" thus refers to the insertion of an intermediate value in a series of items, whereas "extrapolation" refers to projecting a value for the future. Interpolation supplies us with the missing link whereas extrapolation helps in forecasting.
There is no basic difference between interpolation and extrapolation as far as the methods are concerned, but in distinguishing the past from the future we give them two different names. Interpolation relates to the past whereas extrapolation gives us the forecast.
Now let us discuss some of the applications of interpolation and extrapolation.
Application of Interpolation and Extrapolation
The tools of interpolation and extrapolation are of great practical utility; their utility shall be clear from the following:
1. It often happens that a particular type of information is being collected at regular intervals, such as the census data. Now suppose we need the population figure for, say, 1988 or 1999; it would be impracticable to conduct a census for these years. The only alternative is to make use of the techniques of interpolation and extrapolation.
2. The technique of interpolation is also used where a part of the data is destroyed or missing. For example, we may be studying the figures of sales for a company from 1980 to 1998. We may find that for a particular year, say 1992, data are not available. The records may either be missing or destroyed. Also, when you conduct a survey, there is a possibility that some of the data points may be missing or destroyed.
Suppose we are given two variables X and Y, X being the independent variable and Y the dependent variable. We say that Y is a function of X and express this as
Y = f(X) = Yx
for the values X0, X1, X2, ..., Xn of the X variable and Y0, Y1, Y2, ..., Yn respectively of the Y variable. If we want to estimate the value of Yx for any value of X between the limits X0 and Xn, the technique of interpolation can be used.
Despite the great significance of interpolation and extrapolation it should be noted that they give us only the most likely estimates under certain assumptions. The reader should not form the impression that the figures obtained by this technique would be 100 per cent correct in practice. The variations between actual and estimated values are quite natural. For example, on the basis of the census figures of 1931 to 1991 for India we might get the population for 2001 as 1000 million; however, the actual population obtained by census may be different. Thus, we can say that the interpolated or extrapolated values are only the best possible estimates under certain assumptions; they are not substitutes for actual values.
The accuracy of interpolation depends upon
a. Knowledge of the possible fluctuations of the figures to be obtained, by a general inspection of the fluctuations at dates for which they are given;
b. Knowledge of the course of events with which the figures are connected.
If by looking at the data we find that the fluctuations in the series are regular, we can expect quite accurate estimates.
Activity: Think and write down some of the applications of interpolation and extrapolation in real-life situations.
But to apply these methods we are assuming certain assumptions.

Assumptions of Interpolation and Extrapolation
The following assumptions are made while making use of the techniques of interpolation and extrapolation:
1. There are no sudden jumps in the series from one period to another. While interpolating a value we always presume that there are no sudden ups and downs in the data or, in other words, that the data depict some sort of continuity. For example, if we are given the population figures for 1941, '51, '61, '71, '81 and '91 and are asked to interpolate the figure for 1988, this is done on the assumption that throughout the period 1941-91 there have been no violent changes in population. While extrapolating a value the same assumption applies. However, in many cases this assumption may not hold good and our value may be faulty.
2. Another assumption made while interpolating or extrapolating values is that the rate of change of the figures from one period to another is uniform. Thus, in the above illustration, our assumption would be that from 1941 to 1991 the growth rate of population has been uniform. This assumption again may not hold good in practice in many cases.
Now let us discuss some of the methods of interpolation and extrapolation.

Methods of Interpolation
Let us see how we can interpolate values, keeping in mind the assumptions written above. The various methods of interpolation can be divided into two heads:
1. Graphic method
2. Algebraic methods
The following are some of the important and more popular methods under the algebraic head:
1. Binomial expansion method
2. Newton's method
3. Lagrange's method
Each of these methods is suitable in a certain set of circumstances, which are described below.

Graphic Method
This is the simplest of all the methods of interpolation. When this method is used, the given data are plotted on graph paper and the plotted points are joined: when there are only two values we get a straight line, otherwise a curve is obtained. We take one variable on the X-axis (say, the year) and the values of the other variable (say, sales) on the Y-axis. From the point on the X-axis for which the value is to be interpolated, a perpendicular is drawn up to the line (or curve); from the point where it meets the line or curve, another perpendicular is drawn to the Y-axis. The corresponding value of the variable is the required value.
The following example illustrates the method.
Example: From the following data determine the population for the year 1986.

Year                 1921   1931   1941   1951   1961   1971   1981   1991
Population (crores)  25.13  27.91  31.87  36.11  43.92  54.82  68.33  84.63

Solution: Plotting the series and reading from the graph, the interpolated figure for 1986 is 77.5 crores.
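The graphic method amounts to reading a value off a curve drawn through the plotted points; numerically, joining successive points by straight lines is piecewise linear interpolation. As a rough cross-check on the example above, here is a minimal sketch (assumed Python/NumPy; not part of the original lesson) that interpolates the census series linearly. Reading between the 1981 and 1991 figures it gives about 76.5 crores, reasonably close to the graphical reading of 77.5 crores.

# A minimal sketch (assumed Python/NumPy, not from the original lesson):
# piecewise linear interpolation of the census series, the numerical
# analogue of reading a value off the plotted curve.
import numpy as np

years = np.array([1921, 1931, 1941, 1951, 1961, 1971, 1981, 1991])
population = np.array([25.13, 27.91, 31.87, 36.11, 43.92, 54.82, 68.33, 84.63])  # in crores

# np.interp joins successive points by straight lines, as the graphic method does.
estimate_1986 = np.interp(1986, years, population)
print(round(estimate_1986, 2))   # about 76.48 crores; the textbook's graph reading is 77.5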
Binomial Expansion Method
This method of interpolation is simple to understand and requires very little calculation. However, it is applicable only in those situations where the following two conditions are satisfied:
1. The x-variable advances by equal intervals, say 5, 10, 15, 20, 25, etc. If the increase is not uniform (for example, if x is 5, 8, 13, 15, 24, etc.), the method cannot be applied.
2. The value of X for which Y is to be interpolated is one of the class limits of the X series. For example, observe the following data:
X:  5   10   15   20   25
Y: 30   32    ?   38   40
We can determine the value of Y corresponding to X = 15 but not corresponding to X = 12 or 18. The same is true for extrapolation, i.e., we can extrapolate the value for X = 30 but not for X = 28.

In this method, we expand the binomial (y - 1)^n and equate it to zero:

(y - 1)^n = y_n - n·y_(n-1) + [n(n - 1)/2!]·y_(n-2) - [n(n - 1)(n - 2)/3!]·y_(n-3) + ...... = 0

where n is the number of known observations of y. For different values of n we have the following results:

No. of known values   Equation for determining the unknown value
2     Δ²₀ = Y2 - 2Y1 + Y0 = 0
3     Δ³₀ = Y3 - 3Y2 + 3Y1 - Y0 = 0
4     Δ⁴₀ = Y4 - 4Y3 + 6Y2 - 4Y1 + Y0 = 0
5     Δ⁵₀ = Y5 - 5Y4 + 10Y3 - 10Y2 + 5Y1 - Y0 = 0
6     Δ⁶₀ = Y6 - 6Y5 + 15Y4 - 20Y3 + 15Y2 - 6Y1 + Y0 = 0
7     Δ⁷₀ = Y7 - 7Y6 + 21Y5 - 35Y4 + 35Y3 - 21Y2 + 7Y1 - Y0 = 0
8     Δ⁸₀ = Y8 - 8Y7 + 28Y6 - 56Y5 + 70Y4 - 56Y3 + 28Y2 - 8Y1 + Y0 = 0
9     Δ⁹₀ = Y9 - 9Y8 + 36Y7 - 84Y6 + 126Y5 - 126Y4 + 84Y3 - 36Y2 + 9Y1 - Y0 = 0

The powers in the expanded formula are used as subscripts. The expansion of the binomial as shown above may appear a little difficult and complex; however, it can be written down easily using Pascal's triangle given below.

For a value of n (the power in the expansion) there are n + 1 terms; for example, for n = 5 there are six terms, Y5, Y4, Y3, Y2, Y1 and Y0, taken with alternating signs:
+Y5 - Y4 + Y3 - Y2 + Y1 - Y0
The coefficient of each term is read from Pascal's triangle:

n = 1:   1   1
n = 2:   1   2   1
n = 3:   1   3   3   1
n = 4:   1   4   6   4   1
n = 5:   1   5  10  10   5   1

Each entry is the sum of the two entries above it; for example, 3 = 1 + 2, 6 = 3 + 3, and so on. Thus

(y - 1)^5 = y5 - 5y4 + 10y3 - 10y2 + 5y1 - y0 = 0

Let us solve an example.
Example: Estimate the production for the year 1985 with the help of the following table.

Year    Production (in tonnes)
1960    20
1965    22
1970    26
1975    30
1980    35
1985    ?
1990    43

Solution: Since the known values are six, the sixth leading difference will be zero, i.e.,
(y - 1)^6 = 0, or Δ⁶₀ = 0
y0 = 20, y1 = 22, y2 = 26, y3 = 30, y4 = 35, y5 = x, y6 = 43
(y - 1)^6 = y6 - 6y5 + 15y4 - 20y3 + 15y2 - 6y1 + y0 = 0
Substituting the values of y0, y1, y2, y3, y4, y5, y6:
43 - 6x + 15(35) - 20(30) + 15(26) - 6(22) + 20 = 0
-6x = -43 - 525 + 600 - 390 + 132 - 20
-6x = -246
x = 41
Hence the estimated production for the year 1985 is 41 tonnes.

Example: Using an appropriate formula for interpolation, estimate the average number of children born per mother aged 30-34.

Age of mother (years)   Average no. of children born
15-19                   0.7
20-24                   2.1
25-29                   3.1
30-34                   ?
35-39                   5.7
40-44                   5.8

Solution: Since the known figures are five, the fifth leading difference will be zero:
Y5 - 5Y4 + 10Y3 - 10Y2 + 5Y1 - Y0 = 0
With Y5 = 5.8, Y4 = 5.7, Y3 = x, Y2 = 3.1, Y1 = 2.1, Y0 = 0.7, substituting the values:
5.8 - 5(5.7) + 10x - 10(3.1) + 5(2.1) - 0.7 = 0
x = 4.39
Thus the expected average number of children born per mother aged 30-34 is 4.39, or about 4.4.

Two or more missing values: The same method can be used for more than one missing value. When two values are missing in a series we get two unknown quantities in the equations obtained by the binomial expansion. In such a case, if we are given n known values, we assume that the (n - 1)th differences are constant. If the (n - 1)th differences are constant, the nth differences are zero, i.e., Δⁿy0 = 0, Δⁿy1 = 0, and so on; we then solve the equations Δⁿ = 0.
The following example explains how to find two missing values.

Example: Estimate the production for the years 1994 and 1996 with the help of the following data.

Year    Production (in '000 tonnes)
1991    200
1992    220
1993    260
1994    ?
1995    350
1996    ?
1997    430

Solution: As five figures are known, we assume that the fifth-order differences are zero. In the problem there are two unknown values, hence two equations will be required to determine them. They are:
Δ⁵₀ = y5 - 5y4 + 10y3 - 10y2 + 5y1 - y0 = 0
Δ⁵₁ = y6 - 5y5 + 10y4 - 10y3 + 5y2 - y1 = 0
where y0 = 200, y1 = 220, y2 = 260, y3 = ?, y4 = 350, y5 = ?, y6 = 430.
Substituting the values:
y5 - 5(350) + 10y3 - 10(260) + 5(220) - 200 = 0
430 - 5y5 + 10(350) - 10y3 + 5(260) - 220 = 0
which simplify to
y5 + 10y3 = 3450      ...(i)
5y5 + 10y3 = 5010     ...(ii)
Subtracting Eqn. (i) from (ii):
4y5 = 1560, or y5 = 390
Substituting the value of y5 in Eqn. (i):
390 + 10y3 = 3450
or 10y3 = 3060
or y3 = 306
Thus the missing values corresponding to 1994 and 1996 are 306 and 390 thousand tonnes respectively.
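The arithmetic of the binomial expansion method can be mechanised. The sketch below (assumed Python; the helper name binomial_interpolate is ours, not the text's) builds the equation Δⁿ₀ = 0 with Pascal's-triangle coefficients and solves it for a single missing value; run on the production series it reproduces the 1985 estimate of 41 tonnes. For two missing values, two such equations (Δⁿ₀ = 0 and Δⁿ₁ = 0) would be set up and solved simultaneously, as in the worked example above.

# A minimal sketch (assumed Python): solve sum_k (-1)^k * C(n, k) * y_(n-k) = 0
# for one missing value, as the binomial expansion method does.
from math import comb

def binomial_interpolate(values):
    """values: a list with exactly one None; returns the filled-in value."""
    n = len(values) - 1                       # power of the expansion (y - 1)^n
    missing = values.index(None)
    coef_missing, total = 0.0, 0.0
    for k in range(n + 1):                    # term (-1)^k * C(n, k) * y_(n-k)
        signed_coef = (-1) ** k * comb(n, k)
        idx = n - k
        if idx == missing:
            coef_missing = signed_coef
        else:
            total += signed_coef * values[idx]
    return -total / coef_missing              # solve coef*x + total = 0

production = [20, 22, 26, 30, 35, None, 43]   # 1960-1990 at 5-year intervals
print(binomial_interpolate(production))       # 41.0 tonnes for 1985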

Newton's Method
A number of formulae were given by Newton for different situations. Two of these are:
1. Newton's Advancing Difference Method
2. Newton's Divided Difference Method

1. Newton's Advancing Difference Method
This method is applicable in those cases where the independent variable x increases by equal intervals, like 10, 20, 30, 40, etc. However, unlike the binomial expansion method, it is not necessary here that the value of x for which y is to be interpolated be one of the class limits of the x series. For example, if the given data are:

X     Y
10   100   (y0)
20   120   (y1)
30   130   (y2)
40   140   (y3)
50   150   (y4)

we can interpolate the value of y for x = 25 or extrapolate the value for x = 57. The formula for interpolation is:

yx = y0 + x·Δ¹₀ + [x(x - 1)/2!]·Δ²₀ + [x(x - 1)(x - 2)/3!]·Δ³₀ + [x(x - 1)(x - 2)(x - 3)/4!]·Δ⁴₀ + .....

where y0 represents the value of y at the origin, yx represents the figure to be interpolated, and the Δ's are the differences. The value of x is obtained as:

x = (The value to be interpolated - The value at origin) / (Difference between the two adjoining values of X)

If in the above example we are to compute the value of y for x = 25, the value of x is obtained as

x = (25 - 10) / (20 - 10) = 15/10 = 1.5

In case we are given years and the values of the y variable, then

x = (Year of interpolation - Year of origin) / (Time difference between two adjoining years)

When applying this method, the differences between the various values of Y are to be calculated. The differences are indicated by the sign Δ: the first differences are indicated by Δ¹, the second differences by Δ², the third differences by Δ³, and so on. The first difference in each column is called the leading difference. The following is the table of differences.

Table Showing Finite or Advancing Differences

x     y     First difference Δ¹      Second difference Δ²     Third difference Δ³     Fourth difference Δ⁴
x0    y0
            y1 - y0 = Δ¹₀
x1    y1                             Δ¹₁ - Δ¹₀ = Δ²₀
            y2 - y1 = Δ¹₁                                     Δ²₁ - Δ²₀ = Δ³₀
x2    y2                             Δ¹₂ - Δ¹₁ = Δ²₁                                  Δ³₁ - Δ³₀ = Δ⁴₀
            y3 - y2 = Δ¹₂                                     Δ²₂ - Δ²₁ = Δ³₁
x3    y3                             Δ¹₃ - Δ¹₂ = Δ²₂
            y4 - y3 = Δ¹₃
x4    y4

Example: Given the following pairs of corresponding values of X and Y:
X: 20   25   30   35   40
Y: 73  198  573 1198 1450
Estimate the value of Y for X = 22.

Solution: Applying Newton's advancing difference method, first prepare the difference table.

X     Y        First difference Δ¹        Second difference Δ²       Third difference Δ³       Fourth difference Δ⁴
20     73 (y0)
               Δ¹₀ = 198 - 73 = 125
25    198 (y1)                            Δ²₀ = 375 - 125 = 250
               Δ¹₁ = 573 - 198 = 375                                 Δ³₀ = 250 - 250 = 0
30    573 (y2)                            Δ²₁ = 625 - 375 = 250                                 Δ⁴₀ = -623 - 0 = -623
               Δ¹₂ = 1198 - 573 = 625                                Δ³₁ = -373 - 250 = -623
35   1198 (y3)                            Δ²₂ = 252 - 625 = -373
               Δ¹₃ = 1450 - 1198 = 252
40   1450 (y4)

Here x = (22 - 20)/(25 - 20) = 0.4. Applying the formula

yx = y0 + x·Δ¹₀ + [x(x - 1)/2!]·Δ²₀ + [x(x - 1)(x - 2)/3!]·Δ³₀ + [x(x - 1)(x - 2)(x - 3)/4!]·Δ⁴₀

y22 = 73 + 0.4(125) + [0.4(0.4 - 1)/2!](250) + [0.4(0.4 - 1)(0.4 - 2)/3!](0) + [0.4(0.4 - 1)(0.4 - 2)(0.4 - 3)/4!](-623)

y22 = 118.92
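The same computation can be checked programmatically. Below is a minimal sketch (assumed Python; not part of the original lesson) that builds the forward-difference table and evaluates Newton's advancing difference formula; for X = 22 on the data above it returns approximately 118.92.

# A minimal sketch (assumed Python): Newton's advancing (forward) difference
# interpolation for equally spaced x values.
from math import factorial

def newton_forward(xs, ys, x_target):
    n = len(ys)
    # Build the difference table: table[k] holds the k-th differences.
    table = [list(ys)]
    for k in range(1, n):
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    # u = (value to be interpolated - value at origin) / interval
    u = (x_target - xs[0]) / (xs[1] - xs[0])
    result, u_product = ys[0], 1.0
    for k in range(1, n):
        u_product *= (u - (k - 1))            # u, u(u-1), u(u-1)(u-2), ...
        result += u_product / factorial(k) * table[k][0]   # leading differences
    return result

xs = [20, 25, 30, 35, 40]
ys = [73, 198, 573, 1198, 1450]
print(round(newton_forward(xs, ys, 22), 2))   # approximately 118.92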
2. Newton's Divided Difference Method
This method is to be used when the value of the independent variable X advances by unequal intervals. The formula is

yx = y0 + (x - x0)Δ¹₀ + (x - x0)(x - x1)Δ²₀ + (x - x0)(x - x1)(x - x2)Δ³₀ + ....

where Δ¹₀, Δ²₀ and Δ³₀ are the first, second and third leading divided differences respectively.

Steps:
• Prepare a table of divided differences. The method of preparing this table is given below:

x     y     First difference Δ¹              Second difference Δ²                 Third difference Δ³                 Fourth difference Δ⁴
x0    y0
            (y1 - y0)/(x1 - x0) = Δ¹₀
x1    y1                                     (Δ¹₁ - Δ¹₀)/(x2 - x0) = Δ²₀
            (y2 - y1)/(x2 - x1) = Δ¹₁                                             (Δ²₁ - Δ²₀)/(x3 - x0) = Δ³₀
x2    y2                                     (Δ¹₂ - Δ¹₁)/(x3 - x1) = Δ²₁                                             (Δ³₁ - Δ³₀)/(x4 - x0) = Δ⁴₀
            (y3 - y2)/(x3 - x2) = Δ¹₂                                             (Δ²₂ - Δ²₁)/(x4 - x1) = Δ³₁
x3    y3                                     (Δ¹₃ - Δ¹₂)/(x4 - x2) = Δ²₂
            (y4 - y3)/(x4 - x3) = Δ¹₃
x4    y4

• The value at which the function is to be interpolated is denoted by x.
• The above formula is then applied.

Example: The observed values of a function are respectively 168, 120, 72 and 63 at the four positions 3, 7, 9 and 10 of the independent variable. What is the best estimate you can give for the value of the function at position 6 of the independent variable?

Solution: Since the independent variable advances by unequal intervals, we have to use Newton's divided difference method.

Interpolating the value of Y for X = 6 by the divided difference method:

x         y         First difference Δ¹                Second difference Δ²               Third difference Δ³
3 (x0)    168 (y0)
                    (120 - 168)/(7 - 3) = -12 = Δ¹₀
7 (x1)    120 (y1)                                     (-24 - (-12))/(9 - 3) = -2 = Δ²₀
                    (72 - 120)/(9 - 7) = -24 = Δ¹₁                                        (5 - (-2))/(10 - 3) = 1 = Δ³₀
9 (x2)     72 (y2)                                     (-9 - (-24))/(10 - 7) = 5 = Δ²₁
                    (63 - 72)/(10 - 9) = -9 = Δ¹₂
10 (x3)    63 (y3)

Applying the formula

yx = y0 + (x - x0)Δ¹₀ + (x - x0)(x - x1)Δ²₀ + (x - x0)(x - x1)(x - x2)Δ³₀ + ....

y6 = 168 + (6 - 3)(-12) + (6 - 3)(6 - 7)(-2) + (6 - 3)(6 - 7)(6 - 9)(1) = 168 - 36 + 6 + 9

y6 = 147
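A small sketch of the divided difference computation (assumed Python; the helper name divided_difference is ours, not the text's) is given below; it builds the leading divided differences and evaluates the formula, returning 147.0 for x = 6 on the data above.

# A minimal sketch (assumed Python): Newton's divided difference interpolation
# for unequally spaced x values.
def divided_difference(xs, ys, x):
    n = len(xs)
    # coef[j] ends up holding the j-th leading divided difference.
    coef = list(ys)
    for j in range(1, n):
        # Update in place from the bottom up so lower entries stay usable.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # Evaluate y0 + (x - x0)Δ¹₀ + (x - x0)(x - x1)Δ²₀ + ...
    result, product = coef[0], 1.0
    for j in range(1, n):
        product *= (x - xs[j - 1])
        result += coef[j] * product
    return result

print(divided_difference([3, 7, 9, 10], [168, 120, 72, 63], 6))   # 147.0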
Lagrange's Method
This method is applicable to any data, whether the series advances by regular or irregular intervals, and whether the value to be interpolated lies at the beginning, in the middle or at the end of the series. The formula is

yx = y0·[(x - x1)(x - x2)(x - x3)....(x - xn)] / [(x0 - x1)(x0 - x2)(x0 - x3)...(x0 - xn)]
   + y1·[(x - x0)(x - x2)(x - x3)....(x - xn)] / [(x1 - x0)(x1 - x2)(x1 - x3)...(x1 - xn)]
   + y2·[(x - x0)(x - x1)(x - x3)....(x - xn)] / [(x2 - x0)(x2 - x1)(x2 - x3)...(x2 - xn)]
   + .......... + yn·[(x - x0)(x - x1)(x - x2)....(x - x(n-1))] / [(xn - x0)(xn - x1)(xn - x2)...(xn - x(n-1))]

where x0, x1, x2, etc. are the given values of the x variable and y0, y1, y2, etc. are the corresponding values of the y variable; yx is the figure to be interpolated.

Example: You are given the following information:
X:  5   6   9  11
Y: 12  13  14  16
Find the value of Y when X = 8.

Solution:
x0 = 5, x1 = 6, x2 = 9, x3 = 11, x = 8
y0 = 12, y1 = 13, y2 = 14, y3 = 16, yx = ?

yx = 12·[(8 - 6)(8 - 9)(8 - 11)] / [(5 - 6)(5 - 9)(5 - 11)]
   + 13·[(8 - 5)(8 - 9)(8 - 11)] / [(6 - 5)(6 - 9)(6 - 11)]
   + 14·[(8 - 5)(8 - 6)(8 - 11)] / [(9 - 5)(9 - 6)(9 - 11)]
   + 16·[(8 - 5)(8 - 6)(8 - 9)] / [(11 - 5)(11 - 6)(11 - 9)]

yx = -3 + 7.8 + 10.5 - 1.6 = 13.7
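Lagrange's formula translates directly into code. The sketch below (assumed Python; not part of the original lesson) evaluates the formula for arbitrary data and reproduces the worked example's value of 13.7 at X = 8.

# A minimal sketch (assumed Python): Lagrange interpolation for data at
# regular or irregular intervals.
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        weight = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                weight *= (x - xj) / (xi - xj)   # one product term of the formula
        total += yi * weight
    return total

print(round(lagrange([5, 6, 9, 11], [12, 13, 14, 16], 8), 2))   # 13.7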
Extrapolation
Extrapolation refers to estimating a value for a future period. In order to extrapolate a particular value, the various methods discussed above for interpolation can be adopted.

Example: Extrapolate the business in 2004 from the following data.
Year     : 1999  2000  2001  2002  2003
Business :  150   235   365   525   780

Solution: We can extrapolate the business in 2004 by the binomial expansion method. Since the known values are five, the fifth leading difference will be zero:
Δ⁵₀ = 0, i.e., (y - 1)^5 = y5 - 5y4 + 10y3 - 10y2 + 5y1 - y0 = 0

x:  1999  2000  2001  2002  2003  2004
y:   150   235   365   525   780     ?
      y0    y1    y2    y3    y4    y5

y5 - 5(780) + 10(525) - 10(365) + 5(235) - 150 = 0
y5 - 3900 + 5250 - 3650 + 1175 - 150 = 0
y5 - 1275 = 0
y5 = 1275
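Because extrapolation by the binomial expansion method uses the same equation with the unknown at the end of the series, the illustrative binomial_interpolate sketch given earlier in this lesson can be reused directly (again a sketch under the same assumptions, not part of the original text):

# Reusing the illustrative binomial_interpolate helper defined earlier:
business = [150, 235, 365, 525, 780, None]   # 1999-2003 known; 2004 unknown
print(binomial_interpolate(business))        # 1275.0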


TUTORIAL
Q1 The annual sales of a company are given below. Estimate the sales for the year 1980.
Year  : 1970  1975  1980  1985  1990
Sales :  125   163     -   238   282
Q2 Find by interpolation the index number for 1996 from the following table of index numbers of production of a certain article in India:
Year         : 1994  1995  1997  1998
Index number :  100   107   157   212
Q3 By using a suitable interpolation formula, estimate the price for the year using the following data:
Year  : 1980  1985  1990  1995  2000
Price :   12    15    20    24    31
Q4 The observed values of a function are respectively 168, 120, 72 and 63 at the four positions 3, 7, 9 and 10 of the independent variable. What is the best estimate you can give for the value of the function at position 6 of the independent variable?
Q5 Estimate the annual premium payable at the age of 28 years from the following data:
Age (years)          : 20  25  30  35
Annual premium (Rs.) : 36  39  43  47

LESSON 43:
CASE STUDY

Friends, now we will discuss some real-life case studies and understand how the techniques of research methodology are useful in them.

Case Study No. 1
After Shave Lotion
1. Marketing Brief
Traditionally, and even today, quite a few men use alum as an antiseptic after shaving, or use nothing at all. Use of after shave lotion (ASL) in India is relatively a new phenomenon. With rapid changes in lifestyle and values, more and more men are today using this product.
Initially, the use of ASL was mainly confined to the upper class, and most of them relied on imported brands. Many imported brands, viz. English Leather, Williams, Givenchy and Yardley, are still in use among the upper segment of the ASL market. Since these ASLs are expensive, all cannot afford to buy them. Thus, a need for a cheaper and indigenous brand was felt. The seventies saw the introduction of two indigenous brands made in collaboration with foreign companies. These were Old Spice, manufactured in India by Colfax Laboratories Pvt. Ltd. in collaboration with Shulton of New York, and Monarch, manufactured in India by J.K. Helen Curtis in collaboration with Helen Curtis of the USA. Old Spice came in a big way to grab a major share of the ASL market in India and continues to be the market leader. The eighties saw the introduction of some more new brands: Savage by Wiltech India Ltd., Park Avenue by J.K. Helen Curtis and Old Spice Musk by Colfax Laboratories. The ASL market is thus gradually becoming competitive.

Marketing Issues
It is necessary to analyze the impact of the recent changes mentioned above and gauge the ASL market. To have a closer look at the ASL market, some of the issues which need to be studied are:
1. Market share of various brands in the Indian market
2. Perception of consumers about domestic vis-a-vis imported brands
3. Characteristics sought after in an ASL, and reasons for consistency or change in usage of a particular brand
4. Purchase behaviour of the consumers and advertising effectiveness
The present study attempts to look at some of these issues through marketing research.

2. Marketing Research Objectives
The objectives of this research study were:
1. To find out consumer awareness about various ASL brands in the market.
2. To study the buying behaviour of consumers, specially the choice criteria adopted, the brand usage pattern and the reasons for consistency/change.
3. To study the perception of consumers about Indian ASL brands vis-a-vis foreign brands.

3. Research Design
Type of study: Exploratory
3.1 Sources of Data
As secondary data about the after shave lotion market are almost nonexistent, all the information is obtained from primary sources.
3.2 Data Collection Mode
The data collection instrument used for obtaining the desired information is a questionnaire (a copy is enclosed).
3.3 Sampling Plan
1. Target Population: The target population of the study consisted of men from middle and upper income groups residing in Calcutta in the age group of 20-50 years.
2. Sampling Unit: Household
3. Sample Size: 150
4. Sampling Method: Purposive sampling

4. Data Analysis and Findings
Awareness - Top of Mind
34% of the sample respondents out of 150 said that they did not use any after shave lotion. Hence the following findings are based on an effective response from 101 (64%) respondents. See the graphics in the appendix for the various findings. 28% of the respondents had Old Spice at top of mind, whereas Brut had a top-of-mind awareness of 16%, and Park Avenue and English Leather 12% each. Monarch, Savage, Musk (Jovan) and others have a top-of-mind awareness of 8%.

Second Level
No particular brand was prominent at the second level of awareness or recall. Three imported brands, namely English Leather, Givenchy and Yardley, were mentioned by 16% of respondents. Patricks and Avon were next in terms of awareness (10%), whereas Denim and Brut had 6%. (see Exhibit-1)

Present, Previous and Future Brands
On the basis of the present brand being used, Old Spice emerges as the leader: 33% of the respondents said they are using Old Spice at present. Other brands being used were Savage, Brut, Park Avenue and Yardley.
The largest number of respondents (44%) said they used Old Spice on the previous occasion. 20% of the respondents mentioned that they used Brut on previous occasions.

Monarch, Jovan (Musk) and English Leather were the other brands previously used.
Regarding the future choice of brand, Old Spice and Park Avenue appeared as the most likely choices. (see Exhibit-2)

Reasons for Change of Brand/Consistency
Regarding switching of brands, 34% of the respondents changed their brands "just for a change"; 12% changed because the brand they preferred was not easily available.
56% of the respondents continue using the same brand because they have become habituated to it.
22% said that their present brand provides them value for money.
12% of the respondents were of the opinion that they use a brand because they don't like other brands. (see Exhibit-3)

Reasons for Use of ASL
50% of the respondents said that the antiseptic property is the predominant reason for using an after shave lotion. Among the other reasons, 30% said they use ASL because of the freshness it gives, 18% use it for its perfume, and 12% of the respondents like the sting of ASL. (See Exhibit-4)

Usage Time
56% of the respondents use ASL immediately after shaving, 44% of them use ASL after taking a bath, while 33% of them use ASL before going to a party. (see Exhibit-5)

Preference for Indian vs Imported Brands
42% of the respondents were found to prefer imported brands to Indian brands. The reasons mentioned for use of imported brands are better quality, brand image and status symbol.
For Indian brands, easy availability was the reason cited by 66% of the respondents. 50% of the respondents felt that Indian brands were preferable because of lower price. Brand image was given as the reason for preferring Indian brands by 33% of the respondents. (Exhibits-6A and 6B)

Purchase Decision
52% of the respondents said that they bought ASL themselves. 38% of them got it as a gift. 10% of the respondents said ASL is bought by their family members. (Exhibit-7)

Purchase Factors
Brand name was found to be the most important factor influencing the purchase decision. Perfume and type of bottle are considered the next most important factors. Surprisingly, price and the antiseptic property of the brand appeared to be less important factors in the brand choice decision. (Exhibit-8)

Conclusions
Among the sample respondents, Old Spice turned out to be the most popular brand in the after shave lotion market. Park Avenue has carved a niche for itself in the upper segment of the market.
Imported brands are still considered to be of superior quality by a sizeable number of consumers. Indian manufacturers must introduce better quality products and use advertising to improve the poor image of their brands.
Manufacturers can think of launching brands with attractive packaging as gift items, because a sizeable number of sample respondents said that they get an after shave lotion as a gift. At present most of the after shave lotions which are gifted are imported ones.
Most consumers consistently use a particular brand because they get used to it. To inspire a change, manufacturers can stress the unique or exciting benefits a particular brand offers.

Appendix
Questionnaire
Dear Respondent,
We are conducting a survey of the after shave lotion market. We would be grateful if you could fill up the following questionnaire in this regard.
1. Do you use an after shave lotion?
( ) Yes   ( ) No
If you do not use an after shave lotion then go to Question-12.
2. Please name a few after shave lotions you have heard of.
(a)……..
(b)…….
(c)………
3. Which of the following brands have you heard of? TICK
a Park Avenue        b Old Spice
c Savage             d English Leather
e Patricks           f Williams
g Aramis             h Givenchy
i Brut               j Yardley
4.
a. Which after shave lotion are you using at present? .....
b. If you are to select an after shave brand now, which brand will you choose? ......
5. Can you recall the name of the previous brand of after shave lotion you used? Please mention ......
6. Can you give reasons for consistency/change in your after shave lotion?
Consistency
a Habitual
b Value for money
c Don't like others
d Any other, please specify
Change
a Like to try other brands
b For a change
c All brands are same
d Any other, please specify
7. Why do you use an after shave lotion? TICK

a. For its antiseptic properties

b. As a perfume
c. To feel fresh
d. Girlfriend loves it
e To get the sting.
f Any other reason, please mention.
8. When do you use an after shave lotion?
a Immediately after shaving
b After a bath
c Anytime of the day
d Before going to a party.
e ............
9. Given an easy availability of Indian and foreign brands of
after shave lotion which brand do you prefer?
( ) Indian ( ) Imported
Why? TICK
a Perfume is better
b Quality is better
c Brand image
d Price is lower
e Status
f Easy availability
g Any other, please specify.
10.Who buys the after shave lotion for you?
a. Self
b Family members
c Normally get it as a gift
d ............
11. Here we have mentioned a set of factors that you may
consider while buying an after shave lotion? Give your
response on a seven point scale ranging from (1) most
important to (7) least important for each of them.
a. Price
b. Brand name
c. Perfume
d Antiseptic property
e Type of bottle (with/without atomizer)
12. Personal Information:
Age: ( ) less than 18 years ( )18-25 years
( ) 25-35 years ( ) above 35 years
Family Income
( ) less than Rs. 36000 p.a.
( ) Rs. 36000 to Rs. 72000 p.a
( ) above Rs. 72000 p.a.
Profession
Govt. Service/Private Service/Student/Business/ Any Other
Thanks a lot.

Case Study No. 2
Decorative Paints
Paint is used by two broad sectors, decorative and industrial, the latter covering purposes such as protective, automotive, refinish, signboard and coach painting. This case study will examine some issues pertaining to the marketing of decorative paints. A typical decorative paint is either water based or solvent based. The water based paints are emulsion (acrylic) or distemper (dry or oil-bound). The solvent based paints are known as enamel paints. There are currently three grades of synthetic enamel paints: premium, medium and economic quality. In 1988 the breakup of consumption of the different grades of decorative paints was as follows:

Enamel                      Emulsion
Premium Enamel: 44%         Premium Emulsion: 3%
Medium Enamel: 12%          Economic Emulsion: 13%
Economy Enamel: 44%         Flat Oil Paints: 5%
                            Oil Bound/Dry Distemper: 79%

Competitive Structure
The Indian paint market today has 22 large units and 1600 small units. The big and small scale units contribute equal amounts to the total supply. In the organised sector the leading paint companies are:
Asian Paints
Berger Paints
Garware Paints
ICI
Goodlass Nerolac
Jenson & Nicholson
Shalimar Paints
In the year 1988, the decorative paint market size was estimated to be 128 million litres, valued at Rs. 48.13 million. The average annual growth of the market during the last five years was roughly 5%.

Market Share and Brand Names
In 1988, the top five companies' market share in the decorative paint market was as follows:

Company             Million Litre   Market Share (%)
Asian Paints        42.0            32.8
Goodlass Nerolac    16.0            12.5
Berger Paints       11.0             8.6
Jenson-Nicholson    10.5             8.2
ICI                  5.5             4.3
Total               85.0            66.4

Exhibit 1 in Appendix-I presents the different brand names of the major paint manufacturers in India and Exhibit 2 shows the major companies' market share in different classes of decorative paints. These two evidently indicate that the decorative paint market in India is a highly competitive business. Going by the interval between purchases of any decorative paint, the product is treated as a consumer durable. The paint purchase process is also thought to be a joint decision in which the end user and a number of intermediaries (i.e., the painter, contractor, dealer/retailer) seem to interact. But the type of role played by these intermediaries is often not clearly known. In addition, consumers are found to consult their friends and own family members in the choice decision. Paint advertisers also seem to convey a lot of messages about their brands and the specific features available in their offerings. The idea is to create awareness about brands and company names among the end users so that the brand falls within the acceptable list in the consumer's mind.
Given such a scenario of the decorative paint market in India, this study endeavoured to examine the following issues:

1. Marketing Research Objectives
i. To determine how consumers decide on the choice of shade, pack size and brand in decorative paints; specially, to study how self/spouse, friend, contractor/painter, dealer/retailer and advertisement influence the decision-making.
ii. To assess the relative importance attached to various factors, namely durability, washability, finish, instant drying, range of pack size, company name, availability and price, while buying any decorative paint.
iii. To study consumers' awareness about brand and manufacturers' names.

2. Research Design
1. Type: Exploratory study
2. Source of Information: Consumer survey with a structured questionnaire (copy enclosed in Appendix-II)
3. Sampling Decisions
a. Target Respondent
People who have got their house/furniture/some domestic appliance painted during the last year.
b. Sampling Procedure

To get a representative and unbiased sample it was decided to choose certain localities in Calcutta.
Initially a sampling frame was prepared with the help of dealers/retailers of these localities from their sales records.
The above list was divided area-wise and the study decided to draw samples by a systematic procedure to provide representation to the different localities.
People who refused to cooperate or were not available when approached were replaced by other members in the list.
To allow recency in the data collected, 50% of the samples were selected by intercepting customers at the shop.
c. Sample Size
One hundred (the number fixed on convenience). Keeping in view that this is an exploratory study, this many respondents was assumed to be adequate.

3. Data Analysis
a. To analyse the data regarding the possible influence of different members in decision-making, it was initially hypothesised that the two forms of classification shown below are independent:

Type of person           Choice decision regarding
                         Shade    Size    Brand
Self/Spouse
Friend
Contractor/Painter
Dealer/Retailer
Advertisement

This was tested by the Chi-square test.
b. Simultaneously, our intuition suggested that some of the members among the above mentioned people, such as the contractor/painter, would influence the decision more than others. So the study tested whether the majority of consumers expressed that they consulted some particular type of person in certain decision-making. The null hypothesis of this type can be expressed as
H0: Pi >= 0.50  vs  H1: Pi < 0.50
where Pi = proportion of consumers who seem to consult a particular type of person in certain decision-making, the suffix 'i' denoting the particular type of person. This hypothesis was tested by a normal-approximation (normality) test for a proportion (a computational sketch is given just after this list).
c. Data regarding consumers' relative importance for the given set of attributes in the choice decision were analysed by Thurstone's Case-V scale.
4. Findings
1. The table below shows the number of people who consulted a particular type of person (column 1) in the different decisions.

Source Consulted       (i) Shade   (ii) Size   (iii) Brand/Company
Self/Spouse               96          30            36
Friend                    20           4            20
Contractor/Painter        35          80            84
Dealer/Retailer            8          42            48
Advertisement             15           6            30

Applying the Chi-square test to these data, we found that the decisions and the sources consulted are not independent; that is, some of the sources are distinctly more operative in certain decisions. Consequently, the study tested the role of self/spouse, contractor/painter and dealer/retailer in the different decisions.
2. From the table itself it is clear that self/spouse play an important role in deciding on the shade of paint bought, whereas the contractor/painter is the major influencer regarding the size and brand of paint purchased. Dealers and advertisements were not found to be a key source of influence in any decision.
3. In Appendix-III we have shown the derivation of an interval scale for the various attributes by Thurstone's model. It shows that durability and finish are the most sought-after qualities in a decorative paint. Surprisingly, company name and price came somewhere in the middle of the scale. This possibly indicates that the consumers surveyed attach more importance to the quality of paint than to price.
4. The data also showed that the majority of consumers were not aware of the exact brand they had used. They were, however, able to name the manufacturer. This observation would possibly give some insight to paint manufacturers on the issue of whether to have brand or corporate advertising.
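The Chi-square test of independence reported in finding 1 above can be reproduced on the consultation table with a few lines of code (a minimal sketch, assumed Python/SciPy; not the study's original computation):

# A minimal sketch (assumed Python/SciPy): chi-square test of independence
# between "source consulted" and "type of choice decision" on the case data.
from scipy.stats import chi2_contingency

# Rows: Self/Spouse, Friend, Contractor/Painter, Dealer/Retailer, Advertisement
# Columns: Shade, Size, Brand/Company
counts = [
    [96, 30, 36],
    [20,  4, 20],
    [35, 80, 84],
    [ 8, 42, 48],
    [15,  6, 30],
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(chi2, dof, p_value)   # a small p-value supports "not independent"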

APPENDIX-l
EXHIBIT 1 : Decorative Paints

Brand Names of Major Manufacturers


MANUFACTURER

Class of British Asian Paints Jenson &: ICI Shalimar Good/ass Garware

Paint.!: Paints Nicholson Paints Naolac Paints


1. Premium Luxol Hi Apcolite Brolac Dulux Syn- Superlac Nerolac Syn- Ecomite
Enamel Gloss Syn. Synthetic Dulux thetic Synthetic thetic Pammd
Enamel Enamel Polyurethetic Enamel Enamel Enamel Syn. Enamel
Enamel Glosslite
2. Medium Parrot Syn- 3-Mangocs Jensons Duwel Durolac Glosslite Kinglac
Enamel thetic Synthetic Quick Speed-gloss Synthetic Synthetic High Gloss
Enamel Enamel Dl)'ing Enamel Enamel Syn. Enamel
3. Economy Butterfly Gattu G.P. Umbrella Maxilite Diamond Palm Tree King Quick
Enamel G.P. Syu. Enamel Synthetic Synthetic G.P.Syn. Dl)'ing S)l1
Enamel Enamel Enamel Enamel Enamel
4. Premium LuxolSilk Apcolite Robbialac Dulux Superlac Nerolac Eomite
Emulsion Acrylic Super- Acrylic Plas- Acrylic Acrylic Plas- ACl}'tic Acryl ic
Emulsion Acrylic tic Emulsion Emulsion tic Emulsion Emulsion Emulsion
Emulsion With Ror Int/Ext.
Silicone ror Use
(nt/Ext use
S. Economy B.P. Vinyl Super Jensolin x Durolac X X
Emulsion Wall Paint Decoplast Acrylic Acrylic Plas-
Water Thin- Emulsion tic Emulsion
nab Ie Wall & Shaliplast
Paint Styrene
6. Flat Oil Bison Syn. Apcolite Jensolin Syn. x Matt Kote Nerolac Emotite
Paints Ext./lnt. Syn. Ma tt Ext. Finish Paint Syn. Flat Flat Oil
Finish Paint, Pearl Paint.
Lustre
Finish, Oil paint
Neromatt Ext.
Syn. Flat
7. Oil Bound x Tractor Jensolin x Duradol New Soldier Eomite Syn.
Distemper Syn. Wash- Syn. Wash- Washable. Washable
able Dis- able Dis- Oil Bound Distemper
temper temper Distemper
DI
8. DI)' Dis- Castle x Jensolin DI}' x 6666 Dry X Blundell's
}'
temper Distemper Distemper Distemper Muresca
Dry Dis-
temper

EXHIBIT 2
Decorative Paints - Major Paint Companies Market Share
Decorative Paints - Maior Paint Companies Market Share


Unit: In Percentage (%)

COMPANY
Class of Paillls Asian PaintS Goodlass Berga Paints Jenson & ICI Ochers
Naola,: Nicholson
Premium
33 19 12 6 10 20
Enamel
Medium Enamel 44 10 12 5 1 27.5
Economy
33 4 13 6 0.7 43.3
Enamel
Premium
Emulsion
Matt 21 17 65 15 22.5 19
Silk - - 30 20 375 12.5
Economy
22 14 6.6 7.3 - 50.1
Emulsion
Oil I30und
Distemper 46 17 4 9 - 24

Appendix 2 durability. Say, you consider price as more important than


durability then put in the price row durability column the
Questionnaire
number 1.
1. When did you get your house/furniture/some appliance
Please note that there is no right or wrong answer here.
painted last?
2. Which type of paint you used? Perhaps it will be easier for you to fill this table if you start from
the 1st row factor: and compare it with all the column factors.
3. Can you recall the make (brand/company name) of the Put the number 1 in the row whenever the 1st row factor is
paints you used? more
Brand Name Factor Price InstantAvaila bility Ronge Com- Finish Durabilitu
Washabil
ity
Company Name Drying Of Pack size pony
Name
_________ __________ (a) Price x ( ) () () ( () () ()
(b) Instant
_________ __________ Dtying
x () () ( () () ()
(
4. What were the reasons for choosing this particular paint? (c) Availability X () ( )
)
()
(d) Range 01
5. Did you seek the advice of the below mentioned persons pack size
x ( () ()

while deciding on the shade, (e) Company


name
x X
(
)
()

pack size and make (brand/company) of the paint? (f) Finish x


(
)
()

State your views by ticking in the respective columns below (g) Durability
(h) Washability
X ()
X
Decision Regarding Choice of
Source Consulted Shade Pack size Brand /Company Appendix 3
Self/Spouse Exhibit-3A
Friend Factor Preference Pattern of Consumers
Contract /Painter
Table below shows the proportion of consumers (in %) who expressed that row factor is more
Dealer/Retailer important to the column factor.
Factor Row {actor marl: important 10 column factor (in
Advertisement %)
A b c d e f g H
6. While buying decorativc paints consumers seem to consider (a) Price
(b) Instant Drying
-
1.8
98.2
-
79.6
55.5
100
79.6
29.6
24.1
31.5
14.8
35.2
0
50
18.5
factors such as, price, instant drying, availability, range of pack (c) Availability 20.4 44.5 - 68.5 22.2 14.8 18.5 24.1
(d) Range or pack size 0 20.4 31.5 - 0 13.0 16.6 11.1
size, company name, finish, durability and washability etc. (e) Company name 70.4 75.9 77.8 100 - 35.2 9.3 48.2
Here you are asked to judge all possible pairs of such factors (f) Finish
(g) Durability
68.5
64.8
85.2
100
85.2
81.5
87.0
83.4
64.8
90.7
-
75.9
24.1
-
64.8
98.2
and indicate for any pair of factors which factor is more (h) Washability 50 81.5 75.9 88.9 52.8 35.2 1.8

important to you in the following table. For example,


suppose you are asked to compare between price and

Exhihit-3b Company

Applying Thurstone’s formula to obtain the interval scale value Sriram Honda Birla Yamaha
under the Normality assumption we obtained the following
Market Segmentation: Portable generator has been marketed
table:
for domestic and commercial use. Typically on the basis of user
a b c d e f g H
(a) Price . 2.1 0.83 3.5 (-)0.54. (-)0.48 (-)0.38 0.0 capacity requirements (i.e. wattage demanded) the following two
(b) Instant Drying -2.1 0.14 0.83 (-)0.71 (-) 1.05 (- )3.5 (-)0.9 segments seem to form.
(c) Availability -0.83 (-)0.t4 . 0.48 (-).76 ( - )1.05 1- )2.09 O.1
(d) Range of Pack
-3.5 (-)0.83 (-)0.48 . (-)3.5 (-)1.13 (- ).97 (-) 1.22 • Domestic use: 600 watt
size
(e) Company name 0.54 0.71 0.76 3.5 . ( - )0.38 ( - ) 1.33 (- )0.5 • Commercial use: 1000-1500 watt.
(f) Finish 0.48 1.05 1.05 1.13 0.38 . (-)0.71 0.3S
(g) Durability 0.38 3.5 2.09 0.97 1.33 0.71 2.10
But the emerging market segment seems to be the rural market.
(h) Washability 0.0 0.9 0.71 1.22 0.5 ( - )0.38 (-)2.10 As such, the rural market requires generators mainly to run the
Computing the average of each row and shifting the origin, we get the. scale value for each
factors as the following
pump sets in the farm. This market has been totally ignored by
Thurstone's Value the two market leaders. While it is true that these two compa-
a b c d e F g h
nies manufacture a better quality generator and they are more
Scale 2.95 0.87 1.30 0 3.08 3.09 4.54 2.41 expensive, light and fragile and hence perhaps cannot be left in
the open in the farms. Local brands so far used to satisfy the
Case Study No. 3 requirements of the rural sector. But today the market leaders
Portable Generator have realised the importance of rural market. However, it is also
1. Marketing Brief true that the features looked for in a portable generating set
seem to vary from urban to rural market. For example, till
The new economic liberalization policy of 1985 has led to recently portable generators used to be marketed on factors like
increased foreign industrial collabora-tions in India. There has low noise level, fuel efficiency, reliable machine etc. But now the
been a spurt in the industrial tie-ups and consequently indus- market requirements have changed. Hence this needs to conduct
trial output has gone up. Portable generator is one such a marketing research. For example, if the research reveals that
industrial item where many new units were set up during 1984- hours of continuous operation is the most important factor in
85. For example, Sriram group collaborated with Honda of rural setting, it would imply that the product should have a
Japan and set up a unit with a capacity of 500 portable genera- large fuel tank as efficiency of the machine can be increased only
tors a day. The Birla group, in collaboration with Yamaha of up to a point. This in turn implies that the dry weight would be
Japan entered the portable generator market with like capacity. higher. Thus testing such hypotheses would enable to identify
Similarly, Greaves Cotton tied up to the optima! feature mix consistent with technological feasibility
produce its brand Called ‘Lombardini’ portable generator, and consumer preferences. The study tests many such hypoth-
Kirloskar group introduced a 1.5 KVA eses on data collected from rural and urban markets to see the
portable generator; Enfield India followed with Gee generator difference in the requirement pattern of different class of users.
and so on. Simultaneously, the local brands were also gradually 2. Marketing Research Objectives
introduced. It is estimated that there are about 50-60 units Broadly speaking, the research objectives are
operating in the local sector with capacities in the range of 100 a
1. To compare the utility of a portable generator in rural and
day. Thus by 1986 the total output of portable generator
urban market.
industry was in the range of 2.5 lakhs a month. This demand
was however shortlived and by 1987 many units had closed 2. To examine the perception about the price of a portable
down. For example, Kirloskar group has withdrawn the 1.5 generator by rural vis-a-vis urban consumers.
KVA machine, Lombardini has also disappeared from the 3. To study how rural and urban market attach importance to
market and so on (see Exhibit-l to have an idea about shift in below mentioned qualities of a portable generator:
market share). Two major competitors Sriram Honda and Birla a Noise level
Yamaha are locked in a fierce competition in the market
b Hours of continuous running
indulging in price war. (See the following table to see the price
changes undertaken by these two companies within a year). c Ruggedness
d Dry weight.
TABLE I
Price Fluctuation or Two 3. Research Hypotheses
Leading Companies The issues mentioned above are tested with formulation of
Unit : Rs. some hypotheses. The different hypotheses and the specific
Thousand measurement used are explained below.
JA MA AP MA JU AU OC NO DE
N
FER
R R Y N
JUL
G
SEP
T V C Hypothesis-l
90 89 86 84 83 82 81 80 79 78 77 0" Ho: Portable generator’s utility is equal for both rural and urban
based consumers.
89 87 85 83 82 81 80 79 78 77 77 6
Hl: Portable generator is perceived as a low utility item in urban
market than in rural sector.

This test was based on responses to the following question.
Q: Please list your order of preference for the purchase of the following items:
a. Colour television
b. Music system
c. Air conditioner
d. Portable generator
e. VCR
f. Camera
g. Refrigerator
The mean rank obtained by the portable generator in the rural and urban segments was analysed.

Hypothesis-2
H0: The portable generator is perceived as equally expensive in both the rural and urban segments.
H1: The portable generator is perceived as less expensive by the rural consumers.
This was tested using the following question.
Q: A portable generator is expensive for you to afford.
Strongly disagree    Disagree    Neither agree nor disagree    Agree    Strongly agree

The hypotheses on product features are analysed on the basis of responses to the following question.
Q: Please rank the following features of a portable generator in your order of preference:
a. Noise level during operation
b. Hours of continuous operation
c. Ruggedness of the machine
d. Dry weight of the machine

Hypothesis-3
H0: Noise level during operation is an equally important factor for both segments.
H1: Noise level is a less important factor for the rural segment.
Hypothesis-4
H0: Hours of continuous operation is equally important for both segments.
H1: Hours of continuous operation is more important for the rural segment.
Hypothesis-5
H0: Ruggedness is an equally important factor for both segments.
H1: Ruggedness is more important for the rural segment.
Hypothesis-6
H0: Dry weight is an equally important factor for both segments.
H1: Dry weight is more important for the urban segment.

Sample Size
The study selected equal samples of fifty consumers each from the urban and rural markets.

Test Statistic
All the above mentioned hypotheses were tested by applying the t-test, whose general structure is

t = (X̄1 - X̄2) / s.e.(X̄1 - X̄2)

where X̄1 and X̄2 are the sample mean scores obtained from the rural and urban sectors respectively, and s.e. stands for the standard error of the difference between the sample means (see the chapter on the t-test for its computational formula). Depending on the structure of the alternative hypothesis, the appropriate acceptance/rejection rule for the null hypothesis was chosen. Data analysis by this t-test on the sample responses showed the following results.

4. Findings
Hypothesis-1: The data indicated that the portable generator has greater utility for the rural segment than for the urban segment.
Hypothesis-2: The rural segment perceives the portable generator to be less expensive compared to the urban segment.
Hypothesis-3: The test shows that noise level is a less important factor for the rural segment.
Hypothesis-4: Hours of continuous operation is a very important factor for the rural segment. An implication of this result is that the rural segment would prefer higher tank capacities. This is understandable, since a farmer has to procure fuel from a long distance and cannot afford to do so very often.
Hypothesis-5: Ruggedness of the machine is a feature which the farmers look for more, as the machine would be kept in the open field.
Hypothesis-6: Dry weight is a more important consideration for the urban segment. A farmer does not require frequent mobility of a generator. On the contrary, urban consumers perhaps put it to multiple uses and hence look for lighter weight.

Recommendation
The research results reveal some important lacunae in the existing marketing strategy. While the utility for the rural sector is maximum, this very segment has been somewhat ignored. Further, the rural sector does not perceive the price of the portable generator to be expensive, maybe in view of their pressing requirement. A review is also needed with regard to the different feature mixes of a portable generator, and accordingly the portable generator may be positioned differently in the rural and urban markets.
urban market.
Ho: Dry weight is equally important factor for both segments
Case Study No.4
HI: Dry weight is more important for urban segment.
Typewriter
1. Marketing Brief

A typewriter machine is an indispensable item in any 2. Each manufacturer’s perception about strengths and

organisation and also possessed by many professionals. In weakness of its competitors.
India the number of producers of manual typewriter has 3. As in this instance two categories of personnel, typists and
remained four for many years and portable machine is manufac- purchase managers, are usually involved in the decision-
tured by Remington Rand only. The core product of most of making process, so separate response from these two groups
the companies are similar. But users develop some perception of people in an organisation was obtained. In each
based on their experience or otherwise. As such, the unique organisation the study asked
selling proposition of all the companies is invariably after sales
(a) Purchase/administrative managers to state the degree of
service and low price. Typewriter is a typical product where the
importance’ attached to price, past
people involved in the choice decision, of which model or make
performance, typist’s opinion, manufacturer’s reputation,
of the typewriter to buy, are often different from the user of the
after sales service, guarantee, discount offered and terms of
machine. And these two sets of people have different prefer-
payment. (see enclosed question-naire-1 in the appendix)
ences about the various existing brands of typewriters and their
role in the choice decision vary. Particularly in such a situation (b) Also, each purchase/administrative manager was asked to
any marketer would require to assess how the decision makers state the merits and
and users judge its offering. Also an analysis of the demand demerits of electronic typewriter.
trends, market share, strengths and weaknesses of different (c) Moreover, each purchase/administrative manager was asked
manufacturers is essential for to indicate the possibility of switching over to electronic
any marketing decision. Moreover, there is a prevailing notion typewriter.
that the buying behavior of typewriter in government (public) (d) Five typists were asked to judge different brands of type
and private sector companies are different. So it is desirable to writers which they have
examine the possible behavioural differences in public vis-a-vis
used with regard to clarity of prints, lightness of touch,
private sector companies. Also, in recent times electronic
availability of after sales
typewriter has invaded the market in a big way. In the beginning
the electronic typewriter was positioned as a status symbol. But services, speed and durability. .
gradually the electronic typewriter has been projected as having (e) Similarly, these typists were asked to state their impressions
superior features than any manual typewriter machine. So there about the merits and
is a growing need to assess the impressions of purchase demerits of electronic typewriter. (See enclosed questionnaires
managers and users about electronic detailed).
typewriters. B. Source of Information
Given this background the following paragraph explicitly states 1. Secondary Scanning of annual reports of typewriter
the various marketing research objectives of this study. companies and discussions with marketing personnel of the
Marketing typewriter companies located at Calcutta.
Research Objectives 2. Primary Conduct Consumers surveys with the help of two
questionnaires, one meant for typists and other for
1. To study the typewriter market in terms of competitive
purchase/administrative managers.
structure and demand trends
2. To analyse the perceived strengths and weakness of the C. Schedule of Information Collection
companies operating in the typewriter business in the eyes of Two sets of questionnaires were developed to secure data on
the manufacturers the lines suggested under the section titled ‘Information
3. To examine the buying behaviour regarding the purchase of Required’ (copy enclosed in annexure 1).
manual typewriter in public vis-a-vis private sector. D. Data Collection
4. To assess the perceptions of the user-organisations Location of the Study - Calcutta
(decision-makers) and the typists (users) about existing 1. Information about typewriter industry was obtained from
brands of manual typewriter and the manufacturers through
5. To ascertain decision-makers and users impression about in-depth interview of key marketing personnel ,within each
electronic typewriter. company.
2. Research Design (ii) Sampling decisions for consumer behaviour survey consists
This study used an exploratory design to analyse the market size of the following elements:
and competitive scenario and descriptive design to examine
1. Population: All public and private sector organisations
buying behaviour and opinions about electronic typewriter.
including educational in-
A. Information Required stitutions located in Calcutta.
1. Companies operating in the typewriter business, their 2. Sampling Unit: It included the coverage of five typists and
production level during 1977-82 and demand trends. one purchase/ad-
ministrative manager in each of the selected organisations.

3. Sampling procedure: (a) The study had a-priori decided to a Portable typewriter is manufactured by Remington Rand

include an equal representation of public and private sector only and it comes in three languages; English, Hindi and
organisations in the sample. The exact number chosen was Gujarati. The major users are professionals, journalists,
fifty each which was decided as per the sample size students/research scholars and small business owners.
determina-tion rule. (see Annexure-II for details) Interestingly, government docs not buy this type of
b. The organisations were selected on a random basis after typewriter machine. It is classified as a luxury item and pays
preparing a master list of population on the basis of 50% duties. It is a light machine but suffers from low
pooling of customer list of the manufacturing companies. durability and uncertain servicing facility. Smuggled portable
typewriters also compete with indigenous variety.
c . In each organisation five typists were selected by following
systematic sampling procedure on the master list of b Standard manual typewriters arc widely used in offices. These
secretarial staff maintained by each organisation. are available in different Indian languages and various ranges
of carriage sizes. Four major companies are supply-ing this
3. Data Analysis type of machines under different brand names (given
1. The research objectives regarding the assessment of market below).
size, competitive structure, strengths and weaknesses of 1. 2. 3. 4.
different companies were analysed from the data gathered
through in-depth interviews with marketing personnel of Company Brand Name
companies. 1. Remington Rand of India Ltd.
Remington
2. As per the convention of descriptive study it was essential to
formulate a few hypothesis pertaining to 2. Godrej & Boyce Pvt. Ltd. Godrej AD
& PB
a. Purchase behaviour of organisations
3. Rayala Corporation Pvt. Ltd. Halda
b. User’s perceptions about important features in a manual
typewriter 4. Facit Asia Ltd. FACIT
c. User’s opinions on superiority of model(s) on specific feature c Network, PCL, Godrej, Remington and Facit are the main
and producers of electronic typewriters in India. This type of
typewriter is, by and large, assembled in India from CKD
d. Possible association between typewriter model on which
and SKD complete/semi-knocked down) kits. It still
typists learnt and models which are preferred now.
commands a very high price, (min. Rs. 18,000) and high
Hypothesis (1) operating cost (Rs.1 per page compared to 2 to 3 paise per
a. The private sector companies attach higher importance to page for manual typewriters).
the typist’s opinion than the public sector/units. 3. Demand Trends
b. Price and terms of payment receive more importance in the
(a) Market trends for manual typewriters (no. of units) from 1977 to
purchase-decision of public sector than in the private sector. 1982 is as follows:
Hypothesis (2) 1977 1978 1979 1980 1981 1982
a. Lightness of touch is the most sought after quality in a 334
Remington 31330 33100 32700 29175 17539 26
manual typewriter.
Godrej 21250 26771 26210 27362 32261 37250
b. All available models of manual typewriter do not perform Halda 8982 11349 14743 13453 13920 14t90
equally well with regard to offering the lightness of touch. Fadt 2196 13244 18864 24700 26100 31170
3. As stated in the marketing research objectives, the study tried 11603
Total 63758 84464 92517 94690 89820
to assess how purchase managers and users look at an 6
electronic typewriter. The data obtained through open end Between 1983-87 the market demand remained between
questions about the merits and demerits of electronic 1,10,000 to 1,20,000 units.
typewriter were subjectively analysed Likewise, opinions
Growth Rates
expressed on the given list of merits/demerits of electronic
typewriter Were summarised by pooling the data. Here the 10 years ago 30%
opinions of two sets of respondents, purchase managers 5 years ago Stagnant
and users, were treated separately to examine the contrasting Now No growth from 1986-87 figure
views.FINDINGS
b. Regional Demand Pattern
1. The market size (in units) for three types of typewriters
Manual
produced in India is as follows:
Region %
Portable : 7,000
West 30
Standard Manual : 1,10,000
North 30
Electronic : 15,000
South 20
2. Competitive Structure
East 20

40% of the demand originates from the Metro Cities.
In the case of electronic machines, 90% of the market lies in metro cities, with more or less the same regional pattern of demand.
Godrej enjoys the highest share of the public sector market.
Facit is comparatively more successful in the South and found more popular among private sector units.
Remington has a monopoly in the typing school segment.
In the manual typewriter market, Godrej adopts a somewhat liberal credit (terms of payment) and discount policy.
c. Market Share
Manual Typewriter   Market Share in 1982 (%)   Market Share in 1987 (%)
Godrej              29                         40
Remington           32                         30
Facit               27                         25
Halda               12                         5
Electronic Typewriter   1987 Market Share (%)
Network                 33
PCL                     25
Godrej                  13
Facit                   7
Remaining companies     22
4. Strengths and Weaknesses of Different Companies
Remington
Strengths
1. Ruggedness and reliability
2. Service and maintenance requirement negligible
3. Durability (lasts for 20 years or so)
4. Automatic pull in the market and
5. Monopoly in the typing school segment.
Weaknesses
1. Obsolete model
2. Poor after sales service
3. Hard touch and
4. Lockout in 1982-83 led to some loss of market.
Godrej
Strengths
1. Efficient after sales service
2. 10% lower price than others. This is probably the reason why it is the market leader in the public sector segment.
3. Good image as a supplier of different office equipment, furniture etc.
Weaknesses
1. Godrej's AB model is perceived as of inferior quality and the machine requires hard touch.
2. The PB model, though better, is yet to establish itself in the market.
Facit
Strengths
1. Captured a fairly large market share despite being a somewhat new entrant
2. Easy to operate and light in touch
3. Efficient sales policy (timely visits for repair and maintenance).
Weaknesses
Users of Remington machines initially find it difficult to operate.
Halda
Strength
First feather touch machine in India.
Weaknesses
1. Lacks innovative skill in marketing.
2. Could not withstand the competitive pressure and
3. Unable to generate a large number of copies at a time.
5. Buying Behaviour in Public vs Private Sector
(See annexure-III for detailed results of the test of hypothesis.)
b. Price and terms of payment are given a somewhat higher degree of importance in the public sector than in the private sector.
c. Reputation of the manufacturer and after sales service are given equal importance in the public and private sector.
6. Consumer Perception About Existing Brands
a. Lightness of touch is the most sought after quality in a typewriter. (See annexure-V for details about the testing of this particular hypothesis.)
b. Typists, by and large, felt that FACIT is the best available typewriter machine on the lightness of touch quality. (See annexure-V for details about the method of testing this hypothesis.)
c. Halda was also perceived as a light touch machine, but it fared poorly on after sales service.
d. Remington and Godrej were rated as "harder" typewriter machines.
e. Analysis of the open-end question, "Why is a particular brand preferred?", reveals that about 60% mentioned that they are used to this machine previously (i.e. familiarity).
f An association between brand on which a typist learnt and • Sole reason for avoiding it and thus, possibly apprehended
brand preferred today was poor reliability of electronic parts.
tried to be established. The Chi-square homogeneity test Annexure-I
showed that there is some
Questionnaire-I
positive relationship (See annexure-VI for details)
Target Respondent: Purchase Manager/Administrative Officer
Buyer profile of three leading brands
Facit Buyers : Attach more importance to opinion of the
typist, after sales service, little importance to price and We are conducting a marketing research study on typewriters.
manufacturer's reputation. These consumers come mainly Here we shall put before you a few questions. We shall
from private sector companies. appreciate if you please answer our questions. Thanking you for
your coopera-tion.
Godrej Buyers:Highly satisfied with after sales service, price and
past performance. 1 Name of Organisation
Remington Buyers: Perceive their typewriter to be a high priced, 2. Type of Organisation ( ) Public Sector
quality product with an established market image. ( ) Private Sector
7. Impressions About Electronic
Typewriters 3. How many typewriters have your organisation purchased
a There is, by and large, still much hesitation to adopt during the last three years
electronic typewriter in both public and private sectors. High Type Year Manual Electronic
price is probably the biggest barrier for its immediate accep- 1987-88
tance.
1986-87
b Purchase/administrative managers were, by and-large, of the
1985 -86
opinion that an electronic typewriter has the following
merits: 4. Who takes the decisions regarding the purchase of
typewriters in your organisation? (Name all the persons and
1. Quality of print-outs
their designations)
2. Speed and
(Instructions to the interviewers: If the answer to
3. Memory.
Q. 4 does not include this person whom you have so long
In contrast, the typists were of the opinion that its merits are: interviewed, then terminate the interview and seek this
i. Automatic correction facility person’s help to meet any of the persons who is involved in
ii. Speed and the purchase decision).
iii Display. 5. Here we have mentioned a few factors related with purchase
of typewriters. Will you please assign your degree of
c. Similarly, commenting on the demerits of electronic
importance on a scale ranging from:
typewriter, the two groups expressed
5. Very Important
the following opinions:
4. Important
Reaction on the possible demerits
3. Neither important nor unimportant
Manager’s Opinions .
2. Unimportant
(Decision Makers)
1. Very unimportant
• High costs of operation
O. Can’t comment
• Inability to use due to power failure
a Price ()
• Poor reliability of electronic parts.
b Past performance ()
Typists’ Opinions
c Typist’s opinion ()
• Poor reliability of electronic parts
d Manufacturer’s reputation ()
• Inability to use due to power failure
e After sales service ()
• Special skill required
I Guarantee ()
d Open-end query about the merits/demerits of electronic
typewriter showed. g Discount offered ()

• Managers found it to be a prestige value item and hence,


h Terms of payment ()
choose to install it in the office of
Chief Executive / Director’s Office only 6. Do you know a particular preference for any particular make
of typewriter?
• Typists, by and large, expressed their unfamiliarity with this
new innovative machine as the ( ) Yes ( ) No
If yes, state the special reason( s). a Which typewriter do you prefer most to work on?
7. When do you decide to replace/dispose of an old b What were the probable reason(s)?
typewriter? 4. Here we have selected a few distinct features of a manual
8. (a) Given the price of the manual typewriter is Rs. 6,000 will
you be willing to install an a. Clarity of print
electronic typewriter? b. Lightness of touch
( )Yes ( )No ( )Can’t say c. After sales service
(b) What maximum price are you willing to pay for this d. Speed
electronic typewriter
e. Durability
Maximum price to be paid - Rs.
Use a rating scale such as
9. In your opinion what are the merits and demerits of an
(1) Very poor, (2) Poor, (3) Average, (4) Good, (5) Very good (6)
electronic typewriter?
Cannot comment, and ,indicate your opinions about the
Merits various brands of typewriters you have used on each of (he
a ……………………………. above mentioned select features.
b ............................................. Brand/Model Calrity of Ligthness After sales Speed Durability
c............................................. Print Touch service
d ................................................ 1.
e……………………………….. 2.
Demerits 3.
a ................................................ 4.
b ................................................ 5.
c ................................................ a. Have you ever used an electronic typewriter?
d………………………………..... ( )Yes ( ) No
e…………………………………. If no, then terminate the interview.
10. From our experience we have found the below-mentioned (Q) What do you feel are the distinct merits/demerits of an
merits/demerits in an electronic typewriter. State your electronic typewriter?
possible degree of agreement/disagreement on a scale i Merits ............................
ranging from ............................
a Strongly agree ............................
b Agree ...........................
c Neither agree nor disagree ............................
d Disagree ii Demerits ............................
e Strongly disagree ............................
f Can’t comment ............................
Possible Merits ............................
(i) Speed ( ); (ii) Production capability ( ); (iii) Memory ( ); ............................
(iv) Editing feature ( ); (v) Automatic correction ( );
6. From our experience we state below a few possible merits
(vi) Display ( ); (vii) Quality of Printouts ( )
and demerits of an electronic typewriter. If you are allowed
Possible Demerits to scale labelled as (a) strongly agree (b) agree (c) neither agree
(i) Cost of operations, ( ); (ii) Special skill required ( ); (iii) Poor nor disagree (d) disagree (e) strongly disagree (f) cannot
reliability of electronic parts ( ); ,(iv) Likelihood of obsolescence comment; indicate your opinion on the below mentioned:
( ); (v) Inability to use due to power failure ( ); (vi) Inability of Possible Merits/Demerits
use due to non-availability of spares ( ) i Possible Merits () Display ()
Thank you very much for your support. Speed () Automatic Display ()
Questionnaire. II Memory () Quality of Printouts
Target Respondent : Typists/any Secretarial Staff Editing features ()
1. What was the typewriter on which you learnt typing? Reproduction capability ()
2. Which typewriter are you using now?
ii Possible Demerits ()
3:
High cost of operation ()
Special skills required ()
Poor reliability of electronic parts ()
Likelihood of obsolescence ()
Inability to use due to power failure ()
Inability of use due to non-availability of spares ()
Thank you very much for your cooperation.
Annexure-II
Sample Size Determination
The exact sample size decision in a survey of this nature
depends upon
i. The average estimate of the response to a question asked
ii. The precision with which it is desired to estimate the parameter
iii. The confidence with which it is desired to achieve the level of precision chosen.
Moreover, in this study the responses to the questions asked are either binary or five-point multiple choice. For example, in the questionnaire we have asked whether the organisation has an electronic typewriter or not; similarly, users' views on different purchase-related dimensions are recorded on a five-point scale. Therefore, the optimal sample size will be the maximum of the sample size calculations for the different response types.
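To make this concrete, a minimal Python sketch (not part of the original study) of such a calculation is given below; the confidence level, precision figures and assumed variability are illustrative assumptions, since the annexure does not state them.
```python
# Illustrative sketch: required sample size for the two response types
# described above, taking the larger of the two results.
# The values of e, p and sigma below are assumed for illustration only.
import math

def n_for_proportion(p=0.5, e=0.10, z=1.96):
    """Sample size to estimate a proportion p within +/- e at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

def n_for_mean(sigma=1.0, e=0.2, z=1.96):
    """Sample size to estimate the mean of a five-point scale item within +/- e,
    given an assumed standard deviation sigma."""
    return math.ceil((z * sigma / e) ** 2)

n_binary = n_for_proportion()        # binary (yes/no) questions
n_scale = n_for_mean()               # five-point scale questions
optimal_n = max(n_binary, n_scale)   # take the maximum, as the annexure suggests
print(n_binary, n_scale, optimal_n)
```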
LESSON 44:
CASE STUDY

Friends, now let us discuss some of the real life case studies and the objective would be to find out what people see in Boroline
see how the techniques of research methodology are helpful to cream which they find lacking in DAC.
solve them. On taking a closer look, it would seem that DAC is perceived as
Case Study No.5 an antiseptic cream to be used specifically for cuts and wounds,
may be because of the Dettol brand name. In case of cuts and
Antiseptic Cream
wounds,
1. Marketing Brief people may prefer to use established antiseptic liquids, like
1.1 Background Dettol or Savlon, which they might already have at home and
Dettol Antiseptic Cream (DAC) was launched in the Indian arc currently using. In that case, it would appear that there is not
market by Reckitt & Colman of (India) Ltd. DAC was intro- enough market potential for a cream like DAC, given the way it
duced as part of a line extension strategy. DAC was to be a is being currently perceived by the people. .
complementary product to dettol liquid, which presently was On the other hand, it seems Boroline is perceived as a “general
the market leader with almost 90% share of the antiseptic liquid are currently using. In that case, it would appear that there is not
market. dry skin and chapped lips, pimples etc. as well as for medicinal uses
1.2 Intended Positioning like cuts and wounds. Hence, Boroline would be a handy all-
. “DAC offers Dettol protection in cream form. It is effective purpose cream to have at home.
against minor cuts, burns, wounds, insect bites, shaving nicks, This research study proposes to verify these hunches.
boils and rashes”. 3. Media Support
1.3 Competition Media support for DAC has been restricted to insertion in
Boroline has the lion’s share of the antiseptic cream market, newspapers and magazines, hoarding and point of purchase
especially in Eastern India. The other well known brands in the displays. These media have also not been extensively used.
market are Savlon, Boro-Plus and Boro-Calendula. In addition Thus the problem could be that the media support is insuffi-
to these, there are a few products/brands with very specific cient. Insufficient media support would mean
usage areas like • People are not aware of DAC.
Burns: Burnol • People may be aware of DAC but may not be convinced
Dry skin/chapped Lips: Various Cold Creams enough to buy it.
On the other hand, the media support may be sufficient but the
Pimples: Various cosmetic creams
message might not have got across to the consumer. This
like Clearasil, Fair & Lovely
would mean that the intended positioning might not have
Shaving Nicks: Various after shave been achieved.
lotions.
A detailed study of the media support is beyond the scope of
One aspect of the competition which was not anticipated earlier this report. However, as a spin-off benefit the study proposes
was that DAC might face competition from DETTOL Liquid
itself. among the people.
1.4 Recognising a Problem Area Marketing Research Objectives
Over the last few years the performance of DAC has been a matter
1. To study the incidence of skin problems and brands used in
of concern as the sales did not reach the expected level with
those situations.
time. This raises the basic question: “Why are sales not picking
up and what should be done to rectify the position?” 2. Ascertain the elements (attributes) consumers look for in an
antiseptic cream.
Let us examine where the problem could lie.
3. To examine how satisfied the consumers are with DAC and
1. Distribution
Boroline for different elements that they look for.
This does not seem to be the problem area since the extensive
distribution network for other products of Reckitt is being 4. Assess consumers awareness and recall of DAC’s
used for DAC. advertisement.
2. Potential Market 2. Research Design
There are a number of brands in the antiseptic cream market 2.1 Research Hypothesis
and some of them, especially Boroline, is doing very well. It is The research study tested the following hypotheses.
indicative enough that there is a market for antiseptic cream. So
Hypothesis 1 end and ‘good’ at the other. In the final questionnaire this was
“If I have minor cuts or wounds, I’d rather use Dettol liquid or changed to ‘odour’ varying from ‘medicinal ‘odour’ to ‘per-
any other antiseptic. Why should fumed ‘odour’.
I use DAC?” 2.5B Final Questionnaire
Hypothesis 2 (a copy enclosed in the Appendix-I) In consonance with the
information requirements, questions were designed in the
“If I were to use an antiseptic cream I’d use Boroline which is a
sequence to collect the following data.
general purpose cream rather than DAC which is not a general
purpose cream”. Question No. Content
2.2 Information Required 1. This question gives us an indication of the most frequently
To achieve the research objectives the following information is occurring skin problems.
required 2. This open ended question would give an indication of
1. What are the most frequently occurring skin problems? what people currently do for the various skin problems
mentioned.
2. What do people do when they have these skin problems?
3. The perception of an ideal antiseptic cream is sought from
3. What do people want in an antiseptic cream in terms of the respondents by asking what “magnitude” of each of
various attributes and benefits derived from the product? the mentioned attributes would they desire in an antiseptic
4. How is DAC viewed in terms of the above attributes/ cream.
benefits?
4. Here the respondent is asked to rank the chosen attributes
5. How is Boroline viewed in terms of the above attributes/ in order of importance on a seven point scale.
benefits?
5. This question is used to obtain the level of unaided recall
6. What is the brand DAC’s awareness in the market? of the various brands of antiseptic creams among
7. What is the extent of DAC’s advertisement recall? respondents.
8. What is the message retained from the DAC’s advertisement? 6. This question indicates whether the respondent has heard
2.3 Sources of Data of the concerned brand (i.e. DAC).
All the above information is collected from primary sources. 7. This question is used to find out if the respondent has
used DAC and Boroline.
2.4 Data Collection Mode
The data collection instrument used for obtaining the desired 8. Consumer’s perception about DAC is obtained from this
information is questionnaire. The logic of questionnaire question.
development is highlighted below. 9. The perception of Boroline is obtained from this question.
2.5 Questionnaire Design 10. This question determines the level of DAC’s advertisement
At the outset a fairly exhaustive list of usage occasions and recall and message recall.
qualities of an antiseptic cream was arrived at and a pilot 11. Finally some basic information about the person
survey was conducted to narrow this list down. responding are collected.

2.5A Pretesting 2.6 Sampling


The pretesting of the questionnaire (i.e. the process of adminis- 1. Respondent
tering the questionnaire on a conveniently selected group of The target respondent of the study consisted of people
people to test its clarity, ease of response etc.) was done on a from different income groups residing in Calcutta. .
sample of fifteen respondents. Depending on the difficulties
2. Sampling Unit
encountered by them in answering the ques-tionnaire, its initial
format was suitably modified to finally arrive at the one given in Household
this report. 3. Sample Design and Sample Size
For example, the query as to what people use for various usage The study had purposely chosen a convenient sampling
occasions was made open ended as it was observed that the procedure. It was decided to take a sample of 100
close ended question in the pretested questionnaire made the respondents.
respondent biased. 3. Data Analysis
Terms like ‘value for money’ and ‘after use visibility’ did not The data obtained from the respondents was first edited and
seem to make much sense to the respondent and so these two the valid (87) responses were retained for the purpose of
terms were omitted in the final questionnaire from the list of analysis. Data were represented graphically using ‘Lotus’ package
attributes. in the personal computer.
Few changes were also incorporated in the questions pertaining Scaling: The ordinal scale data on ranking of usage occasion
to rating of attributes for the different brands so as to make frequency (Q.1) and importance of attributes (Q.4) was
them unbiased. For example, in the pretest respondents were converted to an interval scale using the Thurstone’s Case scaling
asked to rate brands for smell on a 1- 5 scale with bad at one technique.
(See Appendix-II for details.)
3.1 Dissatisfaction Score for Each Brand
The 'dissatisfaction' score for a particular attribute for a particular brand is defined as the difference between the score on that attribute for the ideal antiseptic cream and the brand. The average 'dissatisfaction score' for each attribute was calculated for DAC and Boroline. Weightage was then assigned to the various attributes, using the interval scale derived for the 'importance of attributes' with the Thurstone scaling technique. Then, the average dissatisfaction score for the brand as a whole was calculated both for DAC and Boroline, using the formula:
Average dissatisfaction score = Sum (i = 1 to 7) of di . Wi
where di = average dissatisfaction score for attribute i, Wi = weightage for attribute i, and i ranges from 1 to 7.
The dissatisfaction scores for Boroline and DAC were compared.
Frequency of use index for brand 'a' = Sum over i of Wi . pai
where pai = percentage of people who use brand 'a' for usage occasion i, and Wi = weightage for usage occasion i.
Here the percentage of people who use Boroline for a particular occasion was multiplied by the weightage for that usage occasion and this was added over all the usage occasions. A similar index was calculated for DAC.
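As an illustration of how the two indices defined above can be computed, a small Python sketch is given below. The attribute names, weights and scores in it are placeholder values, not the survey results reported later in this case.
```python
# A minimal sketch of the two indices defined above, with made-up inputs.
def avg_dissatisfaction(diss_by_attr, weight_by_attr):
    """Average dissatisfaction = sum over attributes of d_i * W_i."""
    return sum(diss_by_attr[a] * weight_by_attr[a] for a in diss_by_attr)

def frequency_of_use_index(pct_use_by_occasion, weight_by_occasion):
    """Frequency-of-use index = sum over occasions of W_i * p_ai."""
    return sum(weight_by_occasion[o] * pct_use_by_occasion[o]
               for o in pct_use_by_occasion)

# Assumed illustrative values (not the case data):
weights = {"odour": 0.11, "antiseptic qualities": 0.25, "availability": 0.16}
diss_brand_x = {"odour": 0.10, "antiseptic qualities": 0.32, "availability": 0.20}
print(avg_dissatisfaction(diss_brand_x, weights))
```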
3.2 Presentation of Data
This is a summary of the results obtained from the respondents.
1. Usage occasions: The rankings of the frequency of occurrence of skin problems, when converted to an interval scale using the Thurstone Case V method (see Appendix-IIA for details), gave the following picture.
Figure 1: Thurstone Case V scale for frequency of occurrence of skin problems (in decreasing order of scale value: Dry skin/chapped lips; Cuts/scratches; Insect bites; Minor burns; Boils/pimples; Blisters/skin peels; Rashes).
2. What people do when a skin problem arises is shown in Table 1.
Table 1: Percentage break-up of brand used for each skin problem
Usage Occasion          Do Nothing   Boroline   DAC     Dettol Liquid   Burnol   Other Brands
Boils                   35.23        9.09       1.14    1.14            2.27     51.13
Dry skin/chapped lips   4.6          25.26      -       -               -        70.14
Shaving cuts            18.39        18.39      5.76    12.64           -        44.82
Minor burns             7.53         9.68       2.15    -               65.59    15.05
Insect bites            42.39        6.52       7.61    7.61            4.35     31.52
Cuts/scratches          8.32         23.96      11.46   30.21           3.13     22.92
Blisters/skin peels     53.34        7.78       1.11    1.11            4.44     32.22
Rashes                  38.64        -          1.14    1.14            -        59.08
"Other brands" comprised mainly of cold creams for dry skin/chapped lips, after shave lotions for shaving cuts, 'doctor's advice' for rashes, and cosmetic creams like Clearasil.
See Exhibit-A in the Appendix-III.
The various usage occasions of Boroline, DAC and Dettol are shown in Table 2.
Table 2: Percentage break-up of usage occasions for Boroline, DAC and Dettol Liquid
                        Boroline   DAC     Dettol
Boils/Pimples           8.79       5.57    2
Dry skin                24.18      0       0
Shaving cuts            17.59      17.86   22
Minor burns             9.89       7.14    0
Insect bites            6.59       25.0    14
Cuts & scratches        25.27      39.29   58
Blisters/skin peels     7.69       3.57    2
Rashes                  0          3.57    2
Total                   100.00     100.00  100.00
For pie-charts see Exhibits B1 and B2 in the Appendix-III.
3. The average scores of the respondents on their agreement/disagreement with the seven attribute statements are given in Table 3 for an ideal antiseptic cream, for Boroline and for DAC. The scores range from -2 to +2, as follows:
-2: strongly disagree
-1: disagree
0: neither agree nor disagree
1: agree
2: strongly agree
It is observed that DAC is perceived as having high antiseptic qualities, a medicinal odour, and is not a general purpose cream. Also, it is non-staining, does not sting on application, is not greasy and is easily available. Boroline does not have high antiseptic qualities and is a general purpose cream with a perfumed odour. It is also non-staining, does not sting on application and is easily available.
Ideally, the consumers would like a general purpose cream with high antiseptic qualities. They are indifferent to the odour but would not want the cream to be staining. It should not sting on application and should be non-greasy.
Table 3: Average scores on product attributes for the ideal antiseptic cream, Boroline and DAC
Attribute                                    Ideal Antiseptic Cream   DAC    Boroline
Medicinal odour rather than perfumed odour   0                        0.7    -0.7
Staining                                     -1.0                     -0.6   -0.6
Greasy                                       -0.4                     -0.1   0.7
Sting on application                         -0.6                     -0.2   -0.7
General purpose cream                        0.7                      0      1.0
Easily available                             2*                       1.0    0.7
High antiseptic qualities                    2*                       0.9    0.3
* These were assumed to be 2, since the consumer would obviously want these attributes in an ideal antiseptic cream.
See Exhibit C in the Appendix-III for a graphical exposition.
Dissatisfaction Scores
4. The rankings of the importance of the various product attributes, when converted to an interval scale using the Thurstone Case V scaling technique, are represented in Figure 2. (See the tables in Appendix-II for details.)
Figure 2: Thurstone Case V scale for importance of attributes (in decreasing order of scale value: Antiseptic qualities; General purpose usability / Availability; Non-staining characteristics / Odour; Non-stinging characteristic; Non-greasiness).
Thurstone Case V scaling Technique is represented in Figure 2.
Boroline 86 1
(See Tables in the Appendix II for details).
DAC 78 9
2.8
Antiseptic qualities 9. Number of respondents who have ever used Boroline and
2.6
2.4 DAC
2.0 General Purpose usability/
1.8 Availability % Used % No: used
1.6 Boroline 89.7 10.3
1.4
1.2 Non-staining characteristics/odour DAC 67.8 32.2
1.08 Non-stinging characteristic
1.0 Non-greasiness 10. AD-recall
Number of respondents who had seen the DAC advertisement
Figure 2: Thurstone Case V scale for importance of attributes before
5. Dissatisfaction Score in different attributes brand combination Number Percentage
Table 4 Seen Advertisement 48 55.2%
Mean dissatisfaction scores for each attributes for DAC and Not seen advertisement 39 44.8%
Boroline Of the 48 respondents who had seen the advertisement, 15
Attribute Brand respondents or 31.3% could correctly recall the product’s
DAC Boroline message.
Odour 0.10 0.14 11. Demographic Data
Staining characteristic 0.10 0.14 Age Percentage of respondents
Greasiness 0.09 0.12 Less than 35 years 63
Sting on application 0.28 0.17 Greater than 35 years 37
Availability 0.20 0.20 Monthly income
Antiseptic qualities 0.32 0.43 Less than Rs. 2000 25
6. Weighted Frequency of Use Index Grtater than Rs. 2000 75
Table 5 Sex:
Weighted frequency of Use Index for Boroline and DAC Male 42
Usage Occasion Weightage % of Users Female 58
Boroline DAC
Boils/Pimples 0.108 9.09 1.14 Testing of Hypothesis
Dry skin/chapped lips 0.184 25.06 0 Hypothesis 1
Ho: "If I have cuts or wounds, I'd use Dettol liquid or some other antiseptic rather than DAC"
v/s H1: Negation of Ho
i.e. Ho: pL = pDAC v/s H1: pL > pDAC
where pL is the population proportion of people using any branded antiseptic other than DAC, and pDAC is the population proportion using Dettol Antiseptic Cream.
Test statistic Z = (PL - PDAC) / s.e.(PL - PDAC)
where PL is the sample proportion of consumers using any branded antiseptic other than Dettol Antiseptic Cream and PDAC is the sample proportion of consumers using Dettol Antiseptic Cream.
Sample data show PL = 22/87 and PDAC = 11/87.
P = (n1.PL + n2.PDAC) / (n1 + n2) = 0.212
(P is a pooled estimate of usage)
s.e. of (PL - PDAC) = square root of (PQ/n), where Q = 1 - P
Z = 2.156
The tabulated value of Z at the 5% level of significance = 1.64.
Since Z calculated > Z tabulated, the null hypothesis is rejected, i.e. the sampled consumers seem to use Dettol liquid and other antiseptic creams more than DAC.
Hypothesis 2 User Non User
Seen Advertisement of DAC 30 18
Ho: If I require to use an antiseptic cream, I would use
Not seen advertisement of DAC 29 10
Boroline which is a general purpose cream
59 28
rather than DAC. . Since X2 calculatcd (= 1.386) < X2 tabulated (= 3.84) the null
v/s HI: Negation of Ho hypothesis is accepted. This is, use of DAC is not related to
To test this hypothesis some sub-hypothesis have to be exposure to DACs advertisements.
formulated. Conclusions
1. Ho = Boroline is a general purpose cream To obtain an answer to the question, “Why arc sales of DAC
v/s HI: Boroline is not a general purpose cream. not picking up?” which was the major thrust of this study, we
may recapitulate the results of different hypotheses.
i.e. Ho : µ < 0 where, m = mean score of consumer perception
on a five point scale Hypothesis 1: “If I have cuts or wounds, I’d use dettol liquid
or some other antiseptic liquid rather than DAC” has been
H1: µ > 0
accepted. Hence, for cuts and wounds which is a frequent skin
Reject Ho if test statistic (T) > tabulated value of t with (n - 1) problem, DAC does not find significant application.
degrees of freedom.
Hypothesis 2: “If I require to use an antiseptic cream, I’d use
where Test Statistics (T) = Yn Boroline which is a general purpose cream rather than DAC”.
s/√n This hypothesis has also been inferred to be true.
Data Shows These two hypotheses together indicate why there is not
Yn = 1.034 sufficient market demand for DAC. Further, the way DAC is
currently perceived, gala showed it is not perceived as a general
S = 0.65
purpose cream but has Specific medicinal application, there is
n = 87 not enough market potential for it. when a person wants to buy
Since T < t Ho is accepted. In other words, Boroline is perceived an antiseptic cream, he examines the total bundle of benefits
as a general purpose cream. that the cream offers. People do not buy an antiseptic cream
which has specific medicinal usage viz., cuts and scratches. They you simply ignore any particular skin problem then write
would rather use Dettol liquid. If at all they are to buy a cream, nothing
it would be a general purpose cream. Boils/Pimples
This view is further confirmed by the weighted frequency of use Dry Skin/Chapped Lips
index which has a value 14.5 for Boroline and 2.6 for DAC.
Shaving Cuts/After-shave Dry Skin
This means that Boroline has a greater chance of being used
than DAC. Minor Burns
Hypothesis 3: Use of DAC is not related to the consumer’s Insect Bites
exposure to various DAC advertise-ments. Cuts/Scratches
Blisters/Skin Peels
Brand Awareness
Rashes
Awareness for DAC among the sample was quite high: 55% of
the respondents had seen the DAC advertisement before and 31
% of the respondents could correctly recall how the product was SEPTIC CREAM. Indicate your response to each statement by
advertised. However, advertisement recall and use of DAC does putting a tick-mark against the response you prefer most.
not seem to be related. 1. An ideal antiseptic cream should have a medicinal odour
Recommendation rather than a perfumed
It seems unlikely that sales for DAC will pick up if the current
state of perception prevails. However, it is true that there is lot Strongly disagree neither agree agree strongly agree
of potential in the antiseptic cream market. Boroline enjoys disagree nor disagree
high sales volume and there are other brands like Boro-Plus and 2. An ideal antiseptic cream should be staining.
Boro-Calendula in the market. But these are all positioned as
‘General purpose cream’. So it can be concluded that the
potential lies in ‘general purpose’ antiseptic cream. Strongly disagree neither agree agree strongly agree
This leads to the question whether DAC should be re-posi- disagree nor disagree

tioned as a ‘general purpose cream or not’. But it would be 3. An ideal antiseptic cream should be greasy.
difficult for DAC to achieve such a position in the consumer’s
mind. This is because the brand name “Dettol” has a medicinal
connotation and it would be a hard task to convince the Strongly disagree neither agree agree strongly agree
disagree nor disagree
consumers that DAC is a ‘general purpose cream’.
4. An ideal antiseptic cream should sting on application.
Appendix 1
Questionnaire
Strongly disagree neither agree agree strongly agree
Dear Respondent, disagree nor disagree
We are conducting a survey about antiseptic cream. We would be
grateful if you express your opinions on the following list of 5. An ideal antiseptic cream should be a general purpose cream.
questions.
1. The following is a list of eight common skin problems. Strongly disagree neither agree agree strongly agree
disagree nor disagree
Please rank them from 1 to 8 in order of how frequently they
occur in your family. Give rank ‘1’ to the problem which
4. When you buy an antiseptic cream, which of the following is
occurs most frequently and rank ‘8’ to the least occurring skin
the most important to you, the next most important, and so
problem.
on... Rank them from 1 to 7 where ‘1’ indicates the most
Boils/Pimples important and ‘7’ the least.
Dry Skin/Chapped Lips Odour
Shaving Cuts/After-shave Dry Skin Non-greasiness
Minor Burns Non-staining Characteristics
Insect Bites (Ants, Mosquitos Etc) Non-stinging Characteristics
Cuts/Scratches Availability
Blisters/Skin Peels Antiseptic Qualities
Rashes General Purpose Usability
2. When any of the above problems arise, what do you 5. Please name the antiseptic creams you are aware of.
generally do? Indicate your response to each by writing
against the problem. Whether you use any antiseptic, in case
strongly disagree neither agree agree strongly agree
disagree nor disagree

2. Boroline is staining

6. Have you heard of the following creams?


strongly disagree neither agree agree strongly agree
Boroline [ yes/no] disagree nor disagree
Dettol Antiseptic Cream [ yes/no]
3. Boroline is greasy.
7. Have you ever used the following creams?
Boroline [yes/no]
strongly disagree neither agree agree strongly agree
Dettol Antiseptic Cream [yes/no] disagree nor disagree
(If You Haven’t Heard Of Dettol Antiseptic Cream, Please
4. Boroline stings on application.
Skip The Following Question)
8. Listed below are some statements about Dettol Antiseptic
Cream. Indicate your response to each statement by choosing strongly disagree neither agree agree strongly agree
disagree nor disagree
one of the five response available.
1. Dettol Antiseptic Cream has a medicinal odour, rather than a 5. Boroline is a general purpose cream
perfumed one
strongly disagree neither agree agree strongly agree
disagree nor disagree
strongly disagree neither agree agree strongly agree
disagree nor disagree
6. Boroline is easily available.
2. Dettol Antiseptic Cream is easily staining
strongly disagree neither agree agree strongly agree
strongly disagree neither agree agree strongly agree disagree nor disagree
disagree nor disagree
7. Boroline has high antiseptic qualities.
3. Dettol Antiseptic Cream is greasy
strongly disagree neither agree agree strongly agree
strongly disagree neither agree agree strongly agree disagree nor disagree
disagree nor disagree
10. We are showing you an advertisement (It is an
4. Dettol Antiseptic Cream stings on application advertisement of DAC. But the brand name is disguised)
(If Your Answer Is ‘No’, Please Skip The Following Ques-
strongly disagree neither agree agree strongly agree tions)
disagree nor disagree
1. What is the brand being advertised? …………………..
5. Dettol Antiseptic Cream is a general purpose cream 2. What do you recall from the advertisement?
…………………..
strongly disagree neither agree agree strongly agree Kindly Furnish Some Personal Information Which Shall Be
disagree nor disagree Treated In Confidence
1. Age (Years); Greater Than 35 Less Than 35
6. Dettol Antiseptic Cream is easily available
2. Monthly Income: Less Than Rs. 2000
strongly disagree neither agree agree strongly agree Greater Than Rs. 2000
disagree nor disagree
3. Sex: Male Female
7. Dettol Antiseptic Cream has high antiseptic qualities
4. Address
strongly
disagree
disagree neither agree
nor disagree
agree strongly agree
Appendix-IIa
The thurstone’s scaling technique has been used to construct a
(If You Haven’t Heard of Boroline Please Skip The Following
univariate interval scale from the input data of rankings for
Question)
usage occasions and importance of product attributes.
9. Listed below are some statements about Boroline. Indicate
your response to each statement by choosing one of the five 1. Usage Occasions
responses available. Using the data ‘obtained from the respondents, the following
table-A1 was constructed
1. Boroline has a medicinal odour, rather than a perfumed one.
Table A1: Observed proportions findings usage occasion x (top From the data of Table Bb the ne).:t table 82 was prepared,
of table) more frequent than usage which summarises the z values appropriate for each proportion.
occasion y (side of table) Table B2:
MORE FREQUENT USAGE ATTRIBUTE
OCCASION Attribute 1 2 3 4 5 6 7
1 2 .3 4 5 6 7 8 1 0 –0.10 0.23 0.18 –0.47 –1.28 –0.80
1. 0.00 0.26 0.47 0.45 0.35 0.31 0.65 0.62 2 0.10 0 0.28 0.13 –0.525 –1.40 –0.64
3 –0.23 –0.28 0 –0.05 – 0.74 –1.56 -0.77
. 4 -0.18 –0.13 0.05 0 –0.61 –1.56 –0.61
2. 0.74 0.00 0.73 0.73 0.57 0.62 0.86 0.86 5 0.47 0.525 0.74 0.61 0 –0.955 – 0.255
6 1.28 1.48 1.56 1.56 0.955 0 0.955
3. 0.53 0.27 0.00 0.47 0.31 0.26 0.60 058 7 0.805 0.645 0.77 0.61 0.25 –0.955 0
4. 0.55 0.27 0.53 0.00 0.32 0.29 0.66 O.64 Total –2.245 –2.14 –3.63 –3.04 1.135 7.79 2.13
5. 0.65 0.43 0.69 0.68 0.00 0.49 0.78 0.81 Mean(Z) –0.321 –0.306 –0.519 –0.434 0.162 1 .113 0.304

6. 0.69 0.38 0.74 0.71 0.51 0.00 0.87 0.83 R. 1.198 1.13 1 1.084 1.681 2.631 1.823
(Case V scale values)
7. 0.35 0.14 0.40 0.34 0.22 0.13 0.00 058
8. 0.38 0.14 0.42 0.36 0.19 0.17 0.42 0.00 Weightage 0.113 0.114 0.094 0.102 0.158 0.248 0.171
Wi
From the data of this table, the nex1 table A2 was prepared. R’ values have been obtained by adding 1.519 to each mean Z
Which summarises the z values appropriate for each proportion. value.
Table A2 Weightage have been assigned as follows
MORE FREQUENT USAGE
OCCASION R.i
USAGE
1 2 3 4 5 6 7 8
OCCASION Wi = 7
1.
2.
3.
0.000
0.643
0.075
- 0.643
0.000
-0.615
- 0.075 - 0.125
0.615 0.615
0.000 - 0.075
- 0.385
0.175
- 0.495
- 0.495
0.305
- 0.645
0.385 0.305
1.68 1.08
0.252 - 0.205
Σ Ri .

4. 0.126 - 0.615 0.075 0.000 - 0.465 - 0.555 0.415 0.355


5. 0.385 - 0.175 0.495 0.465 0.000 0.103 0.775 0.875
i =1
6. 0.495 - 0.305 0.645 0.555 - 0.403 0.000 1.125 0.955
7. - 0.385 - 1.08 - 0.252 - 0.415 - 0.775 -1.125 0.00 0.205 APPENDIX III
8. - 0.305 - 1.08 - 0.205 0.355 - 0.875 - 0.955 - 0.205 0.000 Abbreviations and Their Expansions Used in the Charts
Total - 1.035 4.513 - 1.298 - 0.664 2.92 3.67 - 3.827 -3.98 BOL BOILS
Mean-Z -0.207 - 0.903 - 0.259 0.133 0.585 0.735 - 0.765 - 0.796 ANTISEPTIC D-SK DRY SKIN
R' 1.589 2.697 1.536 1.929 2.381 2.53 1.031 1 USAGE OCCASIONS S-CU SHAVING CUTS
Wi 0.108 0.184 0.104 0.130 0.162 0.172 0.07 0.068 M-BU MINER BURN
No. Usage Occasion No. Usage Occasion No. Usage Occasion I-BI INSECT BITS
C-CS CUTS & SCRATCHES
1. Boils/Pimple 4 . Minor Burns 7. Blisters/skin peels
Dry skin/Chapped BLIS BLISTERS
i 5. Insect bites 8. Rashes RAS RASHES
lips
Shaving cuts/after-shave dry DAC DATTOL ANTISEPTIC
3 6 . Cuts/Scratches CREAM
skin
ANTISEPTIC BRAND NAMES BOR
Appendix-II B BOROLINE
DET DETTOL LIQUID
2. Product Attributes BUR BURNOL
OTH OTHERS
From the data obtained from the respondents, the following ATTRIBUTES OR QUALITIES OF AN
table B1 was constructed. ANTISEPTIC CREAM
USABILITY
G.P. GENERAL PURPOSE
ODO ODOUR
Table-B1: Observed proportions preferring attribute x (top of STA STAINING
CHARACTERISTICS
table) to attribute y (side of t-able) GRE GREASINESS
PREFERRED AVA AVAILABILITY
ANTI ANTISEPTIC
ATTRlBUTE CHARACTERISTICS
Non- STI STINGING
Attribute: Odour Non- Non-Slin- Avail. Aq GPU
slaining CHARACTERISTICS
Greasines
Charac. illg charac.
s
1 2 3 4 5 6 7
1. Odour" 0.00 0.46 0.59 0.57 0.32 0.10 0.21
2. Non Stain. 0.54 0.00 0.61 0.55 0.30 0.07 0.26
3. Non Greas, 0.41 0.39 0.00 0.48 0.23 0.06 0.22
4. Non Sting. 0.43 0.45 0.52 0.00 0.27 0.06 0.27
5. Avail. 0.68 0.70 0.77 0.73 0.00 0.17 0.40
6. Antiseptic
0.90 0.93 0.94 0.94 0.83 0.00 0.83
quality
7. G.P.U. 0.79 0.74 0.78 0.73 0.60 0.17 0.00
G.P.U = General Purpose
Usability
Case Study No.6
Sunrise (India) Ltd’
Introduction
India has the unique distinction of being the biggest producer,
consumer and exporter of tea in the world. Even though in
many parts of the world tea is emerging as the most popular
hot beverage, the domestic market for tea was almost saturated,
wherein the segment sizes and market shares of individual
companies were not amenable to drastic changes without major
upheavals occurring in the market. This relative stability in the
market was a source of comfort to the leading firms in the
industry like Sunrise (India) Ltd. as it made high entry barriers
for any new entrant and thus posed great challenge in terms of
getting a foothold in the market. However, over the last two
years or so, a wind of change has been blowing across the tea
market in India. While the change is perceptible at the national
level too, its intensity is sufficiently strong in some markets, to
be a cause for concern even to the existing large firms in the
industry.
Tea Industry
In 1987 India produced 674.2 million kg of tea and thus
claimed more than a half of the total world production, with
Sri Lanka securing a distant second position by producing 213.3
million kg. Of the 674.2 million kg produced in India, 209
million kg were exported leaving around 465 million kg for
domestic consumption. Even though the increase in the
quantum of production between 1986 and
1987 is of the order of 8%, the growth rate in domestic
consumption is only around 4% per annum.
The production of tea in India is primarily concentrated in two
regions of the country, namely:
• In the north eastern states of Assam, Tripura and the
Dooars and Darjeeling areas of West Bengal.
• In the Nilgiri hills of Tamil Nadu and Kerala in the South.
Tea is basically marketed in two blends, i.e. ‘leaf’ and ‘dust’.
While the northern states of the country are primarily leaf
consuming areas, the southern states and Maharashtra comprise
mainly of dust consumers. The major categories of the two
blends that are available in the market shown in
Exhibit -1
Tea which is produced in the gardens is either sold through the
various auction centres in the
country or directly sold to the intermediaries in the trade. In
1987 the various auction centres in the
country handled a total of 472.5 million kg of tea of an
annualised average price of Rs. 25.12 per kg.
Auction centres in South India sold 111.2 million kg at an
average of Rs. 22 per kg, while the rest of the market sold 361.3
million kg at an average price of Rs. 26 per kg.
Exhibit 2
Shows the principal channel flows in the tea trade. From the
chart it is apparent that tea is finally sold to the consumer
in two forms namely.
• In loose form 1. Star (India) Ltd. Star (I) Ltd. is the Indian subsidiary of an
• As a branded product in packets. Anglo Dutch multinational company named Omo Interna-
tional Plc. Currently Omo International holds about 40% of Star
At the national level ,in early 1988 it was estimated that 70% of
India’s equity capital. In early 1988 it was estimated that Star
the tea in the country is sold in the loose form with the rest
India’s share of the packaged tea market was around 60%. The
being sold as a branded product in packets as shown in Exhibit
list of its brands is given in Exhibit-6. One of its brands
3. However, this figure varies widely from state to state, with
namely ‘Red Star’ which is a CTC leaf blend is the largest selling
states like West Bengal having 95% of the tea sales in the loose
brand of tea in the country.
form. The mode of operation of the loose and the packet tea
segments are very different. The two forms also differ widely in ‘Since the early 1980s Star (I) Ltd. has been attempting to
terms of the profitability of the channel members. The margins diversify into other lines of business. While many of its
of the various channel members in the two segments are diversification attempts especially into the non-related areas
shown in Exhibit-4. failed, the company has built up a strong presence in the
national coffee market, as also in some other lines of food
Due to the high margins involved in the loose tea trade
products.
considerable amount of dealers push exists for the same. Thus
to compete with loose tea, packet tea manufacturers have 2. Sunrise (India) Ltd : Sunrise (I) Ltd. was established in
traditionally relied on generating demand through pull strate- India in 1893, as a part of the world
gies aided by mass media advertising. wide operations of a British tea company. In 1972, Omo
Loose tea and branded tea also differ in terms of their International Pic acquired a major part of the world wide
prices. While loose tea prices generally vary from Rs. 24 to Rs. 46 businesses of its overseas principals, and consequently Sunrise
per kg, packet tea prices vary from Rs. 40 per kg to as high-as India became a subsidiary of Omo International. This change
more than Rs. 160 per kg. However the bulk of the packet tea however did not affect the operations of Sunrise (I) ‘Ltd.,
sales are in the price range of Rs. 40 to Rs. 65 per kg. There is a which continued to be a single product company being only in
widely held consumer perception that the higher prices of the business of tea blending,
packaged brands are due to the packaging and various promo- packaging and marketing. By the end of 1970s due to a variety
tional costs involved in their marketing. Thus a packaged brand of factors, the general performance of Sunrise deteriorated
of comparable price as a loose blend, is generally perceived to be sharply and its profits came under severe squeeze.
of inferior quality compared to the loose blend. This may be one of the To revive Sunrise (I) Ltd., Omo International sought the help
reasons for the dominant status of the loose segment in the of another of its subsidiaries in
market.
India, namely Detergents (India) Ltd. Detergents (1) Ltd. is one
From the demand side, the tea market may be classified into of the major manufacturers and
two segments, namely:
marketers of soaps and detergents in the country, and enjoyed
• The household consumers and an excellent reputation amongst its
• The ‘hot tea shop’ trade and other institutional buyers. investors, customers and in professional circles too. As a part of
The two segments differ considerably in terms of their the rehabilitation scheme for
consumption patterns. The preferences of the household Sunrise (I) Ltd. two major changes were brought about:
consumers across the different regions of the country are not
(i) The complete management team of Sunrise (I) Ltd. was
exactly known. But the major hotels mostly purchase tea ‘bags’
overhauled and a team of highly
and other packaged leaf blends, the ‘hot tea’ shop segment by
and large prefers dust blends throughout the country. This is so competent professionals were transferred from Detergents (I)
because it is felt that dust blends generally give more number of Ltd. to Sunrise (I) Ltd.
cups per unit weight than leaf blends, and are in general cheaper (ii) Detergents (I) Ltd. which was also in the business of
too. manufacturing and marketing dairy products, vanaspati, edible
Despite the fact that the loose tea segment comprised the bulk oils and animal feeds, transferred these businesses to Sunrise (I)
of the market, structurally it is a fragmented industry. Each Ltd. for a consideration. This implied that from being a single
major town or city in the country has its major tea traders, who product company,
buy in bulk from the various auction centres in the country or Sunrise India became a diversified food and beverages company.
from the gardens directly. Subsequently this stock is sold either Due to the addition of all these product lines the turnover of
through their own parlours or through secondary dealers. Sunrise India jumped from around Rs. 105 crores in 1982-83 to
In contrast to the loose tea segment, till recently the packaged more than Rs. 350 crores in 1986-87. The ‘length’ of Sunrise
tea segment was virtually dominated by two firms. The India’s Tea ‘line’ is shown in Exhibit-6. While star (I) Ltd. is
structure of the packaged tea market in the pre pouch period, is stronger in the leaf markets, Sunrise India has a strong presence
shown in Exhibit-5. in the dust segment with its ‘Emerald’ and ‘Suntop’ brands.
Packaged Tea Segment - A Profile of the Major Companies 3. Seth Tea Co : Seth Tea Co. was promoted by Seth Oil Co
Ltd., which is a member of the famous Seth group of
This section will give a brief overview of the current businesses
companes in India. Initially Seth Oil Co. Ltd. had acquired a
of the major firms comprising this segment.
number of tea gardens belonging to a British tea company, a!1d

© Copy Right: Rai University


256 11.556
launched a few brands in the market. The tea brands of Seth advertise separately for the different blends. In fact one

RESEARCH METHODOLOGY
Tea Co. are shown in Exhibit-6: However none of these brands advertisement was run for this brand on the National
could capture a ‘decent’ market share. It is noteworthy here that Network of Doordarshan, featuring a popular star of Hindi
while Star (I) Ltd. and Sunrise (I) Ltd. bought bulk of their tea films.
at the auctions, Seth Tea Co. has a captive source of supply from 5. Generate the Much Needed Dealer’s Push for the Brand.
the number of tea gardens owned by it. As mentioned in the previous section, branded tea always
In late 1986, Seth Tea Co. launched a new brand in the market lacked in terms of dealers push as compared to loose tea due
called ‘Seth Tea’. Initially the tea to the lower margin offered to the trade members by the
sold under this brand name was a leaf blend, which the marketers of major packet brands. Seth Tea Cc as shown in
company claimed was of the Assam ere variety, being straight Exhibit-4, decided to offer the retailers a margin of around
from its gardens in Assam. Subsequently a dust blend was also 10% on the company’s basi price as compared to the 5%
launched in the areas which are primarily ‘dust’ consuming. In margin offered on the traditional packets. “It was hoped that
March.1988 the company launched another blend of CTC leaf this margin which compares favourably with the margin on
tea, which is cheaper than the Assam CTC blend introduced by loose tea, will generate the much needed dealer’s push for the
the company earlier. This cheaper blend of ere leaf was launched brand.
primarily in the dust belts, with an eye to capture the Since its introduction, this brand has notched up impressive
market from ‘Red Star’ which has been the most popular brand, sales, in practically all areas of the country. Current estimates are
amongst the leaf blends in these that nationally it has acquired approximately 15% market share
in the packet tea segment. However in some states like
areas.
Maharashtra its gains are much more impres-sive.
The salient points of the marketing strategy adopted by Seth
Tea Co. vis-a-vis their new brand may be summarized as: Research Problem
As mentioned earlier that the tea market in India is virtually
1. 0ffer Value for Money to the Consumer All these blends of
saturated, a substantial gain in market share by any player in the
‘Seth Tea’ were priced at around Rs. 4O/kg. Thus these were
market means a concomitant loss to one or more of its
priced at the lower end of the packaged segment. The quality
competitors. Thus the gains of Seth Tea Co. in the market must
offered for this price was however claimed, to be better than
have been at the cost of some other packaged tea producer or
other packaged brands of comparable price. Thus it was
the same may have one from the loose tea segment. However,
hoped that consumers of more expensive brands of tea will
as yet thc source of gain for ‘Seth Tea’ is not clear. Thus the
switch over to the new brand. Simultaneously it was
change in market structure/shares that is occurring with the
expected that the low price of the brand will upgrade some
emergence of this new segment namely that of ‘pouches’ is also
loose tea consumers to packaged brands. The prices of
not known with certainty.
various blends of ‘Seth Tea’ along with ‘Red Star’, ‘Emerald’
and ‘Fine Dust’ which are dominant brands in their All these changes occurring in the market, warranted” reappraisal
respective segments are shown in Exhibit-7 of the strategies of the existing firms. However, any such action
must be preceded by a thorough study of the market.
2. Effective Positioning. The available tea brands in the market
have been positioned using diverse strategies. The marketing Consequently to start with Sunrise (India) Ltd. decided to study
strategy for ‘Seth Tea’ utilized benefit positioning for the the tea market in Maharashtra. Some characteristics of the
brand, with the claim that it offers the freshness of garden Maharashtra tea market is given in Exhibit-9. The study should
packed tea. It may be hypothesized in view of the success of cover in detail the following aspects:
the brand that, ‘freshness’ is one of the most -sought for 1. Estimate the market size of tea by segments.
attributes in tea. The various attributes/images that are 2. Do “a gain/loss analysis to determine the source of growth
sought in any tea by the consumers are shown in Exhibit-8. of ‘Seth Tea’ pouches.
3. Innovative Packaging. Till the introduction of this brand, 3. Study the Distribution System-(Width and Depth) of ‘Seth
tea has traditionally been packed in cardboard carton or Tea’ and their mode of
aluminium foil packs. ‘Seth Tea’ unlike any other till then operation.
was launched in plastic pouches. The customer perceptions
about plastic pouches, as a packaging medium is still not 4. Ascertain Pricing/Margin strategies of ‘Seth Tea’, and its
known for certain. However there is a perception in the impact on its rapid growth.
market, that customers consider plastic packaging as being 5. Ascertain support to ‘Seth Tea’ through various media.
cheaper than the conventional forms of packaging. Thus for 6. Ascertain perception/consumer responses of current users of
the same price the consumer may perceive the tea in a plastic pouch vis-a-vis loose/pack-et tea in the following cities in
pouch to be of a superior quality than that in a conventional Maharashtra; namely Bombay; Sholapur and Nagpur.
packet. These perceptions should pertain to both ‘product attributes
4. Save on Promotion Costs Through Use of Family Brand and packaging.
Name. The different blends were all marketed under the 7. Study the ‘Hot Tea Shop’ trade, in terms of its consumption
brand name of ‘Seth Tea’. This implied substantial reduction pattern and the effect of ‘ Seth
in promotional costs, as the company. did not need to

© Copy Right: Rai University


11.556 257
Tea’ on the same.
Keeping in view the background information given in the case, decide on the following:
1. Research Design
2. Key information needed
3. Sources of Information
4. Questionnaire Design
5. Plan of Action

Exhibit-1
Principal Categories of Tea Blend

Leaf Segment            Dust Segment
Orthodox Leaf           Premium Dust
Premium C.T.C. Leaf     Popular Dust
Popular C.T.C. Leaf     Discount Dust

Exhibit-2
(Flow diagram, not reproduced: the dashed lines indicate the flow of loose tea and the solid …)

Exhibit-3
Loose vs. Packet Tea Shares (chart, not reproduced: position in the pre-pouch period)

Margin Structure in Tea Trade

Sl. No.   Channel Intermediary                      Margin (%)
1         Broker                                    1.5-2.0
2         Loose Tea Trader                          3.0-6.0
3         Secondary Wholesaler (Loose Tea)          3.0-4.0
4         Loose Tea Retailer                        15.0-20.0
5         Redistribution Stockists (Packet Tea)     2.5-3.5 (5.0)
6         Packet Tea Retailer                       5.0-6.0 (10.0)

Note: The figures within brackets indicate the corresponding margins offered by Seth Tea Co. on 'Seth Tea'.
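To see how the individual margins in the table combine along the channel, the short Python sketch below chains them as successive mark-ups on the price each intermediary pays. Only the margin ranges come from the table; the ex-auction price and the particular value chosen within each range are assumptions made purely for illustration.

    # Minimal sketch, assuming each intermediary marks up the price it pays.
    # Only the margin ranges come from the table above; the auction price and
    # the specific values picked within each range are hypothetical.
    auction_price = 30.0        # Rs/kg, assumed ex-auction price of loose tea
    channel_margins = [
        0.02,                   # broker commission (1.5-2.0%)
        0.05,                   # loose tea trader (3.0-6.0%)
        0.04,                   # secondary wholesaler, loose tea (3.0-4.0%)
        0.18,                   # loose tea retailer (15.0-20.0%)
    ]

    price = auction_price
    for margin in channel_margins:
        price *= 1 + margin     # each stage adds its margin to the landed cost

    print(f"Indicative consumer price of loose tea: Rs. {price:.2f}/kg")
    # roughly Rs. 39/kg, in line with the Rs. 38-42/kg band cited later in the case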
Market Structure

Segment          Pre 'Seth Tea' Launch    Post 'Seth Tea' Launch
                 Period 1986 (T/Wk)       Period 1988 (T/Wk)
1. Packet Tea    364.0                    330.0
2. Pouch Tea     -                        90.0
3. Loose Tea     612.0                    600.0
Total            976.0                    1020.0

Note: An annual growth rate of 2% has been assumed for the state of Maharashtra, this being the growth rate of population for the state.

Segment Wise Analysis

1. Leaf vs. Dust Segments: Maharashtra is predominantly a dust tea market. However, the cities of Bombay and Pune, and to a great extent Nagpur and Nasik, constitute a sizeable leaf tea market. It is estimated that even in the upcountry towns, on average 5-6% of the packet tea sales are of leaf blends, with Star India's 'Red Star' being the major brand. 'Seth Tea Ltd.' had initially introduced two blends, namely 'CTC DUST' and 'ASSAM CTC LEAF', in the dust and leaf consuming areas respectively. However, since March 1988 another leaf blend, namely 'CTC LEAF', has been introduced into the market. Most of the leaf tea sales in the state are of CTC varieties. The segment wise market structure is as follows:

Segment             Segment Size (T/Wk)
Leaf Tea Segment    180.0
Dust Tea Segment    840.0
Total               1020.0

Segment Wise Market Structure of the Leaf Tea Segment

Town                Popular Leaf   Premium Leaf   Seth Leaf   Total Pkt. Leaf   Loose Leaf
                    (T/Wk)         (T/Wk)         (T/Wk)      (T/Wk)            (T/Wk)
Bombay              45.0           15.0           8.0         68.0              63.0
Pune                5.2            1.2            0.9         7.3               16.0
Nagpur (11 Towns)   5.4            1.0            0.8         7.2               15.0
Nasik               2.2            1.0            0.7         3.9               3.1
Sholapur & Others   2.2            0.8            0.6         3.6               6.4
Total               60.0           19.0           11.0        90.0              90.0

Segment Wise Market Structure of the Dust Segment

The following analysis is for the towns which comprised the sample for the survey. The results from this analysis have been used to project the segment sizes for the whole of Maharashtra.

Town                   Popular Dust   Discount Dust   Seth Dust   Total Pkt. Dust   Loose Dust   Total Dust
                       (T/Wk)         (T/Wk)          (T/Wk)      (T/Wk)            (T/Wk)       (T/Wk)
Bombay                 2.25           0.55            -           2.8               57.2         60.0
Pune                   0.45           0.75            0.1         1.2               11.0         12.2
Nagpur (11 Towns)      9.4            1.6             5.2         16.2              27.5         43.7
Nasik                  3.8            2.3             1.0         7.1               7.0          14.1
Sholapur (5 Towns)     8.1            2.5             3.1         13.7              5.3          19.0
Total                  24.0           7.6             9.4         41.0              108.0        149.0
State of Maharashtra   191.0          60.0            79.0        330.0             510.0        840.0

                     Pre 'Seth Tea' Launch Period 1986 (T/Wk)      Post 'Seth Tea' Launch Period 1988 (T/Wk)
Town                 Pkts    Pouch   Loose   Total                 Pkts    Pouch   Loose   Total
Bombay               73.0    -       118.0   191.0                 72.0    8.0     121.0   201.0
Pune                 10.0    -       25.0    35.0                  9.0     1.0     27.0    37.0
Nagpur (11 Towns)    20.0    -       32.0    52.0                  18.0    6.0     31.0    55.0
Nasik                5.0     -       15.0    19.0                  4.3     1.7     14.0    20.0
Sholapur (5 Towns)   11.0    -       10.0    21.0                  9.7     3.3     9.0     22.0
Total                118.0   -       200.0   318.0                 113.0   20.0    202.0   335.0

Hence, for the sample towns:
Gain from the packaged tea segment = 11 Tons (55%)
Gain from the loose tea segment = 9 Tons (45%)

Projecting for the whole of Maharashtra:
Total market size for the 'Seth Tea' pouch = 90 Tons/Week
Gain from the packaged tea segment = 50 Tons/Week
Gain from the loose tea segment = 40 Tons/Week
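The projection above applies the 55/45 gain split observed in the sample towns to the state-wide pouch volume of 90 T/Wk. A minimal Python sketch of that arithmetic is given below; the final comparison against population-based growth is one reading of how the 11-ton and 9-ton figures could be derived from the table, not a calculation the case spells out.

    # Minimal sketch of the gain/loss projection, assuming the 55/45 split
    # observed in the sample towns holds for the whole of Maharashtra.
    gain_from_packet = 11.0          # T/Wk lost by packet tea in the sample towns
    gain_from_loose = 9.0            # T/Wk lost by loose tea in the sample towns
    state_pouch_volume = 90.0        # T/Wk, 'Seth Tea' pouch sales state-wide (1988)

    total_gain = gain_from_packet + gain_from_loose
    packet_share = gain_from_packet / total_gain           # 0.55
    loose_share = gain_from_loose / total_gain             # 0.45

    print(f"Gain from packaged tea: {packet_share * state_pouch_volume:.0f} T/Wk")  # ~50
    print(f"Gain from loose tea:    {loose_share * state_pouch_volume:.0f} T/Wk")   # ~40

    # The sample-town losses themselves would be judged against what each segment
    # should have sold had it merely grown with population (2% p.a. for two years):
    expected_packet_1988 = 118.0 * 1.02 ** 2                # about 122.8 T/Wk
    expected_loose_1988 = 200.0 * 1.02 ** 2                 # about 208.1 T/Wk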
'Hot Tea Shop' Trade

No. of    Total No.   Sample   Total Consumption   Sun      Star     Seth     Loose
Towns     of HTS      Size     per Week            (T/Wk)   (T/Wk)   (T/Wk)   (T/Wk)
17        16,600      130      62 tons             3.0      3.85     2.85     52.75

In these 17 markets, the market size is:
Total market size = 326.0 Tons/Week
Consumer segment = 264.0 Tons/Week (80%)
HTS trade = 62.0 Tons/Week (20%)

Projecting for the whole of Maharashtra:
Total 'HTS' segment size = 200 Tons/Week
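The case states the projected 'HTS' segment size but not the projection step itself. One plausible reading, sketched below in Python, is that the HTS share of consumption in the 17 surveyed markets is applied to the state-wide tea volume; treat the scaling method as an assumption rather than the case's own procedure.

    # Minimal sketch, assuming the HTS share observed in the 17 surveyed markets
    # (62 of 326 T/Wk, i.e. about 19%) applies to the state-wide volume.
    hts_in_sample = 62.0        # T/Wk sold through hot tea shops in the 17 markets
    sample_total = 326.0        # T/Wk, all tea sold in the 17 markets
    state_total = 1020.0        # T/Wk, all tea sold in Maharashtra (1988)

    hts_share = hts_in_sample / sample_total
    projected_hts = hts_share * state_total
    print(f"Projected HTS segment: about {projected_hts:.0f} T/Wk")
    # about 194 T/Wk, of the same order as the roughly 200 T/Wk stated above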
Loose Tea Market

The loose tea market can be divided into two categories based on the size/population of the town.

1. Those in cities with a population of 3 lakhs and above. In these markets the share of the loose tea segment usually varies from 60-70% of the total tea market. The retail price of loose tea varies from Rs. 30/kg to Rs. 44/kg; however, the bulk of the sales are of varieties priced around Rs. 38/kg to Rs. 42/kg. The dealer's margin varies from 15% to 20%, and hence the product enjoys a considerable amount of dealer's push. This explains to a great extent the large market share enjoyed by this segment in these markets. The big dealers in these towns buy the tea directly from the Calcutta/Cochin/Coonoor auctions. The broker's commission is 1.5% and the wholesaler's commission is 6%. Most of the sales of loose tea in Maharashtra are of the dust variety, and the 'HTS' trade is totally dominated by this segment.
2. Those in towns with a population of less than 3 lakhs. In these towns the loose tea trade typically occupies a smaller share of the tea market, the usual market shares being of the order of 10% to 30%. The reasons for this low market share of the loose tea segment may be attributed to:
• Lack of proper supply of loose tea. The tea has to be procured by the 'Kirana Merchants' of these towns from the nearest big city. Consequently the cartage and other associated expenses have to be borne by the concerned merchant in the small town. Compared to this, there is a regular supply of most of the packaged brands, thus saving the dealer all the trouble of transporting the product himself.
• The tea available in these markets is of a cheaper variety, selling for around Rs. 28/kg to Rs. 32/kg. Thus the lower price, combined with higher costs, yields a margin to the dealer of the order of 4% to 5%, which is comparable to his margin on most of the packaged brands.
The above two factors have to a great extent mitigated the dealer's push for loose tea in these markets.

Case Study No. 7
Personal Computer

1. Introduction
The computer industry is the fastest growing industry in the world, with an estimated size of $160 billion in 1989. This ubiquitous machine is revolutionising many fields of human activity and proving itself indispensable to the burgeoning population of users all over the world. The range of computers available in the market extends from mighty supercomputers like the Cray XMP-24 to small personal computers.
The Indian PC industry came into its own in 1984 with the launch of the Neptune PC, an IBM PC compatible, by Minicomp. Since then it has been a story of phenomenal growth for the industry. Sensing the tremendous opportunities available, many new companies entered the market. The PC market is now characterised by cut-throat competition, although the core products of most of the companies are similar, if not identical. The USP of the companies is understandably after-sales support, low price, etc. Although the Indian market is not growing as fast as its western counterpart, it is certainly rising compared to other products. In the personal computer business the marketing function is assuming greater importance due to the dynamic, competitive nature of the industry and the high rate of technological obsolescence.

2. Market Size and Trend
In value terms, the PC market was worth Rs. 400 crores in 1988-89, up from Rs. 230 crores in 1987-88 and Rs. 108 crores in 1986-87. Still, the installed base of computers is disproportionately small for an economy of India's size. In volume terms, 26,000 machines were sold in 1987-88. A growth rate of 50% is envisaged till 1995. The growth in value and volume terms of the PC market over the years is shown in Exhibits 1 and 2. Personal computer sales in the four geographic regions of India show that the market is not uniformly distributed throughout the country: sales were Rs. 87.2 crores in the west zone, while the east zone managed only Rs. 34.69 crores; the market size of North India was Rs. 55.32 crores, and South India accounted for Rs. 52.79 crores worth of sales in 1987-88.
The break-up of usage of the various types of personal computers marketed in India is as follows:
PC (with 2 floppy drives): 15%
PC-XT (with hard disk drive): 40%
PC-AT (with a 286 chip): 40%
PC-386 (the latest variety of powerful PCs, including minicomputers): 5%
Industry experts are of the opinion that the new variety of very powerful PCs, with more than 100 times the memory capacity of the PC-386, will cover 15% of the PC market, with the percentage shares of the PC-XT and AT coming down to 30% each in the next 3 years.
The main user groups of personal computers are the corporate sector, small business, educational/scientific institutions and software houses.
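The case reports 26,000 machines sold in 1987-88 and a growth rate of 50% envisaged till 1995. Read as year-on-year compound growth (an interpretation, since the case does not say how the 50% is to be applied), this implies unit volumes of roughly the following order:

    # Minimal sketch, assuming the envisaged 50% growth is compounded annually
    # from the 26,000 machines reported for 1987-88.
    units = 26_000
    growth_rate = 0.50

    for year in range(1988, 1995):
        units *= 1 + growth_rate
        print(f"{year}-{str(year + 1)[-2:]}: about {units:,.0f} machines")
    # ends at roughly 4.4 lakh machines in 1994-95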
3. Demand Scenario
The demand for PCs has gone up due to the fall in prices. This was made possible when the government decided to liberalise its import policies. The new computer policy (NCP) provided a fillip to the industry. The underlying factors affecting demand are termed as follows.

a. Accelerating Elements
Demand for PCs is expected to maintain this upward trend for many reasons. These are:
1. With the fall in prices, they are becoming more affordable, especially to institutions and some individuals. Although a PC (priced at Rs. 28,000) is not yet within the reach of the common man, people belonging to the upper income segment have, however, started installing PCs in their homes as a means of entertainment for their grown-up children.
2. Rapid computerisation in banking and in service sectors like the Railways, Airlines, Telephones and TV network programmes, along with NICNET, has boosted the demand for PCs. It has greatly improved the efficiency of operations, and people at large have appreciated its value.
3. User-friendly software is increasingly available, and this has greatly facilitated many to install a PC.
4. Improvements in complementary products like peripherals (e.g. better printers, more memory space, etc.) have enhanced the benefits possible from the PC.
5. Better after-sales service is being offered. Indian firms are tying up with foreign companies and making use of their superior technology and established brand names, both of which increase the effectiveness of the product to the customer.
6. Increasing awareness of the potential of the product has also increased its demand.

b. Inhibiting Factors
The demand has been somewhat inhibited due to:
1. High rate of technological obsolescence - the customer feels the machine may become outdated soon.
2. Lack of awareness - Indian customers at large still do not know the capabilities of the product.
3. High price - The existing price of a PC generally deters many small organisations and individual customers from adopting it.
4. The various fears expressed in association with large-scale computerisation, especially the fear of further unemployment and the dehumanisation of life, have also inhibited the growth of demand.
The competitive scenario of the Indian Personal Computer
industry in 1988-89 is briefly projected in Exhibits 3.1, 3.2
and 3.3.
Q. Given this background, outline a marketing research study
among the manufacturers of personal computers to examine
the following issues for each of the companies.
a. What is its position in the PC market?
b. What are its comparative strengths and weaknesses vis-a-vis the major competitors?
c. Who are its target customers?
d. What promotional schemes does it adopt?
e. Its perception of buyers' needs.
f. Its views on the growth potential of different customer groups.
g. The distribution method(s) adopted, with specific emphasis on retailers and after-sales service arrangements.

References

1. Research for Marketing Decisions, Paul E. Green and Donald S. Tull, Englewood Cliffs, NJ: Prentice Hall Inc., 4th Edition, 1976.
2. Handbook of Marketing Research, Robert Ferber (Ed.), New York: McGraw Hill, 1974.
3. Marketing Research: An Applied Approach, Thomas C. Kinnear and James R. Taylor, Singapore: McGraw Hill, 1983.
4. Marketing Research: Text and Cases, H.W. Boyd, Ralph Westfall and S.F. Stasch, Homewood, Illinois: Richard D. Irwin Inc., 1977 (4th Edition).
5. Marketing Research: Methodological Foundations, G.A. Churchill (Jr.), Hinsdale, Illinois: Dryden Press, 1976.
6. Marketing Research: Applications and Problems, Arun K. Jain, Christian Pinson and Brian T. Ratchford, New York: Wiley, 1982.
7. Marketing Research: Information Systems and Decision Making, Bertram Schoner and K.P. Uhl, New York: Wiley, 1975 (2nd Edition).
8. Cases in Marketing Research, W.E. Wentz, New York: Harper & Row, 1975.
9. Marketing Research: Meaning, Measurement and Method, D.S. Tull and D.I. Hawkins, New York: Macmillan, 1976.
10. Marketing Research: Fundamentals and Dynamics, Gerald Zaltman and P.C. Burger, Hinsdale, Illinois: Dryden Press, 1975.

