

Qno1). Solution:
What is good business research?
Good research is carefully planned and conducted, resulting in dependable data that managers can use to reduce decision-making risk. It follows the standards of the scientific method: it is systematic, clearly defined, well planned, and based on empirical procedures. The following are the attributes of GOOD BUSINESS RESEARCH:

1. The PURPOSE is clearly defined. The problem should be clearly stated, indicating its scope and limitations, and the actual meaning of the terms involved should be explained.

2. The research PROCESS should be explained in detail. Significant procedural details should be described so that another researcher can repeat the research. Each step, such as acquiring participants, sampling methods, and gathering procedures, should be revealed. If this information is omitted, it is difficult to estimate the reliability and validity of the data and of the research overall.

3. The research DESIGN should be thoroughly planned. The procedural design should be planned to yield results that are as objective as possible. Where possible, methods such as opinion surveys should be supplemented with more reliable methods such as direct observation and data from documented sources. Every effort should be made to minimize the influence of personal bias during data collection and recording.

4. LIMITATIONS of the research should be revealed. The researcher should report flaws in the procedures and methods used and how they might affect the data and findings. This is important because some imperfections in the research design and conduct may invalidate the results completely.

5. The data should be ADEQUATELY analyzed. The data should be classified and analyzed in a way that helps the researcher reach valuable conclusions.

6. FINDINGS should be presented unambiguously. The report should state the findings clearly and with maximum objectivity. The presentation of the data should be clear and precise so that it can be reasonably interpreted and easily understood by the decision maker, and organized so that the manager can locate the crucial data.

7. All CONCLUSIONS must be justified. Only those conclusions for which the data provide a solid basis should be included in the report. The researcher should avoid the mistake of broadening that basis with personal experience, and should not draw universal conclusions from a study that uses a limited population sample.

8. Clearly Define Your Objectives


One characteristic of good business research is clearly defining your objectives. You have to know in advance what information you want to procure. For example, you might want to see how satisfied customers are with various product features and with current prices of your products. Additionally, you might want to determine how your customers rate your customer service department on variables such as professionalism, timeliness and accuracy. Another objective might be to determine why 10 percent of your customers are returning merchandise to the store. Whatever the case, outline your objectives, then write a questionnaire that will address those objectives.

9. Choose the Right Methodology


Your methodology is the method you use to obtain your business research data. Business research methodologies include phone, mail and Internet surveys as well as personal interviews. Mall intercept studies, where researchers talk to consumers in stores, are another type of methodology. A phone survey might be the right methodology if you want to gather information quickly; however, phone respondents can hang up on you before the survey is completed. Consider using personal interviews for longer questionnaires. One advantage of personal interviews is that the respondent is generally more attentive, according to Via-Interactive, a business advisory company. Internet surveys might work best if you run an online business. Mail surveys can take a considerable amount of time, but you can generally collect more information by mail. Paying people a small incentive can improve mail-survey response. People might be more open about providing demographic information such as race and income by mail, especially if they remain anonymous.

10. Use Both Open- and Closed-Ended Responses


It is best to use a combination of open- and closed-ended responses in your questionnaire. Closed-ended responses are multiple-choice in nature: respondents can answer "yes" or "no," or state that they are "very satisfied" or "very dissatisfied." Closed-ended responses should make up the bulk of your questionnaire. Open-ended responses are "fill-in-the-blank" responses that allow your respondents to explain why they answered a question a certain way. An example of an open-ended response is the question "Why?" For example, ask a customer "Why do you feel that way?" when they respond negatively to a survey question.


11. Do Ongoing Research


Business research should be conducted on a regular basis rather than just periodically. Ongoing surveys can prevent certain data skews that occur with periodic surveys. For example, a soft drink manufacturer's customers might drink fewer soft drinks in the winter than in the summer. The company will likely get a better read on customer usage if it surveys customers year-round, so that usage information can be averaged across the year. Ultimately, the research should help managers select more effective, less risky and more profitable alternatives.

12. What are the dimensions of a MANAGER?


1. Lodging managers
2. Property, real estate, and community association managers
3. Financial managers
4. Administrative services managers
5. Construction managers
6. Food service managers
7. Purchasing managers, buyers, and purchasing agents
8. Industrial production managers
9. Hotels and other accommodations
10. Sales worker supervisors
11. Occupational Information Network coverage
12. Home appliance repairers
13. Clothing, accessory, and general merchandise stores
14. State and local information
15. Advertising sales agents
16. The Career Guide to Industries
17. Construction
18. Medical assistants
19. Security guards and gaming surveillance officers
20. Top executives

Qno2a). Solution:
The business research process entails learning everything possible about a company's customers, competitors and industry. The major objectives of the process are determining what products or services to offer, which customers are most likely to buy them, where to sell them, and how to price and promote them. The various steps in the business research process help a company achieve these objectives. Until the sixteenth century, human inquiry was primarily based on introspection: the way to know things was to turn inward and use logic to seek the truth. This paradigm had endured for a millennium and was a well-established conceptual framework for understanding the world, in which the seeker of knowledge was an integral part of the inquiry process.

Identifying Competitors
The first step is identifying key competitors in the industry. One way to garner information on the competition is through secondary research. Secondary research information is data that are already available about the industry, such as market share and total market sales. Secondary research may also provide detailed information about competitors, such as the number of employees, the products they sell and their key strengths. Secondary research can be obtained through various sources, depending on the industry. For example, the NPD Group uses its CREST analysis for restaurants, and Nielsen provides data about consumer packaged goods.

Studying Customers
The process continues with a study of the consumer or business customer. It is important to determine what the customer wants and needs before developing products to meet those needs. The consumer will usually dictate which products will sell; if consumers' needs are not met, they will usually buy competing products. The best way to determine customer needs is through primary research, which includes phone surveys, personal interviews and even mail surveys. With these surveys, marketing research professionals will often test certain product concepts, measure customer satisfaction and determine the best features and prices for their products.

SWOT Analysis
Once detailed information on customers and the competition has been garnered, a SWOT analysis can be used to study the company's strengths, weaknesses, opportunities and threats. A strength may be the company's market share or a good reputation among customers. A weakness may be inexperienced management. Additionally, a company may have an opportunity to purchase another company. Threats may include new government regulation in the industry or a well-financed new competitor. A company uses the SWOT analysis to exploit its strengths via available opportunities. For example, a company with strong financial backing could purchase another company to increase its distribution and market share. A business can also minimize its weaknesses against potential threats, for example by hiring more experienced marketing people to deal with an increase in competition.

Studying the Target Audience


At least part of the business research process should be devoted to studying a company's target audience: the customers who are most likely to purchase the company's products. For example, a small radio station's primary target audience may be white professional women between the ages of 35 and 54. A company can determine its target audience through primary research.

Application

The Basic Steps of the Marketing Research Process


The steps taken during the business research process are effective only if the company uses them to develop marketing strategies. Also, business research is a constant endeavor. Technologies change, as do customer tastes. Therefore, it is important to conduct business research throughout the year.

Problem Recognition & Definition


All research begins with a question. Intellectual curiosity is often the foundation for scholarly inquiry. Some questions are not testable. The classic philosophical example is to ask, "How many angels can dance on the head of a pin?" While the question might elicit profound and thoughtful revelations, it clearly cannot be tested with an empirical experiment. Prior to Descartes, this is precisely the kind of question that would engage the minds of learned men; their answers came from within. The modern scientific method precludes asking questions that cannot be empirically tested. If the angels cannot be observed or detected, the question is considered inappropriate for scholarly research. A paradigm is maintained as much by the process of formulating questions as by the answers to those questions. By excluding certain types of questions, we limit the scope of our thinking. It is interesting to note, however, that modern physicists have begun to ask the same kinds of questions posed by the Eastern philosophers: "Does a tree falling in the forest make a sound if nobody is there to hear it?" This seemingly trivial question is at the heart of the observer/observed dichotomy, and quantum mechanics predicts that it cannot be answered with complete certainty. It is the beginning of a new paradigm. Defining the goals and objectives of a research project is one of the most important steps in the research process. Clearly stated goals keep a research project focused. The process of goal definition

usually begins by writing down the broad and general goals of the study. As the process continues, the goals become more clearly defined and the research issues are narrowed. Exploratory research (e.g., literature reviews, talking to people, and focus groups) goes hand-in-hand with the goal clarification process. The literature review is especially important because it obviates the need to reinvent the wheel for every new research question. More importantly, it gives researchers the opportunity to build on each other's work.

The research question itself can be stated as a hypothesis. A hypothesis is simply the investigator's belief about a problem. Typically, a researcher formulates an opinion during the literature review process. The process of reviewing other scholars' work often clarifies the theoretical issues associated with the research question. It can also help to elucidate the significance of the issues to the research community. The hypothesis is converted into a null hypothesis in order to make it testable: "The only way to test a hypothesis is to eliminate alternatives of the hypothesis" (Anderson, 1966, p. 9). Statistical techniques enable us to reject a null hypothesis, but they do not provide a way to accept a hypothesis. Therefore, all hypothesis testing is indirect.
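The indirect logic of null-hypothesis testing can be sketched in a few lines of Python. The satisfaction scores, the two-group comparison, and the rough critical value of 2 are illustrative assumptions, not data from the text:

```python
# Sketch: indirect hypothesis testing via a null hypothesis.
# Hypothetical data: satisfaction scores from two customer groups.
# H0: the two groups have the same mean satisfaction.
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference between two sample means."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

group_a = [7, 8, 6, 9, 7, 8, 7, 8]   # hypothetical scores, group A
group_b = [4, 5, 3, 5, 4, 6, 4, 5]   # hypothetical scores, group B

t = welch_t(group_a, group_b)
# A |t| well beyond ~2 lets us reject H0 at roughly the 5% level;
# failing to reject never "proves" H0 -- the test is indirect.
print(f"t = {t:.2f}, reject H0: {abs(t) > 2.0}")
```

Note that the code only ever rejects (or fails to reject) the null hypothesis; it never accepts the research hypothesis directly, which mirrors the indirectness described above.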

Creating the Research Design


Defining a research problem provides a format for further investigation. A well-defined problem points to a method of investigation. There is no one best method of research for all situations; rather, there is a wide variety of techniques for the researcher to choose from. Often, the selection of a technique involves a series of trade-offs. For example, there is often a trade-off between cost and the quality of information obtained. Time constraints sometimes force a trade-off with the overall research design, and budget and time constraints must always be considered as part of the design process (Walonick, 1993).

Many authors have categorized research design as either descriptive or causal. Descriptive studies are meant to answer the questions of who, what, where, when and how. Causal studies are undertaken to determine how one variable affects another. McDaniel and Gates (1991) state that the two characteristics that define causality are temporal sequence and concomitant variation. The word "causal" may be a misnomer: the mere existence of a temporal relationship between two variables does not prove, or even imply, that A causes B. It is never possible to prove causality. At best, we can theorize about causality based on the relationship between two or more variables, but this is prone to misinterpretation. Personal bias can lead to totally erroneous statements. For example, Black respondents often score lower on I.Q. tests than their White counterparts, but it would be irresponsible to conclude that ethnicity causes high or low I.Q. scores. In social science research, making false assumptions about causality can delude the researcher into ignoring other (more important) variables.

There are three basic methods of research: 1) survey, 2) observation, and 3) experiment (McDaniel and Gates, 1991). Each method has its advantages and disadvantages.

The survey is the most common method of gathering information in the social sciences. It can be a face-to-face interview, telephone survey, or mail survey. A personal interview is one of the best methods of obtaining personal, detailed, or in-depth information. It usually involves a lengthy questionnaire that the interviewer fills out while asking questions. It allows for extensive probing by the interviewer and gives respondents the ability to elaborate on their answers. Telephone interviews are similar to face-to-face interviews. They are more efficient in terms of time and cost; however, they are limited in the amount of in-depth probing that can be accomplished and in the amount of time that can be allocated to the interview. A mail survey is generally the most cost-effective interview method. The researcher can obtain opinions, but trying to probe those opinions meaningfully is very difficult.

Observation research monitors respondents' actions without directly interacting with them. It has been used for many years by A.C. Nielsen to monitor television viewing habits. Psychologists often use one-way mirrors to study behavior, and social scientists often study societal and group behaviors by simply observing them. The fastest-growing form of observation research has been made possible by the bar code scanners at cash registers, where the purchasing habits of consumers can now be automatically monitored and summarized.

In an experiment, the investigator changes one or more variables over the course of the research. When all other variables are held constant (except the one being manipulated), changes in the dependent variable can be explained by the change in the independent variable. It is usually very difficult to control all the variables in the environment. Therefore, experiments are generally restricted to laboratory models where the investigator has more control over all the variables.

Sampling
It is incumbent on the researcher to clearly define the target population. There are no strict rules to follow, and the researcher must rely on logic and judgment. The population is defined in keeping with the objectives of the study. Sometimes, the entire population will be sufficiently small, and the researcher can include the entire population in the study. This type of research is called a census study because data is gathered on every member of the population. Usually, the population is too large for the researcher to attempt to survey all of its members. A small, but carefully chosen sample can be used to represent the population. The sample reflects the characteristics of the population from which it is drawn.

Sampling methods are classified as either probability or nonprobability. In probability samples, each member of the population has a known probability of being selected. Probability methods include random sampling, systematic sampling, and stratified sampling. In nonprobability sampling, members are selected from the population in some nonrandom manner. These methods include convenience sampling, judgment sampling, quota sampling, and snowball sampling. Another common form of nonprobability sampling occurs by accident, when the researcher inadvertently introduces nonrandomness into the sample selection process. The advantage of probability sampling is that sampling error can be calculated. Sampling error is the degree to which a sample might differ from the population; when inferring to the population, results are reported plus or minus the sampling error. In nonprobability sampling, the degree to which the sample differs from the population remains unknown (McDaniel and Gates, 1991).

Random sampling is the purest form of probability sampling: each member of the population has an equal chance of being selected. With very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased. Random sampling is frequently used to select a specified number of records from a computer file.

Systematic sampling is often used instead of random sampling. It is also called an Nth-name selection technique: after the required sample size has been calculated, every Nth record is selected from a list of population members. As long as the list does not contain any hidden order, this sampling method is as good as random sampling; its only advantage over the random sampling technique is simplicity.

Stratified sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error. A stratum is a subset of the population that shares at least one common characteristic. The researcher first identifies the relevant strata and their actual representation in the population. Random sampling is then used to select subjects from each stratum until the number of subjects in that stratum is proportional to its frequency in the population.

Convenience sampling is used in exploratory research where the researcher is interested in getting an inexpensive approximation of the truth. As the name implies, the sample is selected because it is convenient. This nonprobability method is often used during preliminary research efforts to get a gross estimate of the results without incurring the cost or time required to select a random sample.

Judgment sampling is a common nonprobability method in which the researcher selects the sample based on judgment, usually as an extension of convenience sampling. For example, a researcher may decide to draw the entire sample from one "representative" city, even though the population includes all cities. When using this method, the researcher must be confident that the chosen sample is truly representative of the entire population.

Quota sampling is the nonprobability equivalent of stratified sampling. Like stratified sampling, the researcher first identifies the strata and their proportions as they are represented in the population.

Then convenience or judgment sampling is used to select the required number of subjects from each stratum. This differs from stratified sampling, where the strata are filled by random sampling.

Snowball sampling is a special nonprobability method used when the desired sample characteristic is rare, and it may be extremely difficult or cost-prohibitive to locate respondents otherwise. Snowball sampling relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias, because the technique itself reduces the likelihood that the sample will represent a good cross-section of the population.
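The three probability methods described above can be sketched in Python. The population of numbered customer records, the sample size of 50, and the urban/rural strata are all hypothetical choices made for the illustration:

```python
# Sketch of three probability sampling methods over a hypothetical
# population of numbered customer records.
import random

population = list(range(1000))          # hypothetical sampling frame
random.seed(42)                         # reproducible illustration

# Random sampling: every member has an equal chance of selection.
simple = random.sample(population, 50)

# Systematic ("Nth-name") sampling: every Nth record after a random start.
n = len(population) // 50               # skip interval N = 20
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: sample within each stratum in proportion to its size.
strata = {"urban": list(range(0, 700)),     # 70% of the population
          "rural": list(range(700, 1000))}  # 30% of the population
stratified = []
for name, members in strata.items():
    share = round(50 * len(members) / len(population))
    stratified += random.sample(members, share)

print(len(simple), len(systematic), len(stratified))
```

All three produce samples of 50, but only the stratified sample is guaranteed to mirror the 70/30 urban/rural split of the population.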

Data Collection
There are very few hard and fast rules to define the task of data collection. Each research project uses a data collection technique appropriate to its particular research methodology. The two primary goals for both quantitative and qualitative studies are to maximize response and to maximize accuracy. When using an outside data collection service, researchers often validate the data collection process by contacting a percentage of the respondents to verify that they were actually interviewed. Data editing and cleaning is the process of checking for inadvertent errors in the data, which usually entails using a computer to check for out-of-bounds data.

Quantitative studies employ deductive logic: the researcher starts with a hypothesis and then collects data to confirm or refute it. Qualitative studies use inductive logic: the researcher first designs a study and then develops a hypothesis or theory to explain the results of the analysis. Quantitative analysis is generally fast and inexpensive. A wide assortment of statistical techniques is available to the researcher, and computer software is readily available to provide both basic and advanced multivariate analysis. The researcher simply follows the preplanned analysis process, without making subjective decisions about the data. For this reason, quantitative studies are usually easier to execute than qualitative studies. Qualitative studies nearly always involve in-person interviews and are therefore very labor-intensive and costly. They rely heavily on a researcher's ability to exclude personal biases. The interpretation of qualitative data is often highly subjective, and different researchers can reach different conclusions from the same data. However, the goal of qualitative research is to develop a hypothesis, not to test one. Qualitative studies have merit in that they provide broad, general theories that can be examined in future research.
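As a minimal illustration of the out-of-bounds checking mentioned above, the following Python sketch flags survey records whose values fall outside their valid ranges. The field names, records, and ranges are invented for the example:

```python
# Sketch: flagging out-of-bounds values during data editing and cleaning.
# Hypothetical survey records; field names and valid ranges are assumptions.
records = [
    {"id": 1, "age": 34, "satisfaction": 4},
    {"id": 2, "age": 199, "satisfaction": 5},   # age out of bounds
    {"id": 3, "age": 28, "satisfaction": 9},    # rating scale is 1-5
]
bounds = {"age": (18, 99), "satisfaction": (1, 5)}

def out_of_bounds(record):
    """Return the fields whose values fall outside their valid range."""
    return [field for field, (lo, hi) in bounds.items()
            if not lo <= record[field] <= hi]

errors = {r["id"]: out_of_bounds(r) for r in records if out_of_bounds(r)}
print(errors)  # records needing follow-up with the data collector
```

In practice the flagged records would be sent back for verification rather than silently corrected or dropped.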


Data Analysis
Modern computer software has made the analysis of quantitative data a very easy task. It is no longer incumbent on the researcher to know the formulas needed to calculate the desired statistics. However, this does not obviate the need for the researcher to understand the theoretical and conceptual foundations of the statistical techniques. Each statistical technique has its own assumptions and limitations. Considering the ease with which computers can calculate complex statistical problems, the danger is that the researcher might be unaware of the assumptions and limitations in the use and interpretation of a statistic.

Reporting the Results


The most important consideration in preparing any research report is the nature of the audience. The purpose is to communicate information, and therefore, the report should be prepared specifically for the readers of the report. Sometimes the format for the report will be defined for the researcher (e.g., a dissertation), while other times, the researcher will have complete latitude regarding the structure of the report. At a minimum, the report should contain an abstract, problem statement, methods section, results section, discussion of the results, and a list of references (Anderson, 1966).

Validity and Reliability


Validity refers to the accuracy or truthfulness of a measurement. Are we measuring what we think we are? "Validity itself is a simple concept, but the determination of the validity of a measure is elusive" (Spector, 1981, p. 14). Face validity is based solely on the judgment of the researcher. Each question is scrutinized and modified until the researcher is satisfied that it is an accurate measure of the desired construct. The determination of face validity is based on the subjective opinion of the researcher. Content validity is similar to face validity in that it relies on the judgment of the researcher. However, where face validity only evaluates the individual items on an instrument, content validity goes further in that it attempts to determine if an instrument provides adequate coverage of a topic. Expert opinions, literature searches, and pretest open-ended questions help to establish content validity. Criterion-related validity can be either predictive or concurrent. When a dependent/independent relationship has been established between two or more variables, criterion-related validity can be assessed. A mathematical model is developed to be able to predict the dependent variable from the independent variable(s). Predictive validity refers to the ability of an independent variable (or group of variables) to predict a future value of the dependent variable. Concurrent validity is concerned with the relationship between two or more variables at the same point in time.

Construct validity refers to the theoretical foundations underlying a particular scale or measurement. It looks at the underlying theories or constructs that explain a phenomenon. This is also quite subjective and depends heavily on the understanding, opinions, and biases of the researcher.

Reliability is synonymous with repeatability. A measurement that yields consistent results over time is said to be reliable. When a measurement is prone to random error, it lacks reliability. The reliability of an instrument places an upper limit on its validity (Spector, 1981): a measurement that lacks reliability will necessarily be invalid. There are three basic methods to test reliability: test-retest, equivalent form, and internal consistency.

A test-retest measure of reliability can be obtained by administering the same instrument to the same group of people at two different points in time. The degree to which both administrations are in agreement is a measure of the reliability of the instrument. This technique for assessing reliability suffers from two possible drawbacks. First, a person may have changed between the first and second measurement. Second, the initial administration of an instrument might in itself induce a person to answer differently on the second administration.

The second method of determining reliability is the equivalent-form technique. The researcher creates two different instruments designed to measure identical constructs. The degree of correlation between the instruments is a measure of equivalent-form reliability. The difficulty with this method is that it may be very difficult (and/or prohibitively expensive) to create a totally equivalent instrument.

The most popular methods of estimating reliability use measures of internal consistency. When an instrument includes a series of questions designed to examine the same construct, the questions can be arbitrarily split into two groups. The correlation between the two subsets of questions is called the split-half reliability. The problem is that this measure of reliability changes depending on how the questions are split. A better statistic, known as Cronbach's alpha (Cronbach, 1951), is based on the mean (absolute value) interitem correlation for all possible variable pairs. It provides a conservative estimate of reliability, and generally represents "the lower bound to the reliability of an unweighted scale of items" (Carmines and Zeller, p. 45). For dichotomous nominal data, the KR-20 (Kuder and Richardson, 1937) is used instead of Cronbach's alpha (McDaniel and Gates, 1991).
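The internal-consistency idea can be illustrated with a small Python sketch that computes Cronbach's alpha from its variance form for a hypothetical four-item scale (the respondent data are invented):

```python
# Sketch: Cronbach's alpha for internal-consistency reliability.
# Hypothetical data: five respondents answering a four-item scale that is
# meant to measure a single construct (scores on a 1-5 scale).
from statistics import variance

answers = [  # one row per respondent, one column per item
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
k = len(answers[0])                                   # number of items
item_vars = [variance(col) for col in zip(*answers)]  # per-item variance
total_var = variance([sum(row) for row in answers])   # variance of totals

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"alpha = {alpha:.2f}")
```

Values above roughly 0.7 are commonly treated as acceptable, though the threshold is a convention rather than a rule.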

Variability and Error


Most research is an attempt to understand and explain variability. When a measurement lacks variability, no statistical tests can be (or need be) performed. Variability refers to the dispersion of scores. Ideally, when a researcher finds differences between respondents, they are due to true differences on the variable being measured. However, the combination of systematic and random errors can dilute the accuracy of a measurement. Systematic error is introduced through a constant bias in a measurement. It

can usually be traced to a fault in the sampling procedure or in the design of a questionnaire. Random error does not occur in any consistent pattern, and it is not controllable by the researcher.
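A short simulation can illustrate the distinction: random error averages out over repeated measurements, while systematic error shifts every measurement by the same amount. The true value, bias, and noise level below are arbitrary assumptions:

```python
# Sketch: simulating systematic vs. random error around a true value.
import random

random.seed(1)
true_value = 50.0
bias = 3.0      # constant systematic error (e.g., a leading question)
noise_sd = 2.0  # random error, varying from measurement to measurement

measurements = [true_value + bias + random.gauss(0, noise_sd)
                for _ in range(1000)]
observed_mean = sum(measurements) / len(measurements)

# The random component largely cancels over 1000 measurements;
# the systematic component survives as a shift of the mean.
print(f"observed mean = {observed_mean:.2f} (true value = {true_value})")
```

No amount of extra data removes the bias; only fixing the sampling procedure or the questionnaire design can.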

Summary
Scientific research involves the formulation and testing of one or more hypotheses. A hypothesis cannot be proved directly, so a null hypothesis is established to give the researcher an indirect method of testing a theory. Sampling is necessary when the population is too large, or when the researcher is unable to investigate all members of the target group. Random and systematic sampling are the best methods because they guarantee that each member of the population has a known, non-zero chance of being selected. The mathematical reliability (repeatability) of a measurement, or group of measurements, can be calculated; however, validity can only be implied by the data and is not directly verifiable. Social science research is generally an attempt to explain or understand the variability in a group of people.

References:
Anderson, B. (1966). The Psychology Experiment: An Introduction to the Scientific Method. Belmont, CA: Wadsworth.
Carmines, E., and R. Zeller (1979). Reliability and Validity Assessment. Beverly Hills: Sage.
McDaniel, C., and R. Gates (1991). Contemporary Marketing Research. St. Paul, MN: West.
Spector, P. (1981). Research Design. Beverly Hills: Sage.
Walonick, D. (1993). StatPac Gold IV: Marketing Research and Survey Edition. Minneapolis, MN: StatPac, Inc.

Qno2b). Solution:
What is a research question?
A research question is a clear, focused, concise, complex and arguable question around which you center your research. You should ask a question about an issue that you are genuinely curious about.

Research Question:
A research question is the methodological point of departure of scholarly research in both the natural and social sciences. The research is then carried out to answer the question posed. At an undergraduate level, the answer to the research question is the thesis statement.


IMPORTANCE
The research question is one of the first methodological steps the investigator takes when undertaking research, and it must be accurately and clearly defined. Choosing a research question is the central element of both quantitative and qualitative research, and in some cases it may precede construction of the conceptual framework of the study. In all cases, it makes the theoretical assumptions in the framework more explicit; above all, it indicates what the researcher wants to know most, and first.

USES
The student or researcher then carries out the research necessary to answer the research question, whether this involves reading secondary sources over a few days for an undergraduate term paper or carrying out primary research over years for a major project. Once the research is complete and the researcher knows the (probable) answer to the research question, writing can begin. In term papers, the answer to the question is normally given in summary in the introduction in the form of a thesis statement.

TYPES AND PURPOSE


The Research Question serves two purposes: (1) it determines where and what kind of research the writer will be looking for and (2) it identifies the specific objectives the study or paper will address. Therefore, the writer must first identify the type of study (Qualitative, Quantitative, or Mixed) before the Research Question is developed. Qualitative Study: A Qualitative study seeks to learn why or how, so the writer's research must be directed at determining the why and how of the research topic. Therefore, when crafting a Research Question for a Qualitative study, the writer will need to ask a why or how question about the topic. For example: How did the company successfully market its new product? The sources needed for qualitative research typically include print and internet texts (written words), audio, and visual media. Here is Creswell's (2009) example of a script for a qualitative research central question: _________ (How or what) is the _________ (story for narrative research; meaning of the phenomenon for phenomenology; theory that explains the process of for grounded theory; culture-sharing pattern for ethnography; issue in the case for case study) of _________ (central phenomenon) for _________ (participants) at _________ (research site).

Quantitative Study: A Quantitative study seeks to learn what, where, or when, so the writer's research must be directed at determining the what, where, or when of the research topic. Therefore, when crafting a Research Question for a Quantitative study, the writer will need to ask a what, where, or when question about the topic. For example: Where should the company market its new product? Unlike a Qualitative study, a Quantitative study is a mathematical analysis of the research topic, so the writer's research will consist of numbers and statistics. Here is Creswell's (2009) example of a script for a quantitative research question: Does _________ (name the theory) explain the relationship between _________ (independent variable) and _________ (dependent variable), controlling for the effects of _________ (control variable)? Alternatively, a script for a quantitative null hypothesis might be as follows: There is no significant difference between _________ (the control and experimental groups on the independent variable) on _________ (dependent variable). Quantitative Studies also fall into two categories: (a) Correlational Studies and (b) Experimental Studies. A Quantitative-Correlational study is non-experimental, requiring the writer to research relationships without manipulating or randomly assigning the subjects of the research. The Research Question for a Quantitative-Correlational study may look like this: What is the relationship between long-distance commuting and eating disorders? A Quantitative-Experimental study is experimental in that it requires the writer to manipulate and randomly assign the subjects of the research. The Research Question for a Quantitative-Experimental study may look like this: Does the consumption of fast food lead to eating disorders?
Mixed Study: A Mixed study integrates both Qualitative and Quantitative studies, so the writer's research must be directed at determining the why or how and the what, where, or when of the research topic. Therefore, the writer will need to craft a Research Question for each study required for the assignment. Note: A typical study may be expected to have between 1 and 6 Research Questions. Once the writer has determined the type of study to be used and the specific objectives the paper will address, the writer must also consider whether the Research Question passes the "so what" test. The "so what" test means that the writer must construct evidence to convince the audience why the research is expected to add new or useful knowledge to the literature.
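The null-hypothesis script above ("There is no significant difference between the control and experimental groups...") can be made concrete with a small simulation. The sketch below uses hypothetical scores and a simple permutation test, which is one of several ways such a hypothesis can be tested; it is an illustration, not the method prescribed by the text.

```python
import random
from statistics import mean

# Hypothetical scores for a control group and an experimental group.
control = [52, 48, 50, 47, 53, 49]
experimental = [58, 61, 55, 60, 57, 59]

observed = mean(experimental) - mean(control)

# Permutation test: if the null hypothesis is true, the group labels are
# arbitrary, so we reshuffle them and count how often a difference at
# least as large as the observed one arises by chance.
pooled = control + experimental
rng = random.Random(0)
trials = 5000
extreme = 0
for _ in range(trials):
    rng.shuffle(pooled)
    diff = mean(pooled[6:]) - mean(pooled[:6])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value leads the researcher to reject the null hypothesis of no difference.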

Why is a research question essential to the research process?


Research questions help writers focus their research by providing a path through the research and writing process. The specificity of a well-developed research question helps writers avoid the "all-about" paper and work toward supporting a specific, arguable thesis.

Steps to developing a research question:


Choose an interesting general topic. Even directed academic research should focus on a topic in which the writer is at least somewhat personally invested. Writers should choose a broad topic about which they genuinely would like to know more. An example of a general topic might be "Slavery in the American South" or "Films of the 1930s." Do some preliminary research on your general topic. Do a few quick searches in current periodicals and journals on your topic to see what's already been done and to help you narrow your focus. What questions does this early research raise? Consider your audience. For most college papers, your audience will be academic, but always keep your audience in mind when narrowing your topic and developing your question. Would that particular audience be interested in this question? Start asking questions. Taking into consideration all of the above, start asking yourself open-ended how and why questions about your general topic. For example, "How did the slave trade evolve in the 1850s in the American South?" or "Why were slave narratives effective tools in working toward the abolishment of slavery?" Evaluate your question. Is your research question clear? With so much research available on any given topic, research questions must be as clear as possible in order to be effective in helping the writer direct his or her research.

Is your research question focused? Research questions must be specific enough to be well covered in the space available. (See the sample research questions below for examples of focused vs. unfocused questions.) Is your research question complex? Research questions should not be answerable with a simple yes or no or by easily found facts. They should, instead, require both research and analysis on the part of the writer. Hypothesize. After you've come up with a question, think about what path you think the answer will take. Where do you think your research will take you? What kind of argument are you hoping to make or support? What will it mean if your research disputes your planned argument?

Sample Research Questions

Unclear: Why are social networking sites harmful?

Clear: How are online users experiencing or addressing privacy issues on such social networking sites as MySpace and Facebook? The unclear version of this question doesn't specify which social networking sites or suggest what kind of harm the sites are causing. It also assumes that this harm is proven and/or accepted. The clearer version specifies sites (MySpace and Facebook), the type of harm (privacy issues), and who the issue is harming (users). A strong research question should never leave room for ambiguity or interpretation. Unfocused: What is the effect on the environment from global warming? Focused: How is glacial melting affecting penguins in Antarctica? The unfocused research question is so broad that it couldn't be adequately answered in a book-length piece, let alone a standard college-level paper. The focused version narrows down to a specific cause (glacial melting), a specific place (Antarctica), and a specific group that is affected (penguins). When in doubt, make a research question as narrow and focused as possible. Too simple: How are doctors addressing diabetes in the U.S.? Appropriately Complex: What are common traits of those suffering from diabetes in America, and how can these commonalities be used to aid the medical community in prevention of the disease? The simple version of this question can be looked up online and answered in a few factual sentences; it leaves no room for analysis. The more complex version is written in two parts; it is thought-provoking and requires both significant investigation and evaluation from the writer. As a general rule of thumb, if a quick Google search can answer a research question, it's likely not very effective.

Qno3). Solution: Various Descriptors to Classify the Research Design:


Definition of research design:
A research design is the blueprint for collecting, measuring, and analyzing data. It guides the allocation of limited resources by forcing choices such as:
Experiment, observation, interview, or simulation
Whether data collection should be structured or unstructured
Whether the sample size should be large or small
Quantitative or qualitative research

Degree of Question Crystallization

Exploratory Study:
Loose structure
Expand understanding
Provide insight
Develop hypotheses

Formal Study:
Precise procedures
Begins with hypotheses
Answers research questions

Approaches for Exploratory Investigations


Participant observation
Film, photographs
Projective techniques
Psychological testing
Case studies
Ethnography
Expert interviews
Document analysis
Proxemics and kinesics

Commonly Used Exploratory Techniques

Experience Surveys:
What is being done? What has been tried in the past with or without success? How have things changed? Who is involved in the decisions? What problem areas can be seen? Whom can we count on to assist or participate in the research?

Focus Groups:
Group discussion
6-10 participants
Moderator-led
90 minutes to 2 hours

Data Collection Method:


Monitoring
Communication

The Time Dimension


Cross-sectional
Longitudinal
Panel
Cohort

The Topical Scope


Statistical study: breadth; population inferences; quantitative; generalizable findings.
Case study: depth; detail; qualitative; multiple sources of information.

The Research Environment


Field conditions
Lab conditions
Simulations

Purpose of the Study

Descriptive Studies

Experimental Effects
Ex Post Facto Study
After-the-fact report on what happened to the measured variable

Experiment
Study involving the manipulation or control of one or more variables to determine the effect on another variable

Ex Post Facto Design:

Causation:
The basic element in causal research is that A "produces" B, or A "forces" B to happen.

Causation and Experimental Design:

Mill's Method of Agreement:

Mill's Method of Difference:

Example of a Descriptive Research:


A bank manager would like to profile the individuals who are behind on payments by more than 6 months. The profile will describe each individual in terms of age, income, type of job, and whether the person is employed full time or part time. This information will help the manager in making future decisions related to loans to the same individuals.

Descriptive studies provide:
Descriptions of population characteristics
Estimates of the frequency of characteristics
Discovery of associations among variables

Participants' Perceptional Awareness


No deviation perceived
Deviations perceived as unrelated
Deviations perceived as researcher-induced

Descriptors of Research Design

Qno4 (i) Solution: Qualitative vs. Quantitative Research


Objective / purpose
Qualitative: To gain an understanding of underlying reasons and motivations; to provide insights into the setting of a problem, generating ideas and/or hypotheses for later quantitative research; to uncover prevalent trends in thought and opinion.
Quantitative: To quantify data and generalize results from a sample to the population of interest; to measure the incidence of various views and opinions in a chosen sample. Sometimes followed by qualitative research which is used to explore some findings further.

Sample
Qualitative: Usually a small number of non-representative cases; respondents selected to fulfill a given quota.
Quantitative: Usually a large number of cases representing the population of interest; randomly selected respondents.

Data collection
Qualitative: Unstructured or semi-structured techniques, e.g. individual depth interviews or group discussions.
Quantitative: Structured techniques such as online questionnaires, on-street or telephone interviews.

Data analysis
Qualitative: Non-statistical.
Quantitative: Statistical; data is usually in the form of tabulations (tabs). Findings are conclusive and usually descriptive in nature.

Outcome
Qualitative: Exploratory and/or investigative. Findings are not conclusive and cannot be used to make generalizations about the population of interest; they develop an initial understanding and a sound base for further decision making.
Quantitative: Used to recommend a final course of action.

Qno4 (ii) Solution:


Discrete and continuous variables
In quantitative research there are two broad types of variable: discrete and continuous. Discrete, or categorical, variables are those for which subjects or observations can be categorized. For example, vote choice is a discrete variable, since there is a limited set of parties or candidates to vote for. The most common type of discrete variable is a count, which must be a whole number. Examples include family size, number of road deaths, and number of cigarettes smoked per day. Continuous variables are defined on a continuous scale. Examples include length, weight, and parts per million of waste material in water. Inevitably, rounding of the measurement means that continuous variables are often reported on a discrete scale. For example, taxable income is measured in dollars and cents but reported to the Tax Office rounded to the nearest whole dollar. Similarly, when someone asks you for your height, you usually round it to the nearest whole cm; you do not report it on a continuous scale even though height (length) can be measured to great accuracy on a continuous scale.
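The distinction above can be shown in a couple of lines of Python. The data are hypothetical, chosen only to mirror the examples in the text (family size as a count, height rounded to the nearest whole cm).

```python
# Discrete (count) variables must be whole numbers.
family_sizes = [2, 3, 5, 4]            # hypothetical counts
cigarettes_per_day = [0, 12, 20, 5]

# Continuous variables are measured on a continuous scale but are
# usually reported rounded to a discrete one, e.g. nearest whole cm.
heights_cm = [172.46, 165.91, 180.02]  # hypothetical measurements
reported = [round(h) for h in heights_cm]

print(reported)   # the continuous measurements, reported discretely
```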

Qno4 (iii) Solution: Correlational hypotheses


Correlational hypotheses state merely that the variables occur together in some specified manner, without implying that one causes the other. Such weak claims are often made when we believe that there are more basic causal forces that affect both variables. For example: The level of job commitment of the officers is positively associated with their level of efficiency. Here we do not claim that one variable causes the other to change. That would be possible only if we had control over all other factors that could influence our dependent variable.

Explanatory (causal) hypotheses


Explanatory (causal) hypotheses imply that the existence of, or a change in, one variable causes or leads to a change in the other variable. This brings in the notions of independent and dependent variables. "Cause" means to help make happen, so the independent variable may not be the sole reason for the existence of, or change in, the dependent variable. The researcher may have to identify the other possible causes and control their effects if the causal effect of the independent variable on the dependent variable is to be determined. This may be possible in an experimental research design.

Different ways to state hypotheses


High motivation causes high efficiency.
High motivation leads to high efficiency.
High motivation is related to high efficiency.
High motivation influences high efficiency.
High motivation is associated with high efficiency.
High motivation produces high efficiency.
High motivation results in high efficiency.
If high motivation, then high efficiency.
The higher the motivation, the higher the efficiency.

Qno4 (iv) Solution: Stratified and cluster sampling

Stratified sampling


Basically, in a stratified sampling procedure the population is first partitioned into disjoint classes (the strata) which together are exhaustive. Thus each population element belongs to one and only one stratum. Then a simple random sample is taken from each stratum. The sampling effort may follow proportional allocation (each stratum's simple random sample contains a number of elements proportional to the size of that stratum) or optimal allocation, where the target is a final sample with the minimum variability possible.

Stratified sampling strategies


A real-world example of using stratified sampling would be for a political survey. If the respondents needed to reflect the diversity of the population, the researcher would specifically seek to include participants of various minority groups such as race or religion, based on their proportionality to the total population as mentioned above. A stratified survey could thus claim to be more representative of the population than a survey of simple random sampling or systematic sampling. Similarly, if population density varies greatly within a region, stratified sampling will ensure that estimates can be made with equal accuracy in different parts of the region, and that comparisons of sub-regions can be made with equal statistical power. For example, in Ontario a survey taken throughout the province might use a larger sampling fraction in the less populated north, since the disparity in population between north and south is so great that a sampling fraction based on the provincial sample as a whole might result in the collection of only a handful of data from the north.
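Proportional allocation can be sketched in a few lines of Python. This is a minimal illustration with hypothetical strata (the "north"/"south" split echoes the Ontario example above); a real survey would also need rounding rules, minimum stratum sizes, and possibly the larger sampling fraction discussed for sparse regions.

```python
import random

def stratified_sample(strata, total_n, seed=42):
    # Proportional allocation: each stratum contributes a share of the
    # sample proportional to its share of the population.
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        n = round(total_n * len(members) / pop_size)
        sample[name] = rng.sample(members, n)  # SRS within each stratum
    return sample

strata = {
    "north": list(range(100)),        # hypothetical sparse region
    "south": list(range(100, 1000)),  # hypothetical dense region
}
s = stratified_sample(strata, 50)
print({name: len(chosen) for name, chosen in s.items()})
```

With 100 of 1,000 members in the north, proportional allocation gives the north 5 of the 50 sampled units and the south 45.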

Cluster sampling
The main difference between stratified and cluster sampling is that in stratified sampling all the strata must be sampled. In cluster sampling one proceeds by first selecting a number of clusters at random and then either sampling within each selected cluster or conducting a census of each selected cluster; usually, not all clusters are included.
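The two-stage idea, select a few clusters at random, then take a census of each, can be sketched as follows. The city blocks and resident IDs are hypothetical, used only to contrast with the stratified case where every stratum is sampled.

```python
import random

def cluster_sample(clusters, n_clusters, seed=1):
    # Stage 1: select a few clusters at random.
    # Stage 2: take a census of every member of each selected cluster.
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return {name: clusters[name] for name in chosen}

# Hypothetical clusters: ten city blocks of 20 residents each.
blocks = {f"block_{i}": list(range(i * 20, i * 20 + 20)) for i in range(10)}
sampled = cluster_sample(blocks, 3)
print(sorted(sampled))   # only 3 of the 10 blocks are included
```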

Qno5) Solution: Reliability


Simply put, a reliable measuring instrument is one which gives you the same measurements when you repeatedly measure the same unchanged objects or events. We shall briefly discuss here methods of estimating an instrument's reliability. The theory underlying this discussion is sometimes called classical measurement theory; its foundations were developed by Charles Spearman (1904, "General intelligence, objectively determined and measured," American Journal of Psychology, 15, 201-293). If a measuring instrument were perfectly reliable, it would have a perfect positive correlation (r = +1) with the true scores: if you measured an object or event twice, and the true scores did not change, you would get the same measurement both times. We theorize that our measurements contain random error, but that the mean error is zero. That is, some of our measurements have errors that make them lower than the true scores, while others have errors that make them higher, with the sum of the score-decreasing errors equal to the sum of the score-increasing errors. Accordingly, random error will not affect the mean of the measurements, but it will increase their variance. Our definition of reliability is

    r_XX = r²_TM = σ²_T / σ²_M = σ²_T / (σ²_T + σ²_E)

That is, reliability is the proportion of the variance in the measurement scores (σ²_M) that is due to differences in the true scores (σ²_T) rather than to random error (σ²_E). Please note that I have ignored systematic (nonrandom) error, optimistically assuming that it is zero or at least small. Systematic error arises when our instrument consistently measures something other than what it was designed to measure. For example, a test of political conservatism might mistakenly also measure personal stinginess. Also note that I can never know what the reliability of an instrument (a test) is, because I cannot know what the true scores are. I can, however, estimate reliability.
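The variance-ratio definition above can be verified with a small simulation. The sketch below generates hypothetical true scores and zero-mean random errors, then computes the ratio σ²_T / σ²_M; with the chosen standard deviations the theoretical reliability is 15² / (15² + 5²) = 0.90, and the simulated value should land close to it.

```python
import random
from statistics import pvariance

# Classical measurement theory: observed = true score + random error.
rng = random.Random(0)
true_scores = [rng.gauss(100, 15) for _ in range(5000)]  # sigma_T = 15
errors = [rng.gauss(0, 5) for _ in range(5000)]          # mean error ~ 0
observed = [t + e for t, e in zip(true_scores, errors)]

# Reliability: proportion of observed-score variance due to true scores.
reliability = pvariance(true_scores) / pvariance(observed)
print(f"reliability ~ {reliability:.2f}")  # theoretical value: 225/250 = 0.90
```

Note that with real data this ratio cannot be computed, because the true scores are unknown; the simulation only illustrates what the definition means.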

Validity
Simply put, the construct validity of an operationalization (a measurement or a manipulation) is the extent to which it really measures (or manipulates) what it claims to measure (or manipulate). When the dimension being measured is an abstract construct that is inferred from directly observable events, then we may speak of construct validity.

Face Validity. An operationalization has face validity when others agree that it looks like it does measure or manipulate the construct of interest. For example, if I tell you that I am manipulating my subjects' sexual arousal by having them drink a pint of isotonic saline solution, you would probably be skeptical. On the other hand, if I told you I was measuring my male subjects' sexual arousal by measuring erection of their penises, you would probably consider that measurement to have face validity.

Content Validity. Assume that we can detail the entire population of behavior (or other things) that an operationalization is supposed to capture. Now consider our operationalization to be a sample taken from that population. Our operationalization will have content validity to the extent that the sample is representative of the population. To measure content validity we can do our best to describe the population of interest and then ask experts (people who should know about the construct of interest) to judge how representative our sample is of that population.

Criterion-Related Validity. Here we test the validity of our operationalization by seeing how it is related to other variables. Suppose that we have developed a test of statistics ability. We might employ the following types of criterion-related validity:

Concurrent Validity. Are scores on our instrument strongly correlated with scores on concurrent variables (variables measured at the same time)? For our example, we should be able to show that students who just finished a stats course score higher than those who have never taken a stats course. Also, we should be able to show a strong correlation between scores on our test and students' current level of performance in a stats class.

Predictive Validity. Can our instrument predict future performance on an activity that is related to the construct we are measuring? For our example, is there a strong correlation between scores on our test and subsequent performance of employees in an occupation that requires the use of statistics?

Convergent Validity. Is our instrument well correlated with measures of other constructs to which it should, theoretically, be related? For our example, we might expect scores on our test to be well correlated with tests of logical thinking, abstract reasoning, verbal ability, and, to a lesser extent, mathematical ability.

Discriminant Validity. Is our instrument uncorrelated with measures of other constructs to which it should not be related? For example, we might expect scores on our test not to be well correlated with tests of political conservatism, ethical ideology, love of Italian food, and so on.

Item Analysis. If you believe your scale is one-dimensional, you will want to conduct an item analysis. Such an analysis will estimate the reliability of your instrument by measuring the internal consistency of the items, the extent to which the items correlate well with one another. It will also help you identify troublesome items. To illustrate item analysis with SPSS, we shall conduct an item analysis on data from one of my past research projects. For each of 154 respondents we have scores on each of ten Likert items. The scale is intended to measure ethical idealism. People high on idealism believe that an action is unethical if it produces any bad consequences, regardless of how many good consequences it might also produce. People low on idealism believe that an action may be ethical if its good consequences outweigh its bad consequences. Bring the data (KJ-Idealism.sav) into SPSS.

Click Analyze, Scale, Reliability Analysis.

Select all ten items and scoot them to the Items box on the right.

Click the Statistics box.

Check "Scale if item deleted" and then click Continue. Back on the initial window, click OK. Look at the output. The Cronbach alpha is .744, which is acceptable.

Reliability Statistics
Cronbach's Alpha: .744   N of Items: 10

Look at the Item-Total Statistics.

Item-Total Statistics
Item | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Cronbach's Alpha if Item Deleted
Q1   | 32.42 | 23.453 | .444 | .718
Q2   | 32.79 | 22.702 | .441 | .717
Q3   | 32.79 | 21.122 | .604 | .690
Q4   | 32.33 | 22.436 | .532 | .705
Q5   | 32.33 | 22.277 | .623 | .695
Q6   | 32.07 | 24.807 | .337 | .733
Q7   | 34.29 | 24.152 | .247 | .749
Q8   | 32.49 | 24.332 | .308 | .736
Q9   | 33.38 | 22.063 | .406 | .725
Q10  | 33.43 | 24.650 | .201 | .755

There are two items, numbers 7 and 10, which have rather low item-total correlations, and the alpha would go up if they were deleted, but not much, so I retained them. It is disturbing that item 7 did not perform better, since failure to do ethical cost/benefit analysis is an important part of the concept of ethical idealism. Perhaps the problem is that this item does not make it clear that we are talking about ethical cost/benefit analysis rather than other cost/benefit analysis. For example, a person might think it just fine to do a personal, financial cost/benefit analysis to decide whether to lease a car or buy a car, but immoral to weigh morally good consequences against morally bad consequences when deciding whether it is proper to keep horses for entertainment purposes (riding them). Somehow I need to find the time to do some more work on improving measurement of the ethical cost/benefit component of ethical idealism.
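The Cronbach alpha that SPSS reports can also be computed by hand, which makes its meaning clearer. The sketch below is a from-scratch illustration on a tiny hypothetical 3-item, 5-respondent dataset (not the KJ-Idealism data), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals).

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of scores per item (columns of the data matrix).
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var = sum(pvariance(column) for column in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale answered by 5 respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
print(f"alpha = {cronbach_alpha(items):.3f}")   # -> alpha = 0.922
```

High internal consistency here reflects the fact that the three hypothetical items rank the respondents in nearly the same order.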

The Factors That Affect Reliability


Reliability is not a property of the measurement tool alone; it is also a property of the results obtained with the tool (Oncu 1994). Reliability is a measure of freedom from error. There are some factors affecting the reliability of the results obtained from a scale (Tekin 1977). Some of these factors are related to the scale itself, some to the group to which the scale is applied, and others to the environment (Gay 1985). These factors must be taken into account at the stage of constructing a scale and at the stage of application. Some factors affecting reliability are as follows:

The Length of the Scale


The length of the scale affects the variances of the observed values. Measurement errors are smaller in values obtained from long scales than from short scales (OConnor 1993), because a larger number of items represents the abstract characteristic better (Gay 1985). In this case the number of items can be increased to increase reliability; but if the scale itself is not reliable, increasing the number of items will not make it reliable (Tekin 1977). The Spearman-Brown equation is used to calculate how a given increase or decrease in the number of items changes the reliability coefficient.
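The Spearman-Brown calculation mentioned above is a one-liner. The sketch below applies the standard prophecy formula, r_new = k*r / (1 + (k-1)*r), where k is the factor by which the number of items changes; the .744 figure reused here is the alpha reported for the ten-item idealism scale earlier in this answer.

```python
def spearman_brown(r, k):
    # Predicted reliability when the scale length is multiplied by k.
    return k * r / (1 + (k - 1) * r)

# Doubling a 10-item scale whose reliability is .744:
print(round(spearman_brown(0.744, 2), 3))   # -> 0.853
```

Note the diminishing returns: doubling again (k = 4) raises the predicted reliability much less than the first doubling did, and lengthening cannot rescue a scale that is unreliable to begin with.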

The Expression of the Items in the Scale


Obtaining reliable information on a subject depends on the items being worded as required. If an item is not worded clearly, it is difficult to get the required answers (Sencer & Sencer 1978), and the reliability of the scale is negatively affected, since answers given at different times would differ (Tekin 1977). The basic problem of getting information is not the respondents themselves: respondents do not avoid answering, or stating their situations or opinions, when the item is worded as required.

Insufficiency
An item is called insufficient if it is prepared in a way that overlooks details. Insufficiency may arise from deficiency, from having several meanings, or from indefiniteness. A deficient item lacks information it needs to carry; an item with several meanings has more than one meaning and no limitation on its subject; an indefinite item lacks the precision that would allow a definite measurement.

Misunderstanding
Misunderstanding basically stems from the linguistic properties of the item. There are several guidelines for avoiding misunderstanding of an item (Sencer & Sencer 1978): i) words outside the knowledge and experience of the respondents should not be used; ii) the words in the item should not carry multiple meanings.

Bias:
Some items tend to elicit answers in one direction as a result of the way they are constructed. Such items, which do not give all possible answers the same chance, are called biased items. The item types causing bias can be listed as follows.

a) Directing Items
Items that influence respondents and direct their answers in one direction make up this class.

b) Loaded Items
Items loaded with feeling or meaning on a defined subject, which by themselves invite approval or recognition, or items built around familiar sayings.

c) Items that cause prestige bias

Such items lead respondents to present themselves as better than they are.

d) Items that cause uneasiness

Items that make respondents uneasy because they probe illegal attitudes and behaviors or private subjects, or because they call for low-prestige answers, make up this class.

Homogeneity of the Group


Another factor affecting reliability is the homogeneity of the group. When other conditions are equal, a more homogeneous group yields a more reliable scale, as can be seen by a transformation of the reliability equation.

The Duration of the Scale


If the scale must be completed in a limited time, insufficient time decreases its reliability. The time allowed must be enough for respondents to answer all the items (Oncu 1994). A tight time limit causes anxiety and carelessness, which lowers reliability; when time is insufficient, careless answers will be given, driving the scale's reliability toward zero.

Objectivity in Scoring
The reliability of a scale is also affected by whether the scoring is objective or subjective. The consistency of scores obtained from the same subject by the same or different scorers at different times is called the scale's scoring reliability. If a score obtained from a scale does not change with the person scoring it or the time of scoring, the scale's scoring reliability is high.

The Conditions in Making a Measurement


Respondents who are tired, careless, or sleep-deprived, as well as the atmosphere and temperature of the measurement setting, can cause unwillingness, which affects the scale's reliability negatively (Oncu 1994).

The Explanation of the Scale


The same wording must be used in the explanation on the first page of the scale, so that all respondents understand the same things. The aim of the scale must be explained to the respondents clearly, along with information on how to respond and the principles that will be observed regarding anonymity (Serper & Gursakal 1989).

The Characteristics of the Items of the Scale


The reliability of the values obtained from a scale depends on the characteristic properties of its items. Two item properties are taken into account regarding the reliability of the measurement values: the discrimination index and the reliability index. In addition, for scales that aim to measure knowledge, a third property, the difficulty index, is also taken into account.

Difficulty of the Scale


Another factor is the difficulty of the items in scales that measure the level of knowledge on a subject. Knowledge scales must be matched to the knowledge level of the respondents: scales with very difficult or very easy items will have low reliability, since the variability among the scores will be low.
