
I. Neo-liberalism (Project Management)

Since the 1990s, activists have used the word "neo-liberalism" for global market liberalism ("capitalism") and for free-trade policies. In this sense it is widely used in South America. "Neo-liberalism" is often used interchangeably with "globalization." But free markets and global free trade are not new, and this use of the word ignores developments in the advanced economies. The analysis here compares neo-liberalism with its historical predecessors. Neo-liberalism is not just economics: it is a social and moral philosophy, in some aspects qualitatively different from liberalism.

What is neo-liberalism? "Neo-liberalism" is a set of economic policies that have become widespread during the last 25 years or so. Although the word is rarely heard in the United States, you can clearly see the effects of neo-liberalism here as the rich grow richer and the poor grow poorer.

"Liberalism" can refer to political, economic, or even religious ideas. In the U.S., political liberalism has been a strategy to prevent social conflict. It is presented to poor and working people as progressive compared to conservative or right-wing positions. Economic liberalism is different. Conservative politicians who say they hate "liberals" -- meaning the political type -- have no real problem with economic liberalism, including neo-liberalism.

"Neo" means we are talking about a new kind of liberalism. So what was the old kind? The liberal school of economics became famous in Europe when Adam Smith, a Scottish economist, published a book in 1776 called The Wealth of Nations. He and others advocated the abolition of government intervention in economic matters: no restrictions on manufacturing, no barriers to commerce, no tariffs; free trade, he said, was the best way for a nation's economy to develop. Such ideas were "liberal" in the sense of no controls. This application of individualism encouraged "free" enterprise and "free" competition -- which came to mean, free for the capitalists to make huge profits as they wished.

Economic liberalism prevailed in the United States through the 1800s and early 1900s. Then the Great Depression of the 1930s led an economist named John Maynard Keynes to a theory that challenged liberalism as the best policy for capitalists. He said, in essence, that full employment is necessary for capitalism to grow, and it can be achieved only if governments and central banks intervene to increase employment. These ideas had much influence on President Roosevelt's New Deal -- which did improve life for many people. The belief that government should advance the common good became widely accepted.

But the capitalist crisis over the last 25 years, with its shrinking profit rates, inspired the corporate elite to revive economic liberalism. That's what makes it "neo" or new. Now, with the rapid globalization of the capitalist economy, we are seeing neo-liberalism on a global scale. A memorable definition of this process came from Subcomandante Marcos at the Zapatista-sponsored Intercontinental Encounter for Humanity and Against Neo-liberalism of August 1996 in Chiapas, when he said: "What the Right offers is to turn the world into one big mall where they can buy Indians here, women there ..." -- and he might have added, children, immigrants, workers, or even a whole country like Mexico.

The main points of neo-liberalism include:

1. THE RULE OF THE MARKET.
Liberating "free" enterprise or private enterprise from any bonds imposed by the government (the state) no matter how much social damage this causes. Greater openness to international trade and investment, as in NAFTA. Reduce wages by de-unionizing workers and eliminating workers' rights that had been won over many years of struggle. No more price controls. All in all, total freedom of movement for capital, goods and services. To convince us this is good for us, they say "an unregulated market is the best way to increase economic growth, which will ultimately benefit everyone." It's like Reagan's "supply-side" and "trickle-down" economics -- but somehow the wealth didn't trickle down very much.

2. CUTTING PUBLIC EXPENDITURE FOR SOCIAL SERVICES like education and health care. REDUCING THE SAFETY-NET FOR THE POOR, and even maintenance of roads, bridges and water supply -- again in the name of reducing government's role. Of course, they don't oppose government subsidies and tax benefits for business.

3. DEREGULATION. Reduce government regulation of everything that could diminish profits, including measures to protect the environment and safety on the job.

4. PRIVATIZATION. Sell state-owned enterprises, goods and services to private investors. This includes banks, key industries, railroads, toll highways, electricity, schools, hospitals and even fresh water. Although usually done in the name of greater efficiency, which is often needed, privatization has mainly had the effect of concentrating wealth even more in a few hands and making the public pay even more for its needs.

5. ELIMINATING THE CONCEPT OF "THE PUBLIC GOOD" or "COMMUNITY" and replacing it with "individual responsibility" -- pressuring the poorest people in a society to find solutions to their lack of health care, education and social security all by themselves, then blaming them, if they fail, as "lazy."

II. What is Meta-Analysis?

Meta-analysis is a statistical technique in which the results of two or more studies are mathematically combined in order to improve the reliability of the results. Studies chosen for inclusion in a meta-analysis must be sufficiently similar in a number of characteristics in order to accurately combine their results. When the underlying effect (or effect size) is consistent from one study to the next, meta-analysis can be used to identify this common effect (a minimal sketch of this pooling step appears after the list of advantages below). When the effect varies from one study to the next, meta-analysis may be used to identify the reason for the variation.

Advantages of meta-analysis (e.g. over classical literature reviews or simple overall means of effect sizes) include:
- Derivation and statistical testing of overall factors / effect size parameters in related studies
- Generalization to the population of studies
- Ability to control for between-study variation
- Inclusion of moderators to explain variation
- Higher statistical power to detect an effect than in any single study
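To make the pooling step concrete, here is a minimal, illustrative sketch of a fixed-effect (inverse-variance) meta-analysis in Python. The effect sizes and standard errors are invented for illustration; a real analysis would also test for heterogeneity and consider a random-effects model.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and their
# standard errors from three similar studies -- invented numbers for illustration.
studies = [
    {"effect": 0.30, "se": 0.15},
    {"effect": 0.45, "se": 0.20},
    {"effect": 0.25, "se": 0.10},
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / se^2.
weights = [1.0 / s["se"] ** 2 for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```

The pooled estimate has a smaller standard error than any single study, which is the statistical-power advantage listed above.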

Weaknesses of meta-analysis: A meta-analysis can never follow all the rules of hard science, for example being double-blind, controlled, or proposing a way to falsify the theory in question. Weaknesses include:
- Sources of bias are not controlled by the method: a good meta-analysis of badly designed studies will still result in bad statistics.
- Heavy reliance on published studies, which may create exaggerated outcomes, as it is very hard to publish studies that show no significant results (the "file drawer problem").
- A combined analysis can point in a different direction from the individual studies: two smaller studies may point in one direction, and the combination study in the opposite direction.

Dangers of agenda-driven bias: from an integrity perspective, researchers with a bias should avoid meta-analysis and use a less abuse-prone (or independent) form of research.

III. Cost-benefit analysis

Cost-benefit analysis is the exercise of evaluating an action's consequences by weighing the pluses, or benefits, against the minuses, or costs. It is the fundamental assessment behind virtually every business decision, due to the simple fact that business managers do not want to spend money unless the resulting benefits are expected to exceed the costs. As companies increasingly seek to cut costs and improve productivity, cost-benefit analysis has become a valuable tool for evaluating a wide range of business opportunities, such as major purchases, organizational changes, and expansions. Some examples of the types of business decisions that may be facilitated by cost-benefit analysis include whether or not to add employees, introduce a new technology, purchase equipment, change vendors, implement new procedures, or relocate facilities. In evaluating such opportunities, managers can justify their decisions by applying cost-benefit analysis. This type of analysis can identify the hard dollar savings (actual, quantitative savings), soft dollar savings (less tangible, qualitative savings, as in management time or facility space), and cost avoidance (the elimination of a future cost, such as equipment leasing) associated with the opportunity.

Although its name seems simple, there is often a degree of complexity, and subjectivity, to the actual implementation of cost-benefit analysis. This is because not all costs or benefits are obvious at first. Take, for example, a situation in which a company is trying to decide whether it should make or buy a certain subcomponent of a larger assembly it manufactures. A quick review of the accounting numbers may suggest that the cost to manufacture the component, at $5 per piece, can easily be beaten by an outside vendor who will sell it to the company for only $4. But there are several other factors that need to be considered and quantified (if possible):

- When production of a subcomponent is contracted to an outside vendor, the company's own factory will become less utilized, and its fixed overhead costs therefore have fewer components over which to be spread. As a result, other parts it continues to manufacture may show an increase in costs, consuming some or possibly all of the apparent gain (a numerical sketch of this effect follows the list).
- The labor force may be concerned about the loss of work to which they feel an entitlement. Resulting morale problems and labor unrest could quickly cost the company far more than it expected to save.
- The consequences of a loss of control over the subcomponent must be weighed. Once the part is outsourced, the company no longer has direct control over the quality, timeliness, or reliability of the product delivered.
- Unforeseen benefits may be attained. For example, the newly freed factory space may be deployed in a more productive manner, enabling the company to make more of the main assembly or even another product altogether.
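To illustrate the overhead point in the first bullet, here is a minimal sketch, with invented numbers, of how an apparent $1-per-piece saving can disappear once the allocation of fixed overhead is taken into account. The volume and the split between variable cost and allocated fixed overhead are assumptions made for illustration only.

```python
# Make-or-buy sketch with hypothetical numbers based on the $5 vs. $4 example above.
volume = 10_000             # annual volume of the subcomponent (assumed)
make_cost_per_piece = 5.00  # internal accounting cost per piece
buy_price_per_piece = 4.00  # outside vendor's price per piece

# Assume $2.00 of the internal $5.00 is allocated fixed overhead that does not
# go away when production is outsourced; only $3.00 per piece is truly avoidable.
allocated_fixed_overhead = 2.00
avoidable_cost_per_piece = make_cost_per_piece - allocated_fixed_overhead

apparent_saving = (make_cost_per_piece - buy_price_per_piece) * volume
real_saving = (avoidable_cost_per_piece - buy_price_per_piece) * volume

print(f"Apparent saving from buying: ${apparent_saving:,.0f} per year")
print(f"Saving after absorbing the stranded overhead: ${real_saving:,.0f} per year")
```

With these assumed figures, buying the part would actually cost the company $10,000 a year more, even though it looks $1 per piece cheaper on paper.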

This list is not meant to be comprehensive, but rather illustrative of the ripple effect that occurs in real business decision settings. The cost-benefit analyst needs to be aware of the subtle interactions of other events with the action under consideration in order to fully evaluate its impact. A formal cost-benefit analysis is a multi-step process which includes a preliminary survey, a feasibility study, and a final report. At the conclusion of each step, the party responsible for performing the analysis can decide whether continuing on to the next step is warranted. The preliminary survey is an initial evaluation that involves gathering information on both the opportunity and the existing situation. The feasibility study involves completing the information gathering as needed and evaluating the data to gauge the short- and long-term impact of the opportunity. Finally, the formal cost-benefit analysis report should provide decision makers with all the information they need to take appropriate action on the opportunity. It should include an executive summary and introduction; information about the scope, purpose, and methodology of the study; recommendations, along with factual justification; and factors concerning implementation.

Capital budgeting has at its core the tool of cost-benefit analysis; it merely extends the basic form into a multi-period analysis, with consideration of the time value of money. In this context, a new product, venture, or investment is evaluated on a start-to-finish basis, with care taken to capture all the impacts on the company, both costs and benefits. When these inputs and outputs are quantified by year, they can then be discounted to present value to determine the net present value of the opportunity at the time of the decision.
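A minimal sketch of the discounting step described above, using invented cash flows and an assumed 10% discount rate:

```python
def net_present_value(rate: float, cash_flows: list[float]) -> float:
    """Discount yearly cash flows (year 0 first) to present value and sum them."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical project: $100,000 outlay now, then $30,000 of net benefit per year
# for five years, discounted at an assumed 10% per year.
cash_flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
npv = net_present_value(0.10, cash_flows)
print(f"Net present value: ${npv:,.0f}")  # positive NPV favors the opportunity
```

A positive net present value means the discounted benefits exceed the discounted costs at the chosen rate.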

IV. What is strategic planning?

Strategic planning is a management tool that helps an organization focus its energy, ensure that members of the organization are working toward the same goals, and assess and adjust the organization's direction in response to a changing environment. In short, strategic planning is a disciplined effort to produce fundamental decisions and actions that shape and guide what an organization is, what it does, and why it does it, with a focus on the future. (Adapted from Bryson's Strategic Planning in Public and Nonprofit Organizations.)

The process is strategic because it involves preparing the best way to respond to the circumstances of the organization's environment, whether or not its circumstances are known in advance; nonprofits often must respond to dynamic and even hostile environments. Being strategic, then, means being clear about the organization's objectives, being aware of the organization's resources, and incorporating both into being consciously responsive to a dynamic environment. The process is about planning because it involves intentionally setting goals (i.e., choosing a desired future) and developing an approach to achieving those goals. The process is disciplined in that it calls for a certain order and pattern to keep it focused and productive. The process raises a sequence of questions that helps planners examine experience, test assumptions, gather and incorporate information about the present, and anticipate the environment in which the organization will be working in the future. Finally, the process is about fundamental decisions and actions because choices must be made in order to answer the sequence of questions mentioned above. The plan is ultimately no more, and no less, than a set of decisions about what to do, why to do it, and how to do it. Because it is impossible to do everything that needs to be done in this world, strategic planning implies that some organizational decisions and actions are more important than others - and that much of the strategy lies in making the tough decisions about what is most important to achieving organizational success.

V. Participatory project planning

This section is about the planning of a project as a process, where the most important thing is that those affected by the project participate throughout and make it their own. This section also provides a best-practice model of participatory project planning, and shows how this relates to methods for goal-led management and measurement of outcomes. The methods described are about processes owned by partner and member organizations in the South, not by the Swedish organizations. They are generally well-known methods within development cooperation and organizational work, based on the values that permeate this methods handbook. The material is available to all organizations that want to use it as a basis for discussion within their own organizations. The Swedish organizations' staff should be aware of these methods so that they can act as a sounding board to the partner/member organizations during planning and reporting of activities. As regards goal-led management, we are inspired by the so-called LFA method (Logical Framework Approach), since it is the main planning method for all kinds of development cooperation. The diagram below demonstrates a best-practice scenario of a participatory process in planning and evaluation of a project. The diagram is an adaptation of Robert A. Dahl's idea about effective participation in the democratic process, combined with the different stages of an LFA process.

VI. What is logical framework analysis (LFA)?

A log frame (also known as a project framework) is a tool for planning and managing development projects. It looks like a table (or framework) and aims to present information about the key components of a project in a clear, concise, logical and systematic way (an illustrative layout is sketched after the list of questions below). The log frame model was developed in the United States and has since been adopted and adapted for use by many other donors, including the Department for International Development (DFID). A log frame summarizes, in a standard format:
- What the project is going to achieve
- What activities will be carried out to achieve its outputs and purpose
- What resources (inputs) are required
- What potential problems could affect the success of the project
- How the progress and ultimate success of the project will be measured and verified
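As an illustration of the "table" referred to above, here is a sketch of the four-by-four layout commonly used for log frames, expressed as a small Python data structure. The row and column names follow common LFA practice; the example project and all the entries are invented.

```python
# A typical log frame matrix: four levels of objectives (rows) described against
# four standard columns. The water-supply project and its entries are hypothetical.
COLUMNS = ["Narrative summary", "Indicators", "Means of verification", "Assumptions"]

logframe = {
    "Goal":       {"Narrative summary": "Improved household health in the district",
                   "Indicators": "Child morbidity reduced by 20% within 5 years",
                   "Means of verification": "District health statistics",
                   "Assumptions": "No major epidemic or natural disaster"},
    "Purpose":    {"Narrative summary": "Safe drinking water used by all villages",
                   "Indicators": "90% of households using protected sources by year 3",
                   "Means of verification": "Household survey",
                   "Assumptions": "Communities maintain the water points"},
    "Outputs":    {"Narrative summary": "50 wells constructed and committees trained",
                   "Indicators": "Wells functioning; committees meeting monthly",
                   "Means of verification": "Project monitoring reports",
                   "Assumptions": "Spare parts remain locally available"},
    "Activities": {"Narrative summary": "Site surveys, drilling, training workshops",
                   "Indicators": "Inputs: budget, staff, equipment (by activity)",
                   "Means of verification": "Accounts and activity reports",
                   "Assumptions": "Permits granted; rains do not delay drilling"},
}

# Print the matrix row by row, in the usual top-down order.
for level in ["Goal", "Purpose", "Outputs", "Activities"]:
    print(level)
    for col in COLUMNS:
        print(f"  {col}: {logframe[level][col]}")
```

The key questions in the list above map onto these cells: what the project will achieve goes in the narrative column, how it will be measured and verified goes in the indicator and verification columns, and potential problems go in the assumptions column.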

WHY USE LFA? Because most donors prefer it? LFA can be a useful tool, both in the planning and in the monitoring and evaluation of development projects. It is not the only planning tool, and should not be considered an end in itself, but using it encourages the discipline of clear and specific thinking about what the project aims to do and how, and it highlights those aspects upon which success depends. LFA also provides a handy summary to inform project staff, donors, beneficiaries and other stakeholders, which can be referred to throughout the lifecycle of the project. LFA should not be set in concrete: as the project circumstances change, the log frame will probably need to reflect these changes, but everyone involved will have to be kept informed.

What is so intimidating about using LFA? Perhaps because we are very conscious of the complexity of development projects, we find it hard to believe that they can be reduced to one or two sides of A4. Remember that the log frame isn't intended to show every detail of the project, or to limit the scope of the project. It is simply a convenient, logical summary of the key factors of the project. The LFA is a way of describing a project in a logical way so that it is well designed, described objectively, clearly structured, and can be evaluated.

I. Validity (Social Research)

Validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. While reliability is concerned with the accuracy of the actual measuring instrument or procedure, validity is concerned with the study's success at measuring what the researcher set out to measure. Researchers should be concerned with both external and internal validity. External validity refers to the extent to which the results of a study are generalizable or transferable. (Most discussions of external validity focus solely on generalizability; see Campbell and Stanley, 1966. We include a reference here to transferability because many qualitative research studies are not designed to be generalized.) Internal validity refers to (1) the rigor with which the study was conducted (e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and wasn't measured) and (2) the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore (Huitt, 1998). In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity. Scholars discuss several types of internal validity, briefly described below.

a. Face validity is concerned with how a measure or procedure appears. Does it seem like a reasonable way to gain the information the researchers are attempting to obtain? Does it seem well designed? Does it seem as though it will work reliably? Unlike content validity, face validity does not depend on established theories for support (Fink, 1995).

b. Criterion-related validity, also referred to as instrumental validity, is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure which has been demonstrated to be valid. For example, imagine that a hands-on driving test has been shown to be an accurate test of driving skills. By comparing the scores on a written driving test with the scores from the hands-on driving test, the written test can be validated using a criterion-related strategy, with the hands-on test serving as the criterion.
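To make the criterion-related strategy concrete, here is a small illustrative sketch that correlates invented written-test scores with invented road-test scores; a strong correlation with the established criterion would support the validity of the written test.

```python
import math

# Hypothetical scores for ten drivers: the written test (new measure) and the
# hands-on road test (the established criterion). All numbers are invented.
written = [72, 85, 60, 90, 55, 78, 82, 65, 70, 88]
road    = [70, 88, 58, 93, 60, 75, 85, 62, 68, 90]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"Correlation with the road-test criterion: r = {pearson_r(written, road):.2f}")
```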

c. Construct validity seeks agreement between a theoretical concept and a specific measuring device or procedure. For example, a researcher inventing a new IQ test might spend a great deal of time attempting to "define" intelligence in order to reach an acceptable level of construct validity. Construct validity can be broken down into two sub-categories: convergent validity and discriminant validity. Convergent validity is the actual general agreement among ratings, gathered independently of one another, where measures should be theoretically related. Discriminant validity is the lack of a relationship among measures which theoretically should not be related.

To understand whether a piece of research has construct validity, three steps should be followed. First, the theoretical relationships must be specified. Second, the empirical relationships between the measures of the concepts must be examined. Third, the empirical evidence must be interpreted in terms of how it clarifies the construct validity of the particular measure being tested (Carmines & Zeller, 1991, p. 23).

d. Content validity is based on the extent to which a measurement reflects the specific intended domain of content (Carmines & Zeller, 1991, p. 20). Content validity is illustrated using the following example: researchers aim to study mathematical learning and create a survey to test for mathematical skill. If these researchers only tested for multiplication and then drew conclusions from that survey, their study would not show content validity, because it excludes other mathematical functions. Although the establishment of content validity for placement-type exams seems relatively straightforward, the process becomes more complex as it moves into the more abstract domain of socio-cultural studies. For example, a researcher needing to measure an attitude like self-esteem must decide what constitutes a relevant domain of content for that attitude. For socio-cultural studies, content validity forces the researchers to define the very domains they are attempting to study.

II. What is generalization?

Generalization is an essential component of the wider scientific process. In an ideal world, to test a hypothesis, you would sample an entire population. You would use every possible variation of an independent variable. In the vast majority of cases this is not feasible, so a representative group is chosen to reflect the whole population. For any experiment, you may be criticized for your generalizations about sample, time and size. You must ensure that the sample group is as truly representative of the whole population as possible. For many experiments, time is critical, as the behaviors can change yearly, monthly or even by the hour. The size of the group must allow the statistics to be safely extrapolated to an entire population. In reality, it is not possible to sample the whole population, due to budget, time and feasibility. (There are, however, some regional large-scale studies, such as the HUNT Study or the deCODE Genetics study of Iceland.)

For example, you may want to test a hypothesis about the effect of an educational program on schoolchildren in the US. For the perfect experiment, you would test every single child using the program, against a control group. If this number runs into the millions, this may not be possible without a huge number of researchers and a bottomless pit of money. Thus, you need to generalize and try to select a sample group that is representative of the whole population.
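As a rough illustration of why a sample can stand in for the full population, here is a sketch of how the margin of error of an estimated proportion shrinks as the sample grows. The 50% proportion and the sample sizes are assumptions chosen purely for illustration.

```python
import math

# 95% margin of error for an estimated proportion p with sample size n,
# using the usual normal approximation: 1.96 * sqrt(p * (1 - p) / n).
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

p = 0.5  # worst-case (widest) proportion, assumed for illustration
for n in (100, 400, 1_600, 10_000):
    print(f"n = {n:>6}: margin of error of about ±{margin_of_error(p, n) * 100:.1f} percentage points")
```

Note that quadrupling the sample only halves the margin of error, and no amount of extra data fixes a sample that is not representative in the first place.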

A high-budget research project might take a smaller sample from every school in the country; a lower-budget operation may have to concentrate upon one city or even a single school. The key to generalization is to understand how far your results can be applied back to represent the group of children as a whole. The first example, using every school, would be a strong representation, because the range and number of samples are high. Testing one school makes generalization difficult and weakens external validity. You might find that the individual school tested generates better results for children using that particular educational program. However, a school in the next town might contain children who do not like the system. The students may be from a completely different socioeconomic background or culture. Critics of your result will pounce upon such discrepancies and question your entire experimental design.

Most statistical tests contain an inbuilt mechanism that takes sample size into account, so that larger groups and numbers lead to results that are more significant. The problem is that such tests cannot judge the validity of the results, or determine whether your generalizations are correct. This is something that must be taken into account when generating a hypothesis and designing the experiment. The other option, if the sample groups are small, is to use proximal similarity and restrict your generalization. This is where you accept that a limited sample group cannot represent all of the population. If you sampled children from one town, it is dangerous to assume that it represents all children. It is, however, reasonable to assume that the results should apply to a similarly sized town with a similar socioeconomic class. This is not perfect, but it certainly has more external validity and would be an acceptable generalization.

III. Induction

Induction is the creative part of science. The scientist must carefully study a phenomenon, and then formulate a hypothesis to explain the phenomenon. Scientists who get the most spectacular research results are those who are creative enough to think of the right research questions. Natural sciences (physics, chemistry, biology, etc.) are inductive: evidence is collected, the scientific method is applied, and you start with specific results and try to guess the general rules. Hypotheses can only be disproved, never proved. If a hypothesis withstands repeated trials by many independent researchers, then confidence grows in the hypothesis. All hypotheses are tentative; any one could be overturned tomorrow, but very strong evidence is required to overthrow a "Law" or "Fact". Specific -> General.

Here's an example of induction. Suppose I have taken 20 marbles at random from a large bag of marbles. Every one of them turned out to be white. That's my observation: every marble I took out was white. I could therefore form the hypothesis that this would be explained if all the marbles in the bag were white. Further sampling would be required to test the hypothesis; it might be that there are some varicolored marbles in the bag and my first sample simply didn't hit any. Incidentally, this is one case where we could prove the hypothesis true: we could simply dump out all the marbles in the bag and examine each one.
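A quick, illustrative calculation of why further sampling matters: the sketch below computes the chance of drawing 20 white marbles in a row for several assumed proportions of non-white marbles in the bag (the proportions are invented for illustration, and the bag is assumed large enough that the draws are effectively independent).

```python
# Probability that a random sample of 20 marbles is all white, for a given
# assumed fraction of non-white marbles in a large bag.
sample_size = 20
for nonwhite_fraction in (0.01, 0.05, 0.10, 0.25):
    p_all_white = (1 - nonwhite_fraction) ** sample_size
    print(f"{nonwhite_fraction:.0%} non-white marbles -> "
          f"P(all 20 drawn are white) = {p_all_white:.2f}")
```

Even if a tenth of the marbles were coloured, there would still be roughly a one-in-eight chance of an all-white sample, which is why the hypothesis remains tentative until more marbles are examined.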

Deduction

Mathematics is a deductive science. Axioms are proposed; they are not tested, but assumed to be true. Theorems are deduced from the axioms. Given the axioms and the rules of logic, a machine could produce theorems. General -> Specific: start with the general rule and deduce specific results. If the set of axioms produces a theorem and its negation, the set of axioms is called INCONSISTENT. By the way, when Sherlock Holmes says that he uses "deduction," he really means "induction." Of course, one can fault his creator, Sir Arthur Conan Doyle, who believed in spirit mediums and faeries.

Suppose we have the following known conditions: we have a large bag of marbles; all of the marbles in the bag are white; and I have a random sample of 20 marbles taken from the bag. From these, I can deduce that all the marbles in the sample are white, even without looking at them. This kind of reasoning is called modus ponens (more about this in Schick and Vaughn, chapter 6). How about this? We have a large bag of marbles. All of the marbles in the bag are white. I have a sample of 20 marbles of mixed colors. From this I quickly deduce that the sample was not taken from the bag of white marbles. This kind of reasoning is called modus tollens (more about this in Schick and Vaughn, chapter 3, where they spell it modus tolens).
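In symbols, the two argument forms just described can be written as follows, where P stands for "the marbles were drawn from the all-white bag" and Q for "the marbles are white":

\[
\textit{modus ponens: } (P \to Q),\; P \;\vdash\; Q
\qquad\qquad
\textit{modus tollens: } (P \to Q),\; \neg Q \;\vdash\; \neg P
\]

In the first marble argument both premises hold, so Q (an all-white sample) follows; in the second, Q fails because the sample is mixed, so P must fail as well: the sample did not come from that bag.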

IV. Positivism

Brief history: Positivism really started with Auguste Comte. He recognized that societies could be studied in a scientific manner and that there existed some form of natural law to explain phenomena (Babbie, 2001). Positivism was further developed by Emile Durkheim using identical principles to the natural sciences (Sarantakos, 2005). He believed that the social scientist should work with phenomena and data in terms of their inherent properties, and should not depend on the researcher. Supplementing this idea was the concept that the statistical generalisability of a data set is to be taken as normality for the group (Smelser, 1976). Since Comte and Durkheim, positivism has been expanded and further refined, for example into logical positivism, methodological positivism and neopositivism (Sarantakos, 2005).

Positivism is research conducted in the belief that the social world exists independently of the researcher (empiricism: seeing is believing; one truth; standardized, testable methods).

Positivist methodologies argue that all genuine knowledge is based on sense experience and can only be advanced by means of observation and experiment; metaphysical or speculative attempts to gain knowledge by reason or intuition alone (unchecked by experience) should be abandoned in favor of the methods and approaches of the natural sciences. Moreover, explanations should be expressed in terms of laws enabling us to predict and control the world. However, because social systems consist of structures that exist independently of individuals (the needs of which push people to behave in certain ways), we experience the social world as a force that exists over and above our individual ability to change or influence it.

Therefore, positivists argue that we cannot escape social forces, such as roles and norms.

V. What is reflexivity?

Reflexivity requires an awareness of the researcher's contribution to the construction of meanings throughout the research process, and an acknowledgment of the impossibility of remaining 'outside of' one's subject matter while conducting research. Reflexivity, then, urges us "to explore the ways in which a researcher's involvement with a particular study influences, acts upon and informs such research" (Nightingale and Cromby, 1999, p. 228). There are two types of reflexivity: personal reflexivity and epistemological reflexivity. Personal reflexivity involves reflecting upon the ways in which our own values, experiences, interests, beliefs, political commitments, wider aims in life and social identities have shaped the research. It also involves thinking about how the research may have affected and possibly changed us, as people and as researchers. Epistemological reflexivity requires us to engage with questions such as: How has the research question defined and limited what can be 'found'? How has the design of the study and the method of analysis 'constructed' the data and the findings? How could the research question have been investigated differently? To what extent would this have given rise to a different understanding of the phenomenon under investigation? Thus, epistemological reflexivity encourages us to reflect upon the assumptions (about the world, about knowledge) that we have made in the course of the research, and it helps us to think about the implications of such assumptions for the research and its findings (Carla Willig, 2001, Introducing Qualitative Research in Psychology, p. 10).

VI. Semiotic Analysis

Although the earliest origins of semiotics can be traced back to Aristotle and Augustine, it didn't begin to be fully developed until the late nineteenth and early twentieth centuries. Semiotics is a broad topic which can be applied to many different fields, including media studies, theatre and music. Musical semiotics is a complex and relatively new topic; consequently, materials explaining musical semiotics were very difficult to find in our library. Thus, the following presentation will not explain musical semiotics, but will show how one might further research this topic. Basically, semiotics is the study of signs and their meanings. Signs include words, gestures, images, sounds, and objects. According to Ferdinand de Saussure, a founder of modern semiotics, a sign consists of two parts: the signifier (the form which the sign takes) and the signified (the concept it represents).

An everyday example is a stop sign. In this case, the physical sign itself is the signifier, and the concept of stopping is the signified.

[Figure: a stop sign. The sign reading "STOP!!!" is the signifier; the concept of stopping is the signified.]
