
ADDITIONAL

MATHEMATICS
PROJECT WORK 2010

MUHAMAD FAKHRUDDIN ARRAZI BIN MAT LUZI


930424-06-5547
5 IBNU KHALDUN
ENCIK MOHAMED BIN MASRI
APPRECIATION
PART 1
PART 2
PART 3
PART 4
PART 5
FURTHER EXPLORATION
REFLECTION
After weeks of struggle and hard work to complete the assignment given to us by our
teacher, Encik Mohamed Bin Masri, I finally did it within two weeks, with satisfaction and a
sense of success, because I now understand probability more deeply than before. I am
grateful and thankful to all parties who have helped me in the process of completing my
assignment. It was a great experience for me, as I have learnt to be more independent and
to work as a group. For this, I would like to take this opportunity to express my
thankfulness once again to all parties concerned.

Firstly, I would like to thank my Additional Mathematics teacher, Encik
Mohamed, for patiently explaining to us the proper and precise way to complete this
assignment. With his help and guidance, many problems I encountered were solved.

Besides that, I would like to thank my parents for all the support and
encouragement they have given me. In addition, my parents gave me guidance on the
methods used in this project, which has greatly enhanced my knowledge of this particular
area. Last but not least, I would like to express my thankfulness to my cousin and friends,
who patiently explained things to me and did this project with me as a group.
Most experimental searches for paranormal phenomena are statistical in nature.
A subject repeatedly attempts a task with a known probability of success due to chance,
and then the number of actual successes is compared to the chance expectation. If a
subject scores consistently higher or lower than the chance expectation after a large
number of attempts, one can calculate the probability of such a score due purely to
chance, and then argue, if the chance probability is sufficiently small, that the results are
evidence for the existence of some mechanism (precognition, telepathy, psychokinesis,
cheating, etc.) which allowed the subject to perform better than chance would seem to
permit.

Claims of evidence for the paranormal are usually based upon statistics which
diverge so far from the expectation due to chance that some other mechanism seems
necessary to explain the experimental results. To interpret the results of our
RetroPsychoKinesis experiments, we'll be using the mathematics of probability and
statistics, so it's worth spending some time explaining how we go about quantifying the
consequences of chance.
a) TASK 1
HISTORY OF PROBABILITY
Probability is a way of expressing knowledge or belief that an event will occur or
has occurred. In mathematics the concept has been given an exact meaning
in probability theory, which is used extensively in such areas of study as mathematics,
statistics, finance, gambling, science, and philosophy to draw conclusions about the
likelihood of potential events and the underlying mechanics of complex systems.

The scientific study of probability is a modern development. Gambling shows
that there has been an interest in quantifying the ideas of probability for millennia, but
exact mathematical descriptions of use in those problems only arose much later.

According to Richard Jeffrey, "Before the middle of the seventeenth century, the
term 'probable' (Latin probabilis) meant approvable, and was applied in that sense,
univocally, to opinion and to action. A probable action or opinion was one such as
sensible people would undertake or hold, in the circumstances."[4] However, in legal
contexts especially, 'probable' could also apply to propositions for which there was good
evidence.

Aside from some elementary considerations made by Girolamo Cardano in the
16th century, the doctrine of probabilities dates to the correspondence of Pierre de
Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known
scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous,
1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a
branch of mathematics. See Ian Hacking's The Emergence of Probability and James
Franklin's The Science of Conjecture for histories of the early development of the very
concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's Opera
Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755
(printed 1756) first applied the theory to the discussion of errors of observation. The
reprint (1757) of this memoir lays down the axioms that positive and negative errors are
equally probable, and that there are certain assignable limits within which all errors may
be supposed to fall; continuous errors are discussed and a probability curve is given.

Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination
of observations from the principles of the theory of probabilities. He represented the law
of probability of errors by a curve y = φ(x), x being any error and y its probability, and
laid down three properties of this curve:

1. it is symmetric as to the y-axis;
2. the x-axis is an asymptote, the probability of the error ∞ being 0;
3. the area enclosed is 1, it being certain that an error exists.

He also gave (1781) a formula for the law of facility of error (a term due to Lagrange,
1774), but one which led to unmanageable equations. Daniel Bernoulli (1778)
introduced the principle of the maximum product of the probabilities of a system of
concurrent errors.

PROBABILITY THEORY

Like other theories, the theory of probability is a representation of probabilistic
concepts in formal terms, that is, in terms that can be considered separately from their
meaning. These formal terms are manipulated by the rules of mathematics and logic,
and any results are then interpreted or translated back into the problem domain.

There have been at least two successful attempts to formalize probability, namely
the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation
(see probability space), sets are interpreted as events and probability itself as
a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that
is, not further analyzed) and the emphasis is on constructing a consistent assignment of
probability values to propositions. In both cases, the laws of probability are the same,
except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster-Shafer
theory or possibility theory, but those are essentially different and not compatible with
the laws of probability as they are usually understood.

APPLICATION

Two major applications of probability theory in everyday life are
in risk assessment and in trade on commodity markets. Governments typically apply
probabilistic methods in environmental regulation, where it is called "pathway analysis",
often measuring well-being using methods that are stochastic in nature, and choosing
projects to undertake based on statistical analyses of their probable effect on the
population as a whole.

A good example is the effect of the perceived probability of any widespread
Middle East conflict on oil prices, which have ripple effects in the economy as a whole.
An assessment by a commodity trader that a war is more likely vs. less likely sends
prices up or down, and signals other traders of that opinion. Accordingly, the
probabilities are not assessed independently nor necessarily very rationally. The theory
of behavioral finance emerged to describe the effect of such groupthink on pricing, on
policy, and on peace and conflict.

It can reasonably be said that the discovery of rigorous methods to assess and
combine probability assessments has had a profound effect on modern society.
Accordingly, it may be of some importance to most citizens to understand how odds and
probability assessments are made, and how they contribute to reputations and to
decisions, especially in a democracy.

Another significant application of probability theory in everyday life is reliability.
Many consumer products, such as automobiles and consumer electronics,
utilize reliability theory in the design of the product in order to reduce the probability of
failure. The probability of failure may be closely associated with the product's warranty.
(b)

CATEGORIES OF PROBABILITY

Empirical Probability of an event is an "estimate" that the event will happen based on
how often the event occurs after collecting data or running an experiment (in a large
number of trials). It is based specifically on direct observations or experiences.

Empirical Probability Formula

P(E) = (number of times the event occurs) / (total number of trials of the experiment)

where P(E) is the probability that an event, E, will occur; the numerator is the number of
ways the specific event occurs, and the denominator is the number of ways the
experiment could occur.

Theoretical Probability of an event is the number of ways that the event can occur,
divided by the total number of outcomes. It is finding the probability of events that come
from a sample space of known equally likely outcomes.

Theoretical Probability Formula

P(E) = n(E) / n(S)

where P(E) is the probability that an event, E, will occur, n(E) is the number of equally
likely outcomes of E, and n(S) is the number of equally likely outcomes of the sample
space S.

Comparing Empirical and Theoretical Probabilities:

Empirical probability is the probability a person calculates from many different
trials. For example, someone can flip a coin 100 times and then record how many times
it came up heads and how many times it came up tails. The number of recorded heads
divided by 100 is the empirical probability that one gets heads.

The theoretical probability is the result that one should get if an infinite number of
trials were done. One would expect the probability of heads to be 0.5 and the probability
of tails to be 0.5 for a fair coin.
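
To make the comparison concrete, here is a minimal sketch (in Python, which is not part of the original project and is assumed here purely for illustration) that estimates the empirical probability of heads from repeated simulated coin flips and compares it with the theoretical value of 0.5. The function name and the trial counts are illustrative choices.

```python
import random

def empirical_probability_of_heads(trials):
    """Flip a simulated fair coin `trials` times and return the observed proportion of heads."""
    heads = sum(1 for _ in range(trials) if random.random() < 0.5)
    return heads / trials

theoretical = 0.5  # n(E) / n(S) = 1/2 for a fair coin

for trials in (100, 1_000, 10_000):
    print(trials, empirical_probability_of_heads(trials), theoretical)
```

As the number of trials grows, the printed empirical values should settle near 0.5, which is exactly the behaviour described above.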
(a)

Suppose you are playing the Monopoly game with two of your friends. To start the
game, each player will have to toss the die once. The player who obtains the highest
number will start the game. List all the possible outcomes when the die is tossed once.

S = {1, 2, 3, 4, 5, 6}

(b)

Chart (Dice 1 on the horizontal axis, Dice 2 on the vertical axis)

Dice 2
6 | (1,6) (2,6) (3,6) (4,6) (5,6) (6,6)
5 | (1,5) (2,5) (3,5) (4,5) (5,5) (6,5)
4 | (1,4) (2,4) (3,4) (4,4) (5,4) (6,4)
3 | (1,3) (2,3) (3,3) (4,3) (5,3) (6,3)
2 | (1,2) (2,2) (3,2) (4,2) (5,2) (6,2)
1 | (1,1) (2,1) (3,1) (4,1) (5,1) (6,1)
      1     2     3     4     5     6    Dice 1
Table (rows: Dice 1, columns: Dice 2)

        1      2      3      4      5      6
1    (1,1)  (1,2)  (1,3)  (1,4)  (1,5)  (1,6)
2    (2,1)  (2,2)  (2,3)  (2,4)  (2,5)  (2,6)
3    (3,1)  (3,2)  (3,3)  (3,4)  (3,5)  (3,6)
4    (4,1)  (4,2)  (4,3)  (4,4)  (4,5)  (4,6)
5    (5,1)  (5,2)  (5,3)  (5,4)  (5,5)  (5,6)
6    (6,1)  (6,2)  (6,3)  (6,4)  (6,5)  (6,6)
(a)

Sum of the dots on both turned faces (x)   Possible outcomes   Probability, P(x)
2 (1,1) 1/36
3 (1,2)(2,1) 2/36=1/18
4 (1,3)(2,2)(3,1) 3/36=1/12
5 (1,4)(2,3)(3,2)(4,1) 4/36=1/9
6 (1,5)(2,4)(3,3)(4,2)(5,1) 5/36
7 (1,6)(2,5)(3,4)(4,3)(5,2)(6,1) 6/36
8 (2,6)(3,5)(4,4)(5,3)(6,2) 5/36
9 (3,6)(4,5)(5,4)(6,3) 4/36=1/9
10 (4,6)(5,5)(6,4) 3/36=1/12
11 (5,6)(6,5) 2/36=1/18
12 (6,6) 1/36
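
The table above can be checked by direct enumeration. The following sketch (Python assumed, purely illustrative) lists the outcomes for each sum x and computes P(x) = n(E)/n(S) as an exact fraction:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of tossing two dice.
outcomes = list(product(range(1, 7), repeat=2))

# Group the outcomes by the sum of the dots on both turned faces.
by_sum = {}
for d1, d2 in outcomes:
    by_sum.setdefault(d1 + d2, []).append((d1, d2))

for x in sorted(by_sum):
    print(x, by_sum[x], Fraction(len(by_sum[x]), len(outcomes)))
```

The output reproduces the probabilities in the table, from P(2) = 1/36 up to P(7) = 1/6 and back down to P(12) = 1/36.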
(b)

A = {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),
(2,1), (2,2), (2,3), (2,4), (2,5), (2,6),
(3,1), (3,2), (3,3), (3,4), (3,5), (3,6),
(4,1), (4,2), (4,3), (4,4), (4,5), (4,6),
(5,1), (5,2), (5,3), (5,4), (5,5), (5,6),
(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)}

B=

P = Both numbers are prime
= {(2,2), (2,3), (2,5), (3,2), (3,3), (3,5), (5,2), (5,3), (5,5)}

Q = The difference of the two numbers is odd
= {(1,2), (1,4), (1,6), (2,1), (2,3), (2,5), (3,2), (3,4), (3,6), (4,1), (4,3), (4,5), (5,2),
(5,4), (5,6), (6,1), (6,3), (6,5)}

C = P ∪ Q
= {(1,2), (1,4), (1,6), (2,1), (2,2), (2,3), (2,5), (3,2), (3,3), (3,4), (3,5), (3,6), (4,1),
(4,3), (4,5), (5,2), (5,3), (5,4), (5,5), (5,6), (6,1), (6,3), (6,5)}

R = The sum of the two numbers is even
= {(1,1), (1,3), (1,5), (2,2), (2,4), (2,6), (3,1), (3,3), (3,5), (4,2), (4,4), (4,6), (5,1), (5,3),
(5,5), (6,2), (6,4), (6,6)}

D = P ∩ R
= {(2,2), (3,3), (3,5), (5,3), (5,5)}
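
The events above can also be built by filtering the sample space. The sketch below (Python assumed; the variable names are illustrative) constructs P, Q and R, then C = P ∪ Q and D = P ∩ R, and prints the size of each event together with its probability:

```python
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))   # sample space of two dice, 36 outcomes
primes = {2, 3, 5}

P = {(a, b) for a, b in S if a in primes and b in primes}   # both numbers are prime
Q = {(a, b) for a, b in S if abs(a - b) % 2 == 1}           # difference of the two numbers is odd
R = {(a, b) for a, b in S if (a + b) % 2 == 0}              # sum of the two numbers is even

C = P | Q   # P union Q
D = P & R   # P intersect R

for name, event in (("P", P), ("Q", Q), ("C", C), ("D", D)):
    print(name, len(event), Fraction(len(event), len(S)))
```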
(a)

x      f    fx    fx²
2      2    4     8
3      4    12    36
4      4    16    64
5      9    45    225
6      4    24    144
7      11   77    539
8      4    32    256
9      6    54    486
10     3    30    300
11     1    11    121
12     2    24    288
Total  50   329   2467

From the table,


(i) mean = Σfx / n = 329/50 = 6.58

(ii) variance = Σfx²/n - (mean)² = 2467/50 - 6.58² = 6.0436

(iii) standard deviation = √6.0436 ≈ 2.458
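
The three values above follow directly from the frequency table. Here is a small sketch (Python assumed; the dictionary simply re-enters the table from part (a)) that reproduces the calculation:

```python
import math

# Frequency table from part (a): x = sum of the two dice, f = frequency, n = 50 tosses.
freq = {2: 2, 3: 4, 4: 4, 5: 9, 6: 4, 7: 11, 8: 4, 9: 6, 10: 3, 11: 1, 12: 2}

n = sum(freq.values())                                                # 50
mean = sum(x * f for x, f in freq.items()) / n                        # Σfx / n = 6.58
variance = sum(x * x * f for x, f in freq.items()) / n - mean ** 2    # Σfx²/n - mean² = 6.0436
std_dev = math.sqrt(variance)                                         # ≈ 2.458

print(mean, variance, std_dev)
```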

(b)

The experiment is repeated with n = 100 tosses of the two dice. For this table,
Σf = 100, Σfx = 691 and Σfx² = 5387.

From the table,

(i) mean = Σfx / n = 691/100 = 6.91

(ii) variance = Σfx²/n - (mean)² = 5387/100 - 6.91² = 6.1219

(iii) standard deviation = √6.1219 ≈ 2.474

Part 5

(a)

x      2      3      4      5      6      7      8      9      10     11     12
P(x)   1/36   1/18   1/12   1/9    5/36   1/6    5/36   1/9    1/12   1/18   1/36

Theoretical mean = 7.00, variance = 5.83, standard deviation = 2.415

                      n = 50    n = 100   Theoretical
Mean                  6.58      6.91      7.00
Variance              6.0436    6.1219    5.83
Standard deviation    2.458     2.474     2.415
(b)

For n = 50, mean = 6.58
For n = 100, mean = 6.91
Actual (theoretical) mean = 7

Hence, we get a different mean for a different number of trials. As the number of trials
becomes larger, the empirical (experimental) mean tends to get closer to the theoretical
(actual) mean. The same goes for the variance and standard deviation.

(c)

0 < mean ≤ 7

The mean moves closer to 7 as n becomes bigger, and further away from 7 as n
becomes smaller.
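
The trend described in (b) and (c) can be illustrated with a simulation. This sketch (Python assumed; the sample sizes are illustrative) tosses two fair dice n times and reports the sample mean and variance of the sums, to be compared against the theoretical values of 7 and about 5.83:

```python
import random
import statistics

def simulate_sums(n):
    """Toss two fair dice n times and return the mean and population variance of the sums."""
    sums = [random.randint(1, 6) + random.randint(1, 6) for _ in range(n)]
    return statistics.fmean(sums), statistics.pvariance(sums)

for n in (50, 100, 10_000):
    mean, variance = simulate_sums(n)
    print(n, round(mean, 2), round(variance, 2))
# The larger n is, the closer the printed values tend to be to 7 and 5.83.
```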
FURTHER EXPLORATION

In probability theory, the law of large numbers (LLN) is a theorem that
describes the result of performing the same experiment a large number of times.
According to the law, the average of the results obtained from a large number of trials
should be close to the expected value, and will tend to become closer as more trials are
performed.

For example, a single roll of a six-sided die produces one of the numbers 1, 2, 3, 4, 5, 6,
each with equal probability. Therefore, the expected value of a single die roll is

(1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5
According to the law of large numbers, if a large number of dice are rolled, the
average of their values (sometimes called the sample mean) is likely to be close to 3.5,
with the accuracy increasing as more dice are rolled.
Similarly, when a fair coin is flipped once, the expected value of the number of
heads is equal to one half. Therefore, according to the law of large numbers, the
proportion of heads in a large number of coin flips should be roughly one half. In
particular, the proportion of heads after n flips will almost surely converge to one half
as n approaches infinity.

Though the proportion of heads (and tails) approaches half, almost surely the
absolute (nominal) difference in the number of heads and tails will become large as the
number of flips becomes large. That is, the probability that the absolute difference is a
small number approaches zero as number of flips becomes large. Also, almost surely
the ratio of the absolute difference to number of flips will approach zero. Intuitively,
expected absolute difference grows, but at a slower rate than the number of flips, as the
number of flips grows.
The LLN is important because it "guarantees" stable long-term results for random
events. For example, while a casino may lose money in a single spin of
the roulette wheel, its earnings will tend towards a predictable percentage over a large
number of spins. Any winning streak by a player will eventually be overcome by the
parameters of the game. It is important to remember that the LLN only applies (as the
name indicates) when a large number of observations are considered. There is no
principle that a small number of observations will converge to the expected value or that
a streak of one value will immediately be "balanced" by the others.
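
The law of large numbers for the die-rolling example can be seen in a short simulation. The sketch below (Python assumed; the seed and checkpoints are arbitrary) tracks the running average of single die rolls, which should drift towards the expected value of 3.5 as the number of rolls grows:

```python
import random

random.seed(1)   # fixed seed so the run is reproducible

total = 0
for rolls in range(1, 100_001):
    total += random.randint(1, 6)        # one roll of a fair six-sided die
    if rolls in (10, 100, 1_000, 10_000, 100_000):
        print(rolls, total / rolls)      # running average approaches 3.5
```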

REFLECTION

While I was conducting the project, I learned many moral values that I can practise.
This project work taught me to be more confident when doing something, especially the
homework given by the teacher. I also learned to be a disciplined student who is always
punctual when doing work, completes the work independently and researches
information from the internet. I also enjoyed making this project very much during the
school holidays.
