1.04.2006

Null hypothesis, from Wikipedia

In statistics, a null hypothesis is a hypothesis set up to be nullified or refuted. Although it was originally proposed to be any hypothesis, in practice it has come to be identified with the "nil hypothesis", which states that "there is no phenomenon". It is a hypothesis that is presumed true until statistical evidence in the form of a hypothesis test indicates otherwise. For example, if we want to compare the test scores of two random samples of men and women, a null hypothesis would be that the mean score in the male population from which the first sample was drawn is the same as the mean score in the female population from which the second sample was drawn:

H0: μ1 = μ2
where:

H0 = the null hypothesis
μ1 = the mean of population 1, and
μ2 = the mean of population 2.
Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population:

H0: μ1 − μ2 = 0
Formulation of the null hypothesis is a vital step in statistical significance testing. Having formulated such a hypothesis, we can then proceed to establish the probability of observing the data we have actually obtained, or data even more different from the prediction of the null hypothesis, if the null hypothesis is true. That probability is what is commonly called the p-value (or observed significance level) of the results.
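To make this concrete, here is a small Python sketch (my own illustration, not part of the article) of testing H0: μ1 = μ2 on two made-up samples of scores with a two-sample t-test; the data, the sample sizes and the choice of a t-test are all assumptions for the example.

import numpy as np
from scipy import stats

# Made-up samples standing in for the male and female test scores.
rng = np.random.default_rng(0)
male_scores = rng.normal(loc=70, scale=10, size=40)    # sample from population 1
female_scores = rng.normal(loc=72, scale=10, size=40)  # sample from population 2

# Probability of data at least this far from the prediction of
# H0: mu1 = mu2, computed on the assumption that H0 is true.
t_stat, p_value = stats.ttest_ind(male_scores, female_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

If the p-value falls below the level chosen in advance (say 0.05), the data are considered too improbable under the null hypothesis and we reject it.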

In formulating a particular null hypothesis, we are always also formulating an alternative hypothesis, which we will accept if the observed data values are sufficiently improbable under the null hypothesis. The precise formulation of the null hypothesis has implications for the alternative. For example, if the null hypothesis is that sample A is drawn from a population with the same mean as sample B, the alternative hypothesis is that they come from populations with different means (and we shall proceed to a two-tailed test of significance). But if the null hypothesis is that sample A is drawn from a population whose mean is lower than the mean of the population from which sample B is drawn, the alternative hypothesis is that sample A comes from a population with a higher mean than the population from which sample B is drawn, and we will proceed to a one-tailed test.
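Continuing the same kind of sketch (again with invented samples), the choice of alternative hypothesis maps directly onto the two-tailed versus one-tailed options of the test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_a = rng.normal(68, 10, 40)   # hypothetical sample A
sample_b = rng.normal(72, 10, 40)   # hypothetical sample B

# Two-tailed: H1 is simply "the two population means differ".
two_tailed = stats.ttest_ind(sample_a, sample_b, alternative="two-sided")

# One-tailed: H0 is "A's population mean is not higher than B's",
# H1 is "A's population mean is higher than B's".
one_tailed = stats.ttest_ind(sample_a, sample_b, alternative="greater")

print(f"two-tailed p = {two_tailed.pvalue:.3f}, one-tailed p = {one_tailed.pvalue:.3f}")

(The alternative keyword requires a reasonably recent version of scipy.)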

A null hypothesis is only useful if it is possible to calculate the probability of observing a data set with particular parameters from it. In general it is much harder to be precise about how probable the data would be if the alternative hypothesis is true.

If experimental observations contradict the prediction of the null hypothesis, it means that either the null hypothesis is false, or we have observed an event with very low probability. This gives us high confidence in the falsehood of the null hypothesis, which can be improved by increasing the number of trials. However, accepting the alternative hypothesis only commits us to a difference in observed parameters; it does not prove that the theory or principles that predicted such a difference are true, since it is always possible that the difference could be due to additional factors not recognised by the theory.
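The point about increasing the number of trials can be seen in a rough simulation (the effect size, noise level and sample sizes below are invented for illustration): when a real difference exists, larger samples reject the false null hypothesis far more often.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_diff = 2.0   # assumed real difference between the population means
alpha = 0.05      # chosen significance level

for n in (10, 50, 200):
    rejections = 0
    for _ in range(1000):
        a = rng.normal(0.0, 10.0, n)
        b = rng.normal(true_diff, 10.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    print(f"n = {n}: H0 rejected in {rejections / 1000:.0%} of simulated experiments")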

For example, rejection of a null hypothesis (that, say, rates of symptom relief in a sample of patients who received a placebo and a sample who received a medicinal drug will be equal) allows us to make a non-null statement (that the rates differed); it does not prove that the drug relieved the symptoms, though it gives us more confidence in that hypothesis.
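As a sketch of that placebo example (the counts below are invented), one common way to test the null hypothesis of equal relief rates is Fisher's exact test on a 2x2 table:

import numpy as np
from scipy import stats

# Rows: placebo, drug; columns: relieved, not relieved (hypothetical counts).
table = np.array([[18, 32],
                  [30, 20]])

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

A small p-value lets us say the relief rates differed; it does not by itself show that the drug caused the difference.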

The formulation, testing, and rejection of null hypotheses are methodologically consistent with the falsificationist model of scientific discovery formulated by Karl Popper and widely believed to apply to most kinds of empirical research. However, concerns regarding the high power of statistical tests to detect differences in large samples have led to suggestions for re-defining the null hypothesis, for example as a hypothesis that an effect falls within a range considered negligible. This is an attempt to address the confusion among non-statisticians between "significant" and "substantial", since sufficiently large samples are likely to indicate differences, however minor.
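The "significant but not substantial" problem is easy to reproduce in a simulation (the tiny effect and the huge sample size below are deliberate, invented choices): with enough data, even a negligible difference yields a very small p-value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200_000
group_a = rng.normal(70.0, 10.0, n)
group_b = rng.normal(70.2, 10.0, n)   # a 0.2-point difference most would call negligible

t_stat, p_value = stats.ttest_ind(group_a, group_b)
diff = group_b.mean() - group_a.mean()
print(f"estimated difference = {diff:.2f} points, p = {p_value:.2e}")

The test declares the difference statistically significant even though it is far too small to matter in practice, which is exactly the confusion the redefined null hypothesis tries to address.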

1.02.2006

crm -- short question

The questions below are for reference only. Please go back to the notes if you have any problems. And of course, you are welcome to leave comments.

Positivism and Positivist methods in Social research

Positivism is a systematic philosophical account of what counts as science. It is based on empiricism, a philosophical school concerning the basis of knowledge.

What is positivism?

Positivism’s account of scientific knowledge rests on some basic assumptions. Firstly, positivists assume that the basis of scientific knowledge is sensory experience. Sensory experience, such as a sweet taste in the mouth, is the only source of all ideas about the external world. Positivists hold that knowledge is of scientific value only if it can be put to the test of sensory experience.

Secondly, through sensory experience it is possible to observe and record the “bare facts” of the world. All observers will observe exactly the same “bare facts”, so that observation is the source of scientific ideas.

Thirdly, the universe behaves with regularities which are directly observable through sensory experience. Sensory experience builds up our minds and so enables us to perceive facts. Therefore, law-like statements can be tested on the basis of sensory observation.

Comments on positivism

However, there are some criticisms of positivism. Karl Popper suggested that observation is theory-laden. Observation is not a secure or sole basis on which to establish scientific knowledge; that is, whether or not an idea is scientific does not depend on whether it is derived from observation.

For example, before Galileo turned his telescope on the heavens, most people believed that the Earth was the center of the universe. But that belief was theory-laden, so the new idea was not readily accepted.

Positivist methods in Social research

To avoid subjectivity, positivists try to make research objective: they make all data quantitative, and they rely on a research process that transforms concepts into variables so that the research is workable.

For example, when we want to measure a concept such as the reliability of an employee, we have to transform reliability into variables such as the number of late arrivals, absences, or pieces of work handed in after the deadline.
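A toy sketch of that operationalization (the variable names, weights and scoring rule are my own assumptions, not an established measure) might look like this in Python:

from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    late_arrivals: int      # times late in the observation period
    absences: int           # days absent
    missed_deadlines: int   # pieces of work handed in after the deadline

def reliability_score(record: EmployeeRecord) -> float:
    # Fewer negative indicators -> higher reliability score (0-100).
    penalty = record.late_arrivals + 2 * record.absences + 3 * record.missed_deadlines
    return max(0.0, 100.0 - penalty)

print(reliability_score(EmployeeRecord(late_arrivals=3, absences=1, missed_deadlines=0)))

The concept itself ("reliability") is never observed directly; only the counted indicators are, which is exactly the positivist move from concepts to variables.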

But all in all, quantitative data still has value in research methods. Especially in communication research, it is hard to accept qualitative data as the only basis of scientific knowledge.