ACADEMIC ACHIEVEMENT
LITERATURE REVIEW
Academic success is, without doubt, the main focus of all educational activities and has received tremendous attention from educationists. However, the prediction of academic success is still not well understood. Predicting academic achievement is apparently a complex, and by no means easy, task. In the relevant literature, there are so many intricately related factors associated with academic achievement that its prediction is, at best, situational.
Among the factors associated with academic achievement are: student factors (attitudes, individual differences, physical health and readiness, expectations, etc.) (Ali, 1983); teacher, instructional and curriculum factors (teacher attitude to students, type of classroom control, curriculum content, teacher adequacy in professional qualification and preparation, instructional content and presentation, use of relevant teaching aids, etc.) (Flowers, 1966; Burstall, 1970; Pidgeon, 1970); and home, cultural and parental factors (Strauss, 1951; Lloyd & Pidgeon, 1961; Pidgeon, 1970; Ali, 1983), among which the motivational factors of the home background have been found to influence the learner's academic achievement more strongly than the fixed material and economic conditions of the home. Institutional factors (type of school, population, control, discipline, personnel interactions, admission and examination or evaluation policies, etc.) (Pidgeon, 1970; Ali, 1983) are also said to strongly affect academic achievement. According to Cronbach (1969) and Atkinson (1978), environment and motivation exert a very strong influence on academic achievement.
Educational research literature is replete with findings indicating that academic achievement is apparently difficult to predict because too many factors operate upon the learner (Obemeata, 1970; Ohuche, 1974; Olatunji, 1976a, 1976b; Abdullahi, 1983; Ali, 1983).
The purpose of the present study is to analyse the factors that affect or influence students' academic performance, using a case study of the Faculty of Science, University of Ilorin.
THE ACT OF INFERENCE MAKING
Inference is defined as the attempt to generalize on the basis of limited information. Information is always limited because it is impractical, in terms of time and cost, to obtain total knowledge about everything. If everything were known, there would be no need for inference. Since science does not claim to know everything, inference is behind all science, except in those few cases where everything is known about a whole population. It is important to note, also, that inference underlies most thinking, even the unscientific type. In this way, scientific inference is not unlike common sense. What distinguishes scientific inference is that the process is made explicit and follows certain rules.
So how do you make an inference? First, you have to know what one is. An inference is an assumption made on the basis of specific evidence. Someone might say to you, "Nice hair," and you make the inference that the person is being rude and is really insulting you because it was said with a smirk. You infer the implied meaning – the meaning not said directly.
Inferences are made by doctors when they diagnose conditions, by FBI agents when they follow clues, by mechanics when they figure out what's wrong with your car.
We infer things all the time. If someone flips us the bird, we might figure out that they're mad at us for some reason. If someone is pushing a stroller, we infer that the person is taking a baby for a walk.
An inference is a guess, but it's an educated one, and you can typically come to only one of a few possible conclusions. For instance, in the cases above, the person flipping the bird may have only been scratching their chin with their middle finger. The person pushing the stroller could have been wheeling around a decrepit dog. Most likely, though, the first guesses were correct.
Inference demonstrates itself in science in at least four main ways: (1) hypothesizing, (2) sampling, (3) designing, and (4) interpreting. These four general areas are sometimes referred to as the "wheel" of science. Hypothesizing usually begins after one has examined the existing knowledge base, reviewed the relevant theories, and understood something of the context within which the phenomenon of interest occurs. In other words, you begin research by identifying a problem area (picking a topic), reading the theoretical research (especially the literature review sections), and finding a research question of interest to you (something that has puzzled previous theorists and researchers). Research questions are longer and broader than hypotheses.
Hypotheses are simply if-then sentences that can be categorized in certain logical forms, such as no difference (null hypothesis), associated difference, directionality of difference, and magnitude of difference. A good hypothesis implies all these forms in a single sentence, and the trick is to express them as briefly as possible and in simple English. All theories contain hypotheses, but you sometimes have to read them into the theories. There's no need to elaborate all hypotheses capable of being generated by every aspect of a theory, but a single theory can generate many hypotheses with its twists and turns. In the end, all hypotheses demonstrate inference by concisely reducing extant (existing) knowledge into manageable and meaningful form. Extant knowledge is what you obtain from a literature review.
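As a concrete illustration, here is a minimal sketch in Python (the essay itself contains no code, so the language, the scipy library, and the hypothetical exam scores are assumptions introduced only for illustration) of how the logical forms listed above map onto a simple two-sample test: a null hypothesis of no difference, a directional hypothesis, and an estimate of the magnitude of the difference.

```python
# A minimal sketch, using hypothetical exam scores (not data from any real study),
# of how the logical forms of a hypothesis map onto a simple two-sample test.
from scipy import stats

group_a = [62, 71, 68, 75, 66, 73, 69, 70]   # e.g. students taught with method A
group_b = [58, 64, 61, 67, 60, 65, 63, 62]   # e.g. students taught with method B

# Null hypothesis (no difference): mean(A) == mean(B)
t_stat, p_two_sided = stats.ttest_ind(group_a, group_b)

# Directional hypothesis: mean(A) > mean(B)
_, p_one_sided = stats.ttest_ind(group_a, group_b, alternative="greater")

# Magnitude of difference: here, simply the raw gap between the group means
magnitude = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

print(f"two-sided p = {p_two_sided:.4f}, one-sided p = {p_one_sided:.4f}, "
      f"mean difference = {magnitude:.2f}")
```

A small p-value under the null hypothesis of no difference is what would lead the researcher to favour the directional hypothesis, while the mean difference speaks to the magnitude form.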
Sampling goes to the heart of inference because a sample is what one draws on to test hypotheses and make generalizations. The idea of sampling is drawn from the mathematical discipline of probability theory, and from a particular subfield of that discipline called frequentism, which combines inductive (particular to general) and deductive (general to particular) reasoning. It is the selection of observables to make predictions about unobservables. Sampling, at bottom, is a matter of reducing, or simplifying. Since many phenomena in life tend to follow a normal, or almost normal, distribution (according to the central limit theorem), the known mathematical properties of the standard normal curve provide the basis for most predictions, as these are considered estimates of the fit between a sample (the observable) and the wider population (the unobservable).

If the researcher has been thinking inferentially, the method of sampling and the size of the sample will be selected on grounds of parsimony (making do with as few numbers as possible). There is no automatic need for large sample sizes, and the type of questions asked or relationships predicted will, in part, help determine the sampling plan. If one is going to infer causality, then random sampling, or some variant of it, is warranted. There are both probabilistic (making use of advanced features of probability theory) and non-probabilistic (not making use of advanced features of probability theory) sampling methods that suit different purposes. In general, the more one knows about the wider population or context of the research problem, the easier it is to justify the use of non-probabilistic sampling. Representativeness is what one is after with sampling, which means that each and every person or unit in your sample is a near-average person or unit, not some unusual case that would be called an outlier (too far out on some traits or attributes to be near-average).

Measurement is a research step related to sampling and the estimates derived from it. It is important that the sample enable measurement of constructs (unobservables) that are strongly linked to concepts (observables). In general, one should attempt to obtain measures that are meaningful, and this means interval or ratio level, especially if one is going to infer causality. Interval measurement (meaningful distances between points) and ratio measurement (a true zero point with meaningful distances) are also related to estimates of the validity and reliability of one's research. Validity and reliability refer, respectively, to whether one is measuring what one intends to measure and whether one is doing so consistently. These qualities of research, as well as the general idea of sampling, demonstrate inference by streamlining a project into manageable and meaningful proportions.
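The following sketch illustrates, with simulated numbers rather than real data (the population, sample size, and number of samples are arbitrary assumptions), why the central limit theorem mentioned above makes sampling-based inference workable: the means of repeated random samples drawn from a skewed population still cluster around the population mean in a roughly normal way.

```python
# A minimal sketch of the central limit theorem at work: sample means from a
# skewed (non-normal) simulated population still cluster around the population
# mean. All sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=10.0, size=100_000)   # a skewed "population"

# Draw 1,000 random samples of n = 50 and record each sample mean
sample_means = [rng.choice(population, size=50, replace=False).mean()
                for _ in range(1_000)]

print(f"population mean:            {population.mean():.2f}")
print(f"mean of the sample means:   {np.mean(sample_means):.2f}")
print(f"spread of the sample means: {np.std(sample_means):.2f}")
```

The smaller the spread of the sample means, the tighter the estimate of the fit between sample and population, which is exactly the kind of prediction about unobservables that sampling is meant to support.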
Design issues depend, in large part, upon the expertise and creativity of the researcher. What one wants is a good tradeoff between a parsimonious design and one that provides the highest level of confidence. There is no automatic need for the Cadillac of designs, the experimental model (with experimental and control groups), when one can get by with a less grand design. Of course, this depends upon the type of questions asked and relationships predicted. If one is predicting causality, or even hypothesizing correlation (that one thing moves up or down in correspondence with another thing), then the experimental model or a close approximation to it is warranted. Designing with confidence does not refer to the power of statistical estimates, although there is such a thing as statistical correspondence validity, which means that the intended analysis is consistent with the design to be used. Confidence, as the term is used here, simply means that the prospective design is one the researcher feels comfortable with and one that is likely to be appreciated by the rest of the scientific community. This is often referred to as the requirement of replication. Sound designs are capable of being replicated; each and every procedure is made explicit so that an outsider could come in, repeat the experiment exactly, and probably get the same results. Replication demonstrates, as design issues in general do, the quality of inference. Nothing is ever demonstrated directly and completely; only through what may seem tedious, rigorous, and systematic work do more and more tenable generalizations become possible.
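Below is a minimal sketch of the experimental model referred to above, reduced to its core procedure of random assignment to experimental and control groups; the participant IDs, group sizes, and fixed seed are hypothetical, but writing the assignment rule down this explicitly is what makes a design replicable by an outsider.

```python
# A minimal sketch of random assignment for an experimental design.
# Participant IDs and group sizes are hypothetical; the fixed seed makes
# the assignment procedure explicit and exactly repeatable.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants

random.seed(2012)          # fixed seed: anyone re-running this gets the same groups
random.shuffle(participants)

experimental_group = participants[:10]
control_group = participants[10:]

print("experimental:", experimental_group)
print("control:     ", control_group)
```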
Interpreting research is perhaps the prime example of inference. Interpretation is made on the basis of data analysis using some sort of statistic. A statistic is a mathematical tool or formula used to test the probability of being right or wrong when we say a hypothesis is true or false. There are about 100 common statistical tests. A test of one's hypothesis can always reach statistical significance if the sample size is increased enough, simply because of the way cutoffs are placed in the tables of numbers called significance tables. However, there is a difference between statistical and meaningful significance: statistical significance is no guarantee of meaningful significance, whether social or psychological (a point illustrated in the sketch following this discussion).

Generalizability is what one is after with interpretation, which means that general conclusions can be made on the basis of successful testing of all your hypotheses. There are two things to be wary of: (1) knowing the limitations of one's research, and (2) knowing the delimitations of one's research. Limitations are specific conclusions that confine your generalizations to what your analysis actually shows; the finding may be nothing more than the discovery of a relationship. You should always know your limitations. Delimitations are general conclusions that extend generalizations beyond the limitations of your study. You should always be cautious of over-generalizing to wider populations; you may go beyond your sample, but not beyond related populations. Be humble and modest in presenting your conclusions.

One way of demonstrating how limitations are evidence of inference is to look at the requirements of causality: association, temporality, and non-spuriousness. These three requirements can be said to summarize causal inference. Predicted relationships should vary concomitantly (association: as one goes up or down, the other goes up or down), one variable should precede the other in temporal order, and spurious variables should be reduced to a minimum. Spurious variables are things you haven't thought of that might explain what you've found.
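Returning to the earlier point about statistical versus meaningful significance, the sketch below (with simulated data and an arbitrarily chosen, trivially small effect) shows how a practically negligible difference produces a smaller and smaller p-value as the sample size grows.

```python
# A minimal sketch of statistical versus meaningful significance: a difference
# of 0.05 standard deviations is practically negligible, yet with a large
# enough sample it reaches "statistical significance". All numbers are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tiny_effect = 0.05   # 0.05 standard deviations: trivially small in practice

for n in (50, 5_000, 500_000):
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=tiny_effect, scale=1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>7}: p = {p:.4f}")
```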
In the end, there are usually more correlates than causes, and one cannot control everything. Causality is always an inference. This particular type of relationship must be inferred from the observed information and then related back to known information. Inference demonstrates itself in hypothesizing, sampling, designing, and interpreting. It is the basis for scientific generalization, especially generalizations having to do with the explanation of causality. It is never final proof, but because final proof is itself never possible, inference is the best substitute. It enables ways to advance science and debunk mistaken beliefs, and it is always mindful of its own limitations. Certain safeguards are built into the process that protect against unwarranted generalizations. The process of generalizing in an explicit and scientific manner is inference.