Extension programmes are mostly funded with public money and are planned and implemented by an organization, which in most cases is a department of a government. In order to justify the appropriation of public funds and continuing support from the people, it is necessary that their management as well as their impact be properly and adequately evaluated from time to time. How to evaluate the management, achievements and failures of these programmes has been a challenge to extension workers right from the time when planned extension programmes were introduced. However, 'it was when Tyler's (1950) philosophy of educational evaluation became a part of extension education that the pattern of extension evaluation took a more usable, understandable form' (Sabrosky, 1966).
The word 'evaluation' has its origin in the Latin word 'valere', meaning to be strong or valiant. Its dictionary meanings are the determination of the value, strength or worth of something; an appraisal; an estimate of the force of something; or the making of a judgement about something.
Evaluation, as applied to the field of extension education, may be defined as 'a process of systematic appraisal by which we determine the value, worth or meaning of an activity or an enterprise'. It is a method for determining how far an activity has progressed and how much further it should be carried to accomplish its objectives. Thus, to an extension worker, evaluation means determining the results of his extension programmes in order to know the extent to which the objectives have been achieved, why they were or were not achieved, and what changes would be needed in the planning or implementation of the programme if it were undertaken again.
Tyler (1950) developed two basic notions regarding educational evaluation, which apply equally to extension evaluation. These notions are that evaluation (i) is essentially a process of determining changes in the behaviour of the people covered under the programme and (ii) is a process of determining the degree to which these behavioural changes are actually taking place. Thus extension evaluation may be said to be a process for determining the behavioural changes in people resulting from extension programmes. Once evaluation became an integral part of the extension education process, extension managers started applying it to evaluate the planning, management and implementation aspects of extension programmes.
Definitions of Evaluation
More specific definitions of evaluation are given by persons involved in rural development programmes. While most of these definitions refer specifically to the assessment of the results of programmes of extension education, they can also be applied to the training aspect of such programmes. Some definitions of evaluation are:
- It is a process which enables the administrator to describe the effects of his programme and thereby make progressive adjustments in order to reach his goal more effectively (Jahoda and Barnit, 1955).
- Programme evaluation is the determination of the extent to which the desired objectives have been attained or the amount of movement that has been made in the desired direction (Boyle and Johns, 1970).
- Programme evaluation is the process of judging the worth or value of a programme. The judgement is formed by comparing what the programme is with what it should be (Steele, 1970).
- Evaluation is the process of delineating, obtaining and providing useful information for judging decision alternatives (Stufflebeam, 1971).
- Evaluation is a co-ordinated process carried on by the total system and its individual subsystems. It consists of making judgements about a planned programme based on established criteria and known, observable evidence (Boone, 1985).
Nature of evaluation
1. Evaluation is not measurement: Evaluation is an integral part of extension education, and all aspects of extension work need evaluation. Evaluation does not mean merely measuring achievements, which is usually done after the programme has been executed. Extension being an educational process, it is necessary to evaluate the management of the programme and the methods used, the achievements accomplished in line with the objectives, and also the reasons for success or failure.
2. Evaluation is not exactly scientific research: When we think of evaluation as a process of collecting information as a basis for making decisions, forming judgements and drawing conclusions, we realise it has much in common with scientific research. There is, however, a considerable difference between casual everyday evaluation and scientific research, though the difference is one of degree rather than kind. Casual everyday evaluation can be placed at one end of a scale and scientific research at the other, with five positions on the scale and no sharp lines of distinction between them: casual everyday evaluation, self-checking evaluation, do-it-yourself evaluation, extension evaluation studies and scientific research.
| 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| Casual everyday evaluation | Self-checking evaluation | Do-it-yourself evaluation | Extension evaluation studies | Scientific research |
Types of evaluation
- Self-evaluation: This is to be carried out by every worker as a matter of routine. It requires the self-critical attitude that is so essential for extension work; with such an attitude, the chances of an extension worker growing and continuously improving his professional competency become greater.
- Internal evaluation: Evaluation carried out by the agency responsible for the planning and implementation of the programme. Some of the methods used for internal evaluation are: systematic use of diaries and reports of workers, planned visits of staff members to the work spots, use of special questionnaires and proformas for observation and inquiry, etc.
- External evaluation: Evaluation conducted by a person or a committee from outside the area of operation. One of the strong features of the Indian Community Development Programme is that, simultaneously with its start, an independent agency, namely the Programme Evaluation Organization, was established.
Evaluation can also be classified into (i) concurrent and (ii) ex-post facto evaluation.
Evaluating programme planning
As a result of experience, theory, research and experimentation, much information has been accumulated about how an extension programme should be planned. Progress in science and technology and the broadening of extension's clientele, with the accompanying great variation in needs and interests, have made the scientific planning of extension education programmes more important than ever before. There is considerable agreement on certain criteria which, if followed, make for successful extension programme planning at different levels. These criteria represent the ideal against which to compare our practices and procedures of programme planning. Some of the steps needed to evaluate the programme planning function in view of these criteria include:
i. Identify the evidence needed to form a judgement about each criterion.
ii. Specify the methods that will be used to obtain the evidence, such as personal observation, personal interview or through a systematic survey.
iii. On the basis of the evidence gathered, judge whether or not each criterion is being adequately satisfied in the programme planning activities.
Extension evaluation process
There are several models of evaluation available in the literature. However, a very simplified version of most of these models may be quite workable for evaluating extension programmes since, as Bhatnagar (1987) has pointed out, any extension evaluation process has to be based on certain assumptions. For example, if some inputs are provided in the form of a programme, specific outputs can be expected; if these outputs occur, the purpose of the programme can be achieved; and if the purpose is achieved, the development goal is realised. This means that evaluation has to be designed so that the quality, types and adequacy of the inputs, the outputs and their impact on achieving the programme objectives are all assessed systematically.
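As an illustration only, this assumption chain can be written out as a checklist of evaluation questions at each level. The sketch below is a minimal example; the levels follow the input-output-purpose-goal logic described above, but the sample questions are hypothetical and not drawn from any particular programme.

```python
# Hypothetical representation of the input -> output -> purpose -> goal
# assumption chain that an evaluation design is expected to cover.
evaluation_levels = [
    ("inputs",  "Were the planned inputs (staff, funds, materials) provided adequately and on time?"),
    ("outputs", "Were the planned teaching activities (demonstrations, trainings, visits) actually delivered?"),
    ("purpose", "Did the intended behavioural changes occur among the programme beneficiaries?"),
    ("goal",    "Is there evidence that the wider development goal is being realised?"),
]

# Print the chain as a simple checklist for the evaluator.
for level, question in evaluation_levels:
    print(f"{level:>8}: {question}")
```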
Steps involved in an extension programme evaluative process may be as follows:
i) Formulate evaluation objectives
Specific objectives to be achieved through the evaluative process must be clearly and adequately identified and stated. All further efforts should be knit around these objectives.
ii) Classify programme objectives
It is assumed that each extension programme, when formulated and implemented, will have specific, well-defined objectives. Since evaluation is basically a process of determining the extent to which various extension teaching activities were organized and managed and the extent to which they contributed to achieving the goals, the programme objectives must be clearly understood and, if necessary, further broken down into measurable terms. This is a crucial step, as all further efforts will be directed towards collecting evidence related to these objectives.
iii) Identify indicators
To identify indicators, or the kind of evidence necessary to evaluate achievement in relation to the specified programme objectives, it is necessary that the specific beneficiaries of the programme be identified, the kind of behavioural changes expected in them be clearly stated, the kind of learning experiences to be provided to them be spelled out, and the level of management to be achieved in providing those learning experiences be specified. Once this is done, identification of specific indicators to measure the achievements will not be difficult.
iv) Decide the kind of information needed
Once the indicators for evaluating the management and performance of a programme have been identified, the specific information to be collected may be worked out. Since there is usually more information available than an extension worker can collect, he has to be very discriminating about the kind and amount of information to be collected. The timing for the collection of information may also need to be specified.
v) Sampling
The purpose of sampling is to take a relatively small number of units from a population in such a way that the evidence collected from them becomes representative of the entire population. Although there are several sampling methods, stratified sampling procedures are perhaps the most suitable for extension evaluation studies, as they allow inclusion of all interested groups and ensure enough heterogeneity in the sample.
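As a minimal sketch of what proportionate stratified random sampling might look like, the example below assumes a hypothetical list of beneficiary records stratified by village; the field names, strata and sampling fraction are illustrative only.

```python
import random
from collections import defaultdict

def stratified_sample(records, stratum_key, fraction, seed=42):
    """Draw a proportionate stratified random sample.

    records      -- list of dicts describing the population units
    stratum_key  -- field used to group units into strata (e.g. 'village')
    fraction     -- sampling fraction applied within every stratum
    """
    random.seed(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[rec[stratum_key]].append(rec)

    sample = []
    for units in strata.values():
        # Take the same fraction from every stratum, with at least one unit.
        n = max(1, round(len(units) * fraction))
        sample.extend(random.sample(units, n))
    return sample

# Hypothetical population of programme beneficiaries
population = [
    {"id": 1, "village": "A"}, {"id": 2, "village": "A"},
    {"id": 3, "village": "B"}, {"id": 4, "village": "B"},
    {"id": 5, "village": "B"}, {"id": 6, "village": "C"},
]
print(stratified_sample(population, "village", fraction=0.5))
```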
vi) Decide the design of evaluation
An ideal design of evaluation may be an experimental one, since setting up control and treatment groups allows the effect of the programme to be separated from other factors. Several experimental designs, such as the one-group pre-test-post-test design, the static group comparison, the pre-test-post-test control group design, the Solomon four-group design, the longitudinal study design, etc., are available in the literature and can be used. However, in actual practice, extension programmes are seldom run in a way that allows an experimental design of evaluation; in pilot projects it might be possible to use one. By and large, a survey method is used. This method can be used for evaluating ongoing progress or as an ex-post facto evaluation of the programme after it has completed its tenure.
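Purely as an illustration of the pre-test-post-test control group logic, the sketch below compares the mean score gains of a hypothetical treatment (programme) group and control group; all figures are invented, and in practice significance testing would be done with a standard statistical package.

```python
def mean(values):
    return sum(values) / len(values)

def mean_gain(pre, post):
    """Average change in score from pre-test to post-test."""
    return mean([after - before for before, after in zip(pre, post)])

# Hypothetical knowledge-test scores (0-20 scale) for matched respondents
programme_pre, programme_post = [8, 10, 7, 9, 11], [14, 15, 12, 13, 16]
control_pre, control_post = [9, 8, 10, 7, 9], [10, 9, 11, 8, 10]

# The programme's estimated effect is the gain of the treatment group
# over and above the gain of the control group.
effect = mean_gain(programme_pre, programme_post) - mean_gain(control_pre, control_post)
print(f"Estimated effect of the programme: {effect:.1f} points")
```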
vii) Collection and analysis of evaluation evidence
There are many methods for collecting information for evaluative purposes, such as the mail questionnaire, personal interview, distributed questionnaires, group interviews, case studies, systematic field observations, systematic study of secondary data etc. Selection of the right kind of data collection method will depend on the objectives of the evaluation, kind of information needed, time and resources available and the type of respondents from whom information is to be collected.
However, whatever the method used, a specific questionnaire or interview schedule or data recording sheet must be developed with care.
Once the data are collected, they must be tabulated, summarized and analyzed with adequate care. This step should not be rushed; to avoid delay, however, the analysis may be done with the help of a computer.
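As a simple illustration of tabulation, the sketch below builds a frequency table from hypothetical coded survey responses; in practice this would more often be done with a spreadsheet or statistical package.

```python
from collections import Counter

# Hypothetical coded responses on adoption of a recommended practice
responses = ["adopted", "adopted", "not adopted", "partially adopted",
             "adopted", "not adopted", "adopted", "partially adopted"]

counts = Counter(responses)
total = len(responses)

# Print a frequency and percentage table of the responses.
print(f"{'Response':<20}{'Frequency':>10}{'Per cent':>10}")
for category, n in counts.most_common():
    print(f"{category:<20}{n:>10}{100 * n / total:>9.1f}%")
```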
viii) Interpretation of the results
This is a very crucial step, as evaluation results can also be misinterpreted. Once tentative generalizations are arrived at, it may be appropriate for them to be discussed informally among the interpreters as well as with programme planning and implementation officials, so that the results of the evaluation are put in proper perspective.
The evaluation results must clearly state the achievements, failures and future adjustments needed. A written report of the evaluation findings should be prepared and made available to all concerned.