During research, especially when the concepts we want to measure are complex and abstract and no standardized measurement tools are available, we face problems of measurement. Likewise, when we measure something prone to subjective bias, such as attitudes and opinions, valid measurement becomes a problem. A similar problem may be faced, to a lesser degree, when measuring physical or institutional concepts. Knowledge of procedures that enable accurate measurement of abstract concepts is therefore essential.
Scaling techniques are immensely beneficial for a researcher.
Scaling is the process of assigning numbers to various degrees of attitudes, preferences, opinions and other concepts. Formally, scaling is a procedure for assigning numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the property in question.
Scaling can be done in two ways: (i) making a judgement about an individual's characteristics and then placing him on a scale defined in terms of that characteristic, and (ii) constructing questionnaires in which an individual's response scores assign him a place on a scale. A scale is a continuum consisting of a highest point and a lowest point, with several intermediate points between these two extremities. These scale-point positions are hierarchically related to each other. Numbers measuring the degree of difference in attitudes or opinions are assigned to individuals corresponding to their positions on the scale. Therefore, the term 'scaling' implies procedures for determining quantitative measures of subjective, abstract concepts.
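The idea of assigning numbers to positions on a continuum can be sketched with a simple summated scale. The response labels, scoring and example responses below are hypothetical illustrations, not a standardized instrument:

```python
# A minimal sketch of scaling: responses on a 5-point agreement
# continuum are mapped to numbers, and a respondent is placed on
# the scale by summing his item scores. Labels and values are
# illustrative assumptions, not a standardized instrument.

SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def scale_score(responses):
    """Assign a number to each response and sum them,
    placing the respondent at a point on the continuum."""
    return sum(SCALE[r] for r in responses)

responses = ["agree", "strongly agree", "neutral"]
print(scale_score(responses))  # 4 + 5 + 3 = 12
```

A higher total places the respondent nearer the favourable extremity of the continuum; the hierarchical ordering of scale points is preserved because larger numbers always correspond to stronger agreement.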
a) Concept development: This is the first step, in which the researcher should gain a complete understanding of all the important concepts relevant to his study. This step is more applicable to theoretical studies than to practical studies, where the basic concepts are already established beforehand.
b) Specification of concept dimensions: Here, the researcher is required to specify the dimensions of the concepts, which were developed in the first stage. This is achieved either by adopting an intuitive approach or by an empirical correlation of the individual dimensions with that concept and/or other concepts.
c) Indicator selection: In this step, the researcher develops the indicators that help in measuring the elements of the concept. These indicators include questionnaires, scales and other devices that measure the respondents' opinions, mindset, knowledge, etc. Using more than one indicator lends stability to the scores and improves their validity.
d) Index formation: Here, the researcher combines the different indicators into an index. In case there are several dimensions of a concept, the researcher needs to combine them into one index.
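The index-formation step above can be sketched as follows. Each indicator is first rescaled to a common 0-1 range and the results are combined by a weighted average; the indicators, ranges and weights are invented for illustration:

```python
# A sketch of index formation: several indicator scores for one
# concept are normalized to a common 0-1 range and combined into
# a single composite index. All values below are hypothetical.

def normalize(value, lo, hi):
    """Rescale a raw indicator score to the 0-1 range."""
    return (value - lo) / (hi - lo)

def build_index(indicators):
    """Combine normalized indicators into one index by a
    weighted average (the weights should sum to 1)."""
    return sum(w * normalize(v, lo, hi)
               for v, lo, hi, w in indicators)

# (raw score, scale minimum, scale maximum, weight)
indicators = [
    (18, 5, 25, 0.5),   # e.g. a 5-item attitude scale
    (7, 0, 10, 0.3),    # e.g. a knowledge quiz
    (3, 1, 5, 0.2),     # e.g. a single opinion rating
]
print(round(build_index(indicators), 3))  # 0.635
```

Normalizing before combining prevents an indicator with a wide raw range from dominating the index; the weights express the researcher's judgement about the relative importance of each dimension.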
The practicality of a measuring instrument can be judged in terms of its economy, convenience and interpretability. From the operational point of view, the measuring instrument needs to be practical; in other words, it should be economical, convenient and interpretable.
The economy consideration suggests that a trade-off is needed between the ideal research project and what the budget can afford. The length of the measuring instrument is an important area where economic pressures are quickly felt. Even though more items give better reliability, in the interest of limiting interview or observation time we may have to take only a few items for the study. Similarly, the choice of data-collection method sometimes depends on economic factors.
The convenience test suggests that the measuring instrument should be easy to administer. For this purpose, one should pay proper attention to the layout of the measuring instrument. For example, a questionnaire with clear instructions and illustrated examples is more effective and easier to complete than one that lacks these features. The interpretability consideration is especially important when persons other than the designers of the test are to interpret the results. In order to be interpretable, the measuring instrument must be supplemented by the following:
detailed instructions for administering the test,
evidence about the reliability, and
guides for using the test and interpreting results.
Reliability is an essential element of test quality. A measuring instrument is reliable if it provides consistent results. But a reliable instrument need not be valid. For example, a clock that runs consistently ten minutes fast is reliable, but it does not show the correct time. Reliability thus deals with consistency, or reproducibility, of results: if a reliable test is administered on two occasions, the same conclusions are reached both times, whereas a test with poor reliability will yield markedly different scores for the same examinee on the same test.
If a test is valid, it must also be reliable, but the converse is not true. Although reliability is not as valuable as validity, it is easier to assess. Reliability has two key aspects: stability and equivalence. The degree of stability can be determined by comparing the results of repeated measurements with the same candidate and the same instrument. Equivalence concerns how much error may be introduced by different investigators, or by different samples of items, when the test is repeated. The best way to test for equivalence is for two investigators to compare their observations of the same events. Reliability can be improved in the following ways:
(i) By standardizing the measurement conditions to reduce external factors such as boredom and fatigue; this leads to stability.
(ii) By giving detailed directions for measurement, by using trained and motivated persons to conduct the research, and by broadening the sample of items used; this leads to equivalence.
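The stability aspect described above can be sketched numerically: the same instrument is administered twice to the same respondents, and the two sets of scores are compared with a Pearson correlation. The scores below are invented for illustration:

```python
# A sketch of checking stability (test-retest reliability):
# the correlation between two administrations of the same test
# to the same group. Scores are hypothetical examples.

import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [12, 15, 9, 20, 17]   # first administration
test2 = [13, 14, 10, 19, 18]  # repeat with the same respondents
r = pearson(test1, test2)
print(round(r, 3))  # close to 1 -> high stability
```

The same computation could serve the equivalence aspect: instead of two administrations over time, the two score lists would come from two investigators observing the same events, and a high correlation would indicate little investigator-introduced error.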