Development and standardization of alternate forms of an object sorting task for normal adults






The area of conceptual behavior of normal adults has been virtually ignored in psychological investigations. Normal groups have been used, for the most part, only secondarily as controls in studies which have focused upon "abnormal" conceptualization of clinical groups. Thus, no instrument or scoring system exists which is adequate to investigate the conceptual behavior of normal adults for the purpose of establishing a "baseline" against which to evaluate clinically "deviant" behavior. Present methods are generally too simple for this group or depend too heavily upon verbal skills. Another deterrent to the adequate characterization and evaluation of conceptualization has been the widespread usage of a dichotomous distinction between abstract and concrete thinking as the basis for measuring and analyzing conceptual behavior. Widespread usage of this dichotomy resulted from the influence of Goldstein's publication of his work with brain-damaged patients after World War I. The use of the terms "concrete" and "abstract" has become too automatic, and abstract behavior has acquired a value connotation of desirability or superiority. Frequently, only normal adults have been considered capable of "genuine" abstractness. Thus, the aims of the present study were (1) to develop a more adequate technique than now exists to evaluate the conceptual behavior of normal adults, (2) to devise a scoring system that would bypass the abstract-concrete dichotomy, and (3) to obtain standardization data from a sample of 100 normal adults as a basis for deriving tentative norms of conceptual behavior for such a population. Fulfillment of these aims involved first the construction of the test instrument. An object sorting task was considered admirably suited for this purpose, since it appears to constitute a miniature replication of the individual's ordering of his everyday world of objects, and responses to it can be evaluated independently of language facility.
Two potentially equivalent forms of an object sorting task were developed. Each form of the task (OST) was patterned after Rapaport's Revised Object Sorting Test and is composed of an active and a passive phase. For each form, the final composition of the active phase consists of 10 sortings of objects chosen by the subject; the passive phase consists of the subject's conceptualization of 11 object groupings chosen by the examiner. A total of 47 heterogeneous, familiar items, including pictures and words or phrases, constitute the object battery. To assess behavior on the OST, a scoring system was devised by scaling the Closed-Open and Public-Private dimensions of McGaughran's Conceptual Area Scoring System. The Closed-Open dimension scales differences in "order of conceptual classification," and the Public-Private dimension characterizes the amount of social agreement inherent in the expressed concept. The intent was to make the two dimensions independent of each other, since they are scored interactively in the Area Scoring System. Two additional scoring variables were introduced. "Essentiality" of a given concept is the degree to which it identifies the most culturally useful and salient attributes of a particular object grouping. "Use of a generic term" assesses degree of language skill or facility in conceptual behavior. The construction of the Object Sorting Task and development of the scoring system in the present study are an extension of earlier work done by the writer. An original form of only the Passive phase of the OST and of the two scaled dimensions was used in a study with a small sample of normal adults. Results obtained from that study served as a guide for the revisions introduced in the present study. The final forms of the Object Sorting Task were administered to a standardization sample of 100 normal adults. The sample was stratified in terms of sex, age, education and intelligence.
Responses obtained were scored according to the four variables. Two judges scored randomly selected sets of 10 subject protocols for each of the Task phases of both forms (i.e., Active phase, Form I, etc.) according to the Closed-Open and Public-Private dimensions in order to evaluate interscorer reliability. The data were analyzed by product-moment correlations and/or t-tests of mean differences to assess degree of interscorer agreement, equivalence of the two OST forms, independence of the four scoring dimensions, difference between Active and Passive sorting behavior, effects of the demographic variables of intelligence, education, age and sex, and effect of order of presentation of OST forms. High interscorer correlations were obtained in the final comparison of judgments, which demonstrates that, if judges are sufficiently familiar with the scoring system and share a common understanding of it, very close agreement can be achieved with the present system. The standardization figures showed the sample to be moderately "private" and moderately "open"; the mean scores for the Closed-Open and Public-Private dimensions fell at approximately the mean of each of these scales. There were relatively few Essential responses, and little use of generic terms. Correlations between the two OST forms were moderately high; significant differences were found only between the two Passive phases, except for the Generic Term variable. Form I appeared to be the more difficult task. It was considered that the forms approach an acceptable degree of equivalence, although some additional revisions are needed in both forms and phases, primarily in the Active phases. A comparison of results obtained with the Closed-Open and Public-Private scoring dimensions showed that they are not highly correlated. The dimensions were shown to be sufficiently independent in the scoring of Active sorting behavior.
Some confounding was reflected in the scoring of the Passive phases, which indicates the need for further work on this part of the system to achieve the desired degree of independence. Essentiality was found to be substantially correlated with both of the scaled dimensions and more moderately with use of generic term. Essentiality was considered to be the weakest scoring variable. Generic Term was not highly correlated with the scaled dimensions and approaches the status of an independent variable. Active and Passive behavior were found to be significantly different on all but two measures. Responses were consistently more open, more public, and more essential, with more use of generic terms, in the Passive phases. Various possible sources of the difference were discussed (e.g., the fact that the Active phases were shown by the results to be more difficult tasks than the Passive phases). Analysis of the effects of the demographic variables showed the effect of sex to be insignificant. Degree of intelligence exerted its effect primarily upon Closed-Open sorting behavior and upon performance in the Passive phases of the Task; it was found to have little effect upon Public-Private sorting behavior. Higher intelligence level was associated with more open responses. Level of education also was associated primarily with scores on the Closed-Open dimension and in the Passive phases, although to a lesser extent than degree of intelligence. Age differences had little effect upon performances. The highest correlations were between greater age and more closed conceptualizing on the Closed-Open dimension, slightly more so for Active than for Passive behavior. In an attempt to control for the effects of order of presentation of the two forms of the OST, 50 subjects were administered Form I first, and 50 received Form II first.
Comparison of first and second session performances revealed significant differences as a function of this variable, which the balancing did not sufficiently eliminate. Subjects were consistently more open, more public and more essential during second sessions; the differences were significant, however, only for Form I in terms of the Closed-Open and Public-Private dimensions. Factors contributing to the differences in performance, and possible solutions for subsequent work, were discussed (e.g., with the more difficult Form I Passive phase, the subjects benefited substantially more from a first practice session whereas, with the easier Form II Passive, the first session made little difference).
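The t-tests of mean differences used throughout the analysis (e.g., comparing first-session with second-session performance) can be sketched as a pooled-variance two-sample t statistic. The scores below are hypothetical, chosen only to illustrate the computation, and are not taken from the standardization data:

```python
from statistics import mean, variance

def t_independent(x, y):
    """Pooled-variance two-sample t statistic for a mean difference."""
    nx, ny = len(x), len(y)
    # Pooled estimate of the common variance (statistics.variance divides by n - 1)
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

# Hypothetical mean Closed-Open scores for first- vs. second-session groups
first_session = [2.8, 3.1, 2.5, 3.0, 2.7]
second_session = [3.4, 3.6, 3.2, 3.8, 3.3]
print(round(t_independent(first_session, second_session), 2))  # -4.22
```

A large absolute t indicates a reliable session difference; in the study itself, the difference reached significance only for Form I on the two scaled dimensions.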



Concepts--Testing, Psychological tests