X. Methods and Procedures of Lexicological Analysis
It is commonly recognized that acquaintance with at least some of the currently used procedures of linguistic investigation is of considerable importance both for language learners and for prospective teachers, as it gives them the possibility to observe how linguists obtain answers to certain questions and is of help in the preparation of teaching material. It also helps language learners to become good observers of how language works, and this is the only lasting way to become better users of language.

The process of scientific investigation may be subdivided into several stages.

Observation is an early and basic phase of all modern scientific investigation, including linguistic investigation, and is the centre of what is called the inductive method of inquiry. The cardinal rule of all inductive procedures is that statements of fact must be based on observation, not on unsupported authority, logical conclusions or personal preferences. Besides, linguists as a rule largely confine themselves to making factual statements, i.e. statements capable of objective verification. In other words a linguist assumes that a question cannot be answered unless there are procedures by which reliable and verifiable answers can be obtained.

The next stage after observation is classification, or the orderly arrangement of the data obtained through observation. For example, it is observed that in English nouns the suffixal morpheme -er is added to verbal stems (speak + -er, writ(e) + -er, etc.), to noun stems (village + -er, London + -er, etc.), and that -er also occurs in non-derived words such as mother, father, etc. Accordingly all the nouns in -er may be classified into two types — derived and simple words — and the derived words may be subdivided into two groups according to their stems. It should be pointed out that at this stage the application of different methods of analysis is common practice.1

The following stage is usually that of generalization, i.e. the collection of data and their orderly arrangement must eventually lead to the formulation of a generalization, hypothesis, rule, or law. In our case we can formulate a rule that derived nouns in -er may have either verbal or noun stems. The suffix -er in combination with adjectival or adverbial stems cannot form nouns (cf. (to) dig — digger but big — bigger). Moreover, the difference in the meaning of the suffixal nouns observed by the linguist allows him to infer that if -er is added to verbal stems, the nouns thus formed denote an active doer — teacher, learner, etc., whereas when the suffix -er is combined with noun stems the words denote residents of a place or a profession (e.g. villager, Londoner).
One of the fundamental tests of the validity of a generalization is whether or not the generalization is useful in making reliable predictions. For example, proceeding from the observation and generalization discussed above we may 'predict' with a considerable degree of certainty that if a new word with the suffix -er appears in modern English and the suffix is added to a verbal stem, the word is a noun denoting an active doer (cf., e.g., the new words of the type (moon-)crawler, (moon-)walker, (lunar-)rover which appeared when the Soviet moon car was launched).1 Moreover we may predict, if we make use of statistical analysis, that such words are more likely to be coined than the other types of nouns with the -er suffix.

Any linguistic generalization is to be followed by the verifying process. Stated simply, the linguist is required, as are other scientists, to seek verification of the generalizations that are the result of his inquiries. Here too, various procedures of linguistic analysis are commonly applied.

It may be inferred from the above that acquaintance with at least some of the methods of lexicological investigation is essential for classification, generalization and above all for the verification of the hypothesis resulting from initial observation. We may also assume that the application of various methods of analysis should be an essential part of the learning process and consequently of teacher training.

The methods and procedures briefly discussed below are as follows: 1. Contrastive analysis, 2. Statistical methods of analysis, 3. Immediate Constituents analysis, 4. Distributional analysis and co-occurrence, 5. Transformational analysis, 6. Componental analysis, 7. Method of semantic differential.2

All methods of linguistic analysis are traditionally subdivided into formalized and non-formalized procedures. It is common knowledge that formalized methods of analysis proved to be in many cases inapplicable to natural languages and did not yield the desired results; nevertheless, if not the theoretical tenets then at least some procedures of these methods of analysis have been used by linguists of different schools of thought and have become part of modern linguists' equipment. Naturally, the selection of this or that particular procedure largely depends on the goal set before the investigator. If, e.g., the linguist wishes to find out the derivational structure of a lexical unit he is likely to make use of IC analysis and/or transformational analysis.3 If the semantic structure of two correlated words is compared, componental analysis will probably be applied. Some of the methods of lexicological analysis are of primary importance for teachers of English and are widely used in the preparation of
teaching material; some are of lesser importance. The comparative value of individual methods for practicing teachers and also the interconnection of some of the procedures determined the order of their presentation. The first method discussed here is that of contrastive analysis, as we consider it indispensable in teaching English as a foreign language. This is followed by a brief survey of statistical methods of analysis, as quantitative evaluation is usually an essential part of any linguistic procedure. The so-called formalized methods of analysis — the IC analysis, distributional and transformational procedures — precede the componental analysis not because of their greater value in terms of teaching English, but because componental analysis may be combined with distributional and/or transformational procedures, hence the necessity of introducing both procedures before we start the discussion of the componental analysis.

§ 1. Contrastive Analysis

Contrastive linguistics as a systematic branch of linguistic science is of fairly recent date, though it is not the idea which is new but rather the systematization and the underlying principles. It is common knowledge that comparison is the basic principle in comparative philology. However, the aims and methods of comparative philology differ considerably from those of contrastive linguistics. The comparativist compares languages in order to trace their phylogenetic relationships. The material he draws on for comparison consists mainly of individual sounds, sound combinations and words; the aim is to establish family relationship. The term used to describe this field of investigation is historical linguistics or diachronic linguistics.

Comparison is also applied in typological classification and analysis. This comparison classifies languages by types rather than by origins and relationships. One of the purposes of typological comparison is to arrive at language universals — those elements and processes which, despite their surface diversity, all languages have in common.

Contrastive linguistics attempts to find out similarities and differences in both phylogenetically related and non-related languages. It is now universally recognized that contrastive linguistics is a field of particular interest to teachers of foreign languages.1 In fact contrastive analysis grew as the result of the practical demands of language teaching methodology, where it was empirically shown that the errors which are made recurrently by foreign language students can often be traced back to the differences in structure between the target language and the language of the learner. This naturally implies the necessity of a detailed comparison of the structure of a native and a target language, which has been named contrastive analysis.
It is common knowledge that one of the major problems in the learning of a second language is the interference caused by the difference between the mother tongue of the learner and the target language. All the problems of foreign language teaching will certainly not be solved by contrastive linguistics alone. There is no doubt, however, that contrastive analysis has a part to play in the evaluation of errors and in predicting typical errors, and thus must be seen in connection with overall endeavors to rationalize and intensify foreign language teaching.

Linguistic scholars working in the field of applied linguistics assume that the most effective teaching materials are those that are based upon a scientific description of the language to be learned, carefully compared with a parallel description of the native language of the learner.1 They proceed from the assumption that the categories, elements, etc. on the semantic as well as on the syntactic and other levels are valid for both languages, i.e. are adopted from a possibly universal inventory. For example, linking verbs can be found in English, in French, in Russian, etc. Linking verbs having the meaning of 'change', 'become' are differently represented in each of the languages: in English, e.g., become, come, fall, get, grow, run, turn, wax; in German — werden; in French — devenir; in Russian — становиться. The task set before the linguist is to find out which semantic and syntactic features characterize 1. the English set of verbs (cf. grow thin, get angry, fall ill, turn traitor, run dry, wax eloquent), 2. the French (Russian, German, etc.) set of verbs, 3. how the two sets compare. Cf., e.g., the English word-groups grow thin, get angry, fall ill and the Russian verbs похудеть, рассердиться, заболеть.

Contrastive analysis can be carried out at three linguistic levels: phonology, grammar (morphology and syntax) and lexis (vocabulary). In what follows we shall try to give a brief survey of contrastive analysis mainly at the level of lexis.

Contrastive analysis is applied to reveal the features of sameness and difference in the lexical meaning and the semantic structure of correlated words in different languages. It is commonly assumed by non-linguists that all languages have vocabulary systems in which the words themselves differ in sound-form but refer to reality in the same way. From this assumption it follows that for every word in the mother tongue there is an exact equivalent in the foreign language. It is a belief which is reinforced by the small bilingual dictionaries where single-word translations are often offered. Language learning, however, cannot be just a matter of learning to substitute a new set of labels for the familiar ones of the mother tongue. Firstly, it should be borne in mind that though objective reality exists outside human beings and irrespective of the language they speak, every language classifies reality in its own way by means of vocabulary units. In English, e.g., the word foot is used to denote the extremity of the leg. In Russian there is no exact equivalent for foot: the word нога denotes the whole leg including the foot.
Classification of the real world around us provided by the vocabulary units of our mother tongue is learned and assimilated together with our first language. Because we are used to the way in which our own language structures experience, we are often inclined to think of this as the only natural way of handling things, whereas in fact it is highly arbitrary. One example is provided by the words watch and clock. It would seem natural for Russian speakers to have a single word to refer to all devices that tell us what time it is; yet in English they are divided into two semantic classes depending on whether or not they are customarily portable. We also find it natural that kinship terms should reflect the difference between male and female: brother or sister, father or mother, uncle or aunt, etc., yet in English we fail to make this distinction in the case of cousin (cf. the Russian двоюродный брат, двоюродная сестра).

Contrastive analysis also brings to light what can be labelled problem pairs, i.e. the words that denote two entities in one language and correspond to two different words in another language. Compare, for example, часы in Russian and clock, watch in English, художник in Russian and artist, painter in English.

Each language contains words which cannot be translated directly from this language into another. For example, favorite examples of untranslatable German words are gemütlich (something like 'easy-going', 'humbly pleasant', 'informal') and Schadenfreude ('pleasure over the fact that someone else has suffered a misfortune'). Traditional examples of untranslatable English words are sophisticated and efficient. This is not to say that the lack of word-for-word equivalents implies also the lack of what is denoted by these words. If this were true, we would have to conclude that speakers of English never indulge in Schadenfreude and that there are no sophisticated Germans or that there is no efficient industry in any country outside England or the USA.

If we abandon the primitive notion of word-for-word equivalence, we can safely assume, firstly, that anything which can be said in one language can be translated more or less accurately into another; secondly, that correlated polysemantic words of different languages are as a rule not co-extensive. Polysemantic words in all languages may denote very different types of objects and yet all the meanings are considered by the native speakers to be obviously logical extensions of the basic meaning. For example, to an Englishman it is self-evident that one should be able to use the word head to denote the head of a person, of a bed, of a coin, of a cane, of a match, of a table, of an organization, whereas in Russian different words have to be used: голова, изголовье, сторона, головка, etc.

The very real danger for the Russian language learner here is that having learned first that head is the English word which denotes a part
of the body, he will assume that it can be used in all the cases where the Russian word голова is used, e.g. голова сахара ('a loaf of sugar'), городской голова ('mayor of the city'), он парень с головой ('he is a bright lad'), в первую голову ('in the first place'), погрузиться во что-л. с головой ('to throw oneself into smth.'), etc., but will never think of using the word head in connection with 'a bed' or 'a coin'.

Thirdly, the meaning of any word depends to a great extent on the place it occupies in the set of semantically related words: its synonyms, the constituents of the lexical field the word belongs to, other members of the word-family which the word enters, etc. Thus, e.g., in the English synonymic set brave, courageous, bold, fearless, audacious, valiant, valorous, doughty, undaunted, intrepid each word differs in a certain component of meaning from the others: brave usually implies resolution and self-control in meeting without flinching a situation that inspires fear, courageous stresses stout-heartedness and firmness of temper, bold implies either a temperamental liking for danger or a willingness to court danger or to dare the unknown, etc. Comparing the corresponding Russian synonymic set храбрый, бесстрашный, смелый, мужественный, отважный, etc., we see that the Russian word смелый, e.g., may be considered as a correlated word to either brave, valiant or valorous, and also that no member of the Russian synonymic set can be viewed as an exact equivalent of any single member of the English synonymic set in isolation, although all of them denote 'having or showing fearlessness in meeting that which is dangerous, difficult, or unknown'. Different aspects of this quality are differently distributed among the words making up the synonymic set.

This absence of one-to-one correspondence can also be observed if we compare the constituents of the same lexico-semantic group in different languages. Thus, for example, let us assume that an Englishman has in his vocabulary the following words for evaluating mental aptitude: apt, bright, brilliant, clever, cunning, intelligent, shrewd, sly, dull, stupid, slow, foolish, silly. Each of these words has a definite meaning for him. Therefore each word actually represents a value judgement. As the Englishman sees a display of mental aptitude, he attaches one of these words to the situation and, in so doing, he attaches a value judgement. The corresponding Russian semantic field of mental aptitude is different (cf. способный, хитрый, умный, глупый, тупой, etc.), therefore the meaning of each word is slightly different too. What Russian speakers would describe as хитрый might be described by English speakers as either cunning or sly depending on how they evaluate the given situation.

The problem under discussion may also be illustrated by the analysis of the members of correlated word-families, e.g. cf. голова, головка, etc. and head, heady, etc., which are differently connected with the main word of the family in each of the two languages and have different denotational and connotational components of meaning. This can be easily observed in words containing diminutive and endearing suffixes: e.g. the English words head, grandfather, girl and others do not possess the connotative component which is part of the meaning of the Russian words головка, головушка, головенка, дедушка, дедуля, etc.
Thus on the lexical level, or to be more exact on the level of the lexical meaning, contrastive analysis reveals that correlated polysemantic words are not co-extensive, and it shows the teacher where to expect an unusual degree of learning difficulty. This analysis may also point out the effective ways of overcoming the anticipated difficulty, as it shows which of the new items will require a more extended and careful presentation and practice.

Difference in the lexical meaning (or meanings) of correlated words accounts for the difference of their collocability in different languages. This is of particular importance in developing speech habits, as the mastery of collocations is much more important than the knowledge of isolated words. Thus, e.g., the English adjective new and the Russian adjective новый when taken in isolation are felt as correlated words, as in a number of cases new stands for новый, e.g. новое платье — a new dress, Новый Год — New Year. In collocation with other nouns, however, the Russian adjective cannot be used in the same meaning in which the English word new is currently used. Compare, e.g., new potatoes — молодая картошка, new bread — свежий хлеб, etc.

The lack of co-extension may be observed in collocations made up by words belonging to different parts of speech, e.g. compare word-groups with the verb to fill:

to fill a lamp — заправлять лампу
to fill a pipe — набивать трубку
to fill a truck — загружать машину
to fill a gap — заполнять пробел

As we see, the verb to fill in different collocations corresponds to a number of different verbs in Russian. Conversely, one Russian word may correspond to a number of English words. For instance, compare:

тонкая книга — a thin book
тонкая ирония — subtle irony
тонкая талия — slim waist

Perhaps the greatest difficulty for the Russian learners of English is the fact that not only notional words but also function words in different languages are polysemantic and not co-extensive. Quite a number of mistakes made by the Russian learners can be accounted for by the divergence in the semantic structure of function words. Compare, for example, the meanings of the Russian preposition до and its equivalents in the English language:

(Он работал) до 5 часов — till 5 o'clock
(Это было) до войны — before the war
(Он дошёл) до угла — to the corner

Contrastive analysis on the level of the grammatical meaning reveals that correlated words in different languages may differ in the grammatical component of their meaning. To take a simple instance, Russians are liable to say *the news are good, *the money are on the table, *her hair are black, etc., as the words
новости, деньги, волосы have the grammatical meaning of plurality in the Russian language.

Of particular interest in contrastive analysis are the compulsory grammatical categories which foreign language learners may find in the language they are studying and which are different from or non-existent in their mother tongue. These are the meanings which the grammar of the language "forces" us to signal whether we want it or not. One of the compulsory grammatical categories in English is the category of definiteness/indefiniteness. We know that English signals this category by means of the articles. Compare the meaning of the word man in the man is honest and man is honest. As this category is non-existent in the Russian language, it is obvious that Russian learners find it hard to use the articles properly.

Contrastive analysis brings to light the essence of what is usually described as idiomatic English, idiomatic Russian, etc., i.e. the peculiar way in which every language combines and structures in lexical units various concepts to denote extra-linguistic reality. The outstanding Russian linguist Acad. L. V. Shcherba repeatedly stressed the fact that it is an error in principle if one supposes that the notional systems of any two languages are identical. Even in those areas where the two cultures overlap and where the material extra-linguistic world is identical, the lexical units of the two languages are not different labels appended to identical concepts. In the overwhelming majority of cases the concepts denoted are differently organized by verbal means in the two languages.

Different verbal organization of concepts in different languages may be observed not only in the difference of the semantic structure of correlated words but also in the structural difference of word-groups commonly used to denote identical entities. For example, a typical Russian word-group used to describe the way somebody performs an action, or the state in which a person finds himself, has the structure that may be represented by the formula adverb followed by a finite form of a verb (or a verb + an adverb), e.g. он крепко спит, он быстро (медленно) усваивает, etc. In English we can also use structurally similar word-groups and say he smokes a lot, he learns slowly (fast), etc. The structure of idiomatic English word-groups, however, is different. The formula of this word-group can be represented as an adjective + a deverbal noun, e.g. he is a heavy smoker, a poor learner; cf. "the Englishman is a slow starter but there is no stronger finisher" (Galsworthy). Another English word-group used in similar cases has the structure verb to be + adjective + the infinitive, e.g. (He) is quick to realize, (He) is slow to cool down, etc., which is practically non-existent in the Russian language. Commonly used English words of the type (he is) an early-riser, a music-lover, etc. have no counterparts in the Russian language and as a rule correspond to phrases of the type (Он) рано встаёт, (он) очень любит музыку, etc.
Last but not least, contrastive analysis deals with the meaning and use of situational verbal units, i.e. words, word-groups, sentences which are commonly used by native speakers in certain situations. For instance, when we answer a telephone call and hear somebody asking for a person whose name we have never heard, the usual answer for the Russian speaker would be Вы ошиблись (номером), Вы не туда попали. The Englishman in the identical situation is likely to say Wrong number. When somebody apologizes for inadvertently pushing you or treading on your foot and says Простите (I beg your pardon. Excuse me.), the Russian speaker in reply to the apology would probably say Ничего, пожалуйста, whereas the verbal reaction of an Englishman would be different: It's all right. It does not matter. *Nothing or *Please in this case cannot be viewed as words correlated with Ничего, Пожалуйста.

To sum up, the value of contrastive analysis can hardly be overestimated: it is an indispensable stage in the preparation of teaching material, in selecting lexical items to be extensively practiced and in predicting typical errors. It is also of great value for an efficient teacher who knows that to have a native-like command of a foreign language, to be able to speak what we call idiomatic English, words, word-groups and whole sentences must be learned within the lexical, grammatical and situational restrictions of the English language.
§ 2. Statistical Analysis

An important and promising trend in modern linguistics which has been making progress during the last few decades is the quantitative study of language phenomena and the application of statistical methods in linguistic analysis. Statistical linguistics is nowadays generally recognized as one of the major branches of linguistics. Statistical inquiries have considerable importance not only because of their precision but also because of their relevance to certain problems of communication engineering and information theory. Probably one of the most important things for modern linguistics was the realization of the fact that non-formalized statements are as a matter of fact unverifiable, whereas any scientific method of cognition presupposes verification of the data obtained. The value of statistical methods as a means of verification is beyond dispute.

Though statistical linguistics has a wide field of application, here we shall discuss mainly the statistical approach to vocabulary. The statistical approach proved essential in the selection of vocabulary items of a foreign language for teaching purposes. It is common knowledge that very few people know more than 10% of the words of their mother tongue. It follows that if we do not wish to waste time on committing to memory vocabulary items which are never likely to be useful to the learner, we have to select only lexical units that are commonly used by native speakers. Out of about 500,000 words listed in the OED the "passive" vocabulary of an educated Englishman comprises no more than 30,000 words, and of these 4,000—5,000 are presumed to be amply sufficient for the daily needs of an average member of the English speech community. Thus it is evident that the problem of selection of a teaching vocabulary is of vital importance.1 It is also evident that by far the most reliable single criterion is that of frequency, as presumably the most useful items are those that occur most frequently in our language use.

As far back as 1927, recognizing the need for information on word frequency for sound teaching materials, E. L. Thorndike brought out a list of the 10,000 words occurring most frequently in a corpus of five million running words from forty-one different sources. In 1944 the list was extended to 30,000 words.2

Statistical techniques have been successfully applied in the analysis of various linguistic phenomena: different structural types of words, affixes, the vocabularies of great writers and poets, and even in the study of some problems of historical lexicology. Statistical regularities, however, can be observed only if the phenomena under analysis are sufficiently numerous and their occurrence very frequent. Thus the first requirement of any statistical investigation is the evaluation of the size of the sample necessary for the analysis. To illustrate this statement we may consider the frequency of word occurrences. It is common knowledge that a comparatively small group of words makes up the bulk of any text.3 It was found that approximately the 1,300—1,500 most frequent words make up 85% of all words occurring in the text. If, however, we analyse a sample of 60 words, it is hard to predict the number of occurrences of the most frequent words. As the sample is so small, it may contain comparatively very few or very many of such words. The size of the sample sufficient for reliable information as to the frequency of the items under analysis is determined by mathematical statistics by means of certain formulas.
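The point about coverage is easy to check on any machine-readable text. The sketch below is only an informal illustration (Python is used here for convenience; the file name corpus.txt and the crude whitespace tokenization are assumptions, not part of any published count): it counts word forms and reports what share of all running words the 1,500 most frequent forms account for. On a very small sample, such as the 60-word case mentioned above, the figure fluctuates widely, which is exactly why the size of the sample has to be estimated first.

from collections import Counter

def coverage(tokens, top_n=1500):
    """Share of all running words accounted for by the top_n most frequent word forms."""
    counts = Counter(tokens)
    total = sum(counts.values())
    covered = sum(freq for _, freq in counts.most_common(top_n))
    return covered / total

# Hypothetical usage: corpus.txt stands for any sufficiently large text sample.
with open("corpus.txt", encoding="utf-8") as f:
    tokens = f.read().lower().split()

print(f"The 1,500 most frequent forms cover {coverage(tokens):.0%} of the running words.")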
It goes without saying that to be useful in teaching, statistics should deal with meanings as well as sound-forms, as not all word-meanings are equally frequent. Besides, the number of meanings exceeds by far the number of words. The total number of different meanings recorded and illustrated in the OED for the first 500 words of the Thorndike Word List is 14,070; for the first thousand it is nearly 25,000. Naturally not all the meanings should be included in the list of the first two thousand most commonly used words. Statistical analysis of meaning frequencies resulted in the compilation of A General Service List of English Words with Semantic Frequencies. The semantic count is a count of the frequency of the occurrence of the various senses of the 2,000 most frequent words as found in a study of five million running words. The semantic count is based on the differentiation of the meanings in the OED, and the
frequencies are expressed as percentages, so that the teacher and textbook writer may find it easier to understand and use the list. An example will make the procedure clear:

room ('space'): takes less room, not enough room to turn round (in), make room for, (figurative)
room for improvement — 12%
room ('part of a house'): come to my room, bedroom, sitting room; drawing room, bathroom — 83%
room (plural = 'suite, lodgings'): my room in college, to let rooms — 2%

It can be easily observed from the semantic count above that the meaning 'part of a house' (sitting room, drawing room, etc.) makes up 83% of all occurrences of the word room and should be included in the list of meanings to be learned by the beginners, whereas the meaning 'suite, lodgings' is not essential and makes up only 2% of all occurrences of this word.

Statistical methods have also been applied to various theoretical problems of meaning. An interesting attempt was made by G. K. Zipf to study the relation between polysemy and word frequency by statistical methods. Having discovered that there is a direct relationship between the number of different meanings of a word and its relative frequency of occurrence, Zipf proceeded to find a mathematical formula for this correlation. He came to the conclusion that the number of different meanings of a word will tend to be equal to the square root of its relative frequency (with the possible exception of the few dozen most frequent words). This was summed up in the formula m = F^(1/2), i.e. m = √F, where m stands for the number of meanings and F for the relative frequency. This formula is known as Zipf's law. Though numerous corrections to this law have been suggested, still there is no reason to doubt the principle itself, namely, that the more frequent a word is, the more meanings it is likely to have.

One of the most promising trends in statistical enquiries is the analysis of the collocability of words. It is observed that words are joined together according to certain rules. The linguistic structure of any string of words may be described as a network of grammatical and lexical restrictions.1 The set of lexical restrictions is very complex. On the standard probability scale the set of (im)possibilities of combination of lexical units ranges from zero (impossibility) to one (certainty). Of considerable significance in this respect is the fact that a high frequency value of individual lexical items does not forecast a high frequency of the word-group formed by these items. Thus, e.g., the adjective able and the noun man are both included in the list of the 2,000 most frequent words; the word-group an able man, however, is very rarely used.
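This last observation can also be checked mechanically: counting adjacent word pairs alongside single words in the same sample shows that two individually frequent items need not form a frequent word-group. The sketch below is a rough illustration only (the file corpus.txt and the whitespace tokenization are again assumptions; real collocation studies use far more careful criteria).

from collections import Counter

# Count single word forms and adjacent pairs in the same sample.
with open("corpus.txt", encoding="utf-8") as f:
    tokens = f.read().lower().split()

words = Counter(tokens)
pairs = Counter(zip(tokens, tokens[1:]))

# Two high-frequency words, but (usually) a low-frequency word-group.
print("able:", words["able"], "man:", words["man"], "able man:", pairs[("able", "man")])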
The importance of frequency analysis of word-groups is indisputable, as in speech we actually deal not with isolated words but with word-groups. Recently attempts have been made to elucidate this problem in different languages both on the level of theoretical and applied lexicology and lexicography.

It should be pointed out, however, that the statistical study of vocabulary has some inherent limitations. Firstly, the statistical approach is purely quantitative, whereas most linguistic problems are essentially qualitative. To put it in simpler terms, quantitative research implies that one knows what to count, and this knowledge is reached only through a long period of qualitative research carried on upon the basis of certain theoretical assumptions. For example, even simple numerical word counts presuppose a qualitative definition of the lexical items to be counted. In connection with this different questions may arise, e.g. is the orthographical unit work to be considered as one word or two different words: work n — (to) work v? Are all word-groups to be viewed as consisting of so many words, or are some of them to be counted as single, self-contained lexical units? We know that in some dictionaries word-groups of the type by chance, at large, in the long run, etc. are counted as one item though they consist of at least two words; in others they are not counted at all but viewed as peculiar cases of usage of the notional words chance, large, run, etc. Naturally the results of the word counts largely depend on the basic theoretical assumption, i.e. on the definition of the lexical item.1 We also need to use a qualitative description of the language in deciding whether we deal with one item or more than one, e.g. in sorting out two homonymous words and different meanings of one word.2 It follows that before counting homonyms one must have a clear idea of what difference in meaning is indicative of homonymy. From the discussion of the linguistic problems above we may conclude that an exact and exhaustive definition of the linguistic qualitative aspects of the items under consideration must precede the statistical analysis.

Secondly, we must admit that not all linguists have the mathematical equipment necessary for applying statistical methods. In fact what is often referred to as statistical analysis is purely numerical counts of this or that linguistic phenomenon not involving the use of any mathematical formula, which in some cases may be misleading.

Thus, statistical analysis is applied in different branches of linguistics including lexicology as a means of verification and as a reliable criterion for the selection of the language data, provided a qualitative description of lexical items is available.
§ 3. Immediate Constituents Analysis

The theory of Immediate Constituents (IC) was originally elaborated as an attempt to determine the ways in which lexical units are relevantly related to one another. It was discovered that combinations of such units are usually structured into
hierarchically arranged sets of binary constructions. For example, in the word-group a black dress in severe style we do not relate a to black, black to dress, dress to in, etc., but set up a structure which may be represented as a black dress / in severe style. Thus the fundamental aim of IC analysis is to segment a set of lexical units into two maximally independent sequences, or ICs, thus revealing the hierarchical structure of this set. Successive segmentation results in Ultimate Constituents (UC), i.e. two-facet units that cannot be segmented into smaller units having both sound-form and meaning. The Ultimate Constituents of the word-group analysed above are: a | black | dress | in | severe | style.

The meaning of the sentence, word-group, etc. and the IC binary segmentation are interdependent. For example, fat major's wife may mean either that 'the major is fat' or that 'his wife is fat'. The former semantic interpretation presupposes the IC analysis into fat major's | wife, whereas the latter reflects a different segmentation into ICs, namely fat | major's wife. It must be admitted that this kind of analysis is arrived at by reference to intuition, and it should be regarded as an attempt to formalize one's semantic intuition.

It is mainly to discover the derivational structure of words that IC analysis is used in lexicological investigations. For example, the verb denationalize has both a prefix de- and a suffix -ize. To decide whether this word is a prefixal or a suffixal derivative we must apply IC analysis.1 The binary segmentation of the string of morphemes making up the word shows that *denation or *denational cannot be considered independent sequences, as there is no direct link between the prefix de- and nation or national. In fact no such sound-forms function as independent units in modern English. The only possible binary segmentation is de | nationalize, therefore we may conclude that the word is a prefixal derivative. There are also numerous cases when identical morphemic structure of different words is insufficient proof of the identical pattern of their derivative structure, which can be revealed only by IC analysis. Thus, comparing, e.g., snow-covered and blue-eyed we observe that both words contain two root-morphemes and one derivational morpheme. IC analysis, however, shows that whereas snow-covered may be treated as a compound consisting of two stems snow + covered, blue-eyed is a suffixal derivative, as the underlying structure as shown by IC analysis is different, i.e. (blue + eye) + -ed. It may be inferred from the examples discussed above that the ICs represent the word-formation structure while the UCs show the morphemic structure of polymorphic words.
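Since every IC cut is binary, a complete analysis can be written down as nested pairs. The following sketch is merely an informal way of recording the segmentation discussed above (the particular cuts inside each half are our assumption, made for the sake of the example): flattening the nested pairs gives back the Ultimate Constituents.

# Each node is either a string (an Ultimate Constituent) or a pair of ICs.
ic_tree = (("a", ("black", "dress")), ("in", ("severe", "style")))

def ultimate_constituents(node):
    """Flatten an IC tree into its Ultimate Constituents."""
    if isinstance(node, str):
        return [node]
    left, right = node
    return ultimate_constituents(left) + ultimate_constituents(right)

print(ultimate_constituents(ic_tree))
# ['a', 'black', 'dress', 'in', 'severe', 'style']

# The two readings of "fat major's wife" differ only in where the first cut is made:
reading_one = (("fat", "major's"), "wife")   # 'the major is fat'
reading_two = ("fat", ("major's", "wife"))   # 'his wife is fat'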
§ 4. Distributional Analysis and Co-occurrence

Distributional analysis in its various forms is commonly used nowadays by lexicologists of different schools of thought. By the term distribution we understand the occurrence of a lexical unit relative to other lexical units of the same level (words relative to words, morphemes relative to morphemes, etc.).
In other words, by this term we understand the position which lexical units occupy or may occupy in the text or in the flow of speech. It is readily observed that a certain component of the word-meaning is described when the word is identified distributionally. For example, in the sentence The boy — home the missing word is easily identified as a verb: The boy went, came, ran, etc. home. Thus, we see that the component of meaning that is distributionally identified is actually the part-of-speech meaning, but not the individual lexical meaning of the word under analysis. It is assumed that sameness / difference in distribution is indicative of sameness / difference in part-of-speech meaning.

It is also observed that in a number of cases words have different lexical meanings in different distributional patterns. Compare, e.g., the lexical meaning of the verb to treat in the following: to treat somebody well, kindly, etc. — 'to act or behave towards', where the verb is followed by a noun + an adverb, and to treat somebody to ice-cream, champagne, etc. — 'to supply with food, drink, entertainment, etc. at one's own expense', where the verb is followed by a noun + the preposition to + another noun. Compare also the meaning of the adjective ill in different distributional structures, e.g. ill look, ill luck, ill health, etc. (ill + N — 'bad') and fall ill, be ill, etc. (V + ill — 'sick').

The interdependence of distribution and meaning can also be observed at the level of word-groups. It is only the distribution of otherwise completely identical lexical units that accounts for the difference in the meaning of water tap and tap water. Thus, as far as words are concerned, the meaning by distribution may be defined as an abstraction on the syntagmatic level.

It should also be noted that not only words in word-groups but also whole word-groups may acquire a certain denotational meaning due to a certain distributional pattern to which this particular meaning is habitually attached. For example, habitually the word preceding ago denotes a certain period of time (an hour, a month, a century, etc. ago) and the whole word-group denotes a certain temporal unit. In this particular distributional pattern any word is bound to acquire an additional lexical meaning of a certain period of time, e.g. a grief ago (E. Cummings), three cigarettes ago (A. Christie), etc. The words a grief and a cigarette are understood as indicating a certain period of time and the word-groups as denoting temporal units. This is also true of the meaning of the most unusual word-groups or sentences, e.g. griefs of joy (E. Cummings) (cf. days of joy, nights of grief, etc.), to deify one's razorblade (E. Cummings) (cf. to sharpen the knife).

The distributional pattern as such seems to possess a component of meaning not to be found in the individual words making up the word-group or the sentence. Thus, the meaning 'make somebody do smth by means of something' cannot be traced back to the lexical meanings of the individual words in to coax somebody into accepting the suggestion. The distributional pattern itself seems to impart this meaning to the whole irrespective of the meaning of the verb used in this structure, i.e. in the pattern V + N + into + V-ing verbs of widely different lexical meaning may be used. One can say, e.g., to kiss somebody into doing smth, to
flatter somebody into doing smth, to beat somebody into doing something, etc.; in all these word-groups one finds the meaning 'to make somebody do something', which is actually imparted by the distributional pattern. The same set of lexical items can mean different things in different syntactic arrangements, as illustrated by: John thought he had left Mary alone, Mary alone thought he had left John, Had he alone thought Mary left John?

As can be inferred from the above, distributional analysis is mainly applied by the linguist to find out sameness or difference of meaning. It is assumed that the meaning of any lexical unit may be viewed as made up by the lexical meaning of its components and by the meaning of the pattern of their arrangement, i.e. their distributional meaning. This may perhaps be best illustrated by the semantic analysis of polymorphic words. The word singer, e.g., has the meaning of 'one who sings or is singing' not only due to the lexical meaning of the stem sing- and the derivational morpheme -er (= active doer), but also because of the meaning of their distributional pattern. A different pattern of arrangement of the same morphemes, *ersing, changes the whole into a meaningless string of sounds.1 Distribution of stems in a compound makes part of the lexical meaning of the compound word. Compare, e.g., the different lexical meanings of the words formed by the same stems bird and cage in bird-cage and cage-bird.

It is also assumed that productivity largely depends on the distributional meaning of the lexical units. Distributional meaning of the lexical units accounts for the possibility of making up and understanding a lexical item that has never been heard or used before but whose distributional pattern is familiar to the speaker and the hearer. Thus, though such words as kissable, hypermagical, smiler (She is a charming smiler), etc. cannot be found in any dictionary, their meaning is easily understood on the analogy with other words having the same distributional pattern, e.g. (V + -able -> A, as in readable, eatable and kissable).

From the discussion of the distributional analysis above it should not be inferred that difference in distribution is always indicative of the difference in meaning and, conversely, that sameness of distribution is an absolutely reliable criterion of sameness of meaning. It was pointed out above that as a rule distribution of stems in a compound word predicts a certain component of meaning, as the stem that stands first is understood as modifying the one that follows (cf. bird-cage and cage-bird). In certain cases, however, the meaning, or to be more exact one of the word-meanings, may be structured differently. Firstly, in morphologically non-motivated words the distributional structure is not correlated with a certain meaning. For instance, in the words apple-sauce, plum-sauce, etc. we actually see that the item sauce is modified by the stems apple-, plum-, etc., hence these words may be semantically interpreted as 'kind of sauce made of apples, plums, etc.' One of the meanings of the word apple-sauce — 'nonsense',
'insincere flattery' — however, is in no way connected with the distributional structure of stems. This is observed in all non-motivated words. Secondly, it is common knowledge that words used in identical distributional patterns may have different meanings. Compare, e.g., the meaning of the verb to move in the pattern to move + N: 1. cause to change position (e.g. move the chair, the piano, etc.); 2. arouse, work on the feelings of smb. (e.g. to move smb. deeply). In the cases of this type distributional analysis traditionally understood as the analysis on the level of different parts of speech, as an abstraction on the syntagmatic level, is of little help in the analysis of sameness or difference of lexical meaning.

Distributional analysis, however, is not as a rule confined to the analysis on the part-of-speech level or in general on the grammatical level, but is extended to the lexical level. The essential difference between grammar and lexis is that grammar deals with an obligatory choice between a comparatively small and limited number of possibilities, e.g. between man and men depending on the form of the verb to be, cf. The man is walking, The men are walking, where the selection of the singular number excludes the selection of the plural number. Lexis accounts for the much wider possibilities of choice between, say, man, soldier, fireman and so on. Lexis is thus said to be a matter of choice between open sets of items while grammar is one between closed systems.1

The possibilities of choice between lexical items are not limitless, however. Lexical items containing certain semantic components are usually observed only in certain positions. In phrases such as all the sun long, a grief ago and farmyards away the deviation consists in the nouns sun, grief, farmyards appearing in a position where normally only members of a limited list of words appear (in this case nouns of linear measurement such as inches, feet, miles). The difference between the normal lexical paradigm and the ad hoc paradigm can be represented as follows:

farmyards, griefs, etc. + away (deviant)
inches, feet, yards, etc. + away (normal)

Cf. also "half an hour and ten thousand miles ago" (Arthur C. Clarke), "She is feeling miles better today" (Nancy Mitford).

Distribution defined as the occurrence of a lexical unit relative to other lexical units can be interpreted as co-occurrence of lexical items, and the two terms can be viewed as synonyms. It follows that by the term distribution we understand the aptness of a word in one of its meanings to collocate or to co-occur with a certain group, or certain groups, of words having some common semantic component. In this case distribution may be treated on the level of semantic classes or subclasses of lexical
units. Thus, e.g., it is common practice to subdivide animate nouns into nouns denoting human beings and non-humans (animals, birds, etc.). Inanimate nouns are usually subdivided into concrete and abstract (cf., e.g., table, book, flower and joy, idea, relation), which may be further classified into lexico-semantic groups, i.e. groups of words joined together by a common concept, e.g. nouns denoting pleasurable emotions (joy, delight, rapture, etc.), nouns denoting mental aptitude (cleverness, brightness, shrewdness, etc.). We observe that the verb to move followed by nouns denoting inanimate objects (move + Ninanimate) as a rule has the meaning of 'cause to change position'; when, however, this verb is followed by nouns denoting human beings (move + Nperson) it will usually have another meaning, i.e. 'arouse, work on the feelings of'.

In other cases the classification of nouns into animate / inanimate may be insufficient for the semantic analysis, and it may be necessary to single out different lexico-semantic groups as, e.g., in the case of the adjective blind. Any collocation of this adjective with a noun denoting a living being (blind + Nanimate) will bring out the meaning 'without the power to see' (blind man, cat, etc.). Blind followed by a noun denoting inanimate objects, or abstract concepts, may have different meanings depending on the lexico-semantic group the noun belongs to. Thus, blind will have the meaning 'reckless, thoughtless, etc.' when combined with nouns denoting emotions (blind passion, love, fury, etc.) and the meaning 'hard to discern, to see' in collocation with nouns denoting written or typed signs (blind handwriting, blind type, etc.).

In the analysis of word-formation patterns the investigation on the level of lexico-semantic groups is commonly used to find out the word-meaning, the part of speech, the lexical restrictions of the stems, etc. For example, the analysis of the derivational pattern n + -ish -> A shows that the suffix -ish is practically never combined with noun-stems which denote units of time, units of space, etc. (*hourish, *mileish, etc.). The overwhelming majority of adjectives in -ish are formed from noun-stems denoting living beings (wolfish, clownish, boyish, etc.). It follows that distribution may be viewed as the place of a lexical item relative to other lexical items on the level of semantic classes and sub-classes.

The analysis of lexical collocability in word-groups is widely applied for different purposes: to find out typical, most commonly used collocations in modern English, to investigate the possibility / impossibility of certain types of meaning in certain types of collocations, and so on. It stands to reason that certain lexical items rarely if ever co-occur because of extra-linguistic factors. There are no restrictions inherent in the grammar or vocabulary of the English language that would make co-occurrence of the participle flying with the noun rhinoceros impossible, yet we may be reasonably certain that the two words are unlikely to co-occur. What we describe as meaning by collocation or meaning by co-occurrence is actually a blend of extra-linguistic and intra-linguistic components of meaning.
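The way a co-occurrence class selects a meaning can be imitated with a simple lookup table. The sketch below is a toy illustration only: the class labels and the noun classification are our assumptions for the sake of the example, not an inventory given in the text.

# Sense selection by the semantic class of the co-occurring noun (toy data).
SENSES = {
    ("move", "inanimate"): "cause to change position",
    ("move", "person"): "arouse, work on the feelings of",
    ("blind", "animate"): "without the power to see",
    ("blind", "emotion"): "reckless, thoughtless",
    ("blind", "written sign"): "hard to discern",
}

NOUN_CLASS = {
    "chair": "inanimate", "audience": "person",
    "man": "animate", "fury": "emotion", "handwriting": "written sign",
}

def sense(word, noun):
    """Pick the sense of `word` suggested by the class of the following noun."""
    return SENSES.get((word, NOUN_CLASS.get(noun)), "sense not listed")

print(sense("move", "chair"))      # cause to change position
print(sense("move", "audience"))   # arouse, work on the feelings of
print(sense("blind", "fury"))      # reckless, thoughtless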
One or the other component may prevail. For instance, one may argue that the meaning of the adjective good is different in good doctor, good mother, good milkman, etc., because we know that a good doctor is 'a doctor who gives his patient adequate medical care and treatment', whereas a good mother is 'a mother who takes care of the needs of her children and cares for them adequately'. Here naturally it is the extra-linguistic factors that account for the difference in meaning. Of greatest importance for language teaching, however, is the investigation of lexical restrictions in collocability that are of purely intra-linguistic nature and cannot be accounted for by logical considerations. This can perhaps be best illustrated by comparing the collocability of correlated words in different languages. In the English language, e.g., the verb to seize may be combined with nouns denoting different kinds of emotions: I was seized with joy, grief, etc., whereas in the Russian language one can say на меня напала тоска, отчаяние, сомнение, etc., but the collocations напала радость, надежда are impossible; that is to say, the Russian verb cannot be combined with nouns denoting pleasurable emotions.

The results of the co-occurrence or distributional analysis may be of great help to teachers in the preparation of teaching material. To illustrate the point under consideration it is sufficient to discuss the experiment the goal of which was to find out the semantic peculiarities of the verb to giggle. Giggle refers to a type of laughter: to giggle is usually defined as 'to laugh in a nervous manner'. There is nothing in the dictionary definition to indicate a very important peculiarity of the word-meaning, i.e. that giggling is habitually associated with women. A completion test carried out by a group of English linguists yielded interesting results. The sentences to be completed were of the type: The man — with obvious pleasure, The woman — with obvious pleasure, etc. The informants were to fill in the blanks with either the verb to laugh or to giggle and were presented with a choice of subjects, male and female. A clear preference was shown for women giggling and men laughing with obvious pleasure. The analysis of the informants' responses also showed that a man may giggle drunkenly or nervously, but not happily or politely. In the case of women, however, of whom giggling is more characteristic, it appears that all collocations — giggle drunkenly, nervously, happily, politely — are equally acceptable. It may be inferred from the above that the meaning by co-occurrence is an inherent part and an essential component of the word-meaning.

§ 5. Transformational Analysis

Transformational analysis in lexicological investigations may be defined as repatterning of various distributional structures in order to discover difference or sameness of meaning of practically identical distributional patterns. As distributional patterns are in a number of cases polysemantic, transformational procedures are of help not only in the analysis of semantic sameness / difference of the lexical units under investigation
but also in the analysis of the factors that account for their polysemy. For example, if we compare two compound words dogfight and dogcart, we shall see that the distributional pattern of stems is identical and may be represented as n + n. The meaning of these words, broadly speaking, is also similar, as the first of the stems modifies, describes the second, and we understand these compounds as 'a kind of fight' and 'a kind of cart' respectively. The semantic relationship between the stems, however, is different and hence the lexical meaning of the words is also different. This can be shown by means of a transformational procedure which shows that a dogfight is semantically equivalent to 'a fight between dogs', whereas a dogcart is not 'a cart between dogs' but 'a cart drawn by dogs'.

Word-groups of identical distributional structure when repatterned also show that the semantic relationship between words and consequently the meaning of word-groups may be different. For example, in the word-groups consisting of a possessive pronoun followed by a noun, e.g. his car, his failure, his arrest, his goodness, etc., the relationship between his and the following nouns is in each instance different, which can be demonstrated by means of transformational procedures.

his car (pen, table, etc.) may be repatterned into he has a car (a pen, a table, etc.) or in a more generalized form may be represented as A possesses B.

his failure (mistake, attempt, etc.) may be represented as he failed (was mistaken, attempted) or A performs B, which is impossible in the case of his car (pen, table, etc.).

his arrest (imprisonment, embarrassment, etc.) may be repatterned into he was arrested (imprisoned, embarrassed, etc.) or A is the goal of the action B.

his goodness (kindness, modesty, etc.) may be represented as he is good (kind, modest, etc.) or B is the quality of A.

It can also be inferred from the above that two phrases which are transforms of each other (e.g. his car -> he has a car; his kindness -> he is kind, etc.1) are correlated in meaning as well as in form. Regular correspondence and interdependence of different patterns is viewed as a criterion of different or same meaning. When the direction of conversion was discussed it was pointed out that the transformational procedure may be used as one of the criteria enabling us to decide which of the two words in a conversion pair is the derived member.

Transformational analysis may also be described as a kind of translation. If we understand by translation transference of a message by different means, we may assume that there exist at least three types of translation: 1. interlingual translation, or translation from
one language into another, which is what we traditionally call translation; 2. intersemiotic translation, or transference of a message from one kind of semiotic system to another (for example, we know that a verbal message may be transmitted into a flag message by hoisting up the proper flags in the right sequence); and at last 3. intralingual translation, which consists essentially in rewording a message within the same language — a kind of paraphrasing. Thus, e.g., the same message may be transmitted by the following: his work is excellent -> his excellent work -> the excellence of his work.

The rules of transformational analysis, however, are rather strict and should not be identified with paraphrasing in the usual sense of the term. There are many restrictions both on the syntactic and the lexical level. An exhaustive discussion of these restrictions is unnecessary and impossible within the framework of the present textbook. We shall confine our brief survey to the transformational procedures commonly used in lexicological investigation. These are as follows:

1. permutation — the repatterning of the kernel transform on condition that the basic subordinative relationships between words (the word-stems of the lexical units) are not changed. In the example discussed above the basic relationships between lexical units and the stems of the notional words are essentially the same: cf. his work is excellent -> his excellent work -> the excellence of his work -> he works excellently.

2. replacement — the substitution of a component of the distributional structure by a member of a certain strictly defined set of lexical units, e.g. replacement of a notional verb by an auxiliary or a link verb, etc. Thus, in the two sentences having identical distributional structure He will make a bad mistake, He will make a good teacher, the verb to make can be substituted for by become or be only in the second sentence (he will become, be a good teacher) but not in the first (*he will become a bad mistake), which is a formal proof of the intuitively felt difference in the meaning of the verb to make in each of the sentences. In other words, the fact of the impossibility of identical transformations of distributionally identical structures is a formal proof of the difference in their meaning.

3. addition (or expansion) — may be illustrated by the application of the procedure of addition to the classification of adjectives into two groups: adjectives denoting inherent and non-inherent properties. For example, if to the two sentences John is happy (popular, etc.) and John is tall (clever, etc.) we add, say, in Moscow, we shall see that *John is tall (clever, etc.) in Moscow is utterly nonsensical, whereas John is happy (popular, etc.) in Moscow is a well-formed sentence. Evidently this may be accounted for by the difference in the meaning of adjectives denoting inherent (tall, clever, etc.) and non-inherent (happy, popular, etc.) properties.

4. deletion — a procedure which shows whether one of the words is semantically subordinated to the other or others, i.e. whether the semantic relations between words are identical. For example, the word-group red flowers may be deleted and transformed into flowers without
making the sentence nonsensical. Cf.: I love red flowers, I love flowers, whereas I hate red tape cannot be transformed into I hate tape or I hate red.1

Transformational procedures may be of use in practical classroom teaching as they bring to light the so-called sentence paradigm, or to be more exact the different ways in which the same message may be worded in modern English. It is argued, e.g., that certain paired sentences, one containing a verb and one containing an adjective, are understood in the same way, e.g. sentence pairs where there is form similarity between the verb and the adjective. Cf.: I desire that... — I am desirous that...; John hopes that... — John is hopeful that...; His stories amuse me... — are amusing to me; Cigarettes harm people — are harmful to people. Such sentence pairs occur regularly in modern English, are used interchangeably in many cases and should be taught as two equally possible variants. It is also argued that certain paired sentences, one containing a verb and one a deverbal noun, are also a common occurrence in Modern English. Cf., e.g., I like jazz -> my liking for jazz; John considers Mary's feelings -> John's consideration of Mary's feelings.2

When learning a foreign language one must memorize as a rule several commonly used structures with similar meaning. These structures make up what can be described as a paradigm of the sentence, just as a set of forms (e.g. go — went — gone, etc.) makes up a word paradigm. Thus, the sentence of the type John likes his wife to eat well makes up part of the sentence paradigm which may be represented as follows: John likes his wife to eat well -> John likes his wife eating well -> what John likes is his wife eating well, etc., as any sentence of this type may be repatterned in the same way.

Transformational procedures are also used, as will be shown below, in the componental analysis of lexical units.
§ 6. Componental Analysis

In recent years problems of semasiology have come to the fore in the research work of linguists of different schools of thought, and a number of attempts have been made to find efficient procedures for the analysis and interpretation of meaning.3 An important step forward was taken in the 1950s with the development of componental analysis. In this analysis linguists proceed from the assumption that the smallest units of meaning are sememes (or semes) and that sememes and lexemes (or lexical items) are usually not in one-to-one but in one-to-many correspondence. For example, in the lexical item woman several components of meaning, or sememes, may be singled out, namely 'human', 'female', 'adult'. This one-to-many correspondence may be represented as follows:
woman — 'human', 'female', 'adult'

The analysis of the word girl would also yield the sememes 'human' and 'female', but instead of the sememe 'adult' we shall find the sememe 'young' distinguishing the meaning of the word woman from that of girl. The comparison of the results of the componental analysis of the words boy and girl would also show the difference in just one component, i.e. the sememe denoting 'male' and 'female' respectively.
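Because componental analysis reduces lexical items to sets of sememes, the comparison described above can be stated very compactly: shared components account for the semantic closeness of the items, and the non-shared ones for the difference. The sketch below follows the woman / girl / boy example; the entry for man is an added assumption included only for symmetry.

# Sememe sets for a few lexical items (the entry for "man" is an assumption).
SEMEMES = {
    "woman": {"human", "female", "adult"},
    "girl":  {"human", "female", "young"},
    "boy":   {"human", "male", "young"},
    "man":   {"human", "male", "adult"},
}

def contrast(a, b):
    """Return the components shared by two items and those that distinguish each of them."""
    shared = SEMEMES[a] & SEMEMES[b]
    return shared, SEMEMES[a] - shared, SEMEMES[b] - shared

print(contrast("woman", "girl"))  # shares 'human', 'female'; differs in 'adult' vs 'young'
print(contrast("boy", "girl"))    # shares 'human', 'young'; differs in 'male' vs 'female'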