Linguistics

1. Linguistics

Linguistics is the scientific study of language, encompassing a number of sub-fields. An important topical division is between the study of language structure (grammar) and the study of meaning (semantics). Over the twentieth century, following the work of Noam Chomsky, linguistics came to be dominated by the Generativist school, which is chiefly concerned with explaining how human beings acquire language and with the biological constraints on this acquisition. Generative theory is modularist in character: the term "language module" refers to a hypothesized structure in the human brain (an anatomical module) or cognitive system (a functional module) that some psycholinguists (e.g., Steven Pinker) claim contains innate capacities for language. According to Jerry Fodor, the module is immune to information from sources not directly associated with language processing (Fodor, 2005). There is ongoing debate about this in cognitive science (psycholinguistics) and neuroscience (neurolinguistics). Cognitive linguistics, by contrast, understands language creation, learning, and usage as best explained by reference to human cognition in general, and denies that the mind has any unique and autonomous module for language acquisition; it is treated in detail in a separate section below.

The study of linguistics encompasses three main sub-fields: evolutionary linguistics, historical linguistics, and sociolinguistics.

I. Evolutionary linguistics attempts to account for the origins of language. It is the scientific study of the origins and development of language. The main challenge in this research is the lack of empirical data: spoken language leaves no traces. This led to an abandonment of the field for more than a century. Since the late 1980s, the field has been revived in the wake of progress made in the related fields of psycholinguistics, neurolinguistics, evolutionary anthropology and cognitive science.

II. Historical linguistics (also called diachronic linguistics) explores language change. It has five main concerns:
• to describe and account for observed changes in particular languages;
• to reconstruct the pre-history of languages and determine their relatedness, grouping them into language families (comparative linguistics);
• to develop general theories about how and why language changes;
• to describe the history of speech communities;
• to study the history of words, i.e. etymology.
Modern historical linguistics dates from the late 18th century and grew out of the earlier discipline of philology, the study of ancient texts and documents, which goes back to antiquity. At first, historical linguistics was comparative linguistics, mainly concerned with establishing language families and reconstructing prehistoric languages using the comparative method and internal reconstruction. The focus was on the well-known Indo-European languages, many of which had long written histories.

The Indo-European languages are a family of several hundred related languages and dialects — 449 according to the 2008 SIL estimate, about half of them (219) belonging to the Indo-Aryan sub-branch — including most of the major languages of Europe, the Iranian plateau (Southwest Asia), much of Central Asia and the Indian subcontinent (South Asia). The languages of the Indo-European group are spoken by approximately three billion native speakers, the largest number among the recognised language families. (The Sino-Tibetan family has the second-largest number of speakers.) Of the top 20 contemporary languages in terms of native speakers according to SIL Ethnologue, 12 are Indo-European: Spanish, English, Hindi, Portuguese, Bengali, Russian, German, Marathi, French, Italian, Punjabi and Urdu, accounting for over 1.6 billion native speakers. Membership of a language in the Indo-European family, and in its branches, groups and subgroups, is determined by genetic relationship, defined by shared innovations presumed to have taken place in a common ancestor. For example, what makes the Germanic languages "Germanic" is that large parts of the structures of all the languages so designated can be stated just once for all of them; in other words, they can be treated as innovations that took place in Proto-Germanic, the source of all the Germanic languages. Since then, significant comparative work has also been done on the Uralic, Austronesian and Native American languages. (The Uralic languages constitute a family of 39 languages spoken by approximately 20 million people. The healthiest Uralic languages in terms of the number of native speakers are Estonian, Finnish, and Hungarian. Countries that are home to a significant number of speakers of Uralic languages include Estonia, Finland, Hungary, Romania, Russia, Serbia and Slovakia.)

Initially, all modern linguistics was historical in orientation: even the study of modern dialects involved looking at their origins. But Saussure drew a distinction between synchronic and diachronic linguistics, which is fundamental to the present-day organization of the discipline. Primacy is accorded to synchronic linguistics, and diachronic linguistics is defined as the study of successive synchronic stages. Saussure's clear demarcation, however, is now seen as idealised. The work of sociolinguists on linguistic variation has shown that synchronic states are not uniform: the speech habits of older and younger speakers differ in ways which point to language change. Synchronic variation is linguistic change in progress. The biological origin of language is in principle a concern of historical linguistics, but most linguists regard it as too remote to be reliably established by standard techniques of historical linguistics such as the comparative method.
Less standard techniques, such as mass lexical comparison, are used by some linguists to overcome the limitations of the comparative method, but most linguists regard them as unreliable. The findings of historical linguistics are often used as a basis for hypotheses about the groupings and movements of peoples, particularly in the prehistoric period. In practice, however, it is often unclear how to integrate the linguistic evidence with the archaeological or genetic evidence. For example, there are a large number of theories concerning the homeland and early movements of the Proto-Indo-Europeans, each with its own interpretation of the archaeological record.

The branches of historical linguistics:

Comparative linguistics (originally comparative philology) is a branch of historical linguistics concerned with comparing languages in order to establish their historical relatedness. Languages may be related by convergence through borrowing or by genetic descent. Genetic relatedness implies a common origin or proto-language, and comparative linguistics aims to construct language families, to reconstruct proto-languages, and to specify the changes that have resulted in the documented languages. In order to maintain a clear distinction between attested and reconstructed forms, comparative linguists prefix an asterisk to any form that is not found in surviving texts.

Etymology is the study of the history of words: when they entered a language, from what source, and how their form and meaning have changed over time. In languages with a long detailed history, etymology makes use of philology, the study of how words change from culture to culture over time. However, etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information (such as writing) to be known. By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots have been found which can be traced all the way back to the origin of, for instance, the Indo-European language family.

Dialectology is the scientific study of linguistic dialects: the varieties of a language that are characteristic of particular groups, based primarily on geographic distribution and their associated features (as opposed to variations based on social factors, which are studied in sociolinguistics, or variations based on time, which are studied in historical linguistics). Dialectology treats such topics as the divergence of two local dialects from a common ancestor and synchronic variation. Dialectologists are ultimately concerned with grammatical features which correspond to regional areas. Thus they usually deal with populations that have lived in their areas for generations without moving, but also with immigrant groups bringing their languages to new settlements.

Phonology, as a sub-field of historical linguistics, studies how the sound system of a specific language or set of languages changes over time. Whereas phonetics is about the physical production and perception of the sounds of speech, phonology describes the way sounds function within a given language or across languages. An important part of phonology is studying which sounds are distinctive units within a language. For example, the "p" in "pin" is aspirated while the same phoneme in "spin" is not.
In some other languages, for example Thai and Quechua, this same difference of aspiration or non-aspiration does differentiate phonemes. In addition to the minimal meaningful sounds (the phonemes), phonology studies how sounds alternate, such as the /p/ in English, and topics such as syllable structure, stress, accent, and intonation. The principles of phonological theory have also been applied to the analysis of sign languages, even though their phonological units do not consist of sounds. A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns (manual communication, body language and lip patterns) to convey meaning, simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to express fluidly a speaker's thoughts. On the whole, deaf sign languages are independent of oral languages and follow their own paths of development. For example, British Sign Language and American Sign Language are quite different and mutually unintelligible, even though the hearing people of Britain and America share the same oral language.

Morphology is the study of the formal means of expression in a language; in the context of historical linguistics, it studies how those means of expression change over time. For instance, languages with complex inflectional systems tend to undergo a simplification process. Morphology studies the internal structure of words as a formal means of expression. (Words as units in the lexicon are the subject matter of lexicology.) While words are generally accepted as being the smallest units of syntax, it is clear that in most (if not all) languages, words can be related to other words by rules. The rules understood by the speaker reflect specific patterns (or regularities) in the way words are formed from smaller units and in how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word-formation within and across languages, and attempts to formulate rules that model the knowledge of the speakers of those languages. In historical linguistics, grammaticalisation is a process of linguistic change by which a content word (lexical morpheme) changes into a function word or further into a grammatical affix. Involved in the process are various semantic changes (especially bleaching) and phonological changes typical of high-frequency words. In English, for example, the word "go" became a change-of-state marker ("He went home" vs. "He went mad") and a future tense marker ("I am going to the store" vs. "I am going to eat", contracted to "I'm gonna eat").

Syntax is the study of the principles and rules for constructing sentences in natural languages. The term syntax is also used to refer directly to the rules and principles that govern the sentence structure of any individual language, as in "the syntax of Modern Irish". Modern research in syntax attempts to describe languages in terms of such rules, and many researchers in this discipline attempt to find general rules that apply to all natural languages. In the context of historical linguistics, syntax studies how characteristics of sentence structure in related languages have changed over time.

III. Sociolinguistics looks at the relation between linguistic variation and social structures.
Sociolinguistics is the study of the effect of any and all aspects of society, including cultural norms, expectations, and context, on the way language is used; it overlaps to a considerable degree with pragmatics. It also studies how lects differ between groups separated by certain social variables (e.g., ethnicity, religion, status, gender, level of education, age) and how the creation of, and adherence to, these rules is used to categorize individuals into social classes. Just as the usage of a language varies from place to place (dialect), language usage varies among social classes, and it is these sociolects that sociolinguistics studies. (The term "lect" is a back-formation from specific terms such as dialect and idiolect. In sociolinguistics, a lect is a form of a language, or a language itself, that is considered to be a variety or development of another language or form of it.)

William Labov, an American linguist (born December 4, 1927), is regarded as the founder of the study of sociolinguistics. He is especially noted for introducing the quantitative study of language variation and change, making the sociology of language into a scientific discipline. He has been described as "an enormously original and influential figure who has created much of the methodology" of sociolinguistics. He is employed as a professor in the linguistics department of the University of Pennsylvania, and pursues research in sociolinguistics, language change, and dialectology. In the late 1960s and early 1970s, his studies of the linguistic features of African American Vernacular English (AAVE) were also influential: he argued that AAVE should not be stigmatized as substandard but respected as a variety of English with its own grammatical rules. He has also pursued research in referential indeterminacy, and he is noted for his seminal studies of the way ordinary people structure narrative stories of their own lives.

Variation and universality

Much modern linguistic research, particularly within the paradigm of generative grammar, has concerned itself with trying to account for differences between the languages of the world. This work rests on the assumption that if human linguistic ability is narrowly constrained by human biology, then all languages must share certain fundamental properties. In generativist theory, the collection of fundamental properties all languages share is referred to as universal grammar (UG). Universal grammar is a theory postulating principles of grammar shared by all languages and thought to be innate to humans (linguistic nativism). It attempts to explain language acquisition in general, not to describe specific languages; it proposes a set of rules intended to explain language acquisition in child development. The specific characteristics of this universal grammar are a much debated topic. Typologists and non-generativist linguists usually refer simply to language universals.

Linguistic typology is a subfield of linguistics that studies and classifies languages according to their structural features. Its aim is to describe and explain the structural diversity of the world's languages. It includes three sub-disciplines:
1. qualitative typology, which deals with the issue of comparing languages and within-language variance;
2. quantitative typology, which deals with the distribution of structural patterns in the world's languages;
3. theoretical typology, which explains these distributions.
Similarities between languages can have a number of different origins. In the simplest case, universal properties may be due to universal aspects of human experience: for example, all humans experience water, and all human languages have a word for water. Other similarities may be due to common descent: the Latin language spoken by the Ancient Romans developed into Spanish in Spain and Italian in Italy; similarities between Spanish and Italian are thus in many cases due to both being descended from Latin. In other cases, contact between languages — particularly where many speakers are bilingual — can lead to much borrowing of structures as well as words. Similarity may also, of course, be due to coincidence: English much and Spanish mucho are not descended from the same form, nor borrowed from one language into the other; nor is the similarity due to innate linguistic knowledge. Such words are examples of false cognates.

False cognates are pairs of words in the same or different languages that are similar in form and meaning but have different roots; that is, they appear to be, or are sometimes considered, cognates when in fact they are not. As an example, the word for "dog" in the Australian Aboriginal language Mbabaram happens to be dog, although there is no common ancestor or other connection between that language and English (the Mbabaram word evolved regularly from a protolinguistic form *gudaga). Similarly, the Japanese word for 'to occur' happens to be okoru.

The basic kinship terms mama and papa comprise a special case of false cognates (cf. Chinese bàba, Persian baba, and French papa (all "dad"); or Navajo má, Chinese māma, Swahili mama, Quechua mama, and English "mama"). The striking cross-linguistic similarities between these terms are thought to result from the nature of language acquisition. According to Jakobson, these words are the first word-like sounds made by babbling babies, and parents tend to associate the first sound babies make with themselves. Thus, there is no need to ascribe the similarities to common ancestry. This hypothesis is supported by the fact that these terms are built up from the speech sounds that are easiest to produce (bilabial consonants like m and b and the basic vowel a). However, variants do occur: for example, in Fijian, the word for "mother" is nana, and in proto-Old Japanese, the word for "mother" was *papa. Furthermore, the modern Japanese word for "father", chichi, is from older titi. In fact, in Japanese the child's initial mamma is interpreted to mean "food". Similarly, in some Indian languages, such as Marathi, a child's articulation of "mum-mum" is interpreted to mean "food".

Some historical linguists presume that all languages go back to a single common ancestor. Therefore, a pair of words whose earlier forms are distinct, yet similar, as far back as they have been traced could in theory have come from a common root in an even earlier language, making them real cognates. The further back in time language reconstruction efforts go, however, the less confidence there can be in the outcome. Attempts at such reconstructions typically rely on just such pairings of superficially similar words, but the connections proposed by these theories tend to be conjectural, failing to document significant patterns of linguistic change.
Under the disputed Nostratic theory (a proposed macrofamily of languages) and similar theories, some of these examples would indeed be distantly related cognates, but the evidence for reclassifying them as such is insufficient. (The Nostratic hypothesis is, however, based on the comparative method, unlike some other superfamily hypotheses.) Examples of false cognates:
• Arabic sharif and English sheriff
• English boy and Japanese bōya (young male child)
• English bullshit and Mandarin búshì (不是; is not, not true)
• English cheek and Russian shcheka (щека; cheek)
• English chop and Uzbek chop
• English day, daily and Spanish día (day) (or Latin dies (day), or English diary)
• English delete and Russian udalit' (удалить; to delete, remove)
• English hut and Russian hata (хата)
• English much and Spanish mucho
• English river and Spanish rio
• English strange and Russian stranno (странно)
• German haben (to have) and Latin habere (to have)
• Japanese arigatō and Portuguese obrigado (thank you)
• Japanese babā (disrespectful term meaning "old hag") and Russian baba (grandmother)

Arguments in favor of language universals have also come from documented cases of sign languages developing in communities of congenitally deaf people, independently of spoken language. The properties of these sign languages conform generally to many of the properties of spoken languages.

Most contemporary linguists work under the assumption that spoken (or signed) language is more fundamental than written language. This is because:
• Speech appears to be a human "universal", whereas there have been many cultures and speech communities that lack written communication;
• Speech evolved before human beings discovered writing;
• People learn to speak and process spoken languages more easily and much earlier than writing;
• Written language tends to be based on a standard language, whereas linguistics focuses on languages as a whole, not just the accepted cultural norm.

Linguists nonetheless agree that the study of written language can be worthwhile and valuable. For research that relies on corpus linguistics and computational linguistics, written language is often much more convenient for processing large amounts of linguistic data. Large corpora of spoken language are difficult to create and hard to find, and are typically transcribed and written. Additionally, linguists have turned to text-based discourse occurring in various formats of computer-mediated communication as a viable site for linguistic inquiry.

2. Generative linguistics

Generative linguistics is a school of thought within linguistics that makes use of the concept of a generative grammar. Formally, a generative grammar is a finite set of rules that can be applied to generate only those sentences (often, but not necessarily, infinite in number) that are grammatical in a given language. This is the definition offered by Noam Chomsky, who invented the term. It is important to note that generate is being used as a technical term with a particular sense: to say that a grammar generates a sentence means that the grammar "assigns a structural description" to the sentence. The term generative grammar is also used to label the approach to linguistics taken by Chomsky and his followers.
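To make the technical sense of "generate" concrete, here is a minimal sketch of a generative grammar as a finite set of rewrite rules, written in Python. The particular rules, words, and the random expansion strategy are illustrative assumptions of this example, not a formalism taken from Chomsky's work; note that adding a recursive rule (e.g. letting an NP contain another NP) would let this finite rule set generate an infinite set of sentences.

    import random

    # A toy "generative grammar": a finite set of rewrite rules. Non-terminal
    # symbols (S, NP, VP, ...) are rewritten until only words remain, so the
    # grammar generates exactly the sentences its rules license.
    RULES = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["linguist"], ["sentence"]],
        "V":   [["studies"], ["generates"]],
    }

    def generate(symbol="S"):
        """Recursively rewrite a symbol; anything without a rule is a word."""
        if symbol not in RULES:
            return [symbol]
        words = []
        for part in random.choice(RULES[symbol]):
            words.extend(generate(part))
        return words

    print(" ".join(generate()))  # e.g. "the linguist generates a sentence"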
Chomsky's approach is characterised by the use of transformational grammar — a theory that has changed greatly since it was first promulgated by Chomsky in his 1957 book Syntactic Structures — and by the assertion of a strong linguistic nativism (and therefore an assertion that some set of fundamental characteristics of all human languages must be the same). A transformational grammar, or transformational-generative grammar (TGG), is a generative grammar, especially of a natural language. In Syntactic Structures, Chomsky developed the idea that each sentence in a language has two levels of representation: a deep structure and a surface structure. The deep structure represented the core semantic relations of a sentence and was mapped onto the surface structure (which followed the phonological form of the sentence very closely) via transformations. Chomsky believed that there would be considerable similarities between languages' deep structures, and that these structures would reveal properties, common to all languages, which were concealed by their surface structures. Though transformations continue to be important in Chomsky's current theories, he has now abandoned the original notion of Deep Structure and Surface Structure. Initially, two additional levels of representation were introduced (LF — Logical Form, and PF — Phonetic Form); then, in the 1990s, Chomsky sketched out a new program of research known as Minimalism, in which Deep Structure and Surface Structure no longer featured and LF and PF remained as the only levels of representation.

In TGG, deep structures were generated by a set of phrase structure rules. For example, a typical transformation in TG is the operation of subject-auxiliary inversion (SAI). This rule takes as its input a declarative sentence with an auxiliary, such as "John has eaten all the tomatoes", and transforms it into "Has John eaten all the tomatoes?" There was also a transformation that turned active sentences into passive ones. Another type of transformation raised embedded subjects into main clause subject position in sentences such as "John seems to have gone".

Terms such as "transformation" can give the impression that theories of transformational generative grammar are intended as a model for the processes through which the human mind constructs and understands sentences. Chomsky is clear that this is not in fact the case: a generative grammar models only the knowledge that underlies the human ability to speak and understand. One of the most important of Chomsky's ideas is that most of this knowledge is innate, with the result that a baby can have a large body of prior knowledge about the structure of language in general, and need only actually learn the idiosyncratic features of the language(s) it is exposed to. Chomsky originally theorized that children were born with a hard-wired language acquisition device (LAD) in their brains. He later expanded this idea into Universal Grammar, a set of innate principles and adjustable parameters that are common to all human languages. According to Chomsky, the presence of Universal Grammar in the brains of children allows them to deduce the structure of their native languages from "mere exposure". Much of the nativist position is based on the early age at which children show competency in their native grammars, as well as the ways in which they do (and do not) make errors. Infants are born able to distinguish between phonemes in minimal pairs, distinguishing between bah and pah, for example.
Young children (under the age of three) do not speak in fully formed sentences, instead saying things like 'want cookie' or 'my coat.' They do not, however, say things like 'want my' or 'I cookie', statements that would break the syntactic structure of the phrase, a component of universal grammar. Children also seem remarkably immune to error correction by adults, which nativists say would not be the case if children were learning from their parents.

Grammatical theories

Chomsky introduced two central ideas relevant to the construction and evaluation of grammatical theories. The first was the distinction between competence and performance. Chomsky noted the obvious fact that people, when speaking in the real world, often make linguistic errors (e.g. starting a sentence and then abandoning it midway through). He argued that these errors in linguistic performance were irrelevant to the study of linguistic competence (the knowledge that allows people to construct and understand grammatical sentences). Consequently, the linguist can study an idealised version of language, which greatly simplifies linguistic analysis.

The second idea related directly to the evaluation of theories of grammar. Chomsky distinguished between grammars which achieve descriptive adequacy and those which go further and achieve explanatory adequacy. A descriptively adequate grammar for a particular language defines the (infinite) set of grammatical sentences in that language; that is, it describes the language in its entirety. A grammar which achieves explanatory adequacy has the additional property that it gives insight into the underlying linguistic structures in the human mind; that is, it does not merely describe the grammar of a language, but makes predictions about how linguistic knowledge is mentally represented. For Chomsky, the nature of such mental representations is largely innate, so if a grammatical theory has explanatory adequacy it must be able to explain the various grammatical nuances of the languages of the world as relatively minor variations in the universal pattern of human language. Chomsky argued that, even though linguists were still a long way from constructing descriptively adequate grammars, progress in terms of descriptive adequacy would only come if linguists held explanatory adequacy as their goal. In other words, real insight into the structure of individual languages could only be gained through the comparative study of a wide range of languages, on the assumption that they are all cut from the same cloth.

According to Chomsky, the notions "grammatical" and "ungrammatical" can be defined in a meaningful and useful way. He argued that the intuition of a native speaker is enough to define the grammaticalness of a sentence; that is, if a particular string of English words elicits a double take, or a feeling of wrongness, in a native English speaker (when various extraneous factors affecting intuitions are controlled for), it can be said that the string of words is ungrammatical. This, according to Chomsky, is entirely distinct from the question of whether a sentence is meaningful or can be understood. It is possible for a sentence to be both grammatical and meaningless, as in Chomsky's famous example "colorless green ideas sleep furiously".
But such sentences manifest a linguistic problem distinct from that posed by meaningful but ungrammatical (non-)sentences such as "man the bit sandwich the", the meaning of which is fairly clear, but which no native speaker would accept as well formed. The use of such intuitive judgments permitted generative syntacticians to base their research on a methodology in which studying language through a corpus of observed speech was downplayed, since the grammatical properties of constructed sentences were considered appropriate data on which to build a grammatical model.

Cognitive linguistics

Cognitive linguistics refers to the school of linguistics that understands language creation, learning, and usage as best explained by reference to human cognition in general. It is characterized by adherence to three central positions:
1. it denies that there is an autonomous linguistic faculty in the mind;
2. it understands grammar in terms of conceptualization;
3. it claims that knowledge of language arises out of language use.

Cognitive linguists deny that the mind has any module for language acquisition that is unique and autonomous. This stands in contrast to the work done in the field of generative grammar. Although cognitive linguists do not necessarily deny that part of the human linguistic ability is innate, they deny that it is separate from the rest of cognition. Thus, they argue that knowledge of linguistic phenomena — i.e., phonemes, morphemes, and syntax — is essentially conceptual in nature. Moreover, they argue that the storage and retrieval of linguistic data is not significantly different from the storage and retrieval of other knowledge, and that the use of language in understanding employs cognitive abilities similar to those used in other, non-linguistic tasks.

(A concept is a cognitive unit of meaning: an abstract idea or a mental symbol.) Concepts are bearers of meaning, as opposed to agents of meaning. A single concept can be expressed in any number of languages: the concept of DOG can be expressed as dog in English, as Hund in German, as chien in French, and as perro in Spanish. The fact that concepts are in some sense independent of language makes translation possible: words in various languages can have identical meanings because they express one and the same concept. In cognitive linguistics, abstract concepts are transformations of concrete concepts derived from embodied experience.

Departing from the tradition of truth-conditional semantics, cognitive linguists view meaning in terms of conceptualization: instead of viewing meaning in terms of models of the world, they view it in terms of mental spaces. The main area of cognitive linguistic study is cognitive semantics, dealing mainly with lexical semantics. Cognitive semantics is a part of the cognitive linguistics movement. Its main tenets are, first, that grammar is conceptualization; second, that conceptual structure is embodied and motivated by usage; and third, that the ability to use language draws upon general cognitive resources and not a special language module. The cognitive semantics approach rejects the traditional separation of linguistics into phonology, syntax, pragmatics, etc. Instead, it divides semantics (meaning) into meaning-construction and knowledge representation. Therefore, cognitive semantics studies much of the area traditionally devoted to pragmatics.
The techniques native to cognitive semantics are typically used in lexical studies by Leonard Talmy, George Lakoff, Dirk Geeraerts and Bruce Wayne Hawkins.

Leonard Talmy is a professor of linguistics and philosophy at the University at Buffalo in New York. He is most famous for his pioneering work in cognitive linguistics, more specifically in the relationship between semantic and formal linguistic structures and the connections between semantic typologies and universals. He also specializes in the study of Yiddish and Native American linguistics.

George P. Lakoff is a professor of cognitive linguistics at the University of California, Berkeley. He is most famous for his ideas about the centrality of metaphor to human thinking, political behavior and society, and particularly for his concept of the "embodied mind", which he has written about in relation to mathematics. Lakoff's original thesis on conceptual metaphor was expressed in his 1980 book with Mark Johnson, Metaphors We Live By. Metaphor had been seen within the Western scientific tradition as a purely linguistic construction; the essential thrust of Lakoff's work has been the argument that metaphors are primarily a conceptual construction, and indeed are central to the development of thought. He says: "Our ordinary conceptual system, in terms of which we think and act, is fundamentally metaphorical in nature." Non-metaphorical thought is, for Lakoff, only possible when we talk about purely physical reality, and the greater the level of abstraction, the more layers of metaphor are required to express it. People do not notice these metaphors for various reasons. One reason is that some metaphors become 'dead' and we no longer recognize their origin. Another reason is that we just don't "see" what is "going on". For instance, in intellectual debate the underlying metaphor is usually that argument is war: He won the argument. Your claims are indefensible. He shot down all my arguments. His criticisms were right on target. If you use that strategy, he'll wipe you out. For Lakoff, the development of thought has been the process of developing better metaphors: the application of one domain of knowledge to another domain of knowledge offers new perceptions and understandings.

Points of contrast (classical semantics vs. cognitive semantics)

According to classic theories in semantics, the meaning of a particular sentence may be understood as the conditions under which the proposition conveyed by the sentence holds true. For instance, the expression "snow is white" is true if and only if snow is, in fact, white. Cognitive semantic theories, by contrast, are typically built on the argument that lexical meaning is conceptual. That is, meaning is not necessarily reference to an entity or relation in some real or possible world; instead, meaning corresponds to a concept held in the mind which is based on personal understanding. As a result, semantic facts like "all bachelors are unmarried males" are not treated as special facts about our language practices; rather, such facts are not distinct from encyclopedic knowledge.

Frame semantics

Charles J. Fillmore is an American linguist and an Emeritus Professor of Linguistics at the University of California, Berkeley. He received his Ph.D. in Linguistics from the University of Michigan in 1961, and spent ten years at The Ohio State University before joining Berkeley's Department of Linguistics in 1971.
He has been a Fellow at the Center for Advanced Study in the Behavioral Sciences. He has been extremely influential in the areas of syntax and lexical semantics; he was one of the founders of cognitive linguistics and of Frame Semantics (1976). Frame semantics, developed by Fillmore, attempts to explain the meanings of words in terms of their relation to general understanding, not just in the terms laid out by truth-conditional semantics. Fillmore explains meaning in general in terms of "frames": by a "frame" is meant any concept that can only be understood if a larger system of concepts is also understood.

Many pieces of linguistic evidence motivate the frame-semantic project. First, it has been noted that word meaning is an extension of our bodily and cultural experiences. For example, the notion of restaurant is associated with a series of concepts, like food, service, waiters, tables, and eating. These rich but contingent associations cannot be captured by an analysis in terms of necessary and sufficient conditions, yet they still seem to be intimately related to our understanding of "restaurant".

Second, and more seriously, such conditions are not enough to account for asymmetries in the ways that words are used. According to a semantic feature analysis, there is nothing more to the meanings of "boy" and "girl" than: BOY [+MALE], [+YOUNG]; GIRL [+FEMALE], [+YOUNG]. And there is surely some truth to this proposal; indeed, cognitive semanticists hold that the instances of the concept evoked by a given word may stand in a schematic relation to the concept itself. However, linguists have found that language users regularly apply the terms "boy" and "girl" in ways that go beyond mere semantic features. For instance, people tend to be more likely to consider a borderline-young female a "girl" (as opposed to a "woman") than they are to consider a borderline-young male a "boy" (as opposed to a "man"). This fact suggests that there is a latent frame, made up of cultural attitudes, expectations, and background assumptions, which is part of word meaning. These background assumptions go above and beyond the necessary and sufficient conditions that correspond to a semantic feature account. Frame semantics, then, seeks to account for these puzzling features of lexical items in a systematic way. With the frame-semantic paradigm's analytical tools, the linguist is able to explain a wider range of semantic phenomena than with necessary and sufficient conditions alone. Some words have the same definitions or intensions, and the same extensions, but subtly different domains: for example, the lexemes land and ground are synonyms, yet they naturally contrast with different things, air and sea respectively.

Fillmore's current major project is called FrameNet, a wide-ranging on-line description of the English lexicon. In this project, words are described in terms of the frames they evoke. Data is gathered from the British National Corpus, annotated for semantic and syntactic relations, and stored in a database organized by both lexical items and frames. The project is influential: Issue 16 of the International Journal of Lexicography was devoted entirely to it. It has also inspired parallel projects investigating other languages, including Spanish, German, and Japanese.

3. Semantics

Semantics is the study of meaning in communication. Communication is a process by which we assign and convey meaning in an attempt to create shared understanding.
Communication can be seen as a process of information transmission governed by three levels of semiotic rules:
1. syntactic (the formal properties of signs and symbols);
2. pragmatic (concerned with the relations between signs/expressions and their users);
3. semantic (the study of relationships between signs and symbols and what they represent).

The word semantics derives from Greek σημαντικός (semantikos), "significant", from σημαίνω (semaino), "to signify, to indicate", and that from σήμα (sema), "sign, mark, token". In linguistics, semantics is the study of the interpretation of signs as used by agents or communities within particular circumstances and contexts; it has related meanings in several other fields. In linguistics, the fields most closely associated with meaning are semantics and pragmatics. While semantics deals most directly with what words or phrases mean, pragmatics deals with how the environment changes the meanings of words. Semanticists differ on what constitutes meaning in an expression. For example, in the sentence "John loves a bagel", the word bagel may refer to the object itself, which is its literal meaning or denotation, but it may also carry many figurative associations, such as how it meets John's hunger, which may be its connotation. Traditionally, the formal semantic view restricts semantics to literal meaning and relegates all figurative associations to pragmatics, but this distinction is difficult to defend. The degree to which a theorist subscribes to the literal-figurative distinction decreases as one moves from the formal semantic, through the semiotic and pragmatic, to the cognitive semantic tradition.

The word semantic in its modern sense is considered to have first appeared in French as sémantique in Michel Bréal's 1897 book, Essai de sémantique. In International Scientific Vocabulary, semantics is also called semasiology. Semasiology (from Greek σημαίνω (sēmaino), "indicate, signify") is a discipline within linguistics concerned with the question "what does the word X mean?"; it studies the meaning of words regardless of their phonetic expression. The meaning of the term is somewhat obscure because, according to some authors, semasiology merged with semantics in modern times, while at the same time the term is still in use when defining onomasiology (the opposite approach to semasiology). While onomasiology, as a part of lexicology, departs from a concept (i.e. an idea, an object, a quality, an activity, etc.) and asks for the word that names it, semasiology departs from a word and asks what it means, or what concepts the word refers to. Thus, an onomasiological question is, e.g., "what are the names for long, narrow pieces of potato that have been deep-fried?" (answers: french fries in the US, chips in the UK), while a semasiological question is, e.g., "what is the meaning of the term chips?" (answers: 'long, narrow pieces of potato that have been deep-fried' in the UK, 'slim slices of potatoes deep fried or baked until crisp' in the US).

In linguistics, semantics is the subfield devoted to the study of meaning as inherent at the levels of words, phrases, sentences, and even larger units of discourse (referred to as texts). The basic area of semantic study is the meaning of signs and the study of relations between different linguistic units: homonymy, synonymy, antonymy, polysemy, metonymy, etc. A key concern is how meaning attaches to larger chunks of text, possibly as a result of composition from smaller units of meaning.
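The lexical relations just listed can be explored programmatically. Below is a minimal sketch using NLTK's WordNet interface; it assumes the nltk package is installed and that the WordNet corpus has been fetched once via nltk.download("wordnet"). The choice of example words is illustrative.

    from nltk.corpus import wordnet as wn

    # Polysemy: one word form, several senses (synsets).
    for synset in wn.synsets("bank")[:3]:
        print(synset.name(), "-", synset.definition())

    # Synonymy: lemmas grouped into a single synset.
    print([lemma.name() for lemma in wn.synset("car.n.01").lemmas()])

    # Antonymy: a relation that holds between lemmas, not whole synsets.
    good = wn.synset("good.a.01").lemmas()[0]
    print(good.antonyms()[0].name())  # "bad"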
Traditionally, semantics has included the study of connotative sense and denotative reference, truth conditions, argument structure, thematic roles, discourse analysis, and the linkage of all of these to syntax. Formal semanticists are concerned with modeling meaning in terms of the semantics of logic. Thus the sentence John loves a bagel above can be broken down into its constituents (signs), of which the unit loves may serve as both syntactic and semantic head.

In linguistics, the head is the word that determines the syntactic type of the phrase of which it is a member, or analogously the stem that determines the semantic category of a compound of which it is a component; the other elements modify the head. For example, in the big red dog, the word dog is the head, as it determines that the phrase is a noun phrase. The adjectives big and red modify this head noun: the phrase big red dog is a noun like dog, not an adjective like big or red. Likewise, in the compound noun birdsong, the stem song is the head, as it determines the basic meaning of the compound, while the stem bird modifies this meaning: a birdsong is a kind of song, not a kind of bird. If bird were the head, the order would be different: a songbird is a kind of bird. Thus in English, compound nouns are head-final: the head comes at the end.

Montague grammar

Richard Merritt Montague was an American mathematician and philosopher. Montague pioneered a logical approach to natural language semantics which became known as Montague grammar. This approach to language has been especially influential among certain computational linguists, more than among traditional philosophers of language. Montague proposed a system for defining semantic entries in the lexicon in terms of the lambda calculus. In mathematical logic and computer science, the lambda calculus, also written λ-calculus, is a formal system designed to investigate function definition, function application and recursion. It was introduced by Alonzo Church and Stephen Cole Kleene in the 1930s as part of an investigation into the foundations of mathematics, but has emerged as a useful tool in the investigation of problems in computability or recursion theory, and forms the basis of the paradigm of computer programming called functional programming. The lambda calculus is an idealized, minimalistic programming language capable of expressing any algorithm, and it is this fact that makes the model of functional programming an important one. Functional programs are stateless and deal exclusively with functions that accept and return data (including other functions); they produce no side effects in 'state' and thus make no alterations to incoming data. Modern functional languages, building on the lambda calculus, include Erlang, Haskell, Lisp, ML, and Scheme, as well as more recent programming languages like Nemerle and Scala.

On this approach, the syntactic parse of the sentence John loves a bagel would indicate loves as the head, and its entry in the lexicon would point to its arguments as the agent, John, and the object, bagel, with a special role for the article "a" (which Montague called a quantifier). This results in the sentence being associated with the logical predicate loves(John, bagel), thus linking semantics to categorial grammar models of syntax.
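To make this concrete, here is a minimal sketch of Montague-style composition using Python lambdas in place of the lambda calculus. The lexical entries and the string form of the resulting predicate are illustrative assumptions; in particular, the treatment of the quantifier "a" is radically simplified compared with Montague's actual analysis of quantifiers.

    # Lexical entries as functions. "loves" is a two-place relation, curried:
    # it first takes an object meaning, then a subject meaning, and yields a
    # string standing in for a logical formula.
    loves = lambda obj: lambda subj: f"loves({subj}, {obj})"

    # The quantifier "a" is drastically simplified to a term-forming function.
    a = lambda noun: f"some_{noun}"

    # "John loves a bagel": the head "loves" combines with its object first,
    # then with its subject, by ordinary function application.
    print(loves(a("bagel"))("John"))  # loves(John, some_bagel)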
The dynamic turn in semantics

In the Chomskian tradition in linguistics there was no mechanism for the learning of semantic relations, and the nativist view considered all semantic notions as inborn. This traditional view was unable to address many issues, such as metaphor or associative meanings; semantic change, where the meanings current within a linguistic community change over time; and qualia, or subjective experience. Another issue not addressed by the nativist model was how perceptual cues are combined in thought, e.g. in mental rotation. ("Qualia", from the Latin for "what sort" or "what kind", is a term used in philosophy to describe the subjective quality of conscious experience. Examples of qualia are the pain of a headache, the taste of wine, or the redness of an evening sky. Daniel Dennett, a prominent American philosopher, writes that qualia are "an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us." One of the simpler, broader definitions is "the 'what it is like' character of mental states: the way it feels to have mental states such as pain, seeing red, smelling a rose, etc.")

The traditional view of semantics, as an innate finite meaning inherent in a lexical unit that can be composed to generate meanings for larger chunks of discourse, is now being fiercely debated in the emerging domain of cognitive linguistics and also in the non-Fodorian camp in the philosophy of language. The challenge is motivated by factors external to language: language is not a set of labels stuck on things, but "a toolbox, the importance of whose elements lie in the way they function rather than their attachments to things." A concrete example of the latter phenomenon is semantic underspecification: meanings are not complete without some elements of context. To take the example of a single word, "red", its meaning in a phrase such as red book is similar to many other usages and can be viewed as compositional. However, the colours implied in phrases such as "red wine" (very dark), "red hair" (coppery), "red soil", or "red skin" are very different; indeed, these colours by themselves would not be called "red" by native speakers. These instances are contrastive: "red wine" is so called only in comparison with the other kind of wine (which is also not "white" for the same reasons). This view goes back to de Saussure: "Each of a set of synonyms like redouter ('to dread'), craindre ('to fear'), avoir peur ('to be afraid') has its particular value only because they stand in contrast with one another. No word has a value that can be identified independently of what else is in its vicinity."

The place of semantics in computer science

In computer science, semantics reflects the meaning of programs or functions. In this regard, semantics permits programs to be separated into their syntactical part (grammatical structure) and their semantic part (meaning). Semantics for computer applications falls into three categories:
• Operational semantics: the meaning of a construct is specified by the computation it induces when it is executed on a machine. In particular, it is of interest how the effect of a computation is produced.
• Denotational semantics: meanings are modelled by mathematical objects that represent the effect of executing the constructs. Thus only the effect is of interest, not how it is obtained.
• Axiomatic semantics: specific properties of the effect of executing the constructs are expressed as assertions. Thus there may be aspects of the executions that are ignored.
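The contrast between the first two categories can be shown on a toy arithmetic language. The following is a minimal sketch in Python under illustrative assumptions (the tuple-based abstract syntax and the single "add" construct are inventions of this example): the denotational reading maps each construct directly to a mathematical value, while the operational (small-step) reading rewrites the expression one step at a time, so the course of the computation itself is visible.

    # Abstract syntax for a toy language: an int, or a tuple ("add", l, r).
    expr = ("add", ("add", 1, 2), 4)

    def denote(e):
        """Denotational: map each construct to a mathematical object (an int);
        only the resulting value matters, not how it is computed."""
        if isinstance(e, int):
            return e
        _, left, right = e
        return denote(left) + denote(right)

    def step(e):
        """Operational (small-step): rewrite the expression one step at a
        time, so the course of the computation is part of the meaning."""
        _, left, right = e
        if isinstance(left, int) and isinstance(right, int):
            return left + right
        if not isinstance(left, int):
            return ("add", step(left), right)
        return ("add", left, step(right))

    print(denote(expr))           # 7
    while not isinstance(expr, int):
        expr = step(expr)         # ("add", 3, 4), then 7
        print(expr)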
The Semantic Web is an evolving extension of the World Wide Web in which the semantics of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. It derives from World Wide Web Consortium director Sir Tim Berners-Lee's vision of the Web as a universal medium for data, information, and knowledge exchange. Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms; metadata tags offer only a limited way of describing such content to machines. At its core, the semantic web comprises a set of design principles, collaborative working groups, and a variety of enabling technologies. Some elements of the semantic web are expressed as prospective future possibilities that are yet to be implemented or realized; other elements are expressed in formal specifications. Some of these include the Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain.

Semantics in psychology

In psychology, semantic memory is memory for meaning; in other words, it is the aspect of memory that preserves only the gist, the general significance, of remembered experience, while episodic memory is memory for the ephemeral details, the individual features, or the unique particulars of experience. Word meanings are measured by the company they keep, that is, by the relationships among words themselves in a semantic network. In a network created by people analyzing their understanding of words (such as WordNet), the links and decomposition structures of the network are few in number and kind, and include "part of", "kind of", and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words, among them latent semantic indexing and support vector machines.

Semantics has been reported to drive the course of psychotherapeutic interventions: language structure can determine the treatment approach taken with drug-abusing patients. While working in Europe for the US Information Agency, the American psychiatrist Dr. A. James Giannini reported semantic differences in medical approaches to addiction treatment. English-speaking countries used the term "drug dependence" to describe a rather passive pathology in their patients, and as a result the physician's role was more active. Southern European countries such as Italy and Yugoslavia used the concept of "tossicomania" (i.e. toxic mania) to describe a more active, rather than passive, role of the addict; as a result the treating physician's role shifted to that of a more passive guide rather than an active interventionist.

4. Pragmatics

Pragmatics studies the ways in which context affects meaning. The two primary forms of context important to pragmatics are linguistic context and situational context. The term "pragmatics" was introduced by the logical positivist Rudolf Carnap. This was an attempt to reduce subjective meaning to a secondary status and to treat what remained as objective.
In short, while we all use objectified meaning and consider context important, when communication breaks down the primacy of subjective meaning becomes overwhelming, especially when we finally ask: "What do YOU mean?" Nevertheless, as we attempt to understand meaning without directly considering subjective factors, the importance of linguistic context as an indirect way of doing so becomes exceptionally important, especially when looking at particular linguistic problems such as pronouns. In most situations, for example, the pronoun him in the sentence "Joe also saw him" has a radically different meaning if preceded by "Jerry said he saw a guy riding an elephant", or "Jerry saw the bank robber", or "Jerry saw your dog run that way". Indeed, studying context is about the only path, in realistic speech or writing, for understanding semantics and pragmatics without referring to meaning as intent and assumptions.

Situational context refers to every non-linguistic factor that affects the meaning of a phrase. Nearly anything can be included in the list, from the time of day to the people involved to the location of the speaker or the temperature of the room. An example of situational context at work is evident in the phrase "it's cold in here", which can be either a simple statement of fact or a request to turn up the heat or to close the window, depending on, among other things, whether or not it is believed to be in the listener's power to affect the temperature.

When we speak we perform speech acts: the locutionary act is the act of saying something, the illocutionary act is an act performed in saying something, and the perlocutionary act is an act performed by saying something. A speech act has an illocutionary point, or illocutionary force. For example, the point of an assertion is to represent the world as being a certain way; the point of a promise is to put oneself under an obligation to do something. The illocutionary point of a speech act must be distinguished from its perlocutionary effect, which is what it brings about. A request, for example, has as its illocutionary point to direct someone to do something; its perlocutionary effect may be the doing of the thing by the person directed. Sentences in different grammatical moods (the declarative, imperative, and interrogative) tend to perform speech acts of specific sorts, but in particular contexts one may use them to perform a different speech act than that for which they are typically put to use. Thus, as noted above, one may use a sentence such as "it's cold in here" not only to make an assertion but also to request that one's auditor turn up the heat or close the window. Speech acts include performative utterances, in which one performs the speech act by using a first-person present-tense sentence which says that one is performing the speech act. Examples are: 'I promise to be there', 'I warn you not to do it', 'I advise you to turn yourself in'.

Pragmatics is the study of the ability of natural language speakers to communicate more than that which is explicitly stated; the ability to understand another speaker's intended meaning is called pragmatic competence. Pragmatics deals with the ways we reach our goals in communication. Suppose a person wanted to ask someone else to stop smoking. This could be achieved by using several different utterances. The person could simply say, 'Stop smoking, please!'
Various automated technologies are being developed to compute the meaning of words, among them latent semantic indexing and support vector machines.
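Latent semantic indexing is one way such "computed vectors" arise: documents are represented as term-weight vectors and then projected into a low-dimensional space in which related words and documents fall close together. Below is a toy sketch with scikit-learn; the three-document corpus and the choice of two components are invented for illustration, and real systems use far larger corpora.

```python
# A toy latent semantic indexing (LSI) sketch: TF-IDF vectors are
# projected into a low-dimensional "semantic" space by truncated SVD.
# Requires: pip install scikit-learn.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the doctor treated the patient with a new drug",
    "the physician prescribed medicine to the sick patient",
    "the pianist performed a sonata at the concert",
]

tfidf = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# The two medical documents should come out more similar to each
# other than either is to the document about music.
print(cosine_similarity(lsi))
```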
Semantics has been reported to drive the course of psychotherapeutic interventions. Language structure can determine the treatment approach to drug-abusing patients. While working in Europe for the US Information Agency, American psychiatrist Dr. A. James Giannini reported semantic differences in medical approaches to addiction treatment. English-speaking countries used the term "drug dependence" to describe a rather passive pathology in their patients; as a result, the physician's role was more active. Southern European countries such as Italy and Yugoslavia used the concept of "tossicomania" (i.e. toxic mania) to describe a more active, rather than passive, role of the addict; as a result, the treating physician's role shifted to that of a more passive guide rather than an active interventionist.
4. Pragmatics
Pragmatics studies the ways in which context affects meaning. The two primary forms of context important to pragmatics are linguistic context and situational context. The term "pragmatics" was introduced by the logical positivist Rudolf Carnap, in an attempt to reduce subjective meaning to a secondary status and to treat what remained as objective.
In short, while we all use objectified meaning and consider context important, when communication breaks down the primacy of subjective meaning becomes overwhelming, especially when we finally ask: "What do YOU mean?" Nevertheless, as we attempt to understand meaning without directly considering subjective factors, linguistic context becomes exceptionally important as an indirect way of doing so, especially when looking at particular linguistic problems such as pronouns. In most situations, for example, the pronoun him in the sentence "Joe also saw him" has a radically different meaning depending on whether it is preceded by "Jerry said he saw a guy riding an elephant", "Jerry saw the bank robber", or "Jerry saw your dog run that way". Indeed, studying context is about the only path, in realistic speech or writing, to understanding semantics and pragmatics without referring to meaning as intent and assumptions.
Situational context refers to every non-linguistic factor that affects the meaning of a phrase. Nearly anything can be included in the list, from the time of day, to the people involved, to the location of the speaker, to the temperature of the room. An example of situational context at work is the phrase "it's cold in here", which can be either a simple statement of fact, a request to turn up the heat, or a request to close the window, depending, among other things, on whether or not it is believed to be in the listener's power to affect the temperature.
When we speak we perform speech acts. The locutionary act is the act of saying something; the illocutionary act is an act performed in saying something; and the perlocutionary act is an act performed by saying something. A speech act has an illocutionary point, or illocutionary force. For example, the point of an assertion is to represent the world as being a certain way, and the point of a promise is to put oneself under an obligation to do something. The illocutionary point of a speech act must be distinguished from its perlocutionary effect, which is what it brings about. A request, for example, has as its illocutionary point to direct someone to do something; its perlocutionary effect may be the doing of the thing by the person directed.
Sentences in different grammatical moods - the declarative, imperative, and interrogative - tend to perform speech acts of specific sorts. But in particular contexts one may use them to perform a different speech act than the one for which they are typically used. Thus, as noted above, one may use a sentence such as "it's cold in here" not only to make an assertion but also to request that one's auditor turn up the heat or close the window.
Speech acts include performative utterances, in which one performs the speech act by using a first-person present-tense sentence which says that one is performing the speech act. Examples are: 'I promise to be there', 'I warn you not to do it', 'I advise you to turn yourself in', etc.
Pragmatics is the study of the ability of natural-language speakers to communicate more than that which is explicitly stated; the ability to understand another speaker's intended meaning is called pragmatic competence. Pragmatics deals with the ways we reach our goals in communication. Suppose a person wanted to ask someone else to stop smoking. This could be achieved by using several utterances. The person could simply say, 'Stop smoking, please!', which is direct and has a clear semantic meaning; alternatively, the person could say, 'Whew, this room could use an air purifier', which implies a similar meaning but is indirect and therefore requires pragmatic inference to derive the intended meaning.
The main topics in pragmatics
• Deixis
• Implicature
• Presupposition
• Speech act
Deixis
In pragmatics and linguistics, deixis refers collectively to the orientational features of human languages that anchor reference to points in time and space and to the participants in the speech event. A word that depends on such deictic clues is called a deictic or a deictic word. Deictic words are bound to a context - either a linguistic or an extralinguistic context - for their interpretation. Some English deictic words include, for example, the following: now vs. then; here vs. there; this vs. that; me vs. you vs. him/her; go vs. come.
In pragmatics, the origo is the reference point on which deictic relationships are based. In most deictic systems, the origo identifies with the current speaker. For instance, if the speaker, John, were to say "This is now my fish", then John would be the origo, and the deictic word "my" would be dependent on that fact. Likewise, his use of the words "this" and "now" communicates his properties, namely his location and his point in time. The origo is the context from which the reference is made - in other words, the viewpoint that must be understood in order to interpret the utterance. (If Tom is speaking and says "I", he refers to himself; but if he is listening to Betty and she says "I", then the origo is with Betty and the reference is to her.) A toy illustration in code follows the list of deixis types below.
Types of deixis
• Place deixis: a spatial location relative to the spatial location of the speaker. It can be proximal or distal, or sometimes medial. It can also be either bounded (indicating a spatial region with a clearly defined boundary, e.g. in the box) or unbounded (indicating a spatial region without a clearly defined boundary, e.g. over there). It is common for languages to show at least a two-way referential distinction in their deictic system: proximal, i.e. near or closer to the speaker, and distal, i.e. far from the speaker and/or closer to the addressee. English exemplifies this with such pairs as this and that, here and there, etc.
• Time deixis: reference to particular times relative to some anchor point, most typically the time of utterance - for example, the use of the words now or soon, or the use of tenses.
• Discourse deixis: reference to the current discourse or a part of it. Example: "see section 8.4".
• Person deixis: pronouns are generally considered to be deictics, but a finer distinction is often made between personal pronouns proper, such as I, you, and it, and deictics that refer to places and times, such as now, then, here, and there.
• Social deixis: the use of different deictics to express social distinctions. An example is the difference between formal and informal pro-forms. Relational social deixis is where the form of the word used indicates the relative social status of the addressor and the addressee: for example, one pro-form might be used to address those of higher social rank, another to address those of lesser social rank, and another to address those of the same social rank. (By contrast, absolute social deixis indicates a social standing irrespective of the social standing of the speaker. Thus, village chiefs might always be addressed by a special pro-form, regardless of whether the person doing the addressing is below them, above them, or at the same level of the social hierarchy.)
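Here is that toy illustration: a purely hypothetical Origo structure and resolver, sketching how the interpretation of person, place, and time deictics flips when the origo changes. The class and the small lexicon are invented for this example.

```python
# A toy sketch of deictic resolution: the same word picks out different
# referents depending on the origo (the viewpoint of the current speaker).
# The Origo class and the lexicon are invented for illustration.
from dataclasses import dataclass

@dataclass
class Origo:
    speaker: str   # anchor for person deixis ("I", "my")
    place: str     # anchor for place deixis ("here", "this")
    time: str      # anchor for time deixis ("now")

def resolve(word: str, origo: Origo, addressee: str) -> str:
    lexicon = {
        "I": origo.speaker,
        "you": addressee,
        "here": origo.place,
        "now": origo.time,
    }
    return lexicon.get(word, word)   # non-deictic words denote themselves

# "I" refers to Tom when Tom speaks, but to Betty when Betty speaks.
print(resolve("I", Origo("Tom", "kitchen", "3 pm"), addressee="Betty"))
print(resolve("I", Origo("Betty", "garden", "4 pm"), addressee="Tom"))
```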
Implicature
Implicature is a technical term in pragmatics, coined by Paul Grice, for certain kinds of inferences that are drawn from statements without being entailed by them, in contrast to the uses of "implication" in logic and in informal language. It refers to what is suggested in an utterance, even though neither expressed nor strictly implied (that is, entailed) by the utterance. For example, the sentence "Mary had a baby and got married" strongly suggests that Mary had the baby before the wedding, but the sentence would still be strictly true if Mary had her baby after she got married. Further, if we add the qualification "- not necessarily in that order" to the original sentence, then the implicature is cancelled even though the meaning of the original sentence is not altered.
Presupposition
In the linguistic branch of pragmatics, a presupposition is an implicit assumption about the world, or a background belief, relating to an utterance, whose truth is taken for granted in discourse. Examples of presuppositions include:
Do you want to do it again?
Presupposition: you have done it already, at least once.
Jane no longer writes fiction.
Presupposition: Jane once wrote fiction.
A presupposition must be mutually known or assumed by the speaker and addressee for the utterance to be considered appropriate in context. It will generally remain a necessary assumption whether the utterance is placed in the form of an assertion, denial, or question, and it can be associated with a specific lexical item or grammatical feature in the utterance. Negation of an expression does not change its presuppositions: I want to do it again and I don't want to do it again both presuppose that the subject has done it already one or more times; My wife is pregnant and My wife is not pregnant both presuppose that the speaker has a wife. A significant amount of current work in semantics and pragmatics is devoted to a proper understanding of when and how presuppositions project.
Speech act
Following the usage of John R. Searle, "speech act" is often taken to refer to just the same thing as the term illocutionary act. The latter term was originally introduced by John L. Austin in his work "How to Do Things with Words" (published posthumously in 1962), which led philosophers to pay more attention to the non-declarative uses of language. The terminology Austin introduced, especially the notions "locutionary act", "illocutionary act", and "perlocutionary act", came to occupy an important role in what was then to become the study of speech acts. All three of these acts, but especially the illocutionary act, are nowadays commonly classified as speech acts. According to Austin, the idea of an illocutionary act can be captured by the formula "by saying something, we do something", e.g. when a minister (priest) joins two people in marriage saying, "I now pronounce you husband and wife."
Illocutionary acts
The concept of an illocutionary act is central to the concept of a speech act. Although there are numerous opinions as to what illocutionary acts actually are, there are some kinds of acts which are widely accepted as illocutionary, for example:
• Greeting (in saying "Hi John!", for instance), apologizing ("Sorry for that!"), describing something ("It is snowing"), asking a question ("Is it snowing?"), making a request and giving an order ("Could you pass the salt?" and "Drop your weapon or I'll shoot you!"), or making a promise ("I promise I'll give it back") are typical examples of speech acts or illocutionary acts.
• In saying "Watch out, the ground is slippery", Peter performs the speech act of warning Mary to be careful.
• In saying "I will try my best to be at home for dinner", Peter performs the speech act of promising to be at home in time.
• In saying "Ladies and gentlemen, may I have your attention, please?", Peter requests the audience to be quiet.
• In saying "Can you race with me to that building over there?", Peter challenges Mary.
An interesting type of illocutionary speech act, performed in the utterance itself, is what Austin calls performatives, typical instances of which are "I nominate John to be President", "I sentence you to ten years' imprisonment", or "I promise to pay you back." In these typical, rather explicit cases of performative sentences, the action that the sentence describes (nominating, sentencing, promising) is performed by the utterance of the sentence itself.
Indirect speech acts
In the course of performing speech acts we ordinarily communicate with each other. The content of communication may be identical with the content intended to be communicated, as when a speaker asks a family member to wash the dishes by asking, "Could you please do the dishes?" However, the meaning of the linguistic means used may also be different from the content intended to be communicated. I may, in appropriate circumstances, request Peter to do the dishes by just saying, "Peter ...!", or I can promise to do the dishes by saying, "Me!"
One common way of performing speech acts is to use an expression which indicates one speech act, and indeed to perform this act, but additionally to perform a further speech act which is not indicated by the expression uttered. I may, for instance, request Peter to open the window by saying, "Peter, will you be able to reach the window?", thereby asking Peter whether he will be able to reach the window, but at the same time requesting him to do so if he can. Since the request is performed indirectly, by means of (directly) performing a question, it counts as an indirect speech act.
Indirect speech acts are commonly used to reject proposals and to make requests. For example, a speaker asks, "Would you like to meet me for coffee?" and another replies, "I have class." The second speaker has used an indirect speech act to reject the proposal. This is indirect because the literal meaning of "I have class" does not entail any sort of rejection. It poses a problem for linguists, since it is puzzling how the person who made the proposal can understand that it was rejected. Following substantially an account of H. P. Grice, Searle suggests that we are able to derive meaning out of indirect speech acts by means of a cooperative process out of which we can derive multiple illocutions; however, the process he proposes does not seem to solve the problem accurately.
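Some indirect speech acts are so conventionalized that they can be caught by simple pattern matching, which makes for a convenient illustration. The sketch below is a deliberately naive toy; the patterns and labels are invented, and genuine pragmatic inference of the Grice/Searle kind goes far beyond surface matching.

```python
# A toy illustration of indirect speech acts: certain question forms
# are conventionally understood as requests. The patterns and labels
# are invented; real pragmatic inference is far richer than this.
import re

INDIRECT_REQUEST = re.compile(
    r"^(could|would|can|will) you (please )?(?P<act>.+?)\??$", re.I
)

def interpret(utterance: str) -> str:
    match = INDIRECT_REQUEST.match(utterance.strip())
    if match:
        # Literally a question, but conventionally a request.
        return f"REQUEST: {match.group('act')}"
    if utterance.rstrip().endswith("?"):
        return "QUESTION (literal)"
    return "ASSERTION (literal)"

print(interpret("Could you please pass the salt?"))  # REQUEST: pass the salt
print(interpret("Is it snowing?"))                   # QUESTION (literal)
print(interpret("It is snowing."))                   # ASSERTION (literal)
```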
Sociolinguistics and pragmatics
There is considerable overlap between pragmatics and sociolinguistics, since both share an interest in linguistic meaning as determined by usage in a speech community. However, sociolinguists tend to be more oriented towards variation within such communities. They describe gender, race, identity, and their interactions with individual speech acts. For example, the study of code-switching directly relates to pragmatics, since a switch in code effects a shift in pragmatic force.
Code-switching
Code-switching is a term in linguistics referring to the use of more than one language or variety in conversation. Bilinguals, who can speak at least two languages, have the ability to use elements of both languages when conversing with another bilingual. Code-switching is the syntactically and phonologically appropriate use of multiple varieties. It can occur between sentences (intersentential switching) or within a single sentence (intrasentential switching). Although some commentators have seen code-switching as reflecting a lack of language ability, most contemporary scholars consider it to be a normal and natural product of interaction between the bilingual (or multilingual) speaker's languages. Code-switching can be distinguished from other language-contact phenomena such as loan translation (calques), borrowing, and pidgins.
What are the reasons for people to code-switch?
• Code-switching a word or phrase from language B into language A can be more convenient than waiting for one's mind to think of an appropriate language-B word.
• Code-switching can help an ethnic-minority community retain a sense of cultural identity, in much the same way that slang is used to give a group of people a sense of identity and belonging, and to differentiate themselves from society at large.
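The intersentential/intrasentential distinction can be visualized with a trivially simple token tagger. This is a toy sketch only: the two word lists are invented for the example, and real language identification relies on statistical or neural models rather than lookup tables.

```python
# A toy illustration of intrasentential code-switching: tag each token
# by language using tiny word lists. The lists are invented for
# illustration; real language identification uses statistical models.
ENGLISH = {"i", "bought", "the", "tickets"}
SPANISH = {"porque", "mañana", "vamos", "al", "cine"}

def tag(sentence: str) -> list[tuple[str, str]]:
    tags = []
    for token in sentence.lower().split():
        if token in ENGLISH:
            tags.append((token, "EN"))
        elif token in SPANISH:
            tags.append((token, "ES"))
        else:
            tags.append((token, "??"))
    return tags

# The language switches mid-sentence (an intrasentential switch).
print(tag("I bought the tickets porque mañana vamos al cine"))
```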