Syntax: Universal Grammar

Let’s recall where our quest into linguistics began. We’re endeavoring to demystify how human language develops inside our brain. In Chomsky’s (1959, 1980) Poverty of the Stimulus argument, children rapidly form a mental grammar out of a small amount of linguistic stimuli within the first two years of their lives. Moreover, universal properties of natural language have been studied extensively and summarized in a large compendium (Dryer and Haspelmath, 2005, 2013). These discoveries bolster the notion of Universal Grammar.

The Universal Grammar Theory postulates the existence of an innate, genetic component underlying humans’ ability to learn and generate language. The term Universal Grammar is sometimes referred to as the mental grammar, and the two can be used interchangeably. The theory is based on the Principles and Parameters (P&P) Framework, whereby natural language is described by:

  • Principles (abstract rules): a certain set of general structural rules are innate to humans and independent from sensory experience. For example, basic word order (the linear arrangement of subject, verb, and object) is a principle.
  • Parameters (switches): with more linguistic stimuli received during psychological development, children adopt specific structural rules that conform to the principles. Most of these switches are binary (i.e. either on or off) and the remainder take discrete values. The values vary from language to language. For example, the basic word order of Mandarin Chinese is SVO while that of Japanese is SOV.

The framework is sometimes called Government and Binding: the Principles component is compared to a set of general rules that govern human language, and the Parameters component to the binding of those general rules to a particular language.
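
To make the switch metaphor concrete, here is a minimal Python sketch of the P&P view: every language fills in the same parameter slots, and a particular grammar is one legal setting of the switches. The slot names (`basic_word_order`, `head_initial`, `pro_drop`) are simplified stand-ins of my own choosing, not an official parameter inventory.

```python
# Toy illustration of Principles and Parameters: the *slots* are shared by
# all languages (principles); a grammar is one setting of the switches
# (parameters). The inventory below is hypothetical and much simplified.
PARAMETER_SLOTS = {
    "basic_word_order": {"SVO", "SOV", "VSO", "VOS", "OVS", "OSV"},  # discrete
    "head_initial": {True, False},  # binary: does the head precede its complements?
    "pro_drop": {True, False},      # binary: may the subject be dropped?
}

def check_grammar(settings):
    """Validate that a grammar sets every switch to a legal value."""
    for slot, allowed in PARAMETER_SLOTS.items():
        if settings.get(slot) not in allowed:
            raise ValueError(f"illegal or missing value for parameter {slot!r}")
    return settings

mandarin = check_grammar({"basic_word_order": "SVO", "head_initial": True,  "pro_drop": True})
japanese = check_grammar({"basic_word_order": "SOV", "head_initial": False, "pro_drop": True})
english  = check_grammar({"basic_word_order": "SVO", "head_initial": True,  "pro_drop": False})
```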

We can describe language development in children with the P&P Framework as depicted in the following illustration.

[Figure: Language development according to the Universal Grammar Theory]

Step 1: A child is born with the language faculty, which is an innate set of general structural rules. Let’s compare this set of rules to an uninitialized sausage machine (i.e. the language principles).

Step 2: As the child is exposed to linguistic stimuli from his parents, he learns to discern human voices from background noise and imitates the language by babbling. We consider this step unsupervised learning, because he learns this task from the stimuli without any explicitly labeled examples for human voice classification. At the end of this step, the sausage machine becomes partially initialized: some switches (i.e. language parameters) are turned on or off while the rest are left untouched. The child is ready to recognize and produce human voice, but he cannot yet produce grammatical sentences.

Step 3: His parents correct the child’s language use throughout the one-word, two-word, and telegraphic stages. He gradually acquires the language, or ‘tunes up the switches’, from positive and negative feedback from his parents. We consider this step a kind of reinforcement learning, because his objective is to learn appropriate language use from guiding feedback. At the end of this step, the sausage machine becomes fully initialized. The child has achieved an adult grammar and is ready to produce an infinite number of grammatical sentences licensed by that grammar.

Step 4: The child is exposed to the wider human community. He keeps acquiring more fine-grained grammatical rules from the community’s language culture, attaching additional switches and wires to the sausage machine accordingly. We consider this step supervised learning, because the child learns language use from explicitly labeled examples. Thanks to the now very sophisticated sausage machine, the child is capable of producing not only speech but also eloquence with his full-fledged grammar.

The Universal Grammar Theory is still under debate. The most common criticisms of the theory are threefold (Hinzen, 2012). First, despite its claim, no cross-linguistically consistent formulation of Universal Grammar has been established. The notion of grammar rules employed in linguistics consists of post-hoc observations about existing languages rather than predictions about what is possible in a language (Sampson, 2005). Second, Christiansen and Chater (2008) argue that the innateness of Universal Grammar contradicts neo-Darwinian evolutionary theory: evolution is a long process of genetic mutation, whilst human language emerged at a much faster rate. Third and last, linguistic principles that generalize across all languages have yet to be discovered. Even the notions of subject and object are not entirely universal. For example, Cebuano distinguishes the topic subject from the non-topic one with the absolutive and ergative case markers, respectively, while the object is marked by the oblique case marker (Dryer, 2013). Therefore, some connectionists believe that language learning is based on similarity, by observing probabilistic patterns of words, a.k.a. distributional semantics (McDonald, 2001).

Despite the criticisms, the P&P Framework has proven a key catalyst for grammar induction (grammar discovery from raw text). In this task, we assume that words which co-occur frequently are likely to form a constituent; a determiner and a noun, for example, co-occur very frequently and do form a valid constituent. However, unexpected co-occurrence may lead to mistaken constituency: a determiner and an adjective also co-occur very frequently in English, yet they don’t form a grammatical constituent, e.g. grammatical [a [big dog]] vs. ungrammatical * [[a big] dog] (a toy sketch of this heuristic follows the list below). Various degrees of linguistic insight have been employed to resolve this issue (Haghighi and Klein, 2006; Druck et al., 2009; Snyder et al., 2009; Naseem et al., 2010; Boonkwan and Steedman, 2011; Bisk and Hockenmaier, 2012; Boonkwan, 2014), substantially improving the accuracy of unsupervised parsing. Some of Boonkwan’s (2014) language parameters include:

  1. Basic word order
    • Subject, verb, direct object, and indirect object (or free word order)
    • Noun and its adjuncts and specifiers
    • Verb and its complements, adjuncts, and specifiers
    • Adposition (pre- and post-position) and its complements, adjuncts, and specifiers
    • Copula (verb to be)
    • Auxiliary verb and its complements, adjuncts, and specifiers
  2. Movement and extraction
  3. Dropping and ellipsis
  4. Coordinate structures
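
As promised above, here is a toy sketch of the co-occurrence heuristic, assuming a pointwise mutual information (PMI) score over adjacent word pairs; this is my own illustration of the heuristic and its failure mode, not Boonkwan’s (2014) actual method.

```python
from collections import Counter
from math import log

# Adjacent words that co-occur far more often than chance are proposed as
# constituents; PMI is one standard association score.
corpus = [
    "the dog sleeps", "the big dog barks", "a big cat sleeps",
    "a cat chased the dog", "the cat sleeps",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

total_uni, total_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(w1, w2):
    """PMI of an adjacent pair; a high score only *suggests* constituency."""
    p_pair = bigrams[(w1, w2)] / total_bi
    return log(p_pair / ((unigrams[w1] / total_uni) * (unigrams[w2] / total_uni)))

# 'the dog' is a valid constituent, but 'a big' can score high as well --
# exactly the mistaken-constituency problem discussed above.
for pair in [("the", "dog"), ("a", "big"), ("big", "dog")]:
    print(pair, round(pmi(*pair), 2))
```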

Elicitation of these language parameters can be done via a questionnaire-based interview with language informants. In the experiments, the informants are asked to translate a set of simple and complex sentences into their native languages. Language parameters are then extracted from the word alignment information and employed as the choice preferences for constituent formation.

References

  • Chomsky, Noam (1959). “A Review of Skinner’s Verbal Behavior”. Language 35 (1): 26–58. doi:10.2307/411334. Reprinted in Jakobovits, Leon A. & Miron, Murray S. (eds.), Readings in the Psychology of Language, pp. 142–143. New York: Prentice-Hall.
  • Chomsky, N. (1980). On Cognitive Structures and their Development: A Reply to Piaget. In M. Piattelli-Palmarini (ed.), Language and Learning: The Debate between Jean Piaget and Noam Chomsky. Harvard University Press.
  • Dryer, Matthew S. & Haspelmath, Martin (eds.) 2005, 2013. The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. (Available online at http://wals.info, Accessed on 2017-12-25.)
  • Aria Haghighi and Dan Klein. 2006. Prototype-driven grammar induction. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, pages 881–888.
  • Gregory Druck, Gideon Mann, and Andrew McCallum. 2009. Semi-supervised learning of dependency parsers using generalized expectation criteria. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th IJCNLP of the AFNLP, pages 360–368, Suntec, Singapore, August.
  • Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised multilingual grammar induction. In Proceedings of the Joint Conference of the 47th ACL and the 4th IJCNLP.
  • Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of EMNLP 2010.
  • Prachya Boonkwan and Mark Steedman. 2011. Grammar induction from text using small syntactic prototypes. In Proceedings of the 5th IJCNLP, pages 438–446.
  • Yonatan Bisk and Julia Hockenmaier. 2012. Induction of linguistic structure with combinatory categorial grammars. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 90–95. ACL.
  • Prachya Boonkwan. 2014. Scalable Semi-Supervised Grammar Induction using Cross-Linguistically Parameterized Syntactic Prototypes. PhD thesis. University of Edinburgh.

Syntax: Basic Word Order

In the previous episode, we talked about the famous Xʹ Theory. We learned how to analyze a given constituent into four components: head (core meaning), complement (compulsory element), adjunct (omissible element), and specifier (omissible phrase marker). One key advantage of Xʹ Theory is that it paves the way to an in-depth understanding of how constituent formation takes place and how to design a part-of-speech tagset. In this episode, we will learn another advantage of the theory: it helps us classify human languages according to constituent word order.

Word order typology is the study of basic word order in human language. There’re generally two extremes on the word-ordering spectrum: rigid (or fixed) and free. On one side, rigid-word-order languages use restrictive word order to convey grammatical information. On the other side, free-word-order languages ascribe grammatical information to the affixation process, resulting in more flexibility of constituent construction.

Wherever a language sits on the spectrum, it always has a dominant word order: a word order preferred over the others in any neutral context. We observe the order of the main verb (V) and its arguments—the subject (S) [i.e. the actor or agent] and the object (O) [i.e. the actee or patient]. In Dryer’s (2013) survey of 1,377 languages, the following six combinations of S, V, and O occur as dominant orders (the remaining languages lack a dominant order).

    1. SVO: there’re 488 known languages in this group (42%), e.g. English, Mandarin Chinese, and Thai. For example, Thai:
      sǒmsǐː kin kʰâːʋ lǽːʋ
      Somsri eat rice PERF
      S V O
      ‘Somsri has already eaten rice.’
    2. SOV: there’re 565 known languages (45%), e.g. Japanese, Farsi (Modern Persian), and Burmese. This group has the most members. For example, Japanese:
      sensei-ga gohan-o taberu
      teacher.NOM rice.ACC eat
      S O V
      ‘The teacher is eating rice.’
    3. VSO: there’re 95 known languages (9%), e.g. Irish Gaelic, Modern Standard Arabic, Hebrew, and Filipino. For example, Irish Gaelic (Dillon and Ó Cróinin, 1961):
      Léann [na sagairt] [na leabhair]
      read.PRES the.PL priest.PL the.PL book.PL
      V S O
      ‘The priests read the books.’
    4. VOS: there’re 25 known languages (3%), e.g. Malagasy and Bauré (Bolivia). For example, Nias (Brown, 2001: 538):
      i-rino vakhe ina-gu
      3SG.REALIS.cook ABS.rice mother-1SG.POSS
      V O S
      ‘My mother cooked rice.’
    5. OVS: there’re 11 known languages (1%), e.g. Hixkaryana (Amazon River, Brazil) and Apalaí (Amazon River, Brazil). For example, Hixkaryana (Derbyshire, 1979: 87):
      toto y-ahosɨ-ye kamara
      man 3:3-grab-distant.pst jaguar
      O V S
      ‘The jaguar grabbed the man.’
    6. OSV: there’re only four known languages (<1%), e.g. Warao (Venezuela) and Nadëb (Amazon River, Brazil). For example, Nadëb (Weir, 1994: 309):
      awad kalapéé hapʉ́h
      jaguar child see.IND
      O S V
      ‘The child sees the jaguar.’

Some languages may have two dominant word orders. SVO and SOV are the most common pair of dominant word orders. In English, the SOV pattern, such as ‘Mary has her hair cut’, is rarely used. However, German uses the SOV pattern almost as frequently as the SVO pattern.
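
Treating the dominant word order as a single parameter, linearizing a clause becomes a trivial lookup. The sketch below is my own illustration (plain glosses only, no morphology):

```python
# Linearize subject (S), verb (V), and object (O) according to a
# dominant-word-order parameter such as "SVO" or "SOV".
def linearize(order, subject, verb, obj):
    slots = {"S": subject, "V": verb, "O": obj}
    return " ".join(slots[role] for role in order)

print(linearize("SVO", "teacher", "eats", "rice"))  # English-style: teacher eats rice
print(linearize("SOV", "teacher", "eats", "rice"))  # Japanese-style: teacher rice eats
print(linearize("VSO", "teacher", "eats", "rice"))  # Irish-style:    eats teacher rice
```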

Differences in word order also affect the Xʹ schema across languages. For example, the Xʹ schemas of English and Japanese are illustrated below. In English, complements tend to follow the head, adjuncts may precede or follow the head, and the specifier precedes the head. In Japanese, however, complements, adjuncts, and the specifier almost always precede the head.

[Figure: The Xʹ schemas of English and Japanese, respectively]

References

  • Matthew S. Dryer. 2013. Order of Subject, Object and Verb. In: Dryer, Matthew S. & Haspelmath, Martin (eds.). The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. (Available online at http://wals.info/chapter/81, Accessed on 2017-12-24.)
  • Dillon, Myles and Ó Cróinin, Donncha. 1961. Teach Yourself Irish. London: The English Universities Press Ltd.
  • Brown, Lea. 2001. A Grammar of Nias Selatan.
  • Derbyshire, Desmond C. 1979. Hixkaryana. (Lingua Descriptive Studies, 1.) Amsterdam: North-Holland.
  • Weir, E. M. Helen. 1994. Nadëb. In Kahrel, Peter and van den Berg, René (eds.), Typological Studies in Negation, 291-323. Amsterdam: John Benjamins.

Syntax: Xʹ Theory

Last time, we learned how to generalize the patterns of constituents with Phrase Structure Grammar (PSG). Each constituent consists of two components: the head (the core meaning) and the complement (the rest of the constituent, which completes the core meaning). Constituents that function similarly are grouped into a single phrase type, and we name each phrase according to its head’s type—the preposition phrase has the preposition as its core meaning, for instance.

Although identifying the head of each phrase is straightforward, the complement part is still quite problematic. That’s because some elements of the complement are compulsory while others are omissible. For example, in the verb phrase “[ate ice cream ravenously]”, the complement ‘ice cream’ is necessary for the verb ‘ate’, while ‘ravenously’ can be omitted. We have to disentangle the compulsory from the omissible in order to examine the smallest complete meaning of a constituent. This partition is suggested in Xʹ Theory (Chomsky, 1970).

Xʹ Theory (pronounced ‘X-bar Theory’) states that each constituent consists of four basic components: the head (the core meaning), complements (compulsory elements), adjuncts (omissible elements), and a specifier (an omissible phrase marker). The name Xʹ (traditionally typeset as an X with a bar above it) comes from an intermediate phrase type that represents the smallest complete core meaning formed by a head of type X and its complements. Here, X is a variable standing for any phrase type or part of speech, such as N (noun), V (verb), and P (preposition).

In essence, the theory proposes three steps to help identify the four elements of each constituent.

Step 1. Complementation: the head of type X combines with any number (including zero) of complements to become Xʹ; formulated as:

Xʹ → X ⊕ (complement …)

The notation ⊕ denotes string concatenation in any order; whether the complements precede or follow the head depends on the language. Note that a constituent of type Xʹ conveys the smallest complete core meaning of its head of type X, and it need not be grammatical. For example, consider the constituent “gave a policeman a flower”. The ditransitive verb ‘gave’ requires two objects—a direct object and an indirect object. When this verb of type V combines with its two complements, i.e. ‘a policeman’ and ‘a flower’, the whole constituent becomes of type Vʹ. We sometimes call a constituent of type Xʹ a bare phrase.

[Figure: Xʹ-theoretic analysis of the constituent “gave a policeman a flower”]

Step 2. Attachment: a bare phrase of type Xʹ combines with any number (including zero) of adjuncts to become Xʹ; formulated as:

Xʹ → Xʹ ⊕ adjunct

We can attach any number of adjuncts to a constituent of type Xʹ without changing its phrase type. For example, consider the constituent “secretly gave a policeman a flower at the police station”. When we attach the adjunct ‘secretly’ to the constituent “gave a policeman a flower” of type Vʹ, the whole constituent is still a bare phrase of type Vʹ. The same applies when we then attach the adjunct ‘at the police station’: we again obtain a bare phrase of type Vʹ.

[Figure: Xʹ-theoretic analysis of the constituent “secretly gave a policeman a flower at the police station”]

Step 3. Specification: a constituent of type Xʹ combines with zero or one specifier to become a phrase of type XP; formulated as:

XP → Xʹ ⊕ (specifier)

The specifier is a word or derivational affix that turns a bare phrase of type Xʹ into a fully grammatical phrase of type XP; it signifies the beginning of such a phrase. In English, the determiners (i.e. a, an, and the) signify the beginning of a noun phrase. For example, consider the constituent “a big flower”. The head is the noun ‘flower’. Since the adjective ‘big’ is omissible, it’s an adjunct. The combination ‘big flower’ is of type Nʹ, but it’s still ungrammatical. To make it grammatical, we attach the specifier ‘a’, yielding a noun phrase (NP).

[Figure: Xʹ-theoretic analysis of the constituent “a big flower”]

Quantifiers (e.g. all and each) can serve as the specifier of a verb phrase. In the sentence “The girls all secretly gave a policeman a flower at the police station”, the quantifier ‘all’ becomes the specifier for the constituent “gave a policeman a flower at the police station” of type Vʹ.

[Figure: Xʹ-theoretic analysis of the constituent “all secretly gave a policeman a flower at the police station”]

Note that specifiers are omissible: a constituent of type Vʹ can become a fully grammatical verb phrase (VP) on its own if no quantifier precedes it.
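
The three steps can be read as tree-building operations. Below is a minimal Python sketch, using nested tuples of my own devising (label first, children after) rather than any standard formalism:

```python
# Trees are tuples: (label, child, child, ...). Leaves are (category, word).

def complementation(head_cat, head, *complements):
    """Step 1: a head of type X plus its complements becomes a bare phrase Xʹ."""
    return (head_cat + "'", (head_cat, head), *complements)

def attachment(bar_phrase, adjunct, left=False):
    """Step 2: Xʹ plus an adjunct is still Xʹ; adjunction may repeat."""
    label = bar_phrase[0]
    return (label, adjunct, bar_phrase) if left else (label, bar_phrase, adjunct)

def specification(bar_phrase, specifier=None):
    """Step 3: Xʹ plus an optional specifier becomes a full phrase XP."""
    label = bar_phrase[0].rstrip("'") + "P"
    return (label, specifier, bar_phrase) if specifier else (label, bar_phrase)

# "a big flower": N with no complements -> Nʹ; adjoin 'big'; specify with 'a'.
n_bar = complementation("N", "flower")
n_bar = attachment(n_bar, ("ADJ", "big"), left=True)
np = specification(n_bar, ("DET", "a"))
print(np)  # ('NP', ('DET', 'a'), ("N'", ('ADJ', 'big'), ("N'", ('N', 'flower'))))
```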

One key advantage of Xʹ Theory is that it paves the way to an in-depth understanding of how constituent formation takes place and how to design a part-of-speech tagset. First, with the head and its complements identified, we can determine the number of arguments for each part-of-speech tag. For example, we can classify English verbs into three categories: intransitive (requiring zero complements, e.g. ‘sit’), transitive (requiring one complement, e.g. ‘eat’), and ditransitive (requiring two complements, e.g. ‘give’). Second, we can treat each adjunct as the attachment of a meaning modifier rather than as meaning complementation. For example, we don’t have to divide nouns into subcategories with respect to the number of adjectives modifying them. Finally, we can distinguish the specifiers from the adjuncts and treat them separately. In English, for instance, articles differ from adjectives in that the former transform a bare phrase into a fully grammatical phrase while the latter don’t. In Mandarin Chinese and Thai, on the other hand, there’s no concept of a specifier, so any full noun phrase is identical to its bare phrase form. Without doubt, Xʹ Theory is a key foundation of modern syntactic theories, and we’ll refer to it from time to time.

References

  • Chomsky, Noam (1970). Remarks on nominalization. In R. Jacobs and P. Rosenbaum (eds.), Readings in English Transformational Grammar, 184–221. Waltham: Ginn.

Syntax: Phrase Structure Grammar

In the previous episode, we learned how to test the constituency of a text chunk. To be precise, we wanted to show whether a word or a group of words can function as a single unit. This analysis sheds light on the fact that constituents can be nested to form larger constituents; for instance:

  • [The dog] is sleeping.
  • [The dog [that chased [the cat]]] is sleeping.
  • [The dog [that chased [the cat [that killed [the rat]]]]] is sleeping.
  • [The dog [that chased [the cat [that killed [the rat [that ate [oatmeal]]]]]]] is sleeping.

In the above examples, the core meanings ‘dog’, ‘cat’, and ‘rat’ are modified by different relative clauses. A given constituent can be modified only by particular groups of constituents. Phrase Structure Grammar generalizes all these syntactic subtleties.

In Phrase Structure Grammar (PSG) (Chomsky, 1957), any constituent generally comprises two components: the head and the complement. The head is the central word that conveys the core meaning of the constituent. The complement is the part of the constituent that completes the core meaning. If we unite all constituents that function similarly under one category, or phrase, we can then define the patterns of each constituent by designating its head and complement. By convention, we name a phrase according to its head’s phrase type.

Let’s consider the constituent “fond of chocolate”, whose phrase structure is illustrated below. The entire constituent is an adjective phrase (ADJP) because its head is ‘fond’, which is an adjective (ADJ). Likewise, the constituent “of chocolate” is a preposition phrase (PP) because its head is a preposition (PREP).

[Figure: The phrase structure of “fond of chocolate”]

There’re two phrase structure rules that construct the above phrase structure.

  • ADJP → ADJ PP
  • PP → PREP N

The arrow → reads ‘generates’, ‘produces’, or ‘consists of’.

Broad phrase types are useful for observing general constituent patterns, but they lack constraints on generation. For example, if we strictly follow the phrase structure rules above, we’ll end up with ungrammatical constituents like * “fond to chocolate”. To avoid such circumstances, we can subcategorize phrase types of interest to impose constraints on the grammar. For example, if we subcategorize the preposition phrase into PP_to and PP_of, the phrase structures become as follows.

[Figure: Subcategorization of a phrase type (PP in this case)]

References

  • Chomsky, Noam 1957. Syntactic structures. The Hague/Paris: Mouton.

Syntax: Constituency

In syntax, the study of word order, we focus on the hierarchical structures of human language. This property distinguishes human language from animal communication in that there are indefinitely many possible sentences in our grammar; i.e. human language is productive. For instance, let’s observe the use of relative clauses in these examples.

  • The dog is sleeping.
  • The dog that chased the cat is sleeping.
  • The dog that chased the cat that killed the rat is sleeping.
  • The dog that chased the cat that killed the rat that ate oatmeal is sleeping.
  • ………

The relative clauses in the above examples allow us to produce indefinitely many sentences from simple structures. And if we take a closer look, we’ll see that the chunks ‘the dog’, ‘the dog that chased the cat’, etc. function in these sentences in almost exactly the same way. This phenomenon is called constituency.

A constituent is a word or a group of words that functions as a single unit within a hierarchical structure. For example, ‘the dog’, ‘the dog that chased the cat’, etc. are constituents of the same type, because they behave as single chunks. There are five popular ways to check whether or not a chunk of text is a constituent.

  • Substitution: replace it with a single word.
  • Movement: move it around.
  • Clefting: “It is X that ________”.
  • Stand-alone: ask a question and reply to it with that text chunk.
  • Coordination: “X and/but Y”.

If the text chunk passes any of these tests, it’s likely to be a constituent.

Substitution Test: Replace a text chunk with a single word. If the resultant sentence still preserves the original meaning, it is a constituent. For example, let’s consider the sentence “A computer scientist is slacking off.” We’ll now test each part of the sentence for its constituency.

  • [A computer scientist] is slacking off.
    He/she is slacking off.
  • A computer [scientist is] slacking off.
    ⇒ * A computer ??? slacking off.
  • A computer scientist [is slacking off].
    ⇒ A computer scientist is. (Question: Who’s slacking off?)

The result of the first example preserves the original meaning of the sentence, so ‘a computer scientist’ is a constituent. In the second example, the result doesn’t preserve the original meaning, because we cannot replace the bracketed chunk with any single word (hence the asterisk); so ‘scientist is’ is not a constituent. Finally, ‘is slacking off’ is a constituent because we can replace it with the word ‘is’, making the entire sentence an answer to the question “Who’s slacking off?”

Movement Test: If a chunk of text can be moved together in a sentence while retaining the original meaning, it is a constituent. For example, consider the sentence “The students are called upon to the assembly hall.” We’ll test the constituency of some parts of the sentence with movement.

  • The students are called upon [to the assembly hall].
    To the assembly hall the students are called upon.
  • The students are called [upon to the assembly hall].
    ⇒ * Upon to the assembly hall the students are called.
  • The students are [called upon to the assembly hall].
    Called upon to the assembly hall the students are.

The first and third examples are allowed in English, making ‘to the assembly hall’ and ‘called upon to the assembly hall’ constituents. However, the second example is not allowed because the two-word verb ‘call upon’ is inseparable in English.

Clefting Test: If a text chunk can be placed in the pattern “It is X that _____” while retaining the original meaning, it is a constituent. Clefting means splitting something into two halves; i.e. the pattern splits the sentence into two halves. Let’s consider the following examples.

  • [The students] are called upon to the assembly hall.
    ⇒ It is the students that are called upon to the assembly hall.
  • The students are called [upon to the assembly hall].
    ⇒ * It is upon to the assembly hall that the students are called.
  • The students are called upon [to the assembly hall].
    ⇒ It is to the assembly hall that the students are called upon.

Likewise, clefting in the first and third examples is allowed in English. However, clefting in the second example is not allowed because the two-word verb ‘call upon’ is inseparable.

Stand-Alone Test: If a chunk of text can stand alone as an answer to a question while retaining the original meaning, it is a constituent. Consider the following examples.

  • [The students] are called upon to the assembly hall.
    ⇒ Question: Who are called upon to the assembly hall?
    ⇒ Answer: The students.
  • The students are called [upon to the assembly hall].
    ⇒ Question: Where are the students called?
    ⇒ Answer: * Upon to the assembly hall.

In the first example, ‘the students’ is a constituent because it is a legitimate answer to the corresponding question. On the other hand, ‘upon to the assembly hall’ is not, because it would split the two-word verb ‘call upon’.

Coordinate Test: If two text chunks can be coordinated, both of them are constituents of the same type. For example, all of the following pairs of text chunks are constituents of the same type.

  • Mary likes [red apples] and [brown potatoes].
  • Mary enjoys [watching TV] and [reading novels].
  • [Mary likes talking to] and [John likes working with] their bosses.
  • [Mary likes] but [John dislikes] potatoes.

However, this pair of text chunks is not of the same type.

  • * Mary likes [books] and [big].
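
Three of these tests are template-based, so the probe sentences can be generated mechanically; the grammaticality judgment itself must still come from a native speaker. A rough sketch of my own (naive string surgery, no real syntax):

```python
def constituency_probes(sentence, chunk, question_word="What"):
    """Build cleft, movement, and stand-alone probes for a candidate chunk."""
    if chunk not in sentence:
        raise ValueError("chunk must appear verbatim in the sentence")
    remainder = sentence.replace(chunk, "").strip(" .")
    fronted = chunk[0].upper() + chunk[1:]
    return {
        "clefting":    f"It is {chunk} that {remainder}.",
        "movement":    f"{fronted} {remainder}.",
        "stand-alone": f"{question_word} ...? -- {fronted}.",
    }

probes = constituency_probes(
    "The students are called upon to the assembly hall.",
    "to the assembly hall", question_word="Where")
for test, probe in probes.items():
    print(f"{test}: {probe}")
```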

The notion of constituency has long been studied, and linguists have observed the nestedness and recursivity of hierarchical structures. They found that constituents can be nested to form larger constituents; for instance:

  • [The dog] is sleeping.
  • [The dog [that chased [the cat]]] is sleeping.
  • [The dog [that chased [the cat [that killed [the rat]]]]] is sleeping.
  • [The dog [that chased [the cat [that killed [the rat [that ate [oatmeal]]]]]]] is sleeping.

They generalized this phenomenon in the form of Phrase Structure Grammar, which will be elaborated in the next episode.

Syntax

Language is a quintessential faculty of humans. In a remarkably short time, children effortlessly acquire language from their parents in four components: phonetics, phonology, morphology, and syntax. Among these, syntax, the system of word order, is the last component to form, arising in the telegraphic stage and consolidating into the mental grammar in the adult-grammar stage. In this episode, we’ll learn how this system governs humans’ language understanding and generation.

Syntax is the study of word order and sentence structure in human language. Native speakers of a language can intuitively tell whether or not any given sentence belongs to the language; this judgment is called the grammaticality of the sentence. Let’s take a look at the following sentences.

  1. Give me some orange, will you?
  2. * Orange me give will some you?

Any English speaker can immediately tell that the first sentence makes perfect sense while the second one doesn’t make sense at all. We call the first one grammatical, and the second one ungrammatical. By convention, we mark any sentence that sounds strange to the language’s native speakers with an asterisk ‘*’ in front of it, as you have seen in the second sentence. The stranger it sounds, the more asterisks it gets.

Contrary to popular belief, syntax is independent of meaning (Chomsky, 1957). We can tell right away that the following sentence is grammatical, because it follows the rules of English grammar, even though it is nonsensical.

  • Colorless green ideas sleep furiously.

Conversely, we can still guess the meaning of the following scrambled sentence even though it is completely ungrammatical.

  • * today office at me for wait please

Can you unscramble the sentence? ….. It means “Please wait for me at the office today.” Some people may find it very easy to unscramble, while others find it rather hard. That’s because I intentionally put these words in subject-object-verb order, with the adverbs preceding the verbs and the prepositions following the nouns. We’ll revisit the topic of word order soon.

With our inherent syntax, we can infer the following aspects from any given sentence.

  • Ambiguity: there’re multiple interpretations of a sentence. For example, the sentence “I saw a man in the park with a telescope” has several interpretations, depending on how we attach the prepositions ‘in’ and ‘with’ to the nouns and verbs. This kind of problem is called prepositional-phrase attachment, or PP-attachment (a sketch of the two readings follows this list).
  • Sentence relatedness: we can tell whether two sentences are similar. For example, the sentences “John gives Mary a flower.” and “John gives a flower to Mary.” are quite similar because they denote the same meaning. Relatedness is not limited to sentences of the same type: the declarative sentence “John ate the candy.” and the interrogative sentence “Did John eat the candy?” are related because they correspond to each other.
  • Role interpretation: we can interpret the role of each grammatical unit. For example, the roles of ‘Mary’ in “Mary is eager to please.” and “Mary is easy to please.” are completely different. In the first example, Mary is the actor of the action ‘please’, while in the second she is the recipient of the action.
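
The PP-attachment ambiguity mentioned in the first bullet can be made explicit by writing the competing parses out as trees. A sketch of my own, using a shortened version of the sentence and nested tuples in place of real parse trees:

```python
# Two bracketings of 'I saw a man with a telescope'.

# Reading 1: the PP modifies the verb -- the seeing was done with a telescope.
verb_attachment = (
    "S", ("NP", "I"),
    ("VP", ("V", "saw"), ("NP", "a man"), ("PP", "with a telescope")),
)

# Reading 2: the PP modifies the noun -- the man has the telescope.
noun_attachment = (
    "S", ("NP", "I"),
    ("VP", ("V", "saw"), ("NP", ("NP", "a man"), ("PP", "with a telescope"))),
)

def leaves(tree):
    """Flatten a tuple tree back into its word string."""
    if isinstance(tree, str):
        return tree
    return " ".join(leaves(child) for child in tree[1:])

# One surface string, two structures -- that is the ambiguity.
assert leaves(verb_attachment) == leaves(noun_attachment)
```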

References

  • Chomsky, Noam (1957). Syntactic Structures. The Hague/Paris: Mouton. p. 15. ISBN 3-11-017279-8.

Morphology: Some Nominal Categories and Verbal Categories

In the previous episode, we learned how to classify languages into families and subfamilies using their morphological characteristics. By the ratio of morphemes per word, there are the isolating/analytic and the synthetic families. In the synthetic family, there are two subfamilies, fusional and agglutinative, distinguished by the separability of the inflectional affixes. The synthetic family also has two marking systems: case marking (on the dependents) and head marking (on the heads). We’ve learned a lot about cases (grammatical functions of nouns) and how they’re used. In this episode, it’s time to learn some more about the grammatical information, a.k.a. categories, of nouns and verbs.

Nominal categories. Nouns and adjectives are grouped together as nominals because they share common linguistic properties. Some of these properties are worth observing.

  • Number: count distinction of nouns. Most languages have the number system of singular (one) vs. plural (more than one). But there’re also other systems such as
    • Singular, dual (two), and plural: as in Sanskrit, Ancient Greek, Old English, and other ancient Indo-European languages. For example, Sanskrit puras ‘person’ > purau ‘two persons’ > purās ‘many persons’.
    • Singulative (individual) vs. collective (group): as in Welsh, Standard Arabic, etc. For example, Standard Arabic hajar ‘stone’ (collective) > hajara ‘a stone’ (singulative).
    • Singular, dual, trial (three), and plural: as in the pronoun systems of Tok Pisin and several Austronesian languages. No known languages have the trial number in nouns.
    • Singular, dual, trial, quadral (four), and plural: as in the pronoun systems of Marshallese, Sursurunga, and several Austronesian languages
    • Singular, paucal (small amount), and plural: as in Hopi, Warlpiri, Arabic (for some nouns), Polish (for some nouns), Motuna, and Serbo-Croatian. For example, Polish pies ‘a dog’ > psy ‘2/3/4 dogs’ > psów ‘5 or more dogs’.
  • Noun class: a division of nouns into classes that form agreement with other aspects of the language, such as articles, adjectives, pronouns, or verbs. Common systems of noun classes are masculine-feminine, masculine-feminine-neuter, animate-inanimate, and common-neuter. For example, adjectives decline according to nouns in Latin, e.g. bonus dominus ‘a good master’ (masculine) > bona puella ‘a good girl’ (feminine) > bonum castellum ‘a good castle’ (neuter).
  • Definiteness: distinction of entities by whether they can be identified in a given context. If an entity can be identified up to the current point of the context, it is definite; otherwise, it is indefinite. Some languages mark definiteness, while others don’t. In English, for example, definiteness is marked by the determiners. Germanic and Balto-Slavic languages use two sets of adjective declensions for nouns of different definiteness.
  • Numeral classifier: a counting word inherent to each noun. Numeral classifiers are prominent features of nouns in many East and Southeast Asian languages such as Chinese, Japanese, Korean, Thai, Lao, and Khmer. For example, in Thai, ตัว [tua] is the classifier for animals.
    sùnák sǎ:m tua
    dog three CL
    ‘three dogs’

    Classifiers differ from measure words (like a cup of, a pound of, two kilograms of, etc.) in that the classifiers are inherent to the nouns while the measure words are not.

Verbal categories. Verbs and adverbs are grouped together as verbals because they share common linguistic properties. Some of these properties should be taken into account.

  • Tense: a relation of the event time and the speech time. Let’s take a look at Chibemba’s tenses for ba-bomba ‘they work’.
    [Figure: The tense system in Chibemba]

    In Chibemba, there’re three general tenses: present, past, and future. But the past and the future tenses have four levels of remoteness from the speech time: immediate, near, removed, and remote. That is quite tense, isn’t it?

  • Aspect: the temporal structure of an event. For example, English has three aspects: perfective, imperfective (a.k.a. habitual), and progressive.
    • Perfective: “He wrote three novels.”
    • Imperfective (habitual): “He writes novels.”
    • Progressive: “He is writing a novel.”
  • Mood: how the speaker thinks about an event expressed in morphological and syntactic constructions. For example, there’re three moods in English:
    • Indicative (assertion): “It is sunny outside.”
    • Subjunctive (uncertainty): “It should be sunny outside.”
    • Imperative (command): “Break the window!”
  • Modality: the semantic notion of the mood. For example, there’re four modalities in English.
    • Neutral: “I resit the exam.”
    • Obligation: “I must resit the exam.”
    • Desire: “I want to resit the exam.”
    • Possibility: “I may resit the exam.”
  • Evidentiality: the source of evidence for an event. For example, in Tuyuca, a language of Brazil and Colombia, there’re five evidentialities: visual, non-visual, apparent, second-hand, and assumed.

    [Figure: Evidentiality in Tuyuca]

Morphology: Language Families

Have you ever wondered how all the languages around the world are divided into families? Linguists classify them by morphological characteristics. There are two general language families: isolating and synthetic.

Isolating family. Languages of this family have a low ratio of morphemes per word and generally lack inflectional morphemes. For example, all dialects of Chinese, Thai, and Lao belong to this family. Due to the lack of inflectional morphemes, the languages in this family have a rigid word order, and grammatical functions must be analyzed from that order. Therefore, the terms isolating and analytic are used interchangeably to refer to this language family.

Synthetic family. Languages of this family have a high ratio of morphemes per word and a rich set of inflectional affixes. The languages in this family have relatively freer word order, because grammatical functions can be marked by the inflectional affixes. There are two subfamilies in this family.

  • Fusional subfamily: morphemes are less separable. For example, Ancient Greek λύομαι lu.omai (release.1st person.present tense.active.subjunctive) ‘I should release’: lu- is the root, and -omai is an inflectional affix indicating the first-person, present-tense, active, subjunctive form of the verb. The affix -omai cannot be split into separate morphemes for person, tense, voice, and mood.
  • Agglutinative subfamily: morphemes are more separable. For example, Nahuatl (the Aztec language) no.kali.mes (my.house.plural) ‘my houses’: no- is a derivational affix for ‘my’, kali is the root, and -mes is an inflectional affix for plurality. All of them are easily separable.

Languages in this family can also be characterized by where their inflectional affixes mark grammatical functions.

  • Case-marking subfamily: grammatical functions are marked on each dependent (i.e. an argument of a verb, a preposition, etc.). A grammatical function marked on a dependent is called a case, and an inflectional affix that marks a grammatical function is a case marker. For example, in Japanese, cases are marked on nouns and pronouns, e.g.
    watashi-ga gohan-o taberu
    I.SUBJ rice.OBJ eat.PRES
    ‘I eat rice.’

    The case marker -ga marks the subject, and the case marker -o marks the object.

  • Head-marking subfamily: grammatical functions are instead marked on each head (i.e. a verb, a preposition, etc.). An inflectional affix that marks a grammatical function on the head is called a head marker. For example, the order of verb arguments in Mohawk is marked by inflectional affixes. Consider the following sentences:
    Sak Uwári shako-núhwe’s
    Sak Uwari SUBJ-OBJ.like
    ‘Sak likes Uwari.’

    and

    Sak Uwári ruwa-núhwe’s
    Sak Uwari OBJ-SUBJ.like
    ‘Uwari likes Sak.’

    The head marker shako- marks the sequence of the subject and the object, while the head marker ruwa- marks the sequence of the object and the subject.

In the case-marking subfamily, cases vary from language to language. For example,

  • English has three cases: nominative (subject) they, accusative (object) them, and possessive (owner) their and theirs.
  • In our good old Latin, there’re six cases: nominative (subject) dominus ‘master’, accusative (direct object) dominum, dative (indirect object) dominō ‘for a master’, genitive (owner) dominī ‘of a master’, ablative (instrument) dominō ‘with a master’, and vocative (calling) domine ‘Hey, master!’
  • In Japanese, there’re eight cases, some of which resemble English prepositions: topic -wa, nominative (subject) -ga, accusative (direct object) -o, dative (indirect object) -ni, genitive (owner) -no, ablative (from) -kara, lative (to) -e, and instrumental (with) -de.

There’re also two case marking systems, divided by how they treat the arguments of transitive and intransitive verbs.

  • Nominative-Accusative system: For any transitive verb, the subject is marked with the nominative case (i.e. subject), while the object is marked with the accusative case (i.e. direct object). For any intransitive verb, the subject is still marked with the nominative case. The languages of this system include Japanese, Latin, Sanskrit, and many other Indo-European languages. For example, in Japanese, consider the case marking of NOM and ACC with the verbs ‘eat’ and ‘wait’.
    Satōsan-ga gohan-o taberu
    Mr. Sato.NOM rice.ACC eat.PRES
    ‘Mr. Sato is eating rice.’

    vs.

    Satōsan-ga matte iru
    Mr. Sato.NOM wait.PRES
    ‘Mr. Sato is waiting.’

    The nominative case is used in both transitive verbs and intransitive verbs to identify the subject of the sentence. We also refer to the nominative-accusative marking system as symmetric case marking.

  • Ergative-Absolutive system: For any transitive verb, the subject is marked with the ergative case, while the object is marked with the absolutive case. For any intransitive verb, the subject is instead marked with the absolutive case. The languages of this system include Tibetan, Dzongkha, Greenlandic Eskimo, etc. For example, in Greenlandic Eskimo, consider the case marking of ERG and ABS with the verbs ‘send’ and ‘come’.
    Juuna-p atuaga-q nassiuppaa
    Juuna.ERG book.ABS send.PAST
    ‘Juuna sent a book (to someone).’

    vs.

    atuaga-q tikissimanngilaq
    book.ABS come.NEG.PRES.PERF
    ‘The book has not come yet.’

    In this system, the absolutive case identifies the object of a transitive verb but the subject of an intransitive verb, while the ergative case marks only the subject of a transitive verb. We also refer to the ergative-absolutive marking system as asymmetric case marking. By the way, Greenlandic Eskimo tends to have long words, doesn’t it?

Some languages use both systems in specific circumstances; these languages are said to exhibit an ergative split.
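
The two alignment systems boil down to one question: which case does the intransitive subject share? Here is a small sketch of my own that assigns cases to core arguments under either alignment:

```python
def assign_cases(alignment, subject, obj=None):
    """Assign cases to core arguments under 'nom-acc' or 'erg-abs' alignment."""
    transitive = obj is not None
    if alignment == "nom-acc":    # symmetric: every subject is nominative
        cases = {subject: "NOM"}
        if transitive:
            cases[obj] = "ACC"
    elif alignment == "erg-abs":  # asymmetric: the intransitive subject patterns
        if transitive:            # with the transitive object (both absolutive)
            cases = {subject: "ERG", obj: "ABS"}
        else:
            cases = {subject: "ABS"}
    else:
        raise ValueError("unknown alignment")
    return cases

print(assign_cases("nom-acc", "Satōsan", "gohan"))  # {'Satōsan': 'NOM', 'gohan': 'ACC'}
print(assign_cases("erg-abs", "Juuna", "atuagaq"))  # {'Juuna': 'ERG', 'atuagaq': 'ABS'}
print(assign_cases("erg-abs", "atuagaq"))           # {'atuagaq': 'ABS'}
```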

Morphology: Content Words, Function Words, and Grammaticalization

In the language acquisition process, children learn to imitate their parents’ use of language. They start out by babbling, trying to utter simple sounds of consonants and vowels. As they grow up they start to mix and match those sounds to make a word. But before they can achieve an adult grammar (a fully developed mental grammar), there’s a critical stage, which all children have to encounter—the telegraphic stage, in which they attempt to put simple words into a sentence, like “Mom give cookie” (‘Mom, give me a cookie.’) and “Doggie not bite” (‘The dog doesn’t bite.’). The words they use in this stage are called content words.

Content words are lexical morphemes that have semantic content; i.e. they have a particular meaning of their own. They are usually open-class words, because new content words can easily be added to the language. For example, nouns, verbs, adjectives, and adverbs are content words, because they all refer to semantic concepts. However, we also consider derivational affixes and negation to be content words, because they change the meaning of a base form.

On the other hand, function words, or grammar words, are lexical morphemes that express a grammatical relation rather than refer to a semantic concept. They just have to be there to make a sentence grammatical. For instance, articles, conjunctions, prepositions, auxiliaries, interjections, particles, and inflectional affixes are function words.

Most of the time, it’s quite easy to distinguish content words from function words. Words that refer to an object, an abstract idea, an action, an attribute, or a manner are content words. Words that don’t refer to any meaning but must be there to make a sentence grammatical are function words. But some words appear to be both! For example, ‘will’ as a noun (content) means a motivation to do something, while as an auxiliary (function) it conveys the futurity of an action. In this case, we say that the word ‘will’ is in the process of grammaticalization.

Grammaticalization is a process of language change whereby a content word (or a cluster of content words) becomes a function word. This process takes place when a content word is used so frequently that it starts losing its core meaning over time.

[Figure: The grammaticalization process]

Grammaticalization is characterized by the following processes.

  • Semantic bleaching (desemanticization): a word loses its semantic content. As a content word is frequently used, it establishes a structure with surrounding words and becomes a partial function word. As its functionality strengthens, the semantic content gradually disappears.
  • Morphological reduction (decategorization): a word changes its content-bearing category to a grammatical structure. This process is a result of semantic bleaching.
  • Phonetic erosion: a word loses its phonological properties as a free morpheme to become a bound morpheme, as in I’m going to > I’m gonna > I’mma. Heine and Kuteva (2002, 2007) propose four kinds of phonetic erosion: the loss of phonetic segments (full syllables), the loss of suprasegmental properties (stress, tone, or intonation), the loss of phonetic autonomy (being an independent syllable), and phonetic simplification.
  • Obligatorification: when a content word is used in a specific context in a specific way, it may become obligatory, and thereby more grammatical, over time.

In the process of grammaticalization, a content word transforms into a function word over time. Hopper and Traugott (2003) propose the cline of grammaticalization as follows.

content word ⇒ function word ⇒ clitic (contraction of full word) ⇒ inflectional affix

The above process is also known as the cycle of categorial degrading (Givon, 1971; Reighard, 1978; Wittmann, 1983). For example, the auxiliary ‘will’ has followed the cline of grammaticalization as shown below.

[Figure: Grammaticalization of ‘will’ (Aitchison, 2001)]

In Old English, the verb willan meant to want or to desire. The verb was then grammaticalized to become the auxiliary will in Middle English and Present-Day English. Later, due to its frequent use, it was contracted into the clitic ’ll. It’s assumed that it may become an inflectional affix indicating the future tense some time in the future.

Unidirectional Hypothesis. Most linguists assume that the process follows Hopper and Traugott’s (2003) cline of grammaticalization. However, this assumption is challenged by rare counterexamples of degrammaticalization, where a function word becomes a content word under specific circumstances. For example, the preposition ‘up’ is degrammaticalized into a verb, as in “The company upped our salaries by 10%.”

References

  • Heine, Bernd and Tania Kuteva. The Genesis of Grammar. Oxford: Oxford University Press, 2007.
  • Heine, Bernd and Tania Kuteva. World lexicon of grammaticalization. Cambridge: Cambridge University Press, 2002.
  • Givon, Talmy. “Historical syntax and synchronic morphology: an archaeologist’s field trip”, Papers from the Regional Meetings of the Chicago Linguistic Society, 1971, 7, 394–415.
  • Hopper, Paul J. and Elizabeth Traugott. Grammaticalization. Cambridge: Cambridge University Press, 2003.
  • Reighard, John. “Contraintes sur le changement syntaxique”, Cahiers de linguistique de l’Université du Québec, 1978, 8, 407-36.
  • Wittmann, Henri. “Les réactions en chaîne en morphologie diachronique.” Actes du Colloque de la Société internationale de linguistique fonctionnelle 10.285-92. Québec: Presses de l’Université Laval. 1983.
  • Aitchison, Jean. Language Change, Progress or Decay? Cambridge: Cambridge University Press, 2001.

Morphology: Allomorphs and Apophony

In the last episode, we learned the affixation process, where we add bound morphemes (affixes) to a free morpheme (root) to form a new word. When a root combines with an optional derivational affix, it becomes a base form, such as teach (root) + -er (derivational affix) > teacher (base form). When a base form combines with an inflectional affix, it becomes an inflected form, e.g. teacher (base form) + -s (inflectional affix) > teachers (inflected form). However, affixes of the same function may appear in different forms. For example, the plurality affixes in English are -s (dogs), -es (kisses), -en (oxen), etc. The question is: how can we group them? That’s where the notion of allomorph comes in.

Allomorphs are morphological variants of a particular morpheme. For example, the plurality in English has the following allomorphs:

  • [-s] allomorph: cat + [-s] > cats
  • [-z] allomorph: dog + [-z] > dogs
  • [-əz] allomorph: kiss + [-əz] > kisses
  • vowel-change allomorph: man > men
  • zero allomorph: sheep > sheep
  • -en allomorph: ox > oxen

All of them are grouped into the plurality morpheme, denoted by [+PLU]. Note that such morphemes are written in brackets with a plus sign ‘+’.
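
The choice among the regular allomorphs is phonologically conditioned: roughly, sibilant-final bases take [-əz], voiceless-final bases take [-s], and voiced-final bases take [-z]. Below is a simplified sketch of my own that approximates this conditioning from spelling and simply lists a few irregular forms:

```python
# Pick the allomorph of the plurality morpheme [+PLU] for a noun.
IRREGULAR = {"man": "men", "ox": "oxen", "sheep": "sheep", "tooth": "teeth"}
SIBILANT_ENDINGS = ("s", "z", "sh", "ch", "x")  # trigger [-əz]
VOICELESS_ENDINGS = ("p", "t", "k", "f", "th")  # trigger [-s]

def plural_allomorph(noun):
    """Return (plural form, allomorph label); a spelling-based approximation."""
    if noun in IRREGULAR:
        return IRREGULAR[noun], "irregular (vowel-change, zero, or -en) allomorph"
    if noun.endswith(SIBILANT_ENDINGS):
        return noun + "es", "[-əz] allomorph"
    if noun.endswith(VOICELESS_ENDINGS):
        return noun + "s", "[-s] allomorph"
    return noun + "s", "[-z] allomorph"

for noun in ["cat", "dog", "kiss", "sheep", "ox"]:
    print(noun, "->", plural_allomorph(noun))
```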

Among allomorphs, the vowel-change allomorph, or apophony, is a bit harder to identify as an explicit morpheme. Apophony is a change of sounds within a word that indicates grammatical information. For example, sing > sang > sung changes the verb’s tense, and tooth > teeth changes the number. There are three types of apophony.

  • Vowel gradation: Vowels change their grade or class to identify grammatical information. Two prominent phenomena are umlaut and ablaut.
    • In umlaut, a vowel within a word changes its class from back to front to identify plurality, e.g. German Mann [man] ‘man’ > Männer [ˈmænnə] ‘men’. This phenomenon is also attested in English, e.g. goose (back vowel [u]) > geese (front vowel [i]). The name ‘umlaut’ is the Germanic name of the diaeresis ¨ above the affected vowel.
    • Ablaut (ab- means ‘down’ and laut means ‘sound’) is a more general concept of the umlaut, as the vowel downgrades to identify grammatical information, such as tense and number. For example, the vowel of English strong verbs degrade from the front class to the middle class and to the back class to identify the tense, e.g. sing (front [i]) > sang (middle [æ]) > sung (back [a]) and fly (middle [aɪ]) > flew (back [u]) > flown (back [oʊ]).
  • Prosodic apophony: Prosodic elements (e.g. stress, duration, and tone) change their class to identify grammatical information. For example, the English stress changes its place in any pair of related noun (stressed on the first syllable) and verb (stressed on the second syllable), such as présent (n.) vs. presént (v.), ínsult (n.) vs. insúlt (v.), and récord (n.) vs. recórd (v.).
  • Consonant apophony: Some consonants mutate to identify grammatical information. For example, lenition (weakening the consonant to become voiced) and palatalization (attaching the [-j] sound to the consonant) transform an intransitive verb into a causative one in Bemba (Kula, 2000), such as koma ‘to be deaf’ > komya ‘to cause to be deaf’, pona ‘to fall’ > ponya ‘to cause to fall’, and luba ‘to be lost’ > lufya ‘to cause to be lost’.

References

  • Kula, Nancy C. (2000). The phonology/morphology interface: Consonant mutations in Bemba. In H. de Hoop & T. van der Wouden (eds.), Linguistics in the Netherlands 2000 (pp. 171–183). Amsterdam: John Benjamins.

Copyright (C) 2018 by Prachya Boonkwan. All rights reserved.

The contents of this blog are protected by U.S., Thai, and international copyright laws. Reproduction and distribution of the contents of this blog without written permission of the author are prohibited.