May 22, 2021 Meeting (Vocabulary Day)
We would like to thank Stuart for the inspiring lectures, and all the participants who contributed to a successful meeting!
In the photo: Stuart McLean (Speaker), Emeritus Professor Paul Nation (much appreciated for his participation and comments!), Louis Lafleur (Moderator), Yu Kanazawa (Coordinator / Event host)
-------------------------------------------------------------------------
Pre-event announcement below
-------------------------------------------------------------------------
Information in Japanese: http://www.let-kansai.org/htdocs/index.php?page_id=74
-------------------------------------------------------------------------
Bio:
Stuart McLean is an Associate Professor at St. Andrew's University. He is presently teaching the Teaching and Learning Vocabulary course at Temple University, Japan, and he has begun acting as a Ph.D. supervisor and external examiner for several institutions. He has published in Reading in a Foreign Language, Vocabulary Learning and Instruction, Language Teaching Research, TESOL Quarterly, System, Applied Linguistics, Studies in Second Language Acquisition, Language Assessment Quarterly, and Language Testing on subjects related to language assessment, research methods, reading, listening, and vocabulary. He is currently building online self-marking form-recall and meaning-recall (orthographic and phonological) vocabulary levels tests that allow teachers to create levels tests based on various (a) lists, (b) word-band sizes, (c) band ranges, and (d) sampling ratios. Teachers can download automatically marked responses, the actual typed responses, and the time taken to complete each response. At present, tests are designed for Japanese learners studying English and for English speakers learning Spanish (vocableveltest.org). Email: stumc93 [at] gmail [dot] com
[Detailed Topics]
TOPIC 1: Vocabulary levels tests and vocabulary size tests: their different purposes and different score interpretations
For fluent reading comprehension, language learners need to know around 98% of the vocabulary within a text (Schmitt et al., 2011). Two methods of determining learner knowledge of the most frequent vocabulary are the Vocabulary Levels Test (e.g., Schmitt et al., 2001) and the Vocabulary Size Test (e.g., Nation & Beglar, 2007). One common misunderstanding is that a vocabulary size estimate indicates a lexical mastery level; for example, that a student with a vocabulary size of 2,000 words knows all of the words within the first 2,000 words of English. Despite previous attempts at distinguishing between the constructs of vocabulary size and vocabulary levels (McLean & Kramer, 2015; Nation, 2016), the confusion between them remains (e.g., Nation, 2012; Nguyen & Nation, 2011; Pujadas & Muñoz, 2020). This presentation clarifies what vocabulary size and levels tests are designed to measure and empirically demonstrates the distinction between these constructs, using a subsample (n = 62) of a larger pool of VST data collected from 3,427 university students across Japan (McLean et al., 2014). Specifically, I will demonstrate that vocabulary size estimates often do not indicate knowledge of even the most frequent vocabulary levels. I will show that none of the 10 participants with vocabulary size estimates of exactly 6,000 words demonstrated mastery of the first 6,000 words of English, and only three demonstrated mastery of the first two 1,000-word bands. Similarly, only one of 52 participants with vocabulary size estimates of 2,000 words demonstrated mastery of the first 1,000 words.
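As a concrete illustration of this distinction, the short sketch below (using invented numbers and a simplified scoring rule, not the actual VST procedure or data from the study) shows how a learner whose correct answers are spread thinly across eight 1,000-word bands can receive a size estimate of 2,000 words while demonstrating mastery of no band at all.

```python
# Toy illustration (not the actual VST scoring procedure) of how a vocabulary
# size estimate can be reached without mastery of any single 1,000-word band.
# Assumptions: 10 items sampled per 1,000-word band (so each correct answer
# adds 100 words to the size estimate) and a mastery threshold of 9/10 per band.

ITEMS_PER_BAND = 10          # assumed sampling ratio
WORDS_PER_ITEM = 100         # each item stands for 100 words
MASTERY_THRESHOLD = 9        # assumed correct answers needed to claim a band

# Hypothetical learner: correct answers per 1,000-word band (bands 1-8).
correct_per_band = {1: 6, 2: 5, 3: 3, 4: 2, 5: 2, 6: 1, 7: 1, 8: 0}

size_estimate = sum(correct_per_band.values()) * WORDS_PER_ITEM
mastered_bands = [band for band, correct in correct_per_band.items()
                  if correct >= MASTERY_THRESHOLD]

print(f"Vocabulary size estimate: {size_estimate} words")    # 2000 words
print(f"Bands mastered: {mastered_bands or 'none'}")          # none
```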
Related paper: McLean, S., & Kramer, B. (2015). The creation of a new vocabulary levels test. Shiken, 19(2), 1-11. http://teval.jalt.org/node/33
TOPIC 2: McLean, S. (2021). The coverage comprehension model, its importance to pedagogy and research, and threats to the validity with which it is operationalized. Reading in a Foreign Language, 33(1), 126-140. https://nflrc.hawaii.edu/rfl/item/528 OPEN ACCESS
When learners can comprehend 98% or more of the tokens within a text, the lexical difficulty of the text is unlikely to inhibit reading comprehension (Schmitt et al., 2011). This phenomenon will be referred to as the Coverage Comprehension Model (CCM). The CCM appears in countless articles that describe the percentage of tokens necessary to comprehend reading materials (e.g., Nation, 2006). Further, numerous studies operationalize the CCM to provide evidence that participants were able to comprehend reading materials (e.g., Feng & Webb, 2020) by estimating (a) the lexical difficulty of a text and (b) the lexical mastery level of a learner. However, the validity with which the CCM is operationalized is limited by the following four assumptions: (a) 26 out of 30 items on a levels test is an appropriate threshold for mastery of a 1,000-word band; (b) the word counting unit used when estimating the lexical difficulty of a text and the lexical ability of a learner is appropriate for the target learners; (c) the item format used in levels tests can appropriately capture the type of vocabulary knowledge necessary when reading; and (d) the number of items on a vocabulary levels test accurately represents the difficulty of the 1,000-word band. This paper applies research findings to evaluate the validity of the first two assumptions and concludes that the validity with which the CCM is operationalized in research is limited.
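For readers who want to see how the CCM is typically operationalized, the following minimal sketch (illustrative only; real studies use lemmatized frequency lists and lexical profiling software rather than simple string matching) computes the percentage of tokens in a text that fall within a learner's assumed known vocabulary and compares it with the 98% threshold.

```python
# Minimal sketch of the coverage calculation behind the CCM: estimate the
# percentage of tokens in a text assumed to be known, then compare it with
# the 98% threshold. The word list and text below are placeholders.
import re

def token_coverage(text: str, known_words: set[str]) -> float:
    """Return the percentage of tokens in `text` found in `known_words`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    known = sum(1 for token in tokens if token in known_words)
    return 100 * known / len(tokens)

known_words = {"the", "cat", "sat", "on", "mat", "a", "and", "dog", "ran"}
text = "The cat sat on the mat and the dog ran around the garden."

coverage = token_coverage(text, known_words)
print(f"Coverage: {coverage:.1f}%  (98% threshold met: {coverage >= 98})")
```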
Related paper
McLean, S. (2021). The coverage comprehension model, its importance to pedagogy and research, and threats to the validity with which it is operationalized. Reading in a Foreign Language, 33(1), 126-140. https://nflrc.hawaii.edu/rfl/item/528 OPEN ACCESS
TOPIC 3: Word counting units (WCU): Japanese university students’ knowledge of derivational forms
In L2 English research, the most often discussed word counting units (WCUs) are (a) the type, an orthographic form; (b) the lemma, a base word form and its inflections within a particular part of speech (POS); (c) the flemma, a base word form and its inflections regardless of POS; and (d) the word family (WF6), a base word form plus its inflectional and derivational forms, regardless of POS, up to level 6 of Bauer and Nation's affix criteria. It should be stressed that flemmas are not lemmas; flemmas are often wrongly labeled as lemmas, and research and pedagogy will benefit from the accurate labeling of WCUs. WCUs are important because of assumptions involving the ability of English learners to comprehend derivational forms. These assumptions directly affect (a) the coverage that 1,000-word bands provide, and (b) the number of associated inflectional and derivational word forms that are assumed to be comprehensible. This presentation will argue that the dominant use of WF6 is the result of convention and persists despite the L2 research. I will present both sides of the argument and the limitations of each view. It is suggested that the way forward is for researchers and teachers to justify their choice of WCU.
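To make these differences concrete, the sketch below groups one set of word forms under three WCUs (the groupings are illustrative assumptions, not an implementation of Bauer and Nation's affix criteria): the same seven forms count as seven types, three flemmas, or one word family, which is why coverage figures and the assumed comprehensibility of derivational forms shift with the chosen unit.

```python
# Rough illustration of how the same word forms count differently under
# three word counting units (groupings assumed for illustration).
forms = ["develop", "develops", "developed", "developing",
         "development", "developments", "developmental"]

# type: every distinct orthographic form is its own unit
types = set(forms)

# flemma: a base form plus its inflections, regardless of part of speech
flemmas = {
    "develop": ["develop", "develops", "developed", "developing"],
    "development": ["development", "developments"],
    "developmental": ["developmental"],
}

# word family (WF6): base form, inflections, and derivations in one unit
families = {"develop": forms}

print(len(types), "types,", len(flemmas), "flemmas,", len(families), "word family")
# -> 7 types, 3 flemmas, 1 word family
```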
Related papers
McLean, S. (2021). The coverage comprehension model, its importance to pedagogy and research, and threats to the validity with which it is operationalized. Reading in a Foreign Language, 33(1), 126-140. https://nflrc.hawaii.edu/rfl/item/528 OPEN ACCESS
McLean, S. (2018). Evidence for the adoption of the flemma as an appropriate word counting unit. Applied Linguistics, 39(6), 823-845. https://doi.org/10.1093/applin/amw050
Brown, D., Stoeckel, T., McLean, S., & Stewart, J. (2020). The most appropriate lexical unit for L2 vocabulary research and pedagogy: A brief review of the evidence. Applied Linguistics. https://doi.org/10.1093/applin/amaa061
Stoeckel, T., McLean, S., & Nation, P. (2020). Limitations of size and levels tests of written receptive vocabulary knowledge. Studies in Second Language Acquisition, 1-23. https://doi.org/10.1017/S027226312000025X
Laufer, B., & Cobb, T. (2020). How much knowledge of derived words is needed for reading? Applied Linguistics, 41(6), 971-998. https://doi.org/10.1093/applin/amz051
TOPIC 4: Question types: Meaning-recall questions better capture the type of vocabulary knowledge necessary for reading
Vocabulary's relationship to reading proficiency is frequently cited as a justification for the assessment of L2 written receptive vocabulary knowledge. However, to date, there has been relatively little research regarding which modalities of vocabulary knowledge have the strongest correlations with reading proficiency, and observed differences have often been statistically non-significant. The present research employs a bootstrapping approach to reach a clearer understanding of the relationships of various modalities of vocabulary knowledge to reading proficiency. Test-takers (N = 103) answered 1,000 vocabulary test items spanning the third 1,000 most frequent English words in the New General Service List corpus (Browne, Culligan, & Phillips, 2013). Items were answered under four modalities: Yes/No checklists, form recall, meaning recall, and meaning recognition. These pools of test items were then sampled with replacement to create 1,000 simulated tests ranging in length from five to 200 items, and the results were correlated with Test of English for International Communication (TOEIC) Reading scores. For all examined test lengths, meaning-recall vocabulary tests had the highest average correlations with reading proficiency, followed by form-recall vocabulary tests. The results indicated that tests of vocabulary recall are stronger predictors of reading proficiency than tests of vocabulary recognition, despite the theoretically closer relationship of vocabulary recognition to reading.
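The core of the bootstrapping procedure can be sketched as follows (an illustrative outline with placeholder random data, not the authors' analysis code): items are repeatedly sampled with replacement to form simulated tests of a given length, each test-taker is scored on those items, and the simulated-test scores are correlated with reading scores.

```python
# Illustrative bootstrap: build simulated tests of a given length by sampling
# items with replacement, score test-takers on them, and correlate those
# scores with reading scores. The response matrix and reading scores below
# are random placeholders standing in for real item-level data and TOEIC scores.
import numpy as np

rng = np.random.default_rng(0)
n_test_takers, n_items = 103, 1000

item_responses = rng.integers(0, 2, size=(n_test_takers, n_items))  # 1 = correct
reading_scores = rng.normal(300, 80, size=n_test_takers)            # placeholder

def bootstrap_correlation(test_length: int, n_replications: int = 1000) -> float:
    """Mean correlation between simulated-test scores and reading scores."""
    correlations = []
    for _ in range(n_replications):
        sampled_items = rng.choice(n_items, size=test_length, replace=True)
        test_scores = item_responses[:, sampled_items].sum(axis=1)
        correlations.append(np.corrcoef(test_scores, reading_scores)[0, 1])
    return float(np.mean(correlations))

for length in (5, 30, 100, 200):
    print(length, "items:", round(bootstrap_correlation(length), 3))
```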
Related Papers
McLean, S., Stewart, J., & Batty, A. O. (2020). Predicting L2 reading proficiency with modalities of vocabulary knowledge: A bootstrapping approach. Language Testing, 37(3), 389-411.
Zhang, S., & Zhang, X. (2020). The relationship between vocabulary knowledge and L2 reading/listening comprehension: A meta-analysis. Language Teaching Research, 1362168820913998.
Laufer, B., & Aviad-Levitzky, T. (2017). What type of vocabulary knowledge predicts reading comprehension: Word meaning recall or word meaning recognition? The Modern Language Journal, 101(4), 729-741.
Stoeckel, T., McLean, S., & Nation, P. (2020). Limitations of size and levels tests of written receptive vocabulary knowledge. Studies in Second Language Acquisition, 1-23. https://doi.org/10.1017/S027226312000025X
TOPIC 5: Sampling ratios: How many items do we need to accurately represent a 1,000-word band?
Gyllstad, H., McLean, S., & Stewart, J. (2020). Using confidence intervals to determine adequate item sample sizes for vocabulary tests: An essential but overlooked practice. Language Testing. https://doi.org/10.1177/0265532220979562 OPEN ACCESS
Stoeckel, T., McLean, S., & Nation, P. (2020). Limitations of size and levels tests of written receptive vocabulary knowledge. Studies in Second Language Acquisition, 1-23. https://doi.org/10.1017/S027226312000025X
Gyllstad, H., Vilkaitė, L., & Schmitt, N. (2015). Assessing vocabulary size through multiple-choice formats: Issues with guessing and sampling rates. ITL-International Journal of Applied Linguistics, 166(2), 278-306. https://doi.org/10.1075/itl.166.2.04gyl
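The logic behind using confidence intervals to judge adequate item sample sizes (Gyllstad, McLean, & Stewart, 2020) can be illustrated with the rough sketch below, which uses a normal-approximation interval with a finite-population correction; the paper's exact procedure may differ. It shows how the uncertainty around a learner's estimated knowledge of a 1,000-word band narrows as more items are sampled from that band.

```python
# Rough sketch: 95% confidence interval for the proportion of a 1,000-word
# band a learner knows, as a function of how many items are sampled from it.
# Uses a normal approximation with a finite-population correction (an
# illustrative choice, not necessarily the cited paper's exact method).
import math

def band_knowledge_ci(correct: int, items: int, band_size: int = 1000,
                      z: float = 1.96) -> tuple[float, float]:
    """95% CI for the proportion of the band known, given `correct`/`items`."""
    p = correct / items
    fpc = math.sqrt((band_size - items) / (band_size - 1))  # finite-population correction
    margin = z * math.sqrt(p * (1 - p) / items) * fpc
    return max(0.0, p - margin), min(1.0, p + margin)

for items in (10, 30, 100):
    correct = round(0.8 * items)            # a learner scoring 80% on the sample
    low, high = band_knowledge_ci(correct, items)
    print(f"{items} items: estimate {correct / items:.0%}, 95% CI {low:.0%}-{high:.0%}")
```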
TOPIC 6: Workshop: Using self-marking online meaning-recall (reading and listening) and form-recall vocabulary tests to (a) match learners with appropriate materials, (b) gain data for research, and (c) motivate students to study vocabulary each week.
In this workshop, participants can make vocabulary tests on www.vocableveltest.org for their students or research participants to use. I will go through the various options (list, word counting unit, question type, sampling ratio, feedback) available to teachers and researchers. I will also explain the advantages and disadvantages of the different question types, and the advantages and disadvantages of using meaning-recall tests relative to multiple-choice (MC) or Yes/No tests.