ICALL Research Group
Oberseminar, Prof. Detmar Meurers, Summer 2010
The group of connective governors ("Konnektregierer") prototypically comprises the subordinate-clause introducers of German, which on the one hand express a specific semantic relation and on the other hand impose specific syntactic requirements on the internal connect (cf. the feature of connective government, "Konnektrektion", in the Handbuch der Konnektoren). How are these function words used in learner language? Do errors typically occur there? How can the contexts of the corresponding learner utterances be described precisely? And are there differences from the linguistic behavior of L1 speakers of German? The text type of summaries can moreover be used to show what influence the respective source text to be summarized has on text production. The empirical basis of the analysis is the error-annotated learner corpus Falko, specifically its subcorpus of text summaries.
This longitudinal study investigates second language (L2) development in college-level learners of German. For decades, second language acquisition (SLA) researchers have sought a uniform "developmental index", valid across languages and instruction settings, for measuring learner progress in L2 speaking and writing. Suggested measures were based on counts of words, sentences, and other surface-structure units in learner texts, compared across proficiency levels. Skehan (1989) summarized such general measures in his model of three proficiency dimensions: complexity, accuracy, and fluency (CAF). More recently, the need to supplement general CAF measures with more specific indicators of language development, and to diversify research methods, has been recognized. In particular, calls have been made for longitudinally tracking the same learners over longer periods of time; going beyond surface features to focus on specific morphological, lexical, and syntactic patterns of learner language; dynamic descriptions accounting for non-linearity and variability in L2 acquisition; and applying interdisciplinary research methods, including corpus linguistics and computational linguistics. This study responds to these calls by collecting, annotating, and analyzing a written electronic corpus of learner productions elicited at dense time intervals, starting from the novice level and continuing over four semesters of study. I will present some preliminary results using both general CAF indicators and more specific characteristics of learner language obtained by corpus-analysis methods. I will conclude by briefly describing the goals of the joint corpus annotation project of the University of Kansas and the University of Tübingen.
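To illustrate what such surface-based measures look like in practice, here is a minimal sketch of two generic CAF-style indicators (mean sentence length as a fluency/complexity proxy, type-token ratio as a lexical-diversity proxy). The metrics and the example text are illustrative only, not the study's actual instruments.

```python
import re

def caf_measures(text):
    """Toy surface measures of the kind CAF research counts in learner texts."""
    # Split into sentences on terminal punctuation, dropping empty fragments
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Tokenize into lowercase word forms
    words = re.findall(r"\w+", text.lower())
    mean_sentence_length = len(words) / len(sentences)  # words per sentence
    type_token_ratio = len(set(words)) / len(words)     # distinct words / all words
    return mean_sentence_length, type_token_ratio

# Hypothetical learner utterance (two sentences, ten tokens, seven types)
msl, ttr = caf_measures("Ich lerne Deutsch. Deutsch ist schwer, aber ich lerne gern.")
# msl == 5.0, ttr == 0.7
```

Comparing such numbers across proficiency levels is exactly the kind of analysis the abstract describes as insufficient on its own, hence the call for more specific indicators.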
In this talk, speaking from the perspective of Second Language Acquisition (SLA) research, I will briefly sketch the recent history of attempts to manipulate the language to which learners are exposed with the intention of facilitating development. The focus will be on the need for more precise models of how language input is processed. Relevant aspects of the MOGUL framework will be briefly introduced to obtain a clearer idea, based in current thinking in cognitive science, of how perception and emotion affect the growth of linguistic knowledge in the individual. This should enable much more precise research into the effectiveness of various ways of enhancing the language input presented to the learner.
Useful preliminary reading would include:
Since the beginnings of the history of didactics, the importance of assessment in the teaching process has been emphasized. Hence, the research field called Computer-Assisted Assessment (CAA) has developed to study how computers can aid in evaluating students' learning. In particular, CAA of free-text answers has received a great deal of attention in recent years, owing to the need to evaluate deep understanding of lesson concepts, which, according to most educators and researchers, cannot be achieved by multiple-choice (MCQ) testing alone.
In this talk, I review the state of the art of the field: the techniques used, the existing free-text scoring systems, and a comparison of these systems according to the currently available evaluation metrics. Finally, I also present our approach, based on statistics, Latent Semantic Analysis, and Genetic Algorithms, to evaluating short free-text student answers in both Spanish and English.
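The Latent Semantic Analysis component can be illustrated with a toy sketch: build a term-document matrix over a reference answer and student answers, reduce it with a truncated SVD, and score answers by cosine similarity to the reference in the latent space. The data and vocabulary here are hypothetical, and the actual systems discussed in the talk are far more elaborate (combining statistics and Genetic Algorithms as well).

```python
import numpy as np

docs = [
    "the heart pumps blood through the body",        # reference answer
    "blood is pumped by the heart around the body",  # related student answer
    "plants convert sunlight into energy",           # unrelated student answer
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix of raw counts (rows: terms, columns: documents)
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Truncated SVD keeps only the k strongest latent dimensions
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # document vectors in latent space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

score_related = cos(doc_vecs[0], doc_vecs[1])    # high: shared latent meaning
score_unrelated = cos(doc_vecs[0], doc_vecs[2])  # near zero: no shared terms
```

In a scoring system, such similarity values would be mapped to grades, e.g. by thresholds tuned on human-scored answers, which is one place where the Genetic Algorithm optimization mentioned in the abstract could plausibly enter.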
_________________________________________________________________________________
Last updated: December 31, 2011