Learner corpus research plenary #cl2015

Learner corpus research: a fast-growing interdisciplinary field

Sylviane Granger


 

LCR is an interdisciplinary research field

Design: learner and task variables to control

Not only the English language

Method: Contrastive Interlanguage Analysis (CIA; Granger, 1996) and computer-aided error analysis
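In practice, the CIA step of quantifying over- and underuse often boils down to comparing an item's frequency in a learner corpus against a reference corpus. Below is a minimal sketch of that comparison using a log-likelihood (G2) statistic; the word counts are invented for illustration and not taken from the talk.

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Dunning-style log-likelihood (G2) for one item's frequency in
    corpus A (e.g. learner) vs. corpus B (e.g. native reference)."""
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    g2 = 0.0
    if freq_a:
        g2 += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        g2 += freq_b * math.log(freq_b / expected_b)
    return 2 * g2

# Invented counts: an intensifier that looks overused in the learner data.
print(round(log_likelihood(freq_a=1200, total_a=200_000,
                           freq_b=2100, total_b=1_000_000), 2))
```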

Wider spectrum of linguistic analysis

Interpretation: focus on transfer but this is changing; growing integration of SLA theory

Applications: few up-and-running resources but great potential

ICLE Version 3 (expected 2016 or 2017) will cover around 30 L1s, as opposed to 11 L1s in Version 1

Learner corpora are a powerful heuristic resource

Corpus techniques make it possible to uncover new dimensions of learner language and lead to the formulation of new research questions, e.g. the L2 phrasicon (word combinations).
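As a toy illustration of how corpus techniques surface the L2 phrasicon, here is a sketch that extracts recurrent two-word combinations with NLTK's collocation finder; the sample string is a stand-in for real tokenised learner essays.

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Toy stand-in for a tokenised learner essay.
tokens = ("on the other hand the results show that "
          "on the other hand we can say").lower().split()

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)  # keep combinations occurring at least twice
measures = BigramAssocMeasures()
for (w1, w2), score in finder.score_ngrams(measures.likelihood_ratio):
    print(w1, w2, round(score, 2))
```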

Prof. Granger brings up Leech’s preface to Learner English on Computer (1998)

Gradual change from mute corpora to sound-aligned corpora

POS tagging has improved so much

Error tagging: a wide range of error-tagging systems, including multi-layer annotation systems

Parsing of learner data (90% accuracy; Geertzen et al., 2014)
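For readers curious what automatic annotation of learner data looks like in practice, here is a small sketch using spaCy (one possible off-the-shelf tagger/parser, not a tool mentioned in the talk) on a sentence containing a typical learner error; the model name is an assumption.

```python
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# A learner-style sentence with a countability/agreement error.
doc = nlp("The informations he gave me was very useful.")

for token in doc:
    # surface form, fine-grained POS tag, dependency label, syntactic head
    print(f"{token.text:12} {token.tag_:6} {token.dep_:10} {token.head.text}")
```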

Static learner corpora vs. monitor corpora

CMC (computer-mediated communication) learner corpus (Marchand 2015)

Granger's (2009) paper on the learner corpus research field:

Granger, Sylviane (2009). "The contribution of learner corpora to second language acquisition and foreign language teaching." In Karin Aijmer (ed.), Corpora and Language Teaching (Studies in Corpus Linguistics 33). Amsterdam: John Benjamins, 13–32.

 

CIA V2 (Granger, 2015): a new model

SLA researchers are more interested in corpus data and corpus linguists are more familiar with SLA grounding

Implications are much more numerous than applications

Links with NLP: spell and grammar checking, learner feedback, native language identification, etc.
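As a pointer to what native language identification involves, here is a minimal text-classification sketch with scikit-learn; the essay fragments and L1 labels are invented placeholders, and character n-grams are just one commonly used feature choice, not the approach discussed in the plenary.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented essay fragments paired with the writer's (assumed) L1.
texts = [
    "I am agree with this opinion because it is more better for everyone.",
    "In my country we discuss about this problem since many years.",
    "He suggested me to read more books for improve my English.",
    "Despite of the difficulties, I am agree that the results are good.",
]
labels = ["ES", "FR", "ZH", "ES"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["I am agree that we should discuss about it."]))
```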

Multiple perspectives on the same resource: richer insights and more powerful tools

Phraseology

Louvain English for Academic Purposes Dictionary (LEAD)

web-based

corpus-based

descriptions of cross-disciplinary academic vocabulary

1,200 lexical items organised around 18 functions (contrast, illustrate, quote, refer, etc.)

A really exciting application

 

 

 

 

 

 

 

 

Multidimensional analysis (MA) of L2 learner English

Corpus Linguistics 2015, University of Lancaster, 21-24 July


Yu Yuan:
“Exploring the variation in world Learner Englishes: A multidimensional analysis of L2 written corpora”

109 features included in the analysis

RQ:

Can Biber’s model be extended?

How do features co-occur in learner English?

 

Data

ICLE 1.0 (Granger, 2002)

SWECCL 2.0 (Wen & Wang, 2008)

 

Tools

Multidimensional Analysis Tagger (MAT; Nini, 2014); manual and Windows software available online

Stanford CoreNLP

R

Python scripts

 

Method

Kaiser's criterion + scree test for factor analysis
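A quick sketch of what these two retention checks look like on a texts-by-features matrix: Kaiser's criterion keeps factors whose eigenvalue exceeds 1, and the scree test looks for the elbow in the eigenvalue curve. The data here are random placeholders; the 109 features echoes the talk, but everything else is illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 109))   # placeholder: 500 texts x 109 linguistic features

# Eigenvalues of the feature correlation matrix, largest first.
eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Kaiser's criterion: retain factors with eigenvalue > 1.
print("Factors retained by Kaiser's criterion:", int((eigenvalues > 1).sum()))

# Scree test: plot the eigenvalues and look for the elbow.
plt.plot(range(1, eigenvalues.size + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```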

 

Results

10 dimensions stand out

Dimensions are largely epistemological, rhetorical and syntactical.

 

1.6 billion word Hansard Corpus available

 

Through the corpora list & Prof. Mark Davies

::::::::::::::::::::::::::::::::::::

We are pleased to announce the release of the 1.6 billion word Hansard Corpus. The corpus is part of the SAMUELS project and has been funded by the AHRC (UK).

The Hansard Corpus contains 1.6 billion words from 7.6 million speeches in the British Parliament from 1803-2005. The corpus is semantically tagged, which allows for powerful meaning-based searches. In addition, users can create “virtual corpora” by speaker, time period, House of Parliament, and party in power, and compare across these corpora.

As with all of the other BYU corpora, the corpus allows queries by part of speech, lemma, synonym, customized word lists, and by section of the corpus (e.g. which words or phrases appear in one time period much more than in another). In terms of visualization, it allows users to view frequency listings (matching words and phrases), chart displays (overall frequency by time period), collocates (including comparisons between collocates of contrasting node words), and re-sortable concordance lines.

The end result is a corpus that will be of value not only to linguists (as the largest structured corpus of historical British English from the 1800s-1900s), but hopefully to historians, political scientists, and others as well.

http://www.hansard-corpus.org

============================================

Mark Davies
Professor of Linguistics / Brigham Young University
http://davies-linguistics.byu.edu/

New Directions in Corpus-based Translation Studies

Through the Corpora List

:::::::::::::::::::::::::::::::::::::

The “Language Science Press” has just published the following open access book in their series “Translation and Multilingual NLP”:

“NEW DIRECTIONS IN CORPUS-BASED TRANSLATION STUDIES” by Claudio Fantinuoli & Federico Zanettin (eds.)

Please download your free copy from http://langsci-press.org/catalog/book/76

ABSTRACT

Corpus-based translation studies has become a major paradigm and research methodology and has investigated a wide variety of topics in the last two decades. The contributions to this volume add to the range of corpus-based studies by providing examples of some less explored applications of corpus analysis methods to translation research. They show that the area keeps evolving as it constantly opens up to different frameworks and approaches, from appraisal theory to process-oriented analysis, and encompasses multiple translation settings, including (indirect) literary translation, machine(-assisted) translation and the practical work of professional legal translators. The studies included in the volume also expand the range of application of corpus applications in terms of the tools used to accomplish the research tasks outlined.

Free ngram databases from COW14 web corpora

From the corpora list

::::::::::::::::::::::::::::::

We are pleased to announce the release of the first very large ngram databases derived from the giga-token COW14 web corpora. They are completely free (CC-BY) and can be downloaded without registration. We have applied no frequency thresholds whatsoever. In addition to the counted ngram lists, we offer raw versions such that everybody can create their own version. The raw ngrams also contain additional information (crawl year, top-level domain, country geolocation).

There are also English dependency bigrams (based on Malt parses) containing words, their heads, and the dependency relation between them.

For end-users, there are also word and lemma frequency lists with some convenient frequency measures, optionally with a frequency threshold of 10 (smaller files, easier handling).
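For end-users working with the downloaded lists, something like the following sketch would compute the per-million and log measures listed further below; the tab-separated word/frequency column layout and the file name are assumptions to verify against the actual downloads.

```python
import csv
import math

def load_frequency_list(path, corpus_size):
    """Read a tab-separated frequency list (assumed layout: word, raw frequency)
    and add frequency per million and log10 frequency per million."""
    rows = []
    with open(path, encoding="utf-8", newline="") as handle:
        for record in csv.reader(handle, delimiter="\t"):
            word, raw = record[0], int(record[1])
            fpm = raw * 1_000_000 / corpus_size
            rows.append((word, raw, fpm, math.log10(fpm)))
    return rows

# Hypothetical file name; the English corpus size is quoted below.
# freqs = load_frequency_list("encow14ax.word.freq.tsv", corpus_size=9_578_828_861)
```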

——————————————————————–

LICENSE AND REFERENCES

License: Creative Commons Attribution 4.0 International
References: http://corporafromtheweb.org/category/cow-citation/

Please tell us whenever you publish work based on COW:
https://webcorpora.org/publication/

DOWNLOAD

http://hpsg.fu-berlin.de/cow/ngrams/
http://hpsg.fu-berlin.de/cow/frequencies/

ORIGIN AND ORIGINAL CORPUS SIZES

The ngrams are derived from the COW14AX sentence-shuffled corpora.

Information: http://corporafromtheweb.org/category/corpora/
Interface: https://webcorpora.org/

English 9,578,828,861 tokens (International)
German 11,660,894,000 tokens (AT, CH, DE)
Spanish 3,680,794,644 tokens (International)
Swedish 4,842,753,707 tokens (FI, SV)

FREQUENCY LISTS

Languages: English, German, Spanish, Swedish
Versions: Lemma, Lemma + POS, Word, Word + POS
Thresholds: no threshold; raw frequency > 9
Measures: raw frequency, absolute rank, frequency per million, log-frequency per million, frequency band

NGRAMS

N: 1–5
Languages: English, German, Spanish, Swedish
Versions: Raw, Word, Word + POS, Lemma (except Swedish)

DEPENDENCY BIGRAMS

Languages: English (German soon, maybe Swedish)
Versions: Raw, Word, Word + POS, Lemma, Lemma + POS

CFP: Posters on late-breaking results (June 15 deadline)

Through the corpora list

:::::::::::::::::::::::::::::::::
CORPUS LINGUISTICS 2015

The CL2015 organising committee is pleased to issue a call for posters on late-breaking results on any of the topics in the conference’s scope. By “late-breaking” we mean research which was not at a sufficiently advanced stage for an abstract submission to be made in the main submission cycle, but which has now reached that point.

We anticipate that the research in question will still be in its earliest phases. “Late-breaking results” include – but are not necessarily limited to – pilot study results, corpus creation activities currently in hand, newly-developed software, and so on.

· Abstracts should be 400-750 words in length. They must be formatted using the conference stylesheet (available to download from http://ucrel.lancs.ac.uk/cl2015/call.php )

· We especially encourage submission of abstracts from early-career researchers, including postgraduate research students and postdoctoral researchers.

· Abstracts which were previously submitted for the January deadline, and not accepted, are NOT eligible to be resubmitted.

· Abstracts should be submitted by email to cl2015@lancaster.ac.uk by 15th June 2015.

· As with all presentations, at least one author of any late-submission poster must attend the conference.
For more details see http://ucrel.lancs.ac.uk/cl2015

An archive copy of the previously-circulated CL2015 Call for Participation may be found here: http://ucrel.lancs.ac.uk/cl2015/doc/CL2015-CallParticipation.pdf

Andrew Hardie, Tony McEnery, Amanda Potts, Vaclav Brezina, and Paul Rayson
The CL2015 Organising Committee